Risk & Portfolio Construction: A smart-beta framework
Martin Steward speaks with Lombard Odier’s Jérôme Teiletche, co-author of a new paper that compares alternative equity portfolio construction techniques within a common framework
The propensity of standard Markowitz mean-variance portfolio optimisation to maximise errors in return forecasts has turned many investment practitioners away from it, towards alternatives like minimum variance, maximum diversification, risk parity and equal-weighting. Of these, the first two move us away from the problem of forecasting returns, but not from estimation and optimisation.
Pure minimum variance assumes that the return to every asset is the same and therefore tilts towards the least-volatile assets. Pure maximum diversification assumes that risk is rewarded linearly by return and focuses on maximising the ratio of the portfolio’s weighted average volatility to its overall volatility. Practitioners still have to estimate future volatilities and correlations, and optimise to those estimates.
The equally-weighted portfolio is, of course, both estimation-free and optimisation-free. Risk parity, which aims to equalise each asset’s contribution to ex-ante portfolio volatility, sits somewhere between these two families of optimised and non-optimised solutions.
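The mechanics of the four construction rules can be sketched in a few lines of code. The covariance numbers below are hypothetical, the minimum-variance and maximum-diversification formulas are the standard unconstrained closed forms (w ∝ Σ⁻¹1 and w ∝ Σ⁻¹σ), and the risk-parity solver is a simple damped fixed-point iteration rather than a production optimiser:

```python
import numpy as np

# Hypothetical annualised volatilities and correlations for a toy
# three-asset universe (illustrative numbers only).
vols = np.array([0.10, 0.20, 0.30])
corr = np.array([[1.0, 0.3, 0.1],
                 [0.3, 1.0, 0.2],
                 [0.1, 0.2, 1.0]])
cov = np.outer(vols, vols) * corr

def equal_weight(cov):
    # Estimation-free: ignores the covariance matrix entirely.
    n = len(cov)
    return np.ones(n) / n

def min_variance(cov):
    # w ∝ Σ⁻¹1: every asset assumed to earn the same return, so the
    # optimiser simply seeks the least-volatile combination.
    w = np.linalg.solve(cov, np.ones(len(cov)))
    return w / w.sum()

def max_diversification(cov):
    # w ∝ Σ⁻¹σ: return assumed proportional to volatility, which
    # maximises weighted-average vol divided by portfolio vol.
    w = np.linalg.solve(cov, np.sqrt(np.diag(cov)))
    return w / w.sum()

def risk_parity(cov, iters=2000):
    # Equalise each asset's risk contribution w_i * (Σw)_i using a
    # damped fixed-point iteration (not a production solver).
    w = np.ones(len(cov)) / len(cov)
    for _ in range(iters):
        t = 1.0 / (cov @ w)
        w = 0.5 * w + 0.5 * t / t.sum()
    return w

for name, fn in [("equal weight", equal_weight),
                 ("risk parity", risk_parity),
                 ("max diversification", max_diversification),
                 ("min variance", min_variance)]:
    w = fn(cov)
    print(f"{name:20s} weights={np.round(w, 3)}  vol={np.sqrt(w @ cov @ w):.3f}")
```

On this toy universe the portfolios line up as the article describes: minimum variance concentrates in the least-volatile asset and delivers the lowest portfolio volatility by construction, while risk parity stays meaningfully allocated to all three.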
Which approach is best? Plenty of numbers have been crunched, but with very few clear conclusions. A recent paper by Jérôme Teiletche, global head of the solutions group at Lombard Odier Investment Managers, and co-authors Emmanuel Jurczenko and Thierry Michel, notes that empirical studies are split quite evenly because they are “highly contingent on the universe and the period of study”.
“This literature can be sample-dependent, particularly when the authors are proposing specific solutions,” says Teiletche. “If we could achieve some results in a common framework that is more theory-based that would be beneficial – and that was the starting point for our paper.”
‘Generalized Risk-Based Investing’ contends that all of these alternative weighting schemes can be defined on the sliding scales of two parameters: a “regularization parameter” (which measures sensitivity to changes in the variance-covariance matrix) and a “risk tolerance level coefficient” (which measures tolerance for individually-risky assets).
The second is the more intuitive of the two. Apart from equal-weighting, all of these methodologies have a preference for lower-volatility and lower-beta assets – but minimum variance will clearly show the lowest beta, maximum diversification will be close behind, and risk parity will come much closer to the full market risk.
The result for risk parity might surprise those used to seeing the strategy applied in the multi-asset context.
“The reason is pretty simple: diversifying a risk and minimising a risk is not the same thing,” Teiletche explains. “In a heterogeneous universe you can significantly reduce risk with diversification, but within equities the effect is not so spectacular. I think it’s a very good long-term solution, particularly for emerging markets – you are not going there to minimise the risk, you are going there to take the risk and get the returns.”
The figure adds some texture. Minimum variance shows up as a big bet against the market – a bet that investors hope the low-beta premium will reward. Risk parity is not such a big bet against the market, and as a result it also picks up more exposure to the small-cap and value premia – which is intuitive, as it aims to diversify rather than minimise risk.
Maximum diversification looks more like minimum variance in its factor exposures, except that its focus on correlations removes the need to tilt deliberately towards low volatility. But Teiletche also concedes that the sensitivity of maximum diversification to the risk tolerance parameter is very dependent on its universe – and therefore something of a challenge to his claims for a ‘generalised’ schematic.
He offers the example of maximum diversification applied to his home stock market in Switzerland, where Transocean, a risky company with very low correlation to the other companies in the Swiss index, attracts a big allocation.
“It’s an unusual example, but it shows that going to more uncorrelated stocks does not necessarily deliver a more defensive portfolio,” he observes.
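Teiletche’s Swiss anomaly can be reproduced in stylised form. Below, two closely related ‘core index’ stocks sit alongside one volatile stock that is uncorrelated with them (the numbers are hypothetical, not Transocean’s actual statistics); using the unconstrained closed forms, maximum diversification hands the outlier more than twice the weight minimum variance does:

```python
import numpy as np

# Two closely related "core index" stocks plus one volatile outlier that
# is uncorrelated with them. Hypothetical numbers, not Transocean's
# actual statistics.
vols = np.array([0.15, 0.18, 0.40])
corr = np.array([[1.0, 0.8, 0.0],
                 [0.8, 1.0, 0.0],
                 [0.0, 0.0, 1.0]])
cov = np.outer(vols, vols) * corr
sigma = np.sqrt(np.diag(cov))

# Unconstrained closed forms: min variance w ∝ Σ⁻¹1, max div w ∝ Σ⁻¹σ.
w_mv = np.linalg.solve(cov, np.ones(3))
w_mv /= w_mv.sum()
w_md = np.linalg.solve(cov, sigma)
w_md /= w_md.sum()

print("outlier weight, minimum variance:   ", round(w_mv[2], 3))  # 0.123
print("outlier weight, max diversification:", round(w_md[2], 3))  # 0.269
```

Despite being by far the riskiest stock, the outlier’s low correlation earns it more than double the weight under maximum diversification that it gets under minimum variance – uncorrelated does not mean defensive.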
Given anomalies like this, Teiletche prefers to focus on the regularisation parameter. Again, minimum variance and equal-weighting represent the extremes: the first is highly sensitive to the variance-covariance matrix, the second completely independent of it. Maximum diversification is also highly sensitive to this parameter.
“High sensitivity suggests that the strategy will react to changes in levels of risk, but it also implies huge turnover and potentially huge concentration,” says Teiletche.
To illustrate, he refers to changes in sector allocation over time. Minimum variance and maximum diversification show dramatic changes in capital allocation, with minimum variance ending up with exposure to just two sectors. This involves “huge bets”, Teiletche observes – although maximum diversification practitioners would counter that concentration merely reflects the lack of diversification available, rather than a failure to exploit diversification potential.
“This is good in the sense that the portfolio is reacting to the risk environment, but no-one is really pursuing these strategies in this pure way,” he says. “By contrast, risk parity’s allocation to each sector is much more stable, minimising the risk but always remaining allocated to all components of risk.”
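That turnover contrast shows up even in a stylised setting: nudge one correlation in a hypothetical three-asset covariance matrix and measure the one-way turnover each scheme would require (illustrative numbers and the same simplified construction rules as above, not the paper’s empirical results):

```python
import numpy as np

def weights(cov, scheme):
    # Stylised construction rules: unconstrained closed forms for min
    # variance and max diversification, damped iteration for risk parity.
    n = len(cov)
    if scheme == "equal":
        return np.ones(n) / n
    if scheme == "riskparity":
        w = np.ones(n) / n
        for _ in range(2000):
            t = 1.0 / (cov @ w)
            w = 0.5 * w + 0.5 * t / t.sum()
        return w
    if scheme == "minvar":
        w = np.linalg.solve(cov, np.ones(n))             # w ∝ Σ⁻¹1
    else:  # "maxdiv"
        w = np.linalg.solve(cov, np.sqrt(np.diag(cov)))  # w ∝ Σ⁻¹σ
    return w / w.sum()

vols = np.array([0.10, 0.20, 0.30])
corr = np.array([[1.0, 0.2, 0.2],
                 [0.2, 1.0, 0.2],
                 [0.2, 0.2, 1.0]])
cov_before = np.outer(vols, vols) * corr

corr_after = corr.copy()
corr_after[0, 1] = corr_after[1, 0] = 0.4   # one pairwise correlation doubles
cov_after = np.outer(vols, vols) * corr_after

turnovers = {}
for scheme in ["equal", "riskparity", "maxdiv", "minvar"]:
    shift = weights(cov_after, scheme) - weights(cov_before, scheme)
    turnovers[scheme] = 0.5 * np.abs(shift).sum()   # one-way turnover
    print(f"{scheme:10s} one-way turnover: {turnovers[scheme]:.3f}")
```

Equal-weighting trades nothing, risk parity adjusts modestly, and the two optimised schemes rebalance hardest – the same pattern Teiletche describes at the sector level.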
In a world where we now have several approaches to passive investment – all return-agnostic and broadly risk-reducing, yet behaving very differently in real-world contexts – Teiletche and his co-authors have taken useful steps towards a generalised schematic for assessing the nature of those solutions. They don’t tell us which is ‘best’ – that depends on your starting point – but their framework will help investors select the strategies that are best for their broader objectives.