The idea of portfolio optimisation and variance reduction has been the subject of a great many academic studies over the past 50 years. Many interesting theoretical results have been obtained, starting with Markowitz’s celebrated approach. When it comes to practical matters, however, sophisticated methods are not of great help; it is often enough to follow a few simple ideas, which we summarise here:

Diversify: The main lesson of Markowitz’s theory is that risk is reduced when a portfolio is properly diversified. This means mixing together many different stocks, representing different sectors of the economy, small caps and large caps, from different countries. If these stocks were uncorrelated, the risk of a portfolio containing 100 different stocks would be 10 times smaller than that of a portfolio containing a single stock. For example, the volatility of a typical blue chip US stock is about 50%; if stock returns were uncorrelated, the volatility of the S&P 500 would therefore be about 2.3%. In reality the volatility of this index is much higher, at about 20%. Unfortunately, stocks tend to be rather strongly correlated, even at the international level. One should therefore seek extra diversification by including other investment vehicles, such as bonds, and also alternative investments relying on long/short strategies on futures or equity markets, whose aim is precisely to achieve a performance as weakly correlated with the stock market as possible. The crucial question, obviously, is: how should one choose the relative weights of these different assets in order to get the ‘best’ portfolio?
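
As a minimal sketch of this arithmetic, the snippet below computes the volatility of an equal-weight portfolio of n stocks with identical volatility and uniform pairwise correlation rho. The 50% single-stock volatility is the figure quoted above; rho = 0.15 is an illustrative value chosen here to roughly reproduce the observed index volatility, not a number from the text.

```python
import numpy as np

def equal_weight_vol(sigma, n, rho):
    """Volatility of an equal-weight portfolio of n stocks, each with
    volatility `sigma` and uniform pairwise correlation `rho`."""
    return sigma * np.sqrt((1 + (n - 1) * rho) / n)

sigma = 0.50  # ~50% volatility for a typical blue chip, as quoted above

# Uncorrelated case: risk shrinks as 1/sqrt(n)
print(equal_weight_vol(sigma, 100, 0.0))   # 0.05  -> 10x smaller than one stock
print(equal_weight_vol(sigma, 500, 0.0))   # ~0.022 -> the ~2.3% quoted for the S&P 500

# With a plausible average correlation, diversification saturates
print(equal_weight_vol(sigma, 500, 0.15))  # ~0.19 -> close to the observed ~20%
```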

Do not use past returns: Markowitz’s answer to this question relies on the knowledge of the expected returns of these different assets. It is tempting to try to get an estimate of these by measuring the observed past returns and postulating that these will be relatively stable in the future. Unfortunately, this does not work at all: past returns are incredibly poor predictors of future returns. Forecasting these returns correctly is notoriously hard, because so many factors can influence the future return of a company, a sector, an index or a bond. Furthermore, one often observes that rather small differences in forecasted returns lead to an optimal portfolio that is highly concentrated on the few supposedly better assets. This overconcentration ruins the very idea of diversification, and is detrimental in terms of risk, not to mention the fact that these forecasts are usually quite far off.
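
A small numerical sketch of this instability, with illustrative parameters (5 assets, 20% volatility, 0.9 pairwise correlation) that are assumptions for the example, not figures from the text: the unconstrained mean-variance solution, weights proportional to the inverse covariance matrix times the return forecasts, is computed for identical forecasts and again after bumping one forecast by just 1%.

```python
import numpy as np

n = 5
sigma, rho = 0.20, 0.9  # illustrative volatility and correlation
cov = sigma**2 * ((1 - rho) * np.eye(n) + rho * np.ones((n, n)))

def markowitz_weights(mu, cov):
    """Unconstrained mean-variance weights, w ~ cov^-1 mu, normalised to sum to 1."""
    w = np.linalg.solve(cov, mu)
    return w / w.sum()

mu = np.full(n, 0.05)                    # identical forecasts -> fully diversified
print(markowitz_weights(mu, cov))        # [0.2 0.2 0.2 0.2 0.2]

mu_bumped = mu.copy()
mu_bumped[0] += 0.01                     # a 1% bump in a single forecast...
print(markowitz_weights(mu_bumped, cov)) # ...yields extreme concentration and shorts
```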

Use correlations with care: The second important input of Markowitz’s theory is the covariance matrix, which measures how any two potentially eligible assets are correlated. In a universe of N assets, the number of pairs is N(N-1)/2. Therefore, in the case of the S&P 500, the correlation matrix contains 500×499/2 = 124,750 different entries. In order to determine these numbers empirically, one usually relies on rather short time series, say two years of daily returns, or 500 days. The number of data points is thus 500×500 = 250,000, only a factor of two larger than the number of correlation coefficients. This means that the statistical precision of these coefficients is very poor: the largest part of the covariance matrix is just measurement noise that cannot be used to reduce the risk. Worse, a blind use of these empirical covariance matrices leads to portfolios that are very unstable in time, and to a systematic underestimation of the true risk. Here again, simple ideas like the one-factor model, where the only relevant correlation of a stock is with the market as a whole, appear to be more reliable. Even the apparent increase of correlations during market crashes can be explained using a non-Gaussian one-factor model that allows for fat tails. This is rather important, since a genuine increase of correlations during crisis periods would mean even less hope of diversification. Interestingly, any rational scheme for cleaning the noise in covariance matrices leads to improved diversification, in the sense that the optimal weights are more evenly spread out.
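
The following sketch illustrates the measurement-noise problem. It simulates 500 days of returns for 500 assets that are uncorrelated by construction, then compares the eigenvalues of the sample correlation matrix with the Marchenko-Pastur bounds that random matrix theory predicts for pure noise. This is a standard illustration of the effect, assumed here rather than taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 500, 500                        # 500 stocks, ~2 years of daily returns
returns = rng.standard_normal((T, N))  # truly uncorrelated by construction

corr = np.corrcoef(returns, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)

# Marchenko-Pastur edges for a pure-noise correlation matrix, q = N/T
q = N / T
lam_min, lam_max = (1 - np.sqrt(q))**2, (1 + np.sqrt(q))**2
print(f"true correlations are all zero, yet sample eigenvalues span "
      f"[{eigvals.min():.2f}, {eigvals.max():.2f}]; "
      f"MP predicts [{lam_min:.2f}, {lam_max:.2f}]")
```

With N comparable to T, the sample eigenvalues spread over a wide band even though the true correlation matrix is the identity; eigenvalues falling inside the Marchenko-Pastur band carry essentially no usable information.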

A good benchmark: the EquiRisk portfolio. Assume that in fact one has no reliable information at all on the N different assets. The only rational choice for allocation would be to take all the weights equal to 1/N. Surprisingly, this is already a rather good choice in terms of out-of-sample performance. We have mentioned above that past returns are useless and that correlations are hard to measure. One relatively reliable piece of information contained in the past returns of a given asset is its risk, measured for example by its volatility or, better, by the amplitude of the drop bound to happen every hundred days (say): this is called the Value-at-Risk (VaR). A better portfolio is what we have called the EquiRisk portfolio: allocate the money such that the potential loss (in dollars) on any of the eligible assets is equal across the board. This means that one does not want to suffer more from an investment in bonds than in internet stocks. Such a recipe obviously gives less weight to riskier investments. We have observed that this EquiRisk portfolio is actually very hard to beat out-of-sample, and constitutes a very interesting benchmark that one can try to improve using small, well-controlled and well-motivated perturbations.
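
A minimal sketch of the EquiRisk recipe, assuming historical VaR as the risk measure: weights are taken inversely proportional to each asset's VaR, so the potential dollar loss at the chosen probability level is the same across assets. The two simulated series are toy stand-ins for a quiet, bond-like asset and a fat-tailed, internet-stock-like one.

```python
import numpy as np

def historical_var(returns, p=0.01):
    """One-day historical VaR at probability p: the loss exceeded
    roughly one day in 1/p (here, one day in a hundred)."""
    return -np.quantile(returns, p)

def equirisk_weights(returns_by_asset, p=0.01):
    """Allocate so the potential dollar loss at the VaR level is the
    same for every asset: w_i proportional to 1 / VaR_i."""
    var = np.array([historical_var(r, p) for r in returns_by_asset])
    w = 1.0 / var
    return w / w.sum()

# toy example: a quiet 'bond-like' asset and a fat-tailed 'internet-stock-like' one
rng = np.random.default_rng(1)
bond  = 0.003 * rng.standard_normal(1000)
stock = 0.030 * rng.standard_t(3, 1000)
print(equirisk_weights([bond, stock]))  # the riskier asset gets a much smaller weight
```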

Reducing the volatility is not reducing the VaR: The definition of risk itself can change the optimal composition of a portfolio. The Markowitz recipe aims at minimising the volatility of the portfolio. This might not be the real concern of the portfolio manager, who is usually more worried by the possibility of extreme drawdowns, which are better captured by the VaR. Minimising the VaR leads to portfolios that can be substantially different from Markowitz portfolios. For independent fat-tailed assets, the minimum-VaR portfolio is very close to the EquiRisk (equal-VaR) portfolio mentioned above. The distinction between minimal-volatility and minimal-VaR portfolios is all the more important when the investment universe is very diverse, containing for example both interest rate products and emerging-market stocks.
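
The sketch below illustrates the distinction with two simulated independent assets scaled to the same volatility, one Gaussian and one fat-tailed; a simple sweep over the portfolio weight shows that the minimum-volatility and minimum-VaR compositions differ. The distributions and parameters are illustrative assumptions, not the authors' calculation.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 100_000
# two independent assets scaled to the same volatility: thin-tailed vs fat-tailed
a = 0.01 * rng.standard_normal(T)
b = rng.standard_t(3, T)
b *= 0.01 / b.std()

ws = np.linspace(0, 1, 101)
vols  = [np.std(w * a + (1 - w) * b) for w in ws]
vars_ = [-np.quantile(w * a + (1 - w) * b, 0.01) for w in ws]  # 1% historical VaR

print("min-volatility weight on asset a:", ws[np.argmin(vols)])   # ~0.5: equal vols
print("min-VaR weight on asset a:      ", ws[np.argmin(vars_)])   # tilted away from fat tails
```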

Know your risks: In the same vein, it is very important to quantify in detail the risk of a portfolio. The volatility alone is just not precise enough: different portfolios with the same volatility might behave completely differently in crisis periods. For example, the volatility of the Mexican peso in dollars is comparable to that of a US long-term bond, but when one looks at extreme risks, the peso turns out to be much riskier than any US bond. One would like to know, for example, the amplitude of the loss corresponding to a 1% probability (one day out of a hundred) or to a 0.1% probability (one day out of a thousand). One would also like to know the amplitude of the worst cumulative drawdown, peak to valley, and the characteristic ‘trough time’ needed to recover from a drawdown. All these numbers capture different aspects of the risk inherent in a given investment scheme. This knowledge is important for the fund manager for at least two reasons. First, it sets precise targets and limits that help him not to panic in difficult times if the corresponding event was statistically foreseen: for example, if the observed drawdown is within the forecasted band, there is no need to reallocate and pay significant transaction costs. Second, it is useful for communicating with clients, who know beforehand the possible dark scenarios and therefore refrain from taking away their money in crisis periods. These crisis periods may just be part of the initial deal between the portfolio manager and the client, provided this is clearly stated and correctly calculated. We have ourselves followed this policy within our funds for more than five years now. This has been made possible thanks to the software Profiler, devised by Science & Finance, the research group of CFM. For example, the worst drawdown in the case of CFM, with 95% confidence, is -15%.
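
As an illustration of these drawdown statistics, here is a minimal sketch that extracts the worst peak-to-valley drawdown and the longest ‘underwater’ stretch (one possible reading of the trough time mentioned above) from a cumulative P&L series. The simulated series and its parameters are assumptions for the example, and this is not the Profiler software referred to in the text.

```python
import numpy as np

def drawdown_stats(pnl):
    """Worst peak-to-valley drawdown and the longest stretch (in periods)
    spent below a previous peak, from a cumulative P&L series."""
    peak = np.maximum.accumulate(pnl)
    dd = peak - pnl
    worst = dd.max()
    # longest run of consecutive periods below the running peak
    longest, run = 0, 0
    for under_water in dd > 0:
        run = run + 1 if under_water else 0
        longest = max(longest, run)
    return worst, longest

rng = np.random.default_rng(3)
pnl = np.cumsum(0.001 + 0.01 * rng.standard_normal(2500))  # ~10 years of daily P&L
worst, trough = drawdown_stats(pnl)
print(f"worst drawdown: {worst:.2%}, longest underwater stretch: {trough} days")
```
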
Jean-Philippe Bouchaud is managing director, and Marc Potters head of research, of Capital Fund Management. They are the authors of Theory of Financial Risks, Cambridge University Press (2000).