Quant investing is perceived as a formula for turning bright ideas into profitable investments. Take a couple of academics, add one portfolio theory, switch on the model and wait for the returns.
Occasionally this process can go badly wrong. The collapse in 1998 of Long-Term Capital Management, with its Nobel laureates in economics Robert Merton and Myron Scholes, set back the cause of bright-idea investing by several years.
But generally quant investing has justified its claim to make investing in the financial markets both safer and more rewarding. This is certainly the view of Robert Litterman, managing director, quantitative resources at Goldman Sachs Asset Management: “You want not only to prevent disasters but to maximise rewards for a given amount of risk.”
Litterman, who describes himself as an ‘econometrician’, received a PhD in economics and taught at MIT before joining Goldman Sachs. He is the co-inventor of the Black-Litterman global asset allocation model. This model, published in 1990, solved a problem that had been puzzling portfolio managers – how to design a portfolio optimiser that produces sensible results.
In the traditional mean-variance approach developed by Harry Markowitz and others, the portfolio manager feeds in a complete set of expected returns, and the portfolio optimiser generates the optimal portfolio weights. However, managers found that their specification of expected returns produced output portfolio weights which did not square with their own expectations.
“It was a dirty little secret back in the 1980s that the traditional portfolio optimisers didn’t work very well. You put your inputs in and you would often get garbage outputs. For example, you could end up with a really long position in dollar-yen and a massively short position in Dutch government bonds and you would ask yourself what do Dutch government bonds have to do with dollar-yen?”
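To see the mechanics, here is a minimal unconstrained mean-variance optimiser – a sketch, not any firm’s production model, with two hypothetical assets and invented numbers. Because the optimal weights are the inverse covariance matrix applied to the expected returns, a small inconsistency between the forecasts for two highly correlated assets is read as a near-arbitrage and turned into an enormous spread position.

```python
import numpy as np

# Two highly correlated assets whose expected returns differ by only half a
# percentage point. All figures are hypothetical, chosen to illustrate the
# "garbage out" behaviour Litterman describes.
vols = np.array([0.10, 0.10])
corr = np.array([[1.00, 0.95],
                 [0.95, 1.00]])
cov = np.outer(vols, vols) * corr

risk_aversion = 3.0
expected_ret = np.array([0.045, 0.040])

# Unconstrained mean-variance solution: w = (1/delta) * inv(cov) @ mu
weights = np.linalg.solve(risk_aversion * cov, expected_ret)
print(weights)   # roughly [+2.4, -0.9]: a large long/short bet on a 0.5% spread;
                 # with identical inputs (0.045, 0.045) it holds about 0.77 of each
```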
Litterman says that the optimisers were behaving badly because they were interpreting inconsistencies in the inputs as opportunities. “I first thought there was a bug in my program, it was so badly behaved. I went to Fischer Black, who was my colleague at the time, and he said we should try to build an equilibrium into it.
“One of the benefits of having equilibrium embedded in your optimiser – which is basically what Black-Litterman does – is that it provides a neutral starting point for the expected returns for everything. Then we can say why would we think that this asset or that would have a return that is either higher or lower than the equilibrium. The optimiser will construct portfolios based on these views about deviations from equilibrium.”
In this way, the Black-Litterman model allows the portfolio manager to express views about portfolios, rather than a complex vector of expected returns on all assets. “In the simplest of contexts – where there is no benchmark or constraints – the optimal portfolio is very intuitive. It is simply a set of deviations from market capitalisation weights in the directions of portfolios about which views are expressed.”
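A stylised sketch of that machinery, using the textbook form of the Black-Litterman formulas rather than anything specific to Goldman Sachs Asset Management’s implementation – the three assets, their market weights and the single view below are all invented:

```python
import numpy as np

# Hypothetical universe: three assets, their market-capitalisation weights,
# volatilities and correlations (all numbers invented for illustration).
cap_weights = np.array([0.5, 0.3, 0.2])
vols = np.array([0.15, 0.20, 0.25])
corr = np.array([[1.0, 0.5, 0.3],
                 [0.5, 1.0, 0.4],
                 [0.3, 0.4, 1.0]])
cov = np.outer(vols, vols) * corr

risk_aversion, tau = 3.0, 0.05

# Step 1: equilibrium ("implied") expected returns -- the neutral starting point.
pi = risk_aversion * cov @ cap_weights

# Step 2: one view, expressed on a portfolio rather than on every asset:
# "asset 2 will outperform asset 3 by 2%", with some uncertainty.
P = np.array([[0.0, 1.0, -1.0]])
Q = np.array([0.02])
omega = np.array([[0.001]])          # uncertainty attached to the view

# Step 3: blend equilibrium and view into posterior expected returns.
inv = np.linalg.inv
posterior = inv(inv(tau * cov) + P.T @ inv(omega) @ P) @ (
    inv(tau * cov) @ pi + P.T @ inv(omega) @ Q)

# Step 4: unconstrained optimal weights tilt away from market capitalisation
# only in the direction of the view portfolio.
weights = inv(risk_aversion * cov) @ posterior
print(weights - cap_weights)         # deviation lies along [0, +1, -1]
```

With no views at all the posterior collapses back to the equilibrium returns, and the optimiser simply hands back the market-capitalisation weights.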
Litterman says this is what the quantitative approach does best – adding computer analysis to fundamental investing. “Quants aren’t different from fundamental managers. It’s just that we use sophisticated tools to do the same thing.”
Litterman and GSAM’s quantitative resources group describe the practical application of equilibrium in their book ‘Modern Investment Management: An Equilibrium Approach’, to be published in September.
The problem with a successful quant approach is that everyone will adopt it, to the exclusion of every other approach. Eric Sorensen, director of quantitative research at Putnam Investments and a former professor of finance at the University of Arizona, has watched individual quant techniques go in and out of fashion over the past 15 years. “People will sometimes say quants don’t work. But when quants don’t work, it’s not because they’re quants, but because there is excess demand for a single quant strategy.”
He cites the craze for portfolio insurance, a hedging strategy adopted by US institutional investors in the late 1980s: “It wasn’t that the science was flawed or that there was a failure of someone’s genius. There was just one trade that everybody did. And one day somebody said ‘I’m going to unwind’ and there was no market.”
Another example was the use of earnings revision or earnings momentum models in the 1990s, he says. “This was a strategy of buying companies that had positive momentum that manifested itself in the consensus earnings coming out of the sell-side Wall Street firms. The strategy was phenomenal as an investment technique from 1994 to 2000, and if you didn’t have an element of earnings revision in your processes it was very difficult to keep up with the S&P or the FTSE.”
The craze ended when the technology stock bubble burst. There then followed what Sorensen describes as “Monday morning quarterbacking”, retrospective criticism of quant techniques and the academics who developed them. “People would say ‘well obviously it wouldn’t work for ever and here’s why it didn’t work’, and yet those pundits were not telling people before the fact that it had started to fail.”
Sorensen suggests that the real gains from quant strategies are likely to be steady rather than spectacular. “At the very root of quantitative active strategy there’s the hope that you can actually rank securities. Rather than picking the big winners or selling short the big losers, you try to operate over the entire spectrum of securities. It’s a bit like the tide. Waves arrive and things happen, but the underlying tide slowly comes in and slowly goes out and you want to be on the front end of that curve.”
One solution to the problem of picking winners from the entire spectrum of securities is to take a purely mathematical approach to equity selection. This technique has been developed by Robert Fernholz, founder and chief investment officer of Intech, now part of Janus International.
Fernholz’s paper on mathematical investment processes, ‘Stochastic Portfolio Theory and Stock Market Equilibrium’, showed that portfolio managers could use the natural volatility of stocks to create portfolios whose overall growth rate is greater than the average growth rate of the individual component stocks.
This process would eliminate the need for fundamental analysis and remove the subjectivity from stock picking. As with much bright-idea investing, this grew out of dissatisfaction with existing models – in this case with the arithmetic approach begun by Harry Markowitz and developed by William Sharpe and others into option pricing theory.
“There are two problems with arithmetic returns. One is that over the long term, for a risky investment they don’t give you a very good idea of what is going to happen. The expected arithmetic return of a risky investment over the long term is always greater than the expected compound growth rate or logarithmic rate of return.
“Secondly, from a mathematical point of view arithmetic returns go up exponentially. Curves are complicated. If you take the logarithmic rate of return you get straight lines. Straight lines are simple.
“It occurred to me that it would be a good idea to look at everything from the point of view of the logarithmic or compound growth rate. So then you start to put together the portfolio theory for this.

“In the Markowitz theory it’s pretty straightforward. The mean for the portfolio is just the weighted average of the component means. But in the logarithmic model this weighted average ends up being augmented by the so-called excess growth term – and that’s the formula that we keep using.”
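A simple illustration of the first point: a stock that gains 50 per cent or loses 40 per cent with equal probability has an expected arithmetic return of +5 per cent per period, yet its expected compound (logarithmic) growth rate is about −5 per cent per period, since ½ln 1.5 + ½ln 0.6 ≈ −0.05. As for the formula Fernholz refers to, in the stochastic portfolio theory literature (using generic notation rather than Intech’s own) the decomposition is usually written as

$$
\gamma_\pi = \sum_i \pi_i \gamma_i + \gamma^*_\pi,
\qquad
\gamma^*_\pi = \tfrac{1}{2}\Bigl(\sum_i \pi_i \sigma_{ii} - \sum_{i,j} \pi_i \pi_j \sigma_{ij}\Bigr),
$$

where π_i are the portfolio weights, γ_i the logarithmic growth rates of the individual stocks and σ_ij their covariances. The first term is the weighted average of the component growth rates; the second, the excess growth term, is positive for any diversified portfolio of volatile stocks.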
Fernholz realised that the compound growth rates of all large caps must be broadly the same – otherwise some companies would have become world-beaters. “A passive portfolio obviously cannot grow any faster than the common growth rates of the individual stocks – there’s nothing happening. But if you re-balance the portfolio back to reasonable proportions you can capture this excess growth, which could be a few per cent. Simply by re-balancing you should be able to make a portfolio outperform an index.”
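A small simulation makes the claim concrete. It is only a sketch under simplified assumptions – two independent stocks with identical growth rates and volatilities, daily re-balancing, no trading costs – and every parameter is invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical set-up: two independent stocks with the same expected compound
# growth rate (2% a year) and the same volatility (35%), simulated daily for
# 20 years.
n_days, dt = 252 * 20, 1.0 / 252
growth, vol = 0.02, 0.35

log_ret = rng.normal(growth * dt, vol * np.sqrt(dt), size=(n_days, 2))
prices = np.exp(np.cumsum(log_ret, axis=0))        # both stocks start at 1.0

# Daily simple returns of each stock.
padded = np.vstack([np.ones(2), prices])
simple_ret = padded[1:] / padded[:-1] - 1.0

# Portfolio re-balanced back to 50/50 every day.
rebalanced = np.cumprod(1.0 + simple_ret.mean(axis=1))

years = n_days * dt
stock_growth = np.log(prices[-1]) / years          # realised log growth rates
port_growth = np.log(rebalanced[-1]) / years

print(f"average stock growth rate : {stock_growth.mean():.2%}")
print(f"re-balanced 50/50 portfolio: {port_growth:.2%}")
# The portfolio's compound growth rate exceeds the average of the stocks' own
# growth rates by roughly the excess growth term, about 3% a year here.
```

The gap between the re-balanced portfolio’s growth rate and the average of the stocks’ own growth rates comes out close to the excess growth term, about 3 per cent a year with these numbers; in practice costs, correlations and constraints would shrink it.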
This was possible but would involve a large tracking error, which would frighten off investors. “To make it interesting to investors you’re going to have to carry out some sort of optimisation. But the optimisation can be based on the volatilities, the variances, rather than on the rates of return which are notoriously difficult to calculate,” he says.
“So here, without doing anything with the rates of return, was a methodology by which one could start to optimise a portfolio.”
Fernholz sees mathematical investing as part of a broader process of moving from normative theories (typified by social science) of how things ought to happen to descriptive theories (typified by the natural sciences) of how things actually happen. “This is what we are striving for – some sort of descriptive theory of how the market looks.”
However, he warns that the financial markets still inhabit a ‘pre-Galileo’ world, where such ideas are regarded with suspicion. “When Galileo came up with a descriptive theory rather than a normative one some people weren’t very pleased with that.”