The European pensions industry faces major challenges, some of which will likely lead to significant, if not dramatic, changes. At the broadest level there is growing awareness of the issues raised by long-term demographic trends, especially in those countries where retirement provision still relies primarily on the state. Meanwhile, those countries that have adopted a funded approach have seen their systems threatened by weak equity markets.
Coupled with this, there is a move towards mark-to-market regulation – a move which makes considerable sense but which, with hindsight, has suffered from unfortunate timing. This mark-to-market trend applies both to accounting standards (FRS17 in the UK points the direction in which standards are moving globally) and to regulatory solvency tests (such as the changes already introduced in Denmark and those forthcoming in the Netherlands).
High on the agenda is the subject of benchmarks. This simple word encompasses a range of issues. One central question is whether commonly used ALM approaches really add value. After all, their extensive use in the UK has led the pensions system, which was so successful throughout the 1980s and 1990s, to suffer from serious underfunding more recently. In some cases, the same funds that were in healthy surplus five years ago now have deficits so large that they threaten not just the security of members’ benefits but also the existence of the sponsoring company. One can point the finger in many directions, but ultimately it is the system itself that has been deficient, with ALM being partly to blame.
The key problem has been that, one way or another, many ALM approaches have underestimated risk. In some cases this can be traced back to insufficient focus on the true ‘market values’ of assets and liabilities; in others it is largely due to an assumption of high correlation between equities and bonds, which has not been borne out in recent years. Specifically, if pension liabilities are considered to behave like bonds, and bonds are highly correlated with equities, then equities, by implication, will be relatively highly correlated with pension liabilities. One of the causes of the recent difficulties has been the breakdown of this equity-bond correlation – indeed, while equity values have fallen, pension liabilities have increased!
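To make the point concrete, the sketch below uses purely hypothetical annual figures (not market data): liabilities are crudely modelled as leveraged long-bond exposure, so whatever happens to the equity-bond correlation carries straight through to the equity-liability correlation, and to the funding ratio of an equity-heavy fund.

```python
# Purely hypothetical annual returns (not market data) illustrating the point:
# if pension liabilities behave like long bonds, a breakdown in the equity-bond
# correlation becomes a breakdown in the equity-liability correlation too.
import numpy as np

equities         = np.array([0.18, 0.22, 0.15, -0.10, -0.15, -0.22])  # assumed returns
bonds            = np.array([0.06, 0.05, 0.04,  0.08,  0.09,  0.10])  # assumed returns
liability_growth = bonds * 1.5   # crude assumption: liabilities ~ leveraged long-bond exposure

corr = np.corrcoef(np.vstack([equities, bonds, liability_growth]))
print("equity-bond correlation:      %.2f" % corr[0, 1])
print("equity-liability correlation: %.2f" % corr[0, 2])   # identical by construction

# With a 100% equity allocation (for illustration), the fund is hit on both
# sides of the balance sheet at once: assets fall while liabilities rise.
funding_ratio_change = (1 + equities) / (1 + liability_growth) - 1
print("annual change in funding ratio:", np.round(funding_ratio_change, 3))
```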
Nevertheless, it is clear that ALM has much to offer – it is, after all, still the tool that brings the assets and the liabilities together when setting strategy. So does it make sense to retain the typical ALM approach, using it to produce the traditional type of benchmark (split, for example, between equities, bonds, real estate and alternatives), but with closer attention to the key assumptions that have sometimes been neglected? Or should ALM be pointing us in a different direction – starting from the liabilities and considering how to add value relative to these?
The second route leads towards ‘liability-driven’ benchmarks, which are currently the subject of considerable discussion. Of course, ALM techniques are, by definition, liability-driven anyway, but in today’s jargon, liability-driven tends to refer to something that is more risk-averse and, in general, takes cash-flow matching or some variation on this theme as its starting point. This can be developed in several ways. For example, given a series of liability cash-flows, it is then possible to use derivatives (primarily interest rate swaps) to construct a series of payments that match the timing and magnitude of the projected liabilities (assuming that the scheme is sufficiently well funded to facilitate this and making the generous assumption that mortality and other demographic factors will turn out as expected). Nowadays, the swap markets are sufficiently flexible to accommodate many different liability scenarios – for example, inflation-linked as well as fixed pensions.
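As a rough illustration of the mechanics, the sketch below assumes a fixed liability profile of 10m a year for 30 years and a flat 4% discount curve; all of the numbers are invented for the purpose, and a real exercise would use the scheme’s own cash-flow projections, the full swap curve and explicit mortality and inflation assumptions.

```python
# A minimal sketch of cash-flow matching with entirely illustrative numbers:
# a fixed set of projected liability payments and a flat 4% curve.

liability_cashflows = {t: 10_000_000 for t in range(1, 31)}  # 10m a year for 30 years (assumed)
flat_rate = 0.04                                             # flat discount curve (assumed)

def discount_factor(t, r=flat_rate):
    return 1.0 / (1.0 + r) ** t

# Cash-flow matching: for each payment date, the hedge is sized to deliver
# exactly the projected liability amount (e.g. via zero-coupon swaps).
hedge_notionals = dict(liability_cashflows)

pv_liabilities = sum(cf * discount_factor(t) for t, cf in liability_cashflows.items())
print("PV of liabilities / matching profile: %.0f" % pv_liabilities)

# Inflation-linked pensions: scale each projected payment by assumed indexation
# before matching (here a flat 2% a year, again purely illustrative).
indexation = 0.02
indexed_cashflows = {t: cf * (1 + indexation) ** t for t, cf in liability_cashflows.items()}
pv_indexed = sum(cf * discount_factor(t) for t, cf in indexed_cashflows.items())
print("PV with 2%% a year indexation: %.0f" % pv_indexed)
```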
The next stage in the process is to decide whether to match out these cash-flows passively or whether to try to add value. In the latter case, funds are turning to portable alpha strategies. For example, if a fund feels it is able to identify managers or structures that are likely to outperform with a high degree of consistency, this outperformance can often be converted (again using derivatives) into an addition to the liability-driven cash-flow profile that has just been constructed.
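The arithmetic of such an overlay is simple enough, as the purely hypothetical figures below illustrate; the assumption is that the manager’s market exposure is swapped away, so that only the excess return, less costs, is layered on top of the liability-matching profile.

```python
# Portable-alpha arithmetic with purely hypothetical numbers. The manager's
# market (beta) exposure is assumed to be swapped away with derivatives, so
# only the excess return less costs is added to the liability-matching profile.

liability_matching_return = 0.045   # return on the cash-flow-matched portfolio (assumed)
manager_return            = 0.085   # active manager's total return (assumed)
manager_benchmark_return  = 0.070   # return on the manager's own benchmark (assumed)
overlay_and_fee_costs     = 0.003   # swap spreads, fees, frictions (assumed)

alpha = manager_return - manager_benchmark_return
total_return = liability_matching_return + alpha - overlay_and_fee_costs

print("transported alpha:                %.1f%%" % (100 * alpha))
print("net return above the liabilities: %.1f%%" % (100 * (alpha - overlay_and_fee_costs)))
print("total fund return:                %.1f%%" % (100 * total_return))
```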
A separate type of ‘benchmarking’ discussion focuses on the time horizon over which investment managers are measured. Most pension funds and consultants argue that managers tend to be short-term in their approach relative to the timescales implied by the underlying liabilities. Managers, justifiably, respond that this is largely because of the quarterly performance measurement cycle. This poses a dilemma for trustees – how should they carry out their monitoring in a manner which fulfils their responsibilities of due diligence yet encourages their managers to focus on the longer term?
Consider an all-too-familiar scenario. After a gruelling search process, a manager is hired with high expectations amid something of a fanfare. Twelve months later he has underperformed in each of the first four quarters and is down by a cumulative 5% versus the benchmark. Statistically, it is unlikely that this underperformance has any significance – in other words, it could very well be due to bad luck, the manager’s style being out of favour or any of a host of factors not linked to the manager’s long-term ability.
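A back-of-envelope calculation makes the point. Assuming, purely for illustration, an annualised tracking error of 4% against the benchmark and independent quarterly active returns, one year of 5% underperformance produces a test statistic well inside conventional significance bounds:

```python
# A back-of-envelope significance check, assuming (for illustration only) an
# annualised tracking error of 4% and independent quarterly active returns.
# Neither figure is taken from any particular mandate.
import math

annual_underperformance = -0.05   # cumulative result after four quarters
annual_tracking_error   = 0.04    # assumed; varies considerably by mandate
quarters                = 4

quarterly_active = (1 + annual_underperformance) ** (1 / quarters) - 1
quarterly_te     = annual_tracking_error / math.sqrt(quarters)

# t-statistic for the hypothesis that the manager's true alpha is zero
t_stat = quarterly_active / (quarterly_te / math.sqrt(quarters))
print("t-statistic after one year: %.2f" % t_stat)           # about -1.3, inside +/-1.96

# Years of data needed before underperformance of this annual size would clear
# a conventional 95% test - assuming, heroically, that it persisted at this rate.
information_ratio = annual_underperformance / annual_tracking_error
years_needed = (1.96 / abs(information_ratio)) ** 2
print("years needed for significance: %.1f" % years_needed)  # roughly 2.5 years
```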
Yet, at some point the trustees’ confidence will begin to wane. Doubts will creep in as they wonder whether, despite the thorough search process and the help of their consultant, they were mistaken in their confidence. They may remind themselves that, for all the manager’s skills, his interests are not exactly aligned with theirs. Whether it takes two or 12 quarters of cumulative underperformance for these doubts to surface, surface they will – and long before the performance numbers themselves become statistically significant. This is one of the real challenges of the ‘long-term’ discussion – closer dialogue between trustees and managers can help enormously, but some short-term pressures will always persist.
There is an exception to the short-termist model outlined above – a point that has been well brought out during the recent discussions surrounding the USS competition on long-term investing. There is one group of pension funds who already have a long history of mutual trust built up with their managers, where the managers’ interests are much more closely aligned with those of the trustees than is normally the case (although these interests are still not precisely the same). These, of course, are the internally managed pension funds. If 2004 becomes the ‘year of the long-term benchmark’, it could lead to some of Europe’s larger funds managing a greater proportion of their assets internally. The arguments for doing so are fairly strong, although it is obviously feasible only for the larger funds (unless smaller funds pool resources for this purpose).
Returning to the conflict between the trustees’ interests and the manager’s business interests, how can medium-sized and smaller funds prevent their managers from closet indexing? Several ideas have been put forward in this regard. One of the most innovative has been to replace the traditional benchmark index with a simulation process. The performance measurer would randomly generate a large number of portfolios and measure the performance of each. The manager would then be assessed according to his performance against the median, 60th percentile or some other level of the distribution. The argument runs that, since the manager doesn’t know the composition of the median portfolio (for example), he can’t closet index against it. At first sight this is very appealing. However, there are fairly simple statistical techniques the manager can use that may defeat the purpose. Suppose each random portfolio is generated by drawing a sample of 100 stocks from the MSCI Europe Index, weighted in line with market cap. Suppose further that 10,000 such portfolios are randomly created, with the manager being set the task of outperforming the median of this distribution. Because the underlying process is random, the manager will not know what the composition of the median portfolio will be. However, depending on how the sample is put together, simple statistical theory may tell him that the return of this median portfolio is likely to be very close to that of the MSCI Europe index.
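A small simulation illustrates why. The sketch below uses a synthetic 500-stock universe in place of MSCI Europe (every market cap and return is randomly generated rather than real), draws stocks with probability proportional to market cap and then cap-weights each sample, which is one reading of ‘weighted in line with market cap’. The median of the resulting distribution tends to sit far closer to the full-universe return than individual portfolios do, which is exactly what an index-hugging manager can exploit.

```python
# Monte Carlo sketch with a synthetic 500-stock universe standing in for MSCI
# Europe. All caps and returns are randomly generated; nothing here is real
# market data. Each portfolio: 100 stocks drawn with probability proportional
# to market cap, then cap-weighted within the sample.
import numpy as np

rng = np.random.default_rng(0)
n_stocks, n_portfolios, sample_size = 500, 10_000, 100

caps    = rng.lognormal(mean=0.0, sigma=1.0, size=n_stocks)   # synthetic market caps
weights = caps / caps.sum()
returns = rng.normal(loc=0.07, scale=0.25, size=n_stocks)     # synthetic annual returns

index_return = weights @ returns

portfolio_returns = np.empty(n_portfolios)
for i in range(n_portfolios):
    picks = rng.choice(n_stocks, size=sample_size, replace=False, p=weights)
    w = caps[picks] / caps[picks].sum()                        # cap-weight the sample
    portfolio_returns[i] = w @ returns[picks]

spread = np.percentile(portfolio_returns, [5, 50, 95])
print("full-universe (index) return: %6.2f%%" % (100 * index_return))
print("median random portfolio:      %6.2f%%" % (100 * spread[1]))
print("5th-95th percentile range:    %6.2f%% to %.2f%%" % (100 * spread[0], 100 * spread[2]))
```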
Here lies the problem – even though he doesn’t know the composition of the benchmark, by adopting a position close to the index from which the sample is drawn the manager may be able to ensure that his return will be close to his median-based benchmark. His attempts to closet index become a little more difficult if the portfolios are not cap-weighted, but he can then simply bias his live portfolio in a manner consistent with the randomly generated portfolios. Likewise, if his target is the 40th or 30th percentile, he will have to take more active risk, but he can still approximate his target by the traditional (index + X%) approach, which defeats the object from the client’s point of view. Nevertheless, despite these practical challenges, this approach represents innovative thinking and deserves close attention.
To many outsiders, the world of pensions appears to move extremely slowly. However, some of these benchmarking discussions have considerable momentum behind them, especially those relating to cash-flow matching and the use of derivatives. Twelve months from now, benchmarking ‘best practice’ may look very different from what we are used to.
Gareth Derbyshire is executive director of Morgan Stanley’s European pensions group, based in London