Quantitative asset managers have been employing risk control models as a key component of their portfolio processes almost since Markowitz and Sharpe first described models of portfolio risk and return.
We describe some of the common features of these models, as well as some of the important characteristics that distinguish them, and then look at some of the more recent innovations and speculate on what the future may hold.
Quantitative risk models provide a way to understand, measure and control the risks associated with portfolio investments. Investment risk is typically measured as deviations away from expected returns. For example, investors may expect a stock to return 12% over a given year, but the range of possible outcomes may be quite wide around this figure. If the company issuing the stock goes bankrupt, the return may be as low as –100%, while if the company outperforms all expectations, the return may be substantially higher than 12%. Though most investors prefer deviations that exceed expectations to those that fall short, in general, investors prefer a sure return of 12% to an uncertain return that equals 12% on average. This undesirable dispersion of outcomes is ‘risk’.
The most commonly used quantitative statistic to describe risk is standard deviation. While certainly not the only measure of risk, it has many attractive qualities and serves as the basis of the mean-variance analysis used by most quantitative managers.
Despite the widespread use of standard deviation, there are important differences in how it is estimated, and these differences can lead to substantial differences in perceptions about risk.
Portfolio-level risk is typically computed from the characteristics of the underlying securities in the portfolio. This involves understanding the volatilities of the securities and also how they interact with each other, or how they are correlated. Overall portfolio volatility will tend to be smaller when these correlations among returns are low. As a result, how one estimates correlation can have a big impact on the apparent riskiness of any particular portfolio. If the investment universe under consideration involves enough securities, estimating correlations separately for every pair becomes infeasible: with n securities there are n(n – 1)/2 distinct pairs, so the number of correlations to estimate quickly becomes overwhelming.
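As a minimal numerical sketch of this calculation (all weights, volatilities and correlations below are hypothetical), portfolio volatility follows directly from the covariance matrix of security returns:

```python
# Sketch of portfolio volatility from security-level inputs:
# three securities, with hypothetical volatilities and correlations.
import numpy as np

weights = np.array([0.5, 0.3, 0.2])    # portfolio weights (sum to 1)
vols = np.array([0.20, 0.15, 0.25])    # annualised volatilities
corr = np.array([[1.0, 0.3, 0.1],      # pairwise correlations
                 [0.3, 1.0, 0.2],
                 [0.1, 0.2, 1.0]])

cov = np.outer(vols, vols) * corr      # covariance matrix

# Portfolio variance is w' C w; lower correlations shrink the result.
port_vol = np.sqrt(weights @ cov @ weights)
print(f"Portfolio volatility: {port_vol:.1%}")

# The estimation burden grows quadratically with universe size.
n = 1000
print(f"Correlation pairs for {n} securities: {n * (n - 1) // 2:,}")
```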
One common approach to estimating correlations among large numbers of securities is to use a factor model. Factor models decompose security returns into components that are ‘idiosyncratic’, or security-specific, and components that are driven by ‘common’ sources of risk that impact all securities. Examples of these common sources of risk include country shocks, sector shocks, interest rate shocks or style shocks.
Correlations among securities are determined entirely by the ‘common’ components of security returns, and much of the disagreement among quantitative managers about risk arises from differing opinions over what the relevant risk factors are. The rabbit in the hat behind any factor model is the assumption that idiosyncratic returns are uncorrelated among stocks. This makes the problem of estimating correlations more manageable, since one only needs to estimate correlations among a small number of risk factor returns as opposed to correlations among every pair of individual securities.
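To see how the factor structure tames the estimation problem, here is a sketch with hypothetical numbers: five securities driven by two common factors. The security covariance matrix is B F B' + D, and the diagonal D embodies the assumption that idiosyncratic returns are uncorrelated.

```python
# Sketch of a factor-model covariance with hypothetical exposures:
# security covariance = B F B' + D.
import numpy as np

B = np.array([[1.0, 0.2],    # factor exposures (one row per security)
              [0.8, -0.1],
              [1.2, 0.5],
              [0.9, 0.0],
              [1.1, 0.3]])
F = np.array([[0.04, 0.01],  # covariance of the two factor returns
              [0.01, 0.02]])
# Diagonal D encodes the key assumption: idiosyncratic returns are
# uncorrelated across securities.
D = np.diag([0.02, 0.03, 0.01, 0.04, 0.02])

cov = B @ F @ B.T + D        # full 5x5 security covariance matrix

# Only F (k(k+1)/2 terms), the n*k exposures and n idiosyncratic
# variances need estimating, instead of every security pair.
print(np.round(cov, 3))
```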
Factor models can help to measure and control the overall level of volatility in a portfolio, but they also have the side benefit of highlighting unintentional risks resulting from a particular investment strategy. For example, a portfolio manager may feel that he or she has particular expertise in Austrian securities, leading to an overweight in this market. An obvious implication of this will be increased exposure to unexpected volatility in Austria. A less obvious implication may be increased exposure to small cap stocks, since the Austrian market has an average market capitalisation below that of many other European markets. A good factor model will uncover and highlight the unintentional risks resulting from such tilts.
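A sketch of how such a model surfaces the tilt just described, using hypothetical exposures to two factors (an Austria factor and a size factor): the portfolio's active exposure to each factor is the exposure-weighted sum of its active positions.

```python
# Sketch of uncovering unintentional tilts; all numbers are hypothetical.
import numpy as np

# One row per security; columns: exposure to Austria, exposure to size
# (negative = small cap).
B = np.array([[1.0, -0.5],
              [1.0, -0.8],
              [0.0,  0.4],
              [0.0,  0.6]])
port = np.array([0.40, 0.30, 0.15, 0.15])    # overweights Austrian names
bench = np.array([0.10, 0.10, 0.40, 0.40])   # benchmark weights

active = B.T @ (port - bench)                # active factor exposures
print(f"Active Austria exposure: {active[0]:+.2f}")  # the intended bet
print(f"Active size exposure:    {active[1]:+.2f}")  # the unintended small-cap tilt
```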
Estimating the annual volatility of equities from a history of annual equity returns will give a very different risk estimate from annualising the volatility of daily returns. Significant differences in volatility estimates arise from differences in the frequency of the underlying analysis. In general, it is probably best to focus on the frequency that corresponds most closely with a typical security's average holding period. However, there is a trade-off between the length of the holding period and the number of historical observations available for estimation. For example, an estimate of daily risk based on a year's worth of daily data (roughly 250 observations) will be much more precise than an estimate of annual risk based on a single annual observation. In this case, the misrepresentation of annual risk that comes from annualising the daily risk number is a small price to pay compared with the imprecision of so few annual observations. Monthly seems to be the most common frequency among practitioners.
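The trade-off can be made concrete with a small simulation (the return parameters and the √250 annualisation convention, which assumes independent returns, are assumptions): ten years of history yields 2,500 daily observations but only ten annual ones.

```python
# Sketch of the frequency trade-off, using simulated returns.
import numpy as np

rng = np.random.default_rng(0)
daily = rng.normal(0.0005, 0.01, 250 * 10)   # 10 years of daily returns

# Annualising daily volatility: precise, thanks to 2,500 observations.
vol_from_daily = daily.std(ddof=1) * np.sqrt(250)

# Direct annual estimate: only 10 observations, hence very imprecise.
annual = daily.reshape(10, 250).sum(axis=1)  # approx. annual (log) returns
vol_from_annual = annual.std(ddof=1)

print(f"Annualised from daily data: {vol_from_daily:.1%}")
print(f"Estimated from annual data: {vol_from_annual:.1%}")
```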
Many people have found that weighting recent volatility more heavily than older volatility gives an improved forecast of future volatility. This finding has spawned increasing use of a variety of econometric techniques that fall into the family of ARCH (autoregressive conditional heteroskedasticity) models. These models have the advantage of allowing a quicker response to a changing risk environment. They have the disadvantage that their direct application is appropriate only under fairly restrictive assumptions, and they are quite cumbersome to implement when forecasting the risk of several securities simultaneously. In many cases, equally effective measures of time-varying risk can be obtained by looking at higher-frequency data.
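A simple member of this recency-weighted family is the exponentially weighted moving average (EWMA) used in RiskMetrics; the sketch below assumes a decay parameter of 0.94, the value popularised by RiskMetrics for daily data, and simulates a calm regime followed by a turbulent one.

```python
# Sketch of an EWMA volatility forecast, a simple relative of the ARCH
# family; lam = 0.94 is the RiskMetrics decay for daily data.
import numpy as np

def ewma_vol(returns, lam=0.94):
    """Volatility forecast that weights recent squared returns more heavily."""
    var = returns[0] ** 2                     # seed with the first observation
    for r in returns[1:]:
        var = lam * var + (1 - lam) * r ** 2  # decay older information
    return np.sqrt(var)

rng = np.random.default_rng(0)
calm = rng.normal(0, 0.005, 400)              # hypothetical calm regime
stressed = rng.normal(0, 0.02, 100)           # recent turbulent regime
returns = np.concatenate([calm, stressed])

# The EWMA reacts to the recent turbulence; the full-sample estimate lags.
print(f"EWMA forecast:  {ewma_vol(returns):.4f}")
print(f"Equal-weighted: {returns.std(ddof=1):.4f}")
```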
In certain instances, standard deviation alone may provide an inadequate description of risk, for any of several reasons:
• Asymmetric return patterns: marked asymmetries in the return pattern can produce circumstances where standard deviation alone is an insufficient summary of risk. With a long position in a call option, for example, increased volatility in the underlying asset is generally considered desirable, since it enhances the likelihood of large profits while losses remain capped at the initial option premium. Technically, equity returns are also asymmetric, since the downside is limited at –100% whereas the upside can be much greater. For most equities, however, the return skewness is small enough for a symmetric risk measure like variance to capture the important elements of the uncertainty. Many other securities, such as options and fixed income securities with option-like features, require a model of risk that incorporates asymmetries explicitly. One approach in such circumstances is to build a discrete probability tree that specifies the payoffs of the portfolio over time under various outcomes; the fair value of the security can then be found by calculating probability-weighted present values of all cash flows. This is the approach used in most fixed income option pricing models. Semivariance is another summary measure that one can use (see the sketches after this list).
• Fat-tailed return distributions, or excess kurtosis: the return distribution of equities is sometimes proxied by the normal distribution, yet extreme returns occur more frequently than the normal bell curve implies. Historical estimates of volatility may nonetheless fail to incorporate the impact of such extreme events, making standard deviation control a potentially incomplete form of risk control. Currencies and emerging market equities are two asset classes particularly prone to these extreme events.
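As a toy illustration of the probability-tree approach in the first bullet (the payoffs, probabilities and discount rate are all hypothetical):

```python
# Toy one-period probability tree: fair value as the probability-weighted
# present value of the payoffs. All numbers are hypothetical.
payoffs = [110.0, 95.0]   # security value in the up and down states
probs = [0.6, 0.4]        # assumed state probabilities
r = 0.05                  # one-period discount rate

fair_value = sum(p * v for p, v in zip(probs, payoffs)) / (1 + r)
print(f"Fair value: {fair_value:.2f}")
```

And a sketch of two complements to standard deviation raised above, semivariance (downside deviation) and excess kurtosis, computed on simulated fat-tailed returns:

```python
# Semivariance penalises only shortfalls below a target; excess kurtosis
# flags fat tails. Data are simulated from a fat-tailed t-distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
returns = 0.01 * rng.standard_t(df=4, size=2500)   # fat-tailed daily returns

target = 0.0
shortfalls = np.minimum(returns - target, 0.0)
downside_dev = np.sqrt(np.mean(shortfalls ** 2))   # root semivariance

print(f"Standard deviation: {returns.std(ddof=1):.4f}")
print(f"Downside deviation: {downside_dev:.4f}")
print(f"Excess kurtosis:    {stats.kurtosis(returns):.2f}")  # > 0: fatter than normal
```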
All such cases underscore the importance of relying on risk controls beyond those that simply limit volatility. For example, one might impose position limits to protect against events that are completely unprecedented in historical data.
Value at risk (VAR) has gained much popularity in recent years as an alternative way of expressing the risk in a portfolio. VAR measures risk in money terms rather than return terms. Roughly speaking, it answers the question: what wealth level can I be 95% sure to exceed in my portfolio? In cases where returns are symmetrically and approximately normally distributed, the VAR can be reasonably proxied using the formula:

Mean – 1.65 × standard deviation

Because it relies on the same underlying risk measures, the VAR approach is subject to the same benefits and pitfalls as traditional approaches to risk.
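A sketch of the formula in money terms, with hypothetical inputs: the 5th-percentile return is converted into a wealth floor and a corresponding loss figure.

```python
# Sketch of parametric 95% VAR under approximate normality; the expected
# return, volatility and portfolio value are hypothetical.
mean, std = 0.12, 0.20            # expected annual return and volatility
portfolio_value = 1_000_000

worst_return = mean - 1.65 * std                     # 5th-percentile return
wealth_floor = portfolio_value * (1 + worst_return)  # exceeded with 95% confidence
var_95 = portfolio_value - wealth_floor              # VAR expressed as a loss

print(f"Wealth floor: {wealth_floor:,.0f}")
print(f"95% VAR:      {var_95:,.0f}")
```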
Quantitative risk control techniques and tools have developed significantly since quantitative investing came into being more than 40 years ago. They provide a useful way of gauging the overall level of risk in a portfolio and are now gaining wide acceptance, even among investors who don’t specialise in quantitative techniques. Partly, this has occurred through the development of popular commercial risk management packages. Most users can effectively exploit these packages without knowing all of the technical details. However, it is important to understand the assumptions that underlie these models in order to better assess their overall strengths and weaknesses.
John Capeci is managing partner, research, and Peter Rathjens is managing partner, chief investment officer, with Arrowstreet Capital in Cambridge, Massachusetts