Moody’s credit ratings are subjective opinions of the ability of individual entities – companies, countries, financial institutions – to pay back debt. Our sovereign ratings, in particular, have established a strong record of effectively ranking credit risk, a record detailed in numerous published reports (see for example, Moody’s Special Comment, Sovereign Default and Recovery Rates, 1983-2013, April 2014) and acknowledged by the IMF in its ‘Global Financial Stability Report’ of October 2010. 

How do you measure rating accuracy? 

The most direct test of a rating system measures how well its ratings line up against realised credit losses, and historical data show that countries Moody’s rated higher defaulted significantly less frequently than lower-rated countries. Moreover, no sovereign has ever defaulted within a year of holding an investment-grade rating, and none has defaulted within five, or even 10, years of holding a Aaa sovereign rating. 

The graph ‘sovereign default rates’ on page 67 shows the one- and five-year default rates by rating category. For example, historically, sovereigns rated Ba have had a 0.7% chance of defaulting within a year and a 6.1% chance of defaulting within five years. As should be the case, default risk increases sharply as one moves down the rating scale. Over a 12-month horizon, a B-rated sovereign has been almost five times more likely to default than a Ba sovereign, while a Caa-C sovereign has been 58 times more likely to default than a Ba sovereign. 
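The relative-risk multiples above are simple ratios of the category default rates. A minimal sketch in Python: the Ba rate is the 0.7% figure quoted in the text, while the B and Caa-C one-year rates are hypothetical values chosen only to be consistent with the stated multiples, not figures from the report.

```python
# One-year default rates by rating category.
# Ba (0.7%) is quoted in the text; the B and Caa-C values are
# hypothetical, chosen only to reproduce the stated multiples.
one_year_default_rate = {
    "Ba": 0.007,
    "B": 0.034,      # hypothetical
    "Caa-C": 0.406,  # hypothetical
}

def risk_multiple(category, baseline="Ba"):
    """How many times more likely to default than the baseline category."""
    return one_year_default_rate[category] / one_year_default_rate[baseline]

print(round(risk_multiple("B"), 1))   # ~5x, i.e. "almost five times"
print(round(risk_multiple("Caa-C")))  # 58x
```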

Ratings demonstrate similar power in rank ordering default risk over the longer term. Over five years, a Baa sovereign has been 1.3 times more likely to default than an A sovereign, a Ba sovereign 6.1 times more likely, a B sovereign 12.4 times more likely, and a Caa-C sovereign almost 53 times more likely than an A sovereign.

Over the past 30 years, Moody’s sovereign ratings have been even more accurate in discriminating defaulters from non-defaulters than our corporate ratings, whose record is generally recognised as the industry gold standard.

We also measure the effectiveness of the rating system according to the average position (AP) of defaulters – that is, by the location of the average defaulter in the distribution of credit ratings. A more powerful rating system will have most defaulters rated below most non-defaulters, meaning its AP will approach 100%. The typical sovereign defaulter carried a lower rating than 95% of all other sovereign ratings one year in advance of default (put differently, the average one-year AP for sovereigns was 95%); by comparison, the average one-year AP for corporate issuers was 87%. At the five-year horizon, the average sovereign AP was 85%, while the corporate AP was 81%.
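The AP statistic can be illustrated with a short sketch: map ratings to a numeric scale (higher = better credit) and, for each defaulter, count the share of the rating universe rated above it. This is an illustrative simplification – tie handling, self-exclusion and sample construction are assumptions here, not Moody’s published methodology.

```python
def average_position(all_ratings, defaulter_ratings):
    """Average share of the rating universe rated above each defaulter.

    all_ratings: numeric ratings for every issuer (higher = better credit).
    defaulter_ratings: ratings, one year before default, of the issuers
    that went on to default. Simplified: ties and self-exclusion ignored.
    """
    aps = []
    for d in defaulter_ratings:
        rated_higher = sum(1 for r in all_ratings if r > d)
        aps.append(rated_higher / len(all_ratings))
    return sum(aps) / len(aps)

# Toy universe of five issuers on a numeric scale; the sole defaulter
# carried the lowest rating, so its AP is 4/5 = 0.8.
print(average_position([5, 4, 3, 2, 1], [1]))
```

A perfectly discriminating system would rate every eventual defaulter below every non-defaulter, driving the AP towards 100%.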

But hasn’t the market been much more accurate than Moody’s? 

No. For the set of sovereigns that have both a Moody’s rating and a CDS spread (which can be mapped into pseudo-rating categories), the average one-year AP for the Moody’s ratings was 97.2%, which is, in fact, the same as the AP from the CDS market.

Besides simple accuracy, investors also value rating stability. We define ratings volatility as the average number of notches a rating will change over a 12-month period.
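On a numeric notch scale, this volatility measure reduces to the mean absolute rating change over the window. A minimal sketch, where the notch mapping and the input format are assumptions for illustration:

```python
def rating_volatility(rating_changes):
    """Average absolute notch movement over a 12-month window.

    rating_changes: list of (start_notch, end_notch) pairs, one per
    issuer, on a numeric notch scale (illustrative input format).
    """
    moves = [abs(end - start) for start, end in rating_changes]
    return sum(moves) / len(moves)

# Three issuers: unchanged, upgraded two notches, downgraded one notch.
print(rating_volatility([(10, 10), (10, 12), (8, 7)]))  # 1.0
```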

[Graph: sovereign default rates – one- and five-year default rates by rating category]

Since 1983, sovereign ratings have been more stable than corporate ratings, with about half their volatility. More importantly, on a sample matched to the bond market implied ratings, the volatility of Moody’s sovereign ratings has been about one-twelfth that of the market-implied ratings.

Why the UniCredit report is flawed

The authors of the UniCredit report make no attempt to empirically challenge the facts that demonstrate the strong predictive power and stability of Moody’s sovereign ratings. Instead, they take issue with what they call the ‘subjective’ component of our ratings approach. However, the distinction they draw between ‘objective’ and ‘subjective’ is entirely spurious. Essentially, they regress credit ratings against a selection of economic variables. They then label the portion of ratings their regression can explain ‘objective’ and the portion it cannot explain ‘subjective’ – when, in fact, both are just transformations of ratings, and ratings are always subjective opinions of relative credit risk.

At best, their regression analysis revealed the average weights our rating committees subjectively placed on a particular set of variables during the sample period. The authors nevertheless call this component ‘objective’, and when they find that this ‘objective’ component of our ratings was powerful in discriminating risk, they do not seem to realise that they are in fact validating the subjective decisions of our credit analysts.

To demonstrate the point they would like to make, the authors would need to present a regression model explaining historical defaults (not ratings) that predicts future defaults out of sample better than Moody’s ratings do – which, of course, they cannot. Moody’s sovereign credit ratings have, by every empirical measure, proven themselves powerful measures of credit risk. The UniCredit study in no way challenges that conclusion.

Albert Metz is managing director in Credit Policy Research at Moody’s