What Do Rating Agency Announcements Signal? Confirmation or New Information




What Do Rating Agency Announcements Signal? Confirmation or New Information

Sander J.J. Konijn (a,b) and Herbert A. Rijken (a)
(a) Department of Finance, VU University Amsterdam
(b) Tinbergen Institute, Amsterdam

Abstract

Prior surveys and empirical research suggest that rating agencies respond slowly to changes in underlying credit quality. We consider rating changes, watchlist additions and outlook assignments from 1991 to 2005. Like previous research, we find significant negative pre-announcement abnormal return reactions, potentially followed by positive post-announcement corrections. These patterns suggest it may be vital to account for alternative measures of creditworthiness used by market participants. We estimate point-in-time default prediction models to make a distinction between confirmed and unconfirmed announcements. We obtain no significant positive post-announcement returns if announcements are out of line with pre-announcement point-in-time credit model tendencies. Significant positive post-announcement returns typically materialize when announcements are in line with pre-announcement point-in-time credit quality deteriorations. In turn, pre-announcement return reactions become less severe when downgrades are preceded by watchlist additions. We finally find that downgrade announcement abnormal return reactions are predominantly related to the severity of the downgrade signal. This refers not only to the number of notches downgraded: returns are also larger when we either observe a concomitant rating change by Standard & Poor's in close range, or the company is newly added to the watchlist when it is downgraded.

Key words: Rating agencies, Rating changes, Watchlist, Outlook, Event study, Stock returns, Default prediction, Credit-scoring.

JEL classification: tbw.

Corresponding author: Herbert A. Rijken, FEWEB/FIN, VU University Amsterdam, De Boelelaan 1105, 1081 HV Amsterdam, The Netherlands. Phone: +31 20 59??????; fax: +31 20 59??????; e-mail: hrijken@feweb.vu.nl.

1 Introduction

Information disclosure by rating agencies has been examined in two alternative ways. A first approach looks at the accuracy and consistency of rating levels, and at the timeliness of rating changes vis-à-vis alternative measures of creditworthiness. Ederington et al. (1987) and Perraudin and Taylor (2004) relate bond ratings to yield spreads and bond prices, respectively. Altman and Rijken (2005) and Löffler (2005) shed light on the underlying rating process, trying to explain why rating changes are rare, serially dependent and predictable using borrower fundamentals. Surveys by Ellis (1998) and Baker and Mansi (2002) reveal that market participants believe that agency ratings adjust slowly to changes in corporate credit quality.

Secondly, one may examine return reactions surrounding rating agency announcements. Using daily stock return data, Holthausen and Leftwich (1986) obtain significant abnormal return reactions surrounding downgrades and watches for downgrade and upgrade. Hand, Holthausen and Leftwich (1992) find significant excess returns when watches for downgrade and downgrades are announced. Goh and Ederington (1993, 1999) obtain significant negative return reactions surrounding downgrades, but small and insignificant responses in case of upgrades. When larger event windows are considered, previous research typically reveals that announcements tend to be preceded by large and significant return responses.

Instead of equity markets, several authors look at bond market reactions; see Norden and Weber (2004) for a brief outline. The main disadvantage of bond data in event study analysis is potential illiquidity. Corporate bonds are frequently bought and held to maturity, such that actual trading volume can be quite low, see Alexander, Edwards and Ferri (1998). Nonetheless, the overall pattern is quite similar to equity markets. Studies find significant (pre-)announcement window abnormal returns (WAR) surrounding downgrades and watches for downgrade. The impact of positive announcements seems to be lower.

Recent studies have given more attention to credit default swaps (CDS). CDSs are directly related to credit risk and therefore seem very suitable to determine the importance of rating agency announcements. What is more, Blanco, Brennan and Marsh (2005) show that most pricing-relevant information flows from CDS to bond markets.

Conditioning on rating events, Hull, Predescu and White (2004) reveal that index-adjusted CDS spreads (i.e., adjusted by a spread index of similarly rated CDSs) widen significantly before downgrades, watches for downgrade and outlooks. Considering the announcement window itself, the authors only find a significant response in case of watches for downgrade. Norden and Weber (2004) study the response of equity and CDS markets to watchlist additions and rating changes. The authors obtain significant announcement effects in both markets with respect to both event types. Moreover, their results once again indicate that both markets anticipate downgrades as well as watches for downgrade. In general, stock markets anticipate downgrades more steadily than CDS markets.

Inspired by previous empirical findings, we combine both strands of research to more accurately assess the information content of Moody's rating announcements. Significant pre-announcement abnormal returns suggest it may be vital to account for alternative measures of creditworthiness used by market participants. Because the latter are unobserved, we estimate point-in-time default prediction models, using only publicly available information. Point-in-time models should by definition respond quickly to changes in underlying fundamentals. This in turn allows us to identify announcements that might predominantly confirm an underlying tendency. We subsequently take an event study approach to determine whether we observe a different return response surrounding the events considered. Our sample period runs from 1991 to 2005. Besides rating changes and watchlist additions, we consider outlook assignments as well.

Unconditional estimation results confirm that sharp negative equity returns already materialize prior to negative rating announcements. In case of downgrades, and somewhat less so in case of watches for downgrade, negative announcement and pre-announcement window abnormal returns (WAR) are partially offset by positive average post-announcement abnormal returns. The latter is in line with Glascock, Davidson and Henderson (1987), Norden and Weber (2004) and, considering bond market reactions, Heinke and Steiner (2001).

We obtain no significant positive post-announcement returns if announcements do not confirm a pre-announcement point-in-time credit model tendency. This suggests that new information is fully absorbed once it is revealed.

On the other hand, significant positive post-announcement returns typically materialize when announcements are in line with pre-announcement point-in-time credit quality deteriorations. In this case, formal rating agency announcements might predominantly resolve underlying uncertainty, which puts the market at ease.

In case of downgrades, we find that pre-announcement return reactions are mostly related to watchlist precedence. Pre-announcement negative abnormal returns are less severe when downgrades are preceded by watchlist additions. Immediate significant negative abnormal return reactions are already obtained once the watchlist addition is announced. We finally find that announcement window abnormal returns are predominantly determined by the severity of the downgrade signal. This refers not only to the number of notches downgraded: returns are also stronger when we either observe a concomitant rating change by Standard & Poor's in close range, or the company is newly added to the watchlist when it is downgraded.

The remainder of this paper is organized as follows. To better understand the rating process, section 2 gives an overview of Moody's watchlist and outlook assignments and resolutions (and resolution periods). Section 3 presents estimated window abnormal returns related to rating changes, watchlist additions and outlook assignments. This section is subdivided into subsections on unconditional WARs and conditional WARs; in the latter case we condition on attributes of individual announcements. Section 4 concludes.

2 Signaling Creditworthiness: Rating, Watchlist and Outlook Assignments

Before examining the information content of announcements, we first look more closely at the rating process itself, which is described in Moody's (2004) (see also Standard & Poor's, 2005). We consider the total rating history of companies, obtained from the July 2005 version of Moody's DRS database. An extended version of watchlist and outlook information was provided separately by Moody's Investors Service.

Rating changes are the ultimate means for rating agencies to publicly express their opinion regarding relative changes in the underlying creditworthiness of rated issuers, which in turn reflects the issuers' ability and willingness to meet their contractual obligations. More recently, Moody's started to supplement its rating assignments with rating reviews and outlooks. Though Moody's already published watchlist assignments back in 1985, they only became a formal part of the credit quality designation process in October 1991. Somewhat later, in January 1995, Moody's also started to assign rating outlooks. As a result, our data set stretches from October 1991 to February 2005.

In assigning ratings, agencies make use of a through-the-cycle rating methodology. That is, ratings are only revised when there is a significant, permanent, long-term component in the underlying credit quality change, see also Cantor and Mann (2003). Watchlist and outlook assignments seem to deal with the continuous struggle of agencies to strike a balance between rating timeliness and rating stability.

Outlook and watchlist assignments have different implications, see Cantor and Hamilton (2004). A rating outlook represents an opinion regarding the likely direction of an issuer's credit quality. Watches, also called rating reviews, can be considered a subset of rating outlooks, giving a much stronger indication of a possible future rating change. Looking at the fractions of rating changes preceded by a watch or outlook confirms this difference in interpretation. From October 1991 to February 2005, 36 (31) percent of observed downgrades (upgrades) were preceded by a watch for downgrade (upgrade). The corresponding numbers in case of negative (positive) outlooks are 16 (10) percent, respectively. Cantor and Hamilton (2004) underline the importance of watches and outlooks in terms of signaling. They show that it becomes much harder to predict future rating changes from past rating changes once one controls for watchlist and outlook status.

2.1 Watchlist and Outlook: Resolution and Resolution Period

Table 1 provides the total number of watchlist and outlook assignments within our sample period. We focus on those assignments with a clearly indicated direction, in particular positive and negative outlooks and watches for upgrade or downgrade.

Insert Table 1

It is clear that from 1995 onwards the number of negative watchlist and outlook indications is always larger than the number of positive ones. The ratio of positive to negative watches (outlooks) closely resembles the ratio of upgrades to downgrades. Watchlist and outlook assignments are by and large equally divided between investment- and speculative-grade issuers, with the exception of watches for downgrade. In the latter case the number of assignments in the investment-grade category is almost twice as large. This is in line with the fraction of speculative-grade issuers in the total number of issuers, which ranges from 30 to 39 percent in the period considered. However, the result is surprising if one recognizes that downgrades are almost equally divided between investment- and speculative-grade issuers.

Table 2 gives an overview of watchlist and outlook resolutions. Patterns are similar across positive and negative watchlist (outlook) assignments. In more than two-thirds of the cases in which a company was added to the watchlist, it eventually experienced a rating change in the indicated direction. Rating changes in the opposite direction are rare, although, while still relatively infrequent, they are observed more often in case of outlooks. The fraction of outlooks that directly led to rating changes in the indicated direction, somewhat more than one quarter, is much smaller than for watchlist resolutions. Both the intended and the empirical resolution periods of outlooks exceed those of watchlist assignments. As a result, it comes as no surprise that a significant part of the outlooks are still unresolved at the end of our sample period (i.e., right censored).

Insert Table 2

The second part of the table shows that the subdivision of watchlist resolutions between investment- and speculative-grade issuers reflects that of the initial assignments, see table 1. This is not true in case of outlooks. As far as outlooks are concerned, this is, in a sense, only part of the story. The lower part of table 2 indicates that about half of the outlooks that did not lead to an immediate rating change were succeeded by a watch or outlook in a similar direction. The numbers in parentheses next to these counts denote the cases that led to a rating change in a similar direction. One may consider outlooks followed by a watch, and subsequently followed by a rating change in a similar direction, as resolved. Incorporating these cases, the resolution fractions of positive and negative outlooks increase to about 0.42, which is still smaller than for watchlist assignments. Taking this broader view, the speculative- and investment-grade subdivision at resolution (in the intended direction) more closely resembles the corresponding subdivision at inception.

We finally consider resolution periods, defined as the length of time from inception to outlook or watchlist resolution. Moody's (2004) states that an outlook should be interpreted as an opinion regarding the likely direction of a rating change over the medium term, typically 18 to 36 months. Watches, on the other hand, indicate that the rating is under review for a possible rating change in the short term, usually within 90 days. (Standard & Poor's (2005) uses a similar distinction: outlooks are supposed to assess the potential direction of a long-term credit rating and are typically resolved within 6 months to 2 years, whereas credit watches indicate the potential direction of a rating change in a short- or long-term rating and are normally completed within 90 days.) Figure 1 provides frequency distributions of watchlist and outlook resolution periods, excluding right-censored cases. The numbers on top of the bars refer to the percentage of cases within each bin that experienced a rating change in the intended direction. For example, looking at the negative outlook plot, 56 out of 200 (i.e., 28 percent) negative outlooks with a resolution period between 500 and 600 days ended up in a rating downgrade.

Insert Figure 1

The plotted distributions of positive and negative outlooks are broadly similar in shape. The largest dissimilarity seems to be the relatively large number of negative outlooks resolved within 100 days. For these cases, the fraction of negative outlooks leading to a rating downgrade is relatively high as well. Dissimilarities between watches for downgrade and upgrade are more pronounced. Resolution periods of watches for downgrade are relatively more concentrated at the lower end of the frequency distribution. Short resolution periods are associated with slightly higher actual downgrade fractions as well.

3 Measuring Window Abnormal Returns (WAR)

3.1 Data Set

Given the previous insights, we next examine WARs surrounding rating agency announcements. To arrive at our sample we combine data from several sources. Companies' rating histories are obtained from the July 2005 version of Moody's DRS database. An extended version of watchlist and outlook information is provided separately by Moody's Investors Service. Standard & Poor's issuer ratings are obtained from the June 2005 version of Standard & Poor's CREDITPRO 7.0 database. (We did not collect data on smaller players, e.g., Fitch and Duff & Phelps. Norden and Weber (2004) find that market responses to rating events by Fitch are considerably weaker than those to events by Standard & Poor's and Moody's.) Daily stock returns and stock index return data are taken from Thomson Financial / Datastream. These refer to the theoretical growth in value of a share holding position over a one-day period, assuming dividends are re-invested to purchase additional units of stock at the closing price applicable on the ex-dividend date, ignoring tax and reinvestment charges.

As a first step we matched Moody's rated companies with U.S. companies that are, or once were, traded on either the NYSE or the NASDAQ.

To facilitate our own ranking of companies in terms of default probability later on, we narrow our sample further to those companies for which we were able to obtain company-specific variables. The latter are obtained from Standard & Poor's COMPUSTAT database. (We consider non-financial companies only. COMPUSTAT data on financials, like banks, are frequently unavailable. Moreover, in contrast with non-financial companies, financial companies are highly regulated, which makes them difficult to compare with non-financials.) Our final sample period runs from October 1991 to February 2005. Matching the different data sources leaves us with 1099 U.S. companies.

Calculating a rating transition matrix reveals that most rating activity is concentrated along the main diagonal. This confirms that ratings most frequently change only gradually (i.e., notch by notch). Calculating a similar transition matrix with respect to Moody's entire rated universe reveals that the number of rating changes at both the upper and lower ends of the rating spectrum is relatively small. This is especially true for upgrades at the upper end of the rating scale, which might have been caused by the exclusion of financial companies. Apart from this, the sample seems to be a reasonable reflection of the type of rating changes observed within the sample period. (These results are available from the corresponding author upon request.)

3.2 Unconditional WAR

To measure the impact of rating agency announcements we make use of a common approach in event study analysis, relating the return of company i, R_{it}, to the return of the market portfolio, R_{mt} (i.e., the market or one-factor model):

R_{it} = \alpha_i + \beta_i R_{mt} + u_{it}   (1)

(Though a multifactor model could provide a better fit, Campbell, Lo and MacKinlay (1997) note that the gains from employing multifactor models in event study analysis are limited.) The model is estimated using a 400-trading-day window, equally divided between the pre- and post-event period. (Results tend to be insensitive to the length of the estimation window.) We take such an approach because it is not uncommon for events to be surrounded by other announcements. Indeed, the very existence of watchlist and outlook assignments might be explained by a willingness to disseminate some information as early as possible. By using a reasonably large estimation window, and considering data from both the pre- and post-event periods, estimates will be less susceptible to announcements surrounding the actual event. Moreover, including post-event data allows us to incorporate potential changes in return variability as a result of the announcement itself.
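As an illustration of equation (1) and the surrounding window choices, the market-model estimation and the resulting daily abnormal returns could be computed roughly as follows. This is only a sketch, not the authors' code: the data layout (return series indexed by trading day), the exact positioning of the 400-day estimation window around the 60-day event window, and the exclusion of the event window from the estimation sample are all assumptions.

```python
import numpy as np
import statsmodels.api as sm


def market_model_abnormal_returns(firm_ret, mkt_ret, event_pos,
                                  est_half=200, event_half=30):
    """Estimate R_it = alpha_i + beta_i * R_mt + u_it (eq. 1) on a 400-trading-day
    window, split equally before and after the event, and return daily abnormal
    returns for the event window (-30, ..., +30).

    firm_ret, mkt_ret: equal-length 1-D arrays of daily returns.
    event_pos: integer position of the announcement day in those arrays."""
    firm_ret = np.asarray(firm_ret, dtype=float)
    mkt_ret = np.asarray(mkt_ret, dtype=float)

    # estimation sample: 200 days before and 200 days after the event window
    # (the event window itself is excluded here -- an assumption)
    pre = np.arange(event_pos - event_half - est_half, event_pos - event_half)
    post = np.arange(event_pos + event_half + 1,
                     event_pos + event_half + 1 + est_half)
    est_idx = np.concatenate([pre, post])
    event_idx = np.arange(event_pos - event_half, event_pos + event_half + 1)

    X = sm.add_constant(mkt_ret[est_idx])
    alpha, beta = sm.OLS(firm_ret[est_idx], X, missing='drop').fit().params

    # abnormal return = realized return minus market-model prediction
    return firm_ret[event_idx] - (alpha + beta * mkt_ret[event_idx])
```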

We consider a 60-trading-day event window to estimate excess returns surrounding Moody's announcements. In line with other studies, we find that the overall impact of rating agency announcements is largest in case of negative announcements. Due to space considerations we therefore predominantly report results on the latter, commenting on positive announcements only when appropriate. The unshaded columns of table 3 give an overview of window abnormal returns (WAR) related to negative announcements. The watchlist and outlook sample composition seems reasonable given the information provided in table 1. The skewed division of watches for downgrade between investment- and speculative-grade ratings is comparable to table 1.

Insert Table 3

In case of downgrades we obtain an average total event WAR of -1.6 percent. The largest event WAR is associated with watches for downgrade, -5 percent. More than two-thirds of these watches eventually wind up in a rating downgrade. Considering the relatively short resolution period as well, the market seems to be fully aware of the seriousness of this signal. The return impact of negative outlooks is much less severe. This is in accordance with a longer resolution period and a lower likelihood of an eventual rating change. Excess return patterns seem to be somewhat similar across announcement types. A significant part of abnormal returns materializes prior to the announcement day window, subsequently being followed by a correction in the opposite direction, especially in case of downgrades and negative outlooks.

The impact of positive announcements turns out to be small and insignificant. However, watches for upgrade are a clear exception to this rule, with a total event WAR of almost 4 percent. Like their negative counterparts, a significant part of this excess return materializes prior to and within the announcement day window. However, we do not obtain a significant post-announcement reaction. All information seems to be incorporated once the announcement has been made.
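The window abnormal returns reported in table 3 aggregate these daily abnormal returns over sub-windows and average them across events. A minimal sketch, assuming a panel of event-time abnormal returns with one row per event and integer columns running from -30 to +30; the sub-window boundaries and the simple cross-sectional t-test are illustrative choices, not necessarily the authors' exact specification.

```python
import pandas as pd
from scipy import stats


def window_abnormal_returns(ar_panel, windows=None):
    """Cumulate daily abnormal returns into window abnormal returns (WAR) per
    event and test the cross-sectional mean against zero.

    ar_panel: DataFrame, one row per announcement, columns = event days -30..+30."""
    if windows is None:
        # hypothetical sub-windows: pre-announcement, announcement day, post
        windows = {'pre (-30,-2)': (-30, -2),
                   'announcement (-1,+1)': (-1, 1),
                   'post (+2,+30)': (2, 30),
                   'total (-30,+30)': (-30, 30)}
    rows = {}
    for name, (lo, hi) in windows.items():
        war = ar_panel.loc[:, lo:hi].sum(axis=1).dropna()   # WAR per event
        t_stat, p_val = stats.ttest_1samp(war, 0.0)
        rows[name] = {'mean WAR': war.mean(), 't-stat': t_stat, 'p-value': p_val}
    return pd.DataFrame(rows).T
```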

We note that the results reported so far are possibly contaminated: there might be concomitant announcements surrounding the event considered. One could think of a watch or outlook preceding the actual rating change within close range. To deal with the problem of contamination we exclude observations if they are preceded by an announcement in a similar direction within the pre-announcement event window. In case of watches and outlooks we moreover exclude observations if we observe a concomitant rating change at the announcement day. We did not follow the same practice the other way round: rating changes are considered to be the ultimate signaling device, and it seems odd to exclude a rating change because of an outlook assignment at the same date. (In a cross-sectional regression later on we determine the additional impact of watchlist and outlook assignments on excess returns within the announcement day window as far as rating changes are concerned.)

Figure 2 graphically depicts estimated cumulative abnormal returns of the uncontaminated samples, including positive announcements. The gray columns in table 3 report the corresponding WAR statistics with respect to negative announcements. In general, not many observations are lost in case of positive and negative watchlist announcements. This implies that they are relatively stand-alone announcements. Observational losses within the other categories are relatively more pronounced, about 1/3 (1/5) in case of negative (positive) outlook and downgrade (upgrade) announcements.

Insert Figure 2

In broad lines, results are similar to the contaminated samples, especially so in case of positive announcements, though there are some differences. Returns prior to the announcement day window decrease in magnitude and significance across all announcement types, but do not vanish. The significant post-announcement return in case of watches for downgrade diminishes both in magnitude and significance. The largest difference is obtained in case of negative outlooks. Results in the first interval of the event window, and the positive correction afterwards, are driven by concomitant downgrades at announcement dates. The latter are responsible for 80 percent of the total sample loss of 1/3. Abnormal returns immediately prior to and within the announcement day window remain significant.
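The contamination filter described above amounts to a simple pre-screen of each event against the issuer's announcement history. A sketch under stated assumptions: the events table has columns issuer, date, kind and direction (names are hypothetical), and the 30-trading-day pre-announcement window is approximated with calendar days, which the text does not specify.

```python
import pandas as pd


def uncontaminated_events(events, lookback_days=45):
    """Keep events that are (i) not preceded, for the same issuer, by another
    announcement in the same direction within the pre-announcement window and
    (ii) for watches/outlooks, not accompanied by a same-day rating change.

    events: DataFrame with columns issuer, date (datetime), kind
    ('rating', 'watch', 'outlook') and direction (+1 or -1)."""
    keep = []
    for _, ev in events.iterrows():
        same_issuer = events[events['issuer'] == ev['issuer']]
        prior = same_issuer[
            (same_issuer['direction'] == ev['direction'])
            & (same_issuer['date'] < ev['date'])
            & (same_issuer['date'] >= ev['date'] - pd.Timedelta(days=lookback_days))
        ]
        same_day_rating = same_issuer[
            (same_issuer['date'] == ev['date']) & (same_issuer['kind'] == 'rating')
        ]
        contaminated = len(prior) > 0 or (
            ev['kind'] in ('watch', 'outlook') and len(same_day_rating) > 0
        )
        keep.append(not contaminated)
    return events[pd.Series(keep, index=events.index)]
```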

3.2.1 Robustness

As a robustness check, we first determine whether we have to account for beta shifts surrounding rating agency announcements. If default risk is systematic it will be priced, see Denis and Denis (1995) and Vassalou and Xing (2004). On the other hand, the likelihood of default might be primarily related to idiosyncratic factors, see Asquith, Gertner and Scharfstein (1994), Opler and Titman (1994), and Dichev and Piotroski (2001). To test for beta stationarity surrounding agency announcements we use the testing procedure of Impson, Glascock and Karafiath (1992), which is briefly described in the appendix. Table 4 reports a unanimous rejection of systematic beta shifts surrounding agency announcements. This result in itself does not necessarily imply that default risk is predominantly idiosyncratic. (For example, if rating agencies are relatively slow in disseminating information, a split-up based on announcement times might be incorrect: changes in required rates of return could have materialized before the actual announcement.) However, for our purposes it at least implies that we do not have to account for beta shifts.

Insert Table 4

Secondly, Corhay and Tourani Rad (1996) indicate that results may be significantly affected by inefficiencies in the estimation procedure. We therefore also allowed for a (skewed-t) GARCH(1,1) specification of the normal performance return model, see also Abad-Romero and Robles-Fernandez (2006). We did not find significant differences in estimated WARs. (The parameters are estimated by maximum likelihood using the G@RCH package of the Ox programming language, see Laurent and Peters (2002). These results are available from the corresponding author upon request.) For comparability reasons we stick with ordinary estimation of WARs in the remainder.
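The GARCH(1,1) robustness check can be reproduced in outline with the Python arch package (the paper itself used the G@RCH package for Ox). The sketch below assumes arch's least-squares mean specification with the market return as exogenous regressor and a skewed-t error distribution; treat the exact call signature as an assumption to verify against the installed version.

```python
import numpy as np
from arch import arch_model


def garch_market_model_residuals(firm_ret, mkt_ret):
    """Market model with skewed-t GARCH(1,1) errors as an alternative to plain
    OLS estimation of equation (1). Returns the mean-equation residuals
    (abnormal returns) and the fitted conditional volatility, both in decimals."""
    y = 100 * np.asarray(firm_ret, dtype=float)   # percent returns help the optimizer
    x = 100 * np.asarray(mkt_ret, dtype=float).reshape(-1, 1)
    model = arch_model(y, x=x, mean='LS', vol='GARCH', p=1, q=1, dist='skewt')
    res = model.fit(disp='off')
    return res.resid / 100, res.conditional_volatility / 100
```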

Finally, to make our results less susceptible to outliers, we consider two nonparametric tests, which are fully specified in the appendix. The generalized sign test examines whether, within a specific event window, the number of stocks with positive WARs differs significantly from the number expected in the absence of abnormal performance. The latter is based on the average fraction of positive abnormal returns observed in the estimation period. In case of negative announcements we would expect fewer positive WARs than expected under the null, leading to a negative statistic. As an alternative, we transform the time series of residuals into their respective ranks. The nonparametric rank test examines whether the mean rank obtained within a specific window differs significantly from the average rank of the time series as a whole.

The last two lines of table 5 reveal that both nonparametric tests are broadly in line with the prior results. For example, in case of watches for downgrade, the sign tests clearly indicate that the reported number of negative WARs is significantly larger than expected. The same holds true with respect to the announcement day window and the window immediately preceding it. The rank tests in turn confirm that the average rank obtained in these windows is significantly lower than the overall mean rank.

Overall, we confirm that sharp negative abnormal returns already materialize prior to negative rating events. In case of downgrades, and somewhat less so in case of watches for downgrade, negative announcement and pre-announcement window abnormal returns are partially offset by positive post-announcement abnormal returns. The latter is in line with Glascock et al. (1987), Norden and Weber (2004) and, considering bond market reactions, Heinke and Steiner (2001).

3.3 Conditional WAR

3.3.1 Investment- versus Speculative-Grade

Up till now, announcements related to specific rating events have been treated identically. Jorion and Zhang (2005) underline that return reactions to rating announcements may be stronger once ratings decline.

The structural model of Merton (1974) implies that changes in equity value, due to underlying changes in default probability, are larger once default probability is at a higher level to start with. Moreover, differences in default probabilities between adjacent rating classes become larger once ratings decline, see Cantor, Hamilton, Ou and Varma (2006).

We first determine whether the (uncontaminated) results in table 3 are predominantly driven by investment- or speculative-grade issuers. This is not only interesting in itself; it is also important to know whether differences between alternative subsamples should predominantly be ascribed to different rating compositions. To focus more clearly on differences between pre-announcement, post-announcement and announcement window returns we adjust our windows accordingly. Table 5 reveals that speculative-grade total event WARs are generally larger than their investment-grade counterparts. WAR patterns across specific windows are similar for both rating categories in case of negative outlooks. Looking at watches for downgrade and rating changes instead, we obtain larger (pre-)announcement WARs for speculative-grade issuers. Only in case of watches for downgrade are speculative-grade post-announcement WARs larger as well, such that differences between total event WARs are less pronounced.

Insert Table 5

As a robustness check, we again consider asymptotic versions of two nonparametric tests, which are outlined in the appendix. The Mann-Whitney U test first orders the combined sample of WARs in each window, and subsequently compares the mean ranks of the populations that are to be compared. The Kolmogorov-Smirnov test examines the maximum distance between the underlying empirical distribution functions. Kolmogorov-Smirnov statistics at the lower end of table 5 almost uniformly reject similarity of WAR distributions across subsamples, though there are differences in terms of significance levels. Mann-Whitney U tests only confirm rank differences in case of rating change announcements, which casts some doubt on testing power.
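Comparing WAR distributions across the investment- and speculative-grade subsamples with these two tests is straightforward with scipy; a short sketch with hypothetical input arrays (one WAR per event in each subsample).

```python
import numpy as np
from scipy import stats


def compare_war_subsamples(war_a, war_b):
    """Mann-Whitney U test (rank-based comparison of two samples) and two-sample
    Kolmogorov-Smirnov test (maximum distance between empirical CDFs) applied to
    window abnormal returns of two subsamples, e.g. investment vs speculative grade."""
    war_a = np.asarray(war_a, dtype=float)
    war_b = np.asarray(war_b, dtype=float)
    u_stat, u_p = stats.mannwhitneyu(war_a, war_b, alternative='two-sided')
    ks_stat, ks_p = stats.ks_2samp(war_a, war_b)
    return {'MW U': u_stat, 'MW p': u_p, 'KS D': ks_stat, 'KS p': ks_p}
```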

3.3.2 Confirmed versus Unconfirmed Announcements

Keeping the differences across investment- and speculative-grade rating classes in mind, we turn to WAR patterns. Surveys by Ellis (1998) and Baker and Mansi (2002) reveal that market participants believe that agency ratings adjust slowly to changes in corporate credit quality. Recent studies by Norden and Weber (2004) and Hull et al. (2004) document pre-announcement CDS spread responses to negative rating agency announcements, and predictability of rating events given CDS spread changes. Together with insurance companies, and more recently an increasing share of hedge funds, banks represent the majority of market participants in CDS markets, see British Bankers Association (2006). This implies that CDS market participants are generally well informed. As an indication, Blanco et al. (2005) show that most pricing-relevant information flows from CDS to bond markets.

Surveys and empirical results suggest it is vital to explicitly account for opinions held by market participants that are unrelated to possibly sluggish information provisioning by rating agencies. Because these opinions are unobserved, we estimate point-in-time default prediction models, which respond quickly to changes in underlying fundamentals. This allows us to differentiate between announcements that are in line with an underlying tendency (confirmed) and those that are not (unconfirmed).

Some studies have tried to differentiate between expected and unexpected rating changes as well. Hand et al. (1992) relate corporate bond yields to the median yield of bonds within similar rating classes. They find a stronger announcement WAR when watches for downgrade are classified as unexpected. Goh and Ederington (1993) look at the underlying causes of downgrades. Downgrades due to deteriorations or improvements in a firm's prospects or performance, like cash flow generation, are considered to be forward looking, whilst others are classified as backward looking (i.e., expected). The authors find a larger announcement WAR in case of forward-looking announcements. However, they obtain no clear differences in pre- and post-announcement WARs.

Point-in-Time Ratings

To proxy for point-in-time default risk assessment we estimate logit models in a panel data setting along the lines of Altman and Rijken (2005, 2006). (We could have considered a structural or intensity-based model as well; Lando (2004), Chapter 4, gives an overview of alternative statistical techniques.) The likelihood is

L(\beta \mid x, y) = \prod_t \prod_i [\Lambda(\beta^T x_{it})]^{1-y_i} [1 - \Lambda(\beta^T x_{it})]^{y_i}   (2)

where \Lambda(\beta^T x_{it}) = (1 + e^{-\beta^T x_{it}})^{-1}. The (transformed) company-specific variables x_{it} used are: net working capital scaled by total assets (TA), WK/TA; retained earnings, ln(1 - RE/TA); earnings before interest and taxes, ln(1 - EBIT/TA); and market value of equity to book value of liabilities, 1 + ln(ME/BL). Net working capital proxies for short-term liquidity. The other three variables are related to past, current and future profitability. The last variable is of course also a measure of financial leverage. We moreover include the too-big-to-fail proxy Size, defined as total liabilities scaled by the total value of the U.S. equity market, 1 + ln(BL/MKT); the firm's stock return vis-à-vis the equally weighted market return in the 12 months preceding t, AR; and σ(AR), the standard deviation of monthly abnormal returns in the 12 months preceding t.

The definition of y_i depends on the estimated model. We estimate a long-term default prediction model (ldp) and a marginal default prediction model (mdp). In the former case, y_i equals 0 if company i defaults before t + T, where T is set equal to 6 years. In case of the mdp model, y_i is equal to 0 if company i defaults in a future period (t + T_1, t + T_1 + T), where both T_1 and T are set equal to 3 years. (The parameter estimates of the mdp model do not change substantially when T_1 is varied between 3 and 6 years and T is allowed to vary between 1 and 3 years.) As a result, the mdp model focuses exclusively on the long term, in a sense looking through-the-cycle, whilst the ldp model also accounts for short-term default risk.

At the end of each month, company rating data are linked to the company-specific model variables. The estimation period stretches from April 1982, when Moody's began to add rating modifiers within broad rating classes, to the beginning of 2005, resulting in an average time series of 85 months per issuer. Because we make use of a longer estimation period, and there is no need for daily stock data, the credit models are estimated using a sample of 2239 U.S. companies. This is significantly larger than the sample used to estimate WARs surrounding rating agency announcements (i.e., 1099 U.S. companies).
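A minimal sketch of fitting the ldp logit of equation (2) with statsmodels. Column names are hypothetical and the covariate transformations follow the list above; note that the paper codes y_i = 0 for defaulting firms, whereas the sketch uses the equivalent convention of a default indicator equal to 1, so that the fitted probability is the default probability Λ(β'x).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm


def fit_ldp_model(panel, horizon_years=6):
    """Pooled logit for default within `horizon_years` of the observation month.

    panel: firm-month DataFrame with hypothetical columns date, default_date,
    wk_ta, re_ta, ebit_ta, me_bl, bl_mkt, ar_12m, sigma_ar_12m."""
    X = pd.DataFrame({
        'WK/TA': panel['wk_ta'],
        'ln(1-RE/TA)': np.log(1 - panel['re_ta']),
        'ln(1-EBIT/TA)': np.log(1 - panel['ebit_ta']),
        '1+ln(ME/BL)': 1 + np.log(panel['me_bl']),
        'Size': 1 + np.log(panel['bl_mkt']),
        'AR': panel['ar_12m'],
        'sigma(AR)': panel['sigma_ar_12m'],
    })
    X = sm.add_constant(X)

    # default indicator: firm defaults after the observation date but within the horizon
    horizon_end = panel['date'] + pd.DateOffset(years=horizon_years)
    y = (panel['default_date'].notna()
         & (panel['default_date'] > panel['date'])
         & (panel['default_date'] <= horizon_end)).astype(int)

    return sm.Logit(y, X, missing='drop').fit(disp=0)
```

The mdp variant would only change the construction of y, flagging defaults in the window (t + T_1, t + T_1 + T) instead of (t, t + T).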

Table 6 reports estimation results for the ldp and mdp models. Though working capital has a negative coefficient, coefficient signs are generally as expected. The lower part of the table reports relative weights, RW_i = \beta_i \sigma_i / \sum_j \beta_j \sigma_j, where \sigma_i denotes the standard deviation of variable i in the pooled sample. This gives an indication of the relative importance of the variables considered. Both models give most weight to retained earnings, leverage and size.

Insert Table 6

Given the estimated default prediction models, we obtain a monthly ranking of companies in terms of estimated default probabilities. Each month we assign companies to rating classes based on this relative ranking. The number of companies assigned to a specific rating class is commensurate with the actual number of companies in that rating class, as suggested by agency ratings. In the end we are left with what will be called credit model ratings.

We next define an announcement as confirmed if we observe a credit model rating change in a similar direction within a fixed window prior to the announcement. The credit model rating change is defined as the difference between the first and last credit model rating of the window considered. For example, if the credit model rating deteriorates prior to a negative rating agency announcement, the announcement is classified as confirmed. When no credit model rating change, or even an opposite tendency, is observed, it is classified as unconfirmed.

The constructed credit model ratings are plain point-in-time ratings. As a result, they do not suffer from possible conservatism in information provisioning by rating agencies. Conditioning on agency rating changes, Altman and Rijken (2006) note that, in a 4-year interval surrounding rating changes, about 80-90% of the credit model rating change occurs in the 2-year period prior to the actual agency rating change. On average, credit model ratings anticipate agency rating changes by about 3/4 (1/2) of a year in case of upgrades (downgrades). Given data availability, our fixed credit model window runs from one year prior to the announcement to the announcement itself.
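The mapping from fitted default probabilities to credit model ratings, and the confirmed/unconfirmed classification, might be sketched as follows. Assumptions: ratings are encoded as ordinal integers with 1 the best class, each month's class sizes are copied from the observed agency rating distribution, and the stacked output has columns issuer, date and credit_model_rating (all names hypothetical).

```python
import pandas as pd


def credit_model_ratings(month_df):
    """For a single month, rank firms by fitted default probability and assign
    credit model ratings so that each class holds as many firms as the agency
    rating distribution in that month.

    month_df: columns issuer, pd_hat, agency_rating (ordinal, 1 = best)."""
    ranked = month_df.sort_values('pd_hat').reset_index(drop=True)   # safest firm first
    class_sizes = month_df['agency_rating'].value_counts().sort_index()
    labels = class_sizes.index.repeat(class_sizes.values)            # one label per firm
    ranked['credit_model_rating'] = labels.to_numpy()
    return ranked[['issuer', 'credit_model_rating']]


def is_confirmed(cm_ratings, issuer, ann_date):
    """Classify a negative announcement as confirmed if the issuer's credit model
    rating deteriorated (last minus first rating > 0) over the year before it.

    cm_ratings: stacked monthly DataFrame with columns issuer, date, credit_model_rating."""
    window = cm_ratings[
        (cm_ratings['issuer'] == issuer)
        & (cm_ratings['date'] >= ann_date - pd.DateOffset(years=1))
        & (cm_ratings['date'] <= ann_date)
    ].sort_values('date')
    if len(window) < 2:
        return None                                                  # not classifiable
    change = (window['credit_model_rating'].iloc[-1]
              - window['credit_model_rating'].iloc[0])
    return bool(change > 0)
```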

Confirmed versus Unconfirmed WARs

Figure 3 graphically depicts cumulative abnormal returns of announcements that were in line with a downward-moving ldp credit model rating (confirmed), and those for which this was not the case (unconfirmed). (In general, subdividing our sample based on the mdp model gives results that are very close to those obtained in the ldp case. In the remainder we only report results related to the ldp model.) Table 7 reports the corresponding WARs. We again exclude watches and outlooks if Moody's or Standard & Poor's changes the company's rating at the same date.

Insert Figure 3 and Table 7

Table 7 reveals that the subsamples do not differ that much in size, with the exception of downgrade announcements. The relative proportions of investment- and speculative-grade companies are similar in corresponding subsamples. Results are therefore not a priori driven by differences in rating composition.

We obtain clear differences in abnormal return patterns. Firstly, looking at downgrades and watches for downgrade, announcement WARs are less severe in case of confirmed announcements; the absence of new information apparently leads to a weaker immediate response. Secondly, downgrades classified as unconfirmed show no significant pre- and post-announcement WARs. Their confirmed counterparts experience a significant negative abnormal return prior to the downgrade announcement, followed by a positive correction afterwards. Similar differences in post-announcement WARs are obtained in case of watches for downgrade and negative outlooks. Except for negative outlooks, these findings are almost uniformly confirmed by nonparametric Kolmogorov-Smirnov and Mann-Whitney U tests.

We checked the previous results for robustness, focusing on pre- and post-announcement windows. Due to space considerations these results are not reported here. (They are available from the corresponding author upon request.)

As a first check we shortened the interval used to subdivide our samples. Instead of using the full one-year period prior to the announcement, we temporarily only considered credit model ratings assigned strictly outside the event window. This alternative setup only had an impact on the pre-announcement WAR difference in case of downgrades, which became somewhat smaller. Secondly, we excluded company WARs that were smaller (larger) than the 10 (90) percent quantile of the empirical WAR distributions. Results resembled our base case very closely. Like the nonparametric inference, this confirms that our results are not unduly driven by extreme outliers. Finally, we experimented with excluding observations if they were preceded by an announcement in a similar direction anywhere within the entire pre-announcement event window. Once again, this did not affect our overall findings.

To conclude, we obtain no significant post-announcement WARs when announcements do not confirm a deteriorating pre-announcement credit model tendency. This suggests that new information is fully absorbed once it is revealed. Positive post-announcement WARs materialize when announcements are consistent with prior point-in-time credit quality deteriorations. This suggests an excessive pre-announcement response once concerns about companies grow. Formal rating agency announcements might then predominantly resolve some underlying uncertainty, which puts the market at ease.

3.3.3 Watchlist Resolution and (Un)confirmed Announcements

In section 3.3.2 we obtain a pre-announcement return differential only in case of downgrades. This differential might to a large extent be related to prior signaling by rating agencies. This is confirmed if we subdivide our downgrade sample by watchlist precedence. The average downgrade pre-announcement WAR with watchlist precedence, -1 percent, is significantly less negative than the corresponding reaction without watchlist precedence, -5 percent. As shown in table 2, watchlist precedence and rating levels are intimately related. Splitting the downgrade sample up by rating grade first, and estimating WARs conditional on watchlist precedence, gives similar results. Whether we are dealing with speculative-grade or investment-grade companies, pre-announcement WARs are significantly more negative when downgrades were not preceded by a watchlist addition.

We finally subdivide our downgrade sample into four subsamples, based on watchlist precedence on the one hand and the credit model indication on the other. Table 8 reveals that, after conditioning on watchlist precedence first, the relative sample decomposition in terms of speculative- and investment-grade companies is quite similar across the confirmed and unconfirmed subsamples (i.e., in the vertical direction). However, if we condition on the credit model indication first, the distribution between investment- and speculative-grade companies is turned upside down once we subsequently condition on watchlist precedence (i.e., in the horizontal direction). This is in accordance with the intimate link between watchlist additions and the investment-grade rating category.

Insert Table 8

As expected, the total event WAR is largest and statistically very significant in the unpreceded (i.e., no watch), unconfirmed subsample. Once again, two observations stand out, and both are consistently backed by nonparametric tests at the 1% significance level. Firstly, across subsamples, pre-announcement abnormal returns are more severe when downgrades are not preceded by watchlist additions. Secondly, when downgrades are classified as confirmed, we obtain significant and large positive post-announcement abnormal return reactions, whether or not the downgrades were preceded by watchlist additions. This is not true for their unconfirmed counterparts.

3.3.4 Announcement WARs: Multivariate Regression

The previous results give additional insight into pre- and post-announcement WARs, but reveal less with respect to announcement WARs. To more accurately control for additional information, we consider a multivariate regression framework for downgrade announcement WARs. Our main interest still centers on the impact of rating agency signals (i.e., outlooks and watches) and prior changes in market participants' point-in-time default risk assessment. The latter is again captured by ldp credit model rating changes in the year prior to the downgrade.

To assess the importance of watchlist (outlook) additions, we include watch_period (outlook_period). If the rating change was a resolution of a watchlist (outlook) assignment, this variable is defined as the natural log of one plus the watchlist (outlook) resolution period. If not, the variable equals the maximum value observed in this subsample. Moreover, we include watch_at (outlook_at), which is equal to 1 if the company's rating changes but the watch (outlook) is not lifted, or if we observe a concomitant new watchlist (outlook) assignment.

Apart from these variables, we consider several control variables: Bound Inv/Spec, which indicates whether the company's rating crosses the investment-/speculative-grade boundary; #notch, the number of notches downgraded; default, equal to 1 if we observe a default within one year after the rating change, which could have been anticipated; broad, equal to 1 when the rating change is across broad rating classes; and S&P(t_0, t_1), which indicates whether we observe a rating change in a similar direction by Standard & Poor's within (t_0, t_1). We expect to obtain a negative coefficient with respect to all of these variables. When Standard & Poor's rating change is announced (much) earlier than Moody's, we might also obtain a positive coefficient, as Moody's downgrade then comes as no surprise. We also include WAR(-30,-2), the WAR in the full pre-announcement window.

Model 1 in table 9 indicates that the control variables predominantly enter with their expected sign. We obtain significant coefficients with respect to default, #notch and WAR(-30,-2). Though it seems that negative pre-announcement WARs strengthen announcement WARs, the actual impact is modest. Prior changes in point-in-time default risk assessment, as captured by the ldp credit model rating change, seem to have no bearing on announcement WARs. The estimation results predominantly indicate that announcement WARs are more severe when additional information is revealed. This can either be a concomitant rating change by Standard & Poor's in the same time interval, S&P(-1,+1), or a new/non-lifted watchlist (outlook) assignment. The length of the watchlist and outlook resolution periods has no significant impact on announcement WARs.

Insert Table 9
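Model 1 is essentially an OLS regression of announcement-window WARs on the severity and control variables above. A sketch with statsmodels; the column names are hypothetical, only the watch_period construction is spelled out, and the robust standard errors are an assumption rather than the authors' documented choice.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm


def announcement_war_regression(df):
    """OLS of the announcement-window WAR on downgrade-severity and control
    variables, in the spirit of model 1 in table 9.

    df: one row per downgrade with hypothetical columns (see below)."""
    df = df.copy()

    # watch_period: ln(1 + days on watch) if the downgrade resolves a watch,
    # otherwise the subsample maximum of that variable (as described in the text)
    wp = np.log1p(df['days_on_watch'].where(df['resolves_watch']))
    df['watch_period'] = wp.fillna(wp.max())

    regressors = ['watch_period', 'outlook_period', 'watch_at', 'outlook_at',
                  'bound_inv_spec', 'n_notch', 'default_1y', 'broad',
                  'sp_change_m1_p1',       # S&P change within (-1,+1)
                  'war_pre']               # WAR(-30,-2)
    X = sm.add_constant(df[regressors])
    return sm.OLS(df['war_announce'], X, missing='drop').fit(cov_type='HC1')
```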

In model 2 we replace the plain resolution periods by watch_res,-90, outlook_res,-180 and watch_res,+90, outlook_res,+180. These variables simply indicate whether the rating change was a resolution of a watch or outlook in a similar direction within or beyond 90 or 180 days, corresponding by and large to the median resolution period given an eventual downgrade. Since the watch_at and outlook_at variables turned out to be very significant, we split these cases further up between watches (outlooks) that are not lifted when the rating changes, watch_at,old (outlook_at,old), and new watchlist (outlook) additions, watch_at,new (outlook_at,new). The parameter estimates reveal that announcement WARs are unaffected by watchlist precedence. The results confirm that announcement WARs are predominantly affected by the severity of the downgrade. This refers not only to the number of notches downgraded: returns are also larger when we either observe a concomitant rating change by Standard & Poor's in the announcement window, or the company is newly added to the watchlist when it is downgraded. Direct outlook resolution (i.e., without a watchlist addition) might be an important signal as well. We indeed find relatively large, but insignificant, coefficients on the outlook resolution dummies.

4 Conclusion

Prior surveys and benchmark studies indicate that agency ratings respond slowly to changes in underlying credit quality. Empirical research on return reactions, on the other hand, frequently reports abnormal return reactions prior to negative announcements. We confirm the latter, considering three types of announcements: rating changes, watchlist additions and outlook assignments. We also obtain significant positive post-announcement returns in case of negative announcements, which is in line with several event studies.

Given these empirical findings, we estimate point-in-time default prediction models. These models do not suffer from possible conservatism in information provisioning by rating agencies. This allows us to make a distinction between confirmed and unconfirmed announcements. We obtain no significant positive post-announcement returns if announcements are out of line with pre-announcement point-in-time credit model tendencies. This indicates that new information is fully absorbed once it is revealed.

On the other hand, significant positive post-announcement returns typically materialize when announcements are in line with pre-announcement point-in-time credit quality deteriorations. This suggests an excessive pre-announcement response once market participants' concerns about specific companies grow. Formal rating agency announcements might then predominantly resolve some underlying uncertainty, which puts the market at ease.

In case of downgrades, we find that pre-announcement return reactions are intimately related to watchlist precedence. Pre-announcement negative abnormal returns are less severe when downgrades are preceded by watchlist additions. Assigning a watch for downgrade to a specific company already leads to an immediate, significant negative abnormal return reaction once the watchlist addition is announced.

We finally find that announcement window abnormal return reactions are predominantly determined by the severity of the downgrade signal. This refers not only to the number of notches downgraded: returns are also larger when we either observe a concomitant rating change by Standard & Poor's in close range, or the company is newly added to the watchlist when it is downgraded.

5 Appendix

5.1 Beta Stationarity

See Impson et al. (1992). We write

R_{ki} = X_{ki} \beta_{ki} + u_{ki}   (3)

where \beta_{ki} denotes the estimated parameter vector of the market model of the k-th company, out of K companies, in period i, the latter being either the period prior to the rating change (i = 1) or the period after the rating change (i = 2). The shift in the parameter vector then equals \delta_k = \beta_{k2} - \beta_{k1}. Denoting \delta = (\delta_1, \delta_2, ..., \delta_K) and R = (0 \; 1 \; 0 \; 1 \; ... \; 0 \; 1), we then have

(R\hat{\delta}) \left[ R V(\hat{\delta}) R^T \right]^{-1} (R\hat{\delta})^T \; \overset{a}{\sim} \; \chi^2(1)   (4)

with V(\hat{\delta}) being a block-diagonal matrix whose k-th block is

\hat{\sigma}^2_{k1} (X^T_{k1} X_{k1})^{-1} + \hat{\sigma}^2_{k2} (X^T_{k2} X_{k2})^{-1}.   (5)

5.2 Nonparametric Tests w.r.t. WARs

5.2.1 Sign Test

See Cowan (1992). Define N = n_{est} + n_{event} and K = the number of firms, where n_{est} (n_{event}) denotes the number of return observations in the estimation (event) window. The sign test statistic is given by

Sign Test = \frac{w - K\hat{p}}{[K\hat{p}(1 - \hat{p})]^{1/2}} \; \overset{a}{\sim} \; N(0, 1)   (6)

where

w = \sum_{k=1}^{K} 1_{\{WAR_k > 0\}}   (7)

and

\hat{p} = \frac{1}{K} \sum_{k=1}^{K} \frac{1}{n_{est}} \sum_{t=1}^{n_{est}} 1_{\{AR_{kt} > 0\}}.   (8)
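A direct implementation of the generalized sign test in equations (6)-(8); the input layout (a firm-by-day matrix of estimation-window abnormal returns and one event-window WAR per firm) is an assumption.

```python
import numpy as np
from scipy import stats


def generalized_sign_test(ar_est, war_event):
    """Generalized sign test of Cowan (1992), eqs. (6)-(8).

    ar_est: (K, n_est) array of estimation-window abnormal returns.
    war_event: length-K array of event-window WARs, one per firm."""
    ar_est = np.asarray(ar_est, dtype=float)
    war_event = np.asarray(war_event, dtype=float)
    K = len(war_event)

    p_hat = (ar_est > 0).mean(axis=1).mean()   # eq. (8): avg fraction of positive ARs
    w = int((war_event > 0).sum())             # eq. (7): firms with positive WAR
    z = (w - K * p_hat) / np.sqrt(K * p_hat * (1 - p_hat))   # eq. (6)
    return z, 2 * stats.norm.sf(abs(z))        # two-sided p-value
```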

5.2.2 Rank Test

See Cowan (1992). We define M_{kt} as the rank of the abnormal return of company k at time t over the combined estimation and event window:

M_{kt} = rank(AR_{kt}).   (9)

The rank test statistic is then given by

Rank Test = (n_{event})^{1/2} \; \frac{ \frac{1}{n_{event}} \sum_{t=1}^{n_{event}} \frac{1}{K} \sum_{k=1}^{K} (M_{kt} - \bar{M}) }{ \left[ \frac{1}{N} \sum_{t=1}^{N} \left( \frac{1}{K} \sum_{k=1}^{K} (M_{kt} - \bar{M}) \right)^2 \right]^{1/2} } \; \overset{a}{\sim} \; N(0, 1)   (10)

where \bar{M} denotes the mean rank, (N + 1)/2, and the sum in the numerator runs over the event-window days.

5.3 Nonparametric Tests for Two Independent Samples

5.3.1 Mann-Whitney U

See Sheskin (2003). Suppose we have two samples, with sample sizes n_1 and n_2. Define

U_1 = n_1 n_2 + \frac{n_1(n_1 + 1)}{2} - \sum_i M_{i1}   (11)

U_2 = n_1 n_2 + \frac{n_2(n_2 + 1)}{2} - \sum_i M_{i2}   (12)

where n_j equals the number of observations in subsample j, whilst \sum_i M_{ij} denotes the sum of the ranks of the j-th subsample in the combined sample. Then

z = \frac{\min(U_1, U_2) - \frac{n_1 n_2}{2}}{\sqrt{\frac{n_1 n_2 (n_1 + n_2 + 1)}{12}}} \; \overset{a}{\sim} \; N(0, 1).   (13)
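The rank test of equations (9)-(10) can be implemented directly as well; ranks are taken firm by firm over the combined estimation and event window, and the event window is assumed to occupy the last n_event columns of the input.

```python
import numpy as np
from scipy import stats


def rank_test(ar_all, n_event):
    """Rank test, eqs. (9)-(10).

    ar_all: (K, N) array of abnormal returns over the combined estimation and
    event window (N = n_est + n_event), event days in the last n_event columns."""
    ar_all = np.asarray(ar_all, dtype=float)
    K, N = ar_all.shape

    ranks = np.apply_along_axis(stats.rankdata, 1, ar_all)   # M_kt, eq. (9)
    m_bar = (N + 1) / 2.0                                     # mean rank
    dev = ranks.mean(axis=0) - m_bar                          # cross-sectional mean deviation per day
    scale = np.sqrt(np.mean(dev ** 2))                        # denominator of eq. (10)
    z = np.sqrt(n_event) * dev[-n_event:].mean() / scale      # eq. (10)
    return z, 2 * stats.norm.sf(abs(z))
```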