In November 2019, the National Bureau of Economic Research published a paper in its working paper series: “Don’t Take Their Word for It: The Misclassification of Bond Mutual Funds.”
The authors reported key findings that, if accurate, raise difficult questions for the mutual fund analysis and reporting industry. Specifically, the authors stated that bond fund managers are prone to misclassifying their holdings, to the extent that these misclassifications have a real and significant impact on investor capital flows—and on the amount of risk bond investors are taking.
The study compared Morningstar bond fund reports with various funds’ actual portfolio holdings, as the authors studied and identified them. The authors found “significant misclassification of fund riskiness across the universe of all bond funds, with up to 31.4% of all funds misclassified in recent years.”
“Many funds report more investment-grade assets than are actually held in their portfolios to important information intermediaries, making these funds appear significantly less risky,” the report warns.
According to the researchers, the purported goal of these misclassifications was to earn a higher credit-quality rating in the oft-consulted Morningstar Fixed-Income Style Box. As an example, if a given fund held mostly BB-rated bonds, this would tend to anchor it in Morningstar’s low credit-quality tier. On the other hand, if the fund manager reported a BBB portfolio—while still holding many BB bonds—the fund could move into the medium credit-quality tier. Assuming the usual positive correlation between risk and return, the misclassified BBB-rated fund could potentially show a higher yield and more upside potential than its correctly classified BB peers. The authors also maintain that “misclassified funds receive significantly more Morningstar stars than other funds.”
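The tier mechanic the authors describe can be illustrated with a simplified sketch. The numeric rating scale, portfolio weights and tier cutoffs below are illustrative assumptions only—Morningstar’s actual style-box methodology is more involved—but the sketch shows how the same holdings, described two different ways, can land in two different tiers:

```python
# Illustrative only: the numeric scale and tier cutoffs are invented
# for this example and do not reflect Morningstar's actual formula.
RATING_SCORE = {"AAA": 7, "AA": 6, "A": 5, "BBB": 4, "BB": 3, "B": 2, "below-B": 1}

def average_quality(weights):
    """Weighted-average credit score for a {rating: portfolio_weight} dict."""
    return sum(RATING_SCORE[r] * w for r, w in weights.items())

def tier(score):
    """Map an average score to a hypothetical credit-quality tier."""
    if score >= 5.5:
        return "High"
    if score >= 3.5:
        return "Medium"
    return "Low"

# Actual holdings: mostly BB -> anchored in the low-quality tier.
actual = {"BBB": 0.30, "BB": 0.60, "B": 0.10}
# Reported mix: the same fund described as mostly BBB.
reported = {"BBB": 0.60, "BB": 0.35, "B": 0.05}

print(tier(average_quality(actual)))    # Low
print(tier(average_quality(reported)))  # Medium
```

Under these assumed cutoffs, shifting a modest share of reported weight from BB to BBB is enough to move the fund from the low tier to the medium tier—the kind of shift the paper alleges.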
In the retirement planning world, star ratings matter. Plan advisers contacted for this article say they frequently use Morningstar’s analyses, including star ratings, when considering which bond funds to propose for a retirement plan’s lineup. Other research shows participants tend to rely “blindly” on such ratings when evaluating the bond funds available within their plans.
For its part, the 2019 working paper attracted attention after its publication, and the authors and Morningstar’s research staff held a conference call to discuss the findings. Morningstar subsequently published a series of critiques, which the authors rebutted. Their disagreements remain unresolved, and a final version of the paper appeared in the peer-reviewed “Journal of Finance” in August 2021.
A Closer Look at Self-Reported Data
Bond funds often hold hundreds of securities with a wide range of covenants, maturity dates, yields and other features, even among bonds from the same issuer. Additionally, credit-rating agencies can differ on a security’s creditworthiness, assuming a bond is rated at all.
The key point for the authors is that Morningstar asks each bond fund it tracks to provide a monthly summary report of its holdings. Jeff Westergaard, Morningstar’s fixed-income data director, explains that the reports contain a core set of information related to fixed-income investments: four portfolio-average data points, including a duration figure that contributes to the style box and a breakdown of the portfolio’s distribution across seven rating categories, ranging from AAA to below-B.
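A rough sketch of what such a monthly summary might look like as a data record follows. The field names and the validation rule are assumptions for illustration—the article does not describe the actual survey format—but they capture the two elements Westergaard mentions: a portfolio-average duration and weights across the seven rating buckets:

```python
from dataclasses import dataclass

# Hypothetical shape of a fund's monthly summary report. Field names
# are illustrative; only the seven rating buckets (AAA to below-B)
# come from the article.
RATING_BUCKETS = ("AAA", "AA", "A", "BBB", "BB", "B", "below-B")

@dataclass
class SummaryReport:
    fund_id: str
    avg_duration_years: float
    rating_weights: dict  # rating bucket -> fraction of portfolio

    def validate(self):
        """Check buckets are recognized and weights cover the whole portfolio."""
        unknown = set(self.rating_weights) - set(RATING_BUCKETS)
        if unknown:
            raise ValueError(f"unknown rating buckets: {unknown}")
        total = sum(self.rating_weights.values())
        if abs(total - 1.0) > 1e-6:
            raise ValueError(f"weights sum to {total}, expected 1.0")
        return True

report = SummaryReport(
    fund_id="FUND123",  # hypothetical identifier
    avg_duration_years=5.2,
    rating_weights={"AAA": 0.10, "A": 0.25, "BBB": 0.45, "BB": 0.20},
)
print(report.validate())  # True
```

Because the report is a self-produced summary rather than a security-by-security holdings file, a consumer of the data can validate its internal consistency—as above—but cannot, from the summary alone, verify that the weights match the fund’s actual holdings. That gap is precisely what the paper’s authors focus on.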
Westergaard says he is confident in the data’s quality: “We receive literally tens of thousands of these surveys, and we believe that the vast majority of them accurately reflect the credit ratings of the funds that they purport to be.”
The authors propose that self-reporting opens the system to abuse, however. They say Morningstar is “overly reliant” on the provided summary metrics by basing “its credit risk summaries solely on the self-reported data.”
Per the paper: “We provide robust evidence that funds, on average, report significantly safer portfolios than they actually (i.e., verifiably) hold. … Due to this misreporting, funds are then misclassified by Morningstar into safer style boxes than they otherwise should be.”
They add that the impact doesn’t stop with style box misclassification and star ratings. The authors also claim that misclassified funds can charge significantly higher expenses and attract more investor flows.
Hashing It Out
In November 2019, the same month the authors published their report, Morningstar published a brief initial response to the working paper, followed by a more in-depth reply in December 2019. Two key Morningstar assertions from the December publication are:
- Credit-quality differences from self-reported data almost always stemmed from bonds that Morningstar’s calculation engine didn’t recognize or couldn’t associate with a credit rating. The authors assumed these “not rated” bonds were low quality, but this often isn’t the case; and
- The authors misunderstood how Morningstar classifies funds by mistaking the fixed-income Morningstar Style Box assignments for Morningstar Category classifications. Morningstar uses the categories to peer-group, rank and assign ratings to funds, while the authors’ sole focus was on the fixed-income style-box assignments.
The “not rated” discussion quickly gets into the weeds of bond credit-rating reporting, but Morningstar provides an illustration for a specific fund showing that unrated fixed-income securities are not always low quality. The authors subsequently replied that an analysis of funds that omitted unrated bonds still supported their finding of significant misclassification.
Morningstar’s reply also discussed the style box versus category analysis. The company classifies bond funds into multiple categories: corporate bond, multisector bond, ultrashort bond, etc. A fund’s risk-adjusted performance in its category, not in its style-box classification, determines its star rating. From Morningstar’s reply: “Once we’ve assigned funds to Morningstar Categories, we can compare and rank them on measures such as past performance. Indeed, to assign star rating to funds, we rank funds’ trailing risk-adjusted returns against those of other funds in their Morningstar Category (the Morningstar Risk-Adjusted Return Rank).”
Jeffrey Ptak, Morningstar’s chief ratings officer, says style boxes and categories use “completely separate” classification arrangements. “If somebody was trying to game the style box, they might succeed with that, but that might have no bearing on their Morningstar category classification because the category classification rules are completely different,” he explains.
Morningstar emphasized the importance of style box assignment versus fund category assignment in its published commentaries and in a conference call with PLANADVISER. According to Morningstar’s December reply, the firm asked the authors for a list of funds that they considered misclassified and what they believed to be those funds’ correct categories at the time of analysis. According to the reply, though, “The authors confirmed on a conference call held November 11, 2019, that they could not furnish such a list, as they’d defined ‘misclassification’ based on funds’ style box, not Morningstar Category, assignments.”
Morningstar’s published conclusion: “We have found no evidence that bond funds have been incorrectly assigned to Morningstar Categories in the widespread way the authors allege.” The authors’ rebuttal: “Our findings still hold when we compare the funds against the Morningstar category (as opposed to risk peer group.)”
An Ongoing Argument?
As of early September, the situation remains a stalemate. Westergaard says Morningstar still doesn’t have a clear idea of what data or methodology the authors used to reach their conclusions. “We did ask them to share this, and they declined to do so,” he adds. The authors’ published response: “We are using Morningstar’s data (from the Morningstar Direct product), along with Morningstar’s published formulas for calculating all of the weightings and classifications in the paper.”
Two of the authors responded positively to PLANADVISER, expressing interest in discussing their findings, but subsequent messages posing several specific questions went unanswered by deadline.
Several advisers called upon to discuss the controversy expressed frustration with the lack of resolution.
Tolen Teigen, chief investment officer (CIO) of wealth management and workplace benefits consulting firm FinDec, in Stockton, California, says his firm uses both Morningstar and Fi360 research when evaluating bond funds for clients’ plans. Teigen says the lack of specific misclassification examples in the paper is a drawback.
“From what I’m gathering from this research, there do not seem to be any definitive data points to confirm the findings,” he says. “We would need to do further research and analysis to see specific details about how Morningstar analyzes its data and see if the end result truly makes a meaningful difference from what is being reported.”
Other advisers said that while they use Morningstar’s style box and star ratings, the bond fund reports are only one factor they consider in the fund selection process—and they’re not necessarily the dominant factor. Matt Ogden, head of fixed income manager research with CAPTRUST, in Raleigh, North Carolina, says his firm maintains a list of recommended funds in each asset class. The process of vetting managers is both qualitative and quantitative, he explains. The firm reviews several investment manager databases, including Morningstar and other sources, to identify prospective managers.
Due to CAPTRUST’s large size, the firm also can arrange meetings with the investment managers being considered to “understand what they are doing at a more granular level,” he adds.
Like Teigen, Ogden says he wishes the authors had provided specific examples in which misclassification caused a fund to move between credit-quality ratings or fund categories. He adds that, in his view, Morningstar “put together a fully reasoned response.”