Oct 25, 2022

I was recently discussing day-count and month-count methodologies for annualizing returns with performance professionals and picked up on a recurring question from the group, “How different can the results really be?”

It was a great question, and one that I didn’t have an answer to. But it got me thinking more deeply about annualization calculations.

Here are some other annualization calculation questions that frequently come up:

  • Will these two widely used methodologies lead to wildly different results?
  • Which other pieces of performance measurement have multiple, commonly accepted practices?
  • Are all these options leading to data mismatches that could be skewing results?
  • Am I overthinking this? Does it really matter?

I started to reflect on other situations that have occurred over the years, each requiring a performance specialist to step away from their day-to-day duties to investigate and help interpret the story an aggregated dataset was telling. Boiling these down, here are some of the nuances at play that need to be considered:

  • Daily vs. monthly annualization methods
  • Effective date vs. valuation date
  • 5-day vs. 7-day accounting
  • Beginning-of-day vs. end-of-day vs. hybrid cash flow timing assumptions
  • Fee application for net-of-fee calculation
  • Geometric vs. arithmetic risk statistics
  • Buy-and-hold vs. transaction-based attribution results
  • Reference portfolios vs. benchmarks
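To make one of these nuances concrete, here is a minimal sketch of how cash-flow timing assumptions alone can move a single-period return. The function and figures are hypothetical, not from the original article: a beginning-of-day assumption treats a contribution as invested for the full day, while an end-of-day assumption treats it as arriving after the close.

```python
def simple_return(bmv, emv, flow, timing="eod"):
    """Single-period return with one external cash flow.

    'bod' assumes the flow was invested all day (it enters the denominator);
    'eod' assumes the flow arrived after the close (it does not).
    """
    gain = emv - bmv - flow
    denom = bmv + flow if timing == "bod" else bmv
    return gain / denom

# Hypothetical day: start at 100, receive a +10 contribution, end at 115.
print(simple_return(100, 115, 10, "eod"))  # 5 / 100  = 5.00%
print(simple_return(100, 115, 10, "bod"))  # 5 / 110  ≈ 4.55%
```

Same portfolio, same day, same flow: the two assumptions disagree by roughly 45 basis points.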

You get the picture …

The Challenge Goes Beyond Performance Calculation

Whether we’re talking about portfolio returns, attribution analyses, benchmark performance, or risk statistics, it may come as a surprise that no truly singular, industry-wide practice exists. Instead, what we have are accepted practices that can differ by metric. The reasons behind these disparate figures aren’t always straightforward, and include the following:

  • Configuration options on vendors’ platforms may differ from vendor to vendor
  • Various divisions within a firm may use separate systems for similar calculations
  • A single performance team may use one platform for composites and another for funds, for example
  • Multiple stakeholders within a single organization may have different performance requirements, and hence different lenses for measurement

When all the pieces come together, this can lead to a massive headache for business partners, or even worse, clients trying to decipher the data presented to them.

The Annualization Mismatch Challenge Illustrated

Let’s dig into an example that illustrates this point. Annualization is a core piece of performance reporting and a familiar concept to performance specialists. As ubiquitous as this may be, the materiality of the differences between various methodologies may be less commonly known.

Consider a hypothetical fund whose first NAV is struck on Jan 31, 2021. By Feb 28, 2022, the fund has achieved a 25% cumulative return and hit a 13-month track record, and the firm will present annualized since-inception performance in its monthly reporting. Alongside the fund return, the benchmark will also be presented; in this scenario it achieved the same 25% return over the period.

However, the firm happens to calculate fund and index returns within separate systems, where the fund returns utilize a month-count annualization methodology and benchmark indexes use day-count. The results are then merged for downstream consumption by reporting teams.

Applying the standard formulas, where month-count annualizes as (1 + R)^(12/m) − 1 and day-count as (1 + R)^(365/d) − 1, the annualized returns come out to roughly 22.87% for the fund (m = 13 months) and 23.03% for the benchmark (d = 393 days).
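A short Python sketch of the two conventions applied to this example, assuming a 365-day year for the day-count basis (conventions vary; some systems use 365.25 or actual/actual):

```python
from datetime import date

def annualize_month_count(cumulative_return, months):
    """Month-count convention: (1 + R) ** (12 / m) - 1."""
    return (1 + cumulative_return) ** (12 / months) - 1

def annualize_day_count(cumulative_return, start, end, days_per_year=365):
    """Day-count convention: (1 + R) ** (days_per_year / d) - 1."""
    days = (end - start).days
    return (1 + cumulative_return) ** (days_per_year / days) - 1

# 25% cumulative return, Jan 31, 2021 inception, measured Feb 28, 2022.
fund = annualize_month_count(0.25, 13)
bench = annualize_day_count(0.25, date(2021, 1, 31), date(2022, 2, 28))

print(f"Fund (month-count):    {fund:.4%}")   # ≈ 22.87%
print(f"Benchmark (day-count): {bench:.4%}")  # ≈ 23.03%
print(f"Apparent gap:          {(fund - bench) * 1e4:.1f} bps")
```

Identical 25% cumulative returns, yet the fund appears to lag its benchmark by roughly 16 basis points purely because of the annualization convention.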

As we can see from these results, this mismatch in methodologies leads to a false material underperformance of the fund compared to its benchmark. If I were in the portfolio manager’s shoes, I think I’d have something to say about that!

The Challenge Goes Beyond Reconciliation

“The results are only as good as their inputs” is a well-known adage in the performance measurement space. We should add another: “Methodologies vary, so make sure you know which ones you’re using.”

This doesn’t end with reconciling a return’s underlying inputs, such as pricing, cash flows, corporate actions, and market data. Performance tells a story, so two datasets that are reconciled and signed off independently may still produce an unanticipated and confusing narrative when combined.

We should think beyond making sure the words on the page are spelled correctly and ensure they come together to form full sentences and coherent chapters. A beneficial exercise for any performance team is understanding the scope of differences that may exist within data across the firm, as well as the scale to which those differences materialize.

That said, here are some performance calculation best practices that might help your firm mitigate these discrepancies:

  • Document and provide a central source highlighting these differences for consumers to reference, leaving performance analysts to focus on core functions.
  • Highlight opportunities to streamline calculations by reducing systems producing performance results and simplifying the reconciliation process.
  • Highlight the need for performance specialists to be included in data governance as data owners, ensuring authorized sources of data are used for the right purpose.

Now here’s the thing … I chose the above annualization example as an extreme use case to make a statement. This isn’t necessarily a warning to stop what you’re doing and clean up your data, but rather a call to step back, understand the data, and ask yourself, “Does this make sense?”

Firms don’t have to view differences like these as scary monsters coming to get them. Knowing you can shine a light under the bed to illuminate them, educate your team and your consumers, and explain the reasons for acceptable differences might just be enough to help you sleep at night.
