Greetings my fellow followers of the ‘New Fund Order’, where fund management meets futurism. Where Marx meets Metropolis. Since my last column I became unexpectedly bionic. Alas, no mind implants or symbiotic algorithmic Uber Fund Selection nodes (yet). A humble accident led to a leg operation and I find myself a cyborg with a bionic adjustable leg cast. I can set the range of movement, but after that the cast decides how far my leg muscles can extend my knee. Ironically, to date this is how we program Algos, but for how much longer? Being restricted to armchair fund analysis does afford time to observe the latest developments, and time away from managers has recharged my capacitors. This year marked one of the biggest changes in fund research in the US, yet it hasn’t been widely covered.
Having published ‘#NewFundOrder’ in 2015, I have been writing about the future of Fund Selection for over 10 years. To many these are my science-fiction ramblings. Indeed, the Robotisation of fund buying, both digital threat and opportunity, has probably become my defining footnote in this industry. Last year I launched my online lecture on Robo Fund Selection, to move the thinking on from Robert Ludwig’s 2005 Machine-Learning paper and to analyse different Algo inputs, both quantitative and qualitative. One of the great advantages of Robo Fund Selection over human research is the ability to update far more frequently than is humanly possible, unless the human focuses on a small, concentrated pool of managers. Thus the greatest criticism of Morningstar’s analyst ratings was their annual frequency (with only quarterly updates in between) and the at times lengthy periods where funds were left without a rating, pending a manager meeting, following, say, an unexpected manager change. A rating, when time-stamped, may be 90-100% accurate (and hence reliable) on day 1 but perhaps only 50% accurate by day 365, probably less. The confidence in a fund rating is not a fixed point. This can leave advisers and investors with uncertainty and potentially produces greater fund turnover than necessary. Human analysts can’t be in all places all of the time. The other advantage is complexity: AI can better handle complex data, potentially including AI-based strategies, and manage any complexity arising within the ratings.
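That decay of confidence between reviews can be sketched in a few lines. To be clear, the linear shape and every number below are my illustrative assumptions, not any published methodology:

```python
def rating_confidence(days_since_review: int,
                      initial: float = 0.95,
                      floor: float = 0.50,
                      horizon_days: int = 365) -> float:
    """Illustrative linear decay of confidence in a fund rating
    between reviews. Shape and parameters are assumptions for
    illustration only, not a rating agency's methodology."""
    decayed = initial - (initial - floor) * (days_since_review / horizon_days)
    return max(floor, decayed)

# An annually refreshed rating: near-full confidence on day 1,
# drifting to the floor by day 365. A monthly refresh would reset
# the clock every 30 days or so, capping the staleness.
print(round(rating_confidence(1), 2))    # → 0.95
print(round(rating_confidence(365), 2))  # → 0.5
```

The point of the toy model is simply that a rating's reliability is a function of time since last review, so refresh frequency matters as much as day-one accuracy.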
Star-Trekking? Then out of nowhere, 2018 witnessed the future suddenly become the present for Machine Learning Fund Selection. Enter stage left Morningstar’s new Quant Ratings. Unlike the old MRAR ‘Star’ regressive (‘look back to look forward’) system, introduced in 2002 to replace the previous Stutzer Sharpe-based Star rankings, the new ratings promise predictive (‘forward-looking’) power. Sounds impressive. Morningstar promises four complementary pillars of fund research: Star, Analyst, Quant and ESG. Focusing on the Q, the next question is: how?
“The Quantitative Rating is an extension of the Morningstar Analyst Rating™ for funds (Analyst Rating), which provides an analyst's forward-looking assessment of a fund's ability to outperform its peer group or a relevant benchmark on a risk-adjusted basis over a full market cycle. Morningstar manager research analysts assign Analyst Ratings to approximately 1,800 open-end and exchange-traded funds and together with the Quantitative Rating, cover more than 10,000 funds, representing more than 30,000 share classes in the United States.”
Many may recall the character ‘Q’ from the 90s show ‘Star Trek: The Next Generation’, a morally ambiguous but super-intelligent being who enjoyed toying with the crew of the Enterprise, setting tests with the seeming goal of ‘what doesn’t kill you’. Does Morningstar’s futuristic Q rating challenge today’s fund analysts in the same way? Hard to say. So far Morningstar has shared the tin but not much about the contents. I tried to find any sort of detailed methodology on the new rating, to no avail. What I mean by ‘how’ breaks down into layers: firstly, what are the inputs; secondly, how do the ratings learn and adapt (the framework); and thirdly, is the process 100% Robo or more cybernetic?
Lab rats or Algo soup? Did Morningstar throw its 180 analysts into some laboratory, or did it have its new Algo analyse thousands of Analyst ratings over time to understand how different factors drive performance? Probably the latter, combined with the ratings methodology and approval process. Data. We know Morningstar in recent years has begun to monitor and publish the success (hit rate) of its ratings. Has it additionally analysed the MRAR Star Ratings and Morningstar Style attribution to identify the type of managers who tend to outperform the benchmark? Perhaps, but unclear. It appears to include the multi-P analyst model, with factors like People and Process. Such nodes are powerful to any Algo coder: the potential to combine qualitative and quantitative data moves us well beyond Ludwig’s machine-learner, using human intelligence in the Algo assumptions. This is an alternative approach to the wisdom-of-the-crowd model applied by SharingAlpha.com. It also means the name ‘Q Rating’ is a misnomer, perhaps branded to sound less threatening. This is a Robo Fund Rating. What we know so far:
“Using an approach rooted in artificial intelligence, Morningstar's machine-learning model incorporates the decision-making processes of manager research analysts, their past rating decisions, and the data used to support those decisions. The machine-learning model is then applied to funds not covered by Morningstar analysts. This process generates the Quantitative Rating, which is analogous to the rating a Morningstar analyst might assign if an analyst covered the fund. The scale for the Quantitative Rating is the same as the Analyst Rating: Gold, Silver, Bronze, Neutral, and Negative. Funds that receive the Quantitative Rating will receive quantitative ratings of Positive, Neutral, or Negative for each of the five pillars—Parent, People, Performance, Price, and Process. Funds will either receive an Analyst Rating or a Quantitative Rating, but not both.”
The key here is the imprint of the wisdom of Morningstar analysts, then applied independently to different funds. This is what is so game-changing for the fund ratings landscape. One day one has to ask how long before Morningstar takes its four independent ratings to the next stage and amalgamates them into just one rating? Surely that’s the end game? Instead of running Star, Analyst, ESG and Quant independently, they morph together. Of course Morningstar is a shrewd operator; as arguably the world’s largest fund research agency in terms of global reach, clients, data and assets under influence, it will take its time to test the ratings with clients and build comprehensive and irrefutable data. What happens if the hit rate of Q Ratings is greater than the already positive hit rate of the analyst ratings? There’s the question. Morningstar will continue to distribute both so long as there is demand. The telling indication for me is how Morningstar describes these ratings: “Investors can use the Quantitative and Analyst Ratings the same way. The quantitative approach provides a forward-looking assessment of a much broader group of funds than the Analyst Rating.”
To begin with, expect Morningstar to build a good sample data set in the US first, later publishing research extolling the value of the system before rolling it out more globally. Unless Q ratings prove a flop (and I doubt that), then I think it is a question of time. How long before Morningstar’s portfolio services begin to switch over to the new system? It will happen, gradually. The quandary for the fund buyer is how to interpret these ratings; the detailed methodology and assumptions of the Quant rating have not yet been issued, and the indication is that they are updated monthly. That ensures the on-paper accuracy of the rating is controlled to a much shorter time frame than Morningstar’s analyst ratings: crudely, a monthly rating is never more than a month stale, against up to a year for an annually reviewed one. Whether that offers more insight is yet to be seen. How quickly before users cotton on to the fact and switch? How do we learn to trust such ratings over time?
Meanwhile Morningstar will be keen not to create panic among clients, hardwiring caveat assurances that the human client still matters; to stay safely outside of the Fiduciary perimeter and therefore not threaten the space occupied by its clients. This is of course contrived, and it is debatable how long research agencies and consultants can continue to influence fund buying behaviour without being accountable for it. The current Morningstar caveat will be familiar to its clients:
“Morningstar’s fund ratings are not a market call, a credit or risk rating, and do not replace a user from conducting their own due diligence on the fund. Fund ratings are not a suitability assessment, a statement of fact, and should not be used as the sole basis in evaluating a fund. Morningstar ratings involve unknown risks and uncertainties which may cause Morningstar’s expectations not to occur or to differ considerably from what we expected. Morningstar does not guarantee the completeness or accuracy of the assumptions or models used in determining the quantitative fund ratings. Except as otherwise required by law or provided for in a separate agreement, Morningstar and its officers, directors, and employees shall not be responsible or liable for any trading decisions, damages, or other losses resulting from, or related to, the information, data, analyses, or opinions in regard to the use of Morningstar’s fund ratings.”
Accountability? In other words: we think Fund A is better than Fund B, we have access to more information than you do. Trust us, but you are still on the hook, dear client. This is the status quo that has been accepted for 20 years. Many buy lists and portfolio services around the world accept this and employ external fund ratings in their screening. Between Morningstar, Mercer and Russell, the market is pretty much sewn up. Compliance-wise, the double edge of this sword is that while the MStar rating provides evidence to select a fund, you are still accountable; yet selecting against the MStar rating is fraught with a much tougher compliance test. If this sounds like moral hazard then you are right. The same will be true for the new Q rating. However, being Robo, if successful it suddenly changes the active-passive narrative. Rather than active funds underperforming passive due to a statistical abstraction like EMH, the blame shifts to poor fund selection. The respected work of Jonathan Reuter in 2010 and 2015 already laid the ground for such a thesis. This sits uneasily with me, but that is hardly surprising since it challenges my information advantage, my very role as a fund selector. Perhaps research has to remain unshackled from the Fiduciary to innovate. For those fund analysts who also allocate, that may sound like rarefied air few get to enjoy.
Regulators too are keen for human fiduciaries to stay in charge of the Robots (for now). In the UK, the FCA has launched a consultation on the treatment of algorithms within the approved persons (certified) ‘Fiduciary perimeter’. The sense prevails that regulators still like someone watching the computer, able to pull the plug if necessary. Yet the full implication of the new rating does not appear to have dawned on Morningstar, Morningstar users, the 180 human qualitative fund analysts or indeed professional fund buyers. Even if Morningstar segregates the fund coverage of the two ratings, the hit rate of the new Q rating, sector by sector, will become indirectly comparable at Gold, Silver and Bronze levels. As time advances the data sample will grow in size.
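‘Hit rate’ here means something like the share of positively rated funds that subsequently beat their benchmark, which is what would make the two rating systems indirectly comparable. A minimal sketch of that measure (the definition and the sample data are my simplification; Morningstar’s published event-study methodology is more involved):

```python
def hit_rate(ratings_outcomes):
    """Share of positively rated funds (Gold/Silver/Bronze) that went
    on to beat their benchmark. Input: (medal, outperformed) pairs.
    Illustrative definition only."""
    positive = [beat for medal, beat in ratings_outcomes
                if medal in {"Gold", "Silver", "Bronze"}]
    return sum(positive) / len(positive) if positive else float("nan")

# Invented sample: four medalled funds, three of which outperformed.
sample = [("Gold", True), ("Silver", False), ("Bronze", True),
          ("Neutral", False), ("Gold", True)]
print(hit_rate(sample))  # → 0.75
```

Run the same calculation over analyst-rated and Q-rated funds in the same sector, and the comparison the column anticipates falls out directly.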
Big data? The Q rating recognises that human intelligence (‘humint’) and experience can add value. Where a self-learning framework such as Differentiable Neural Computing (DNC) is then used, the ‘Q rating’ also has the potential both to mitigate human analyst biases and errors and to learn beyond known human techniques. For now humans are still assured of their status by regulation and the caveats of providers, but for how much longer? The influence of Robo continues to extend. How can we, as fund analysts, adapt to Robo ratings? Consider leveraging our collective human intelligence and devising our own Algos to enhance research: what my friend and co-founder of APFI and DOOR, Roland Meerdter, dubbed ‘the third way’. Unlike the experience of Morningstar’s 180 analysts and 20 years of ratings data available to Q ratings (let’s simplify and estimate 3,600 analyst-years), SharingAlpha has been in existence for far less time but now has over 1,300 analysts generating sample data sets. Within about three years SharingAlpha could begin to overtake Morningstar in terms of collective analyst ratings data. Its innovative ‘hit rate’ methodology can allow new forms of Algo to tap into that wisdom. Imagine a Q rating using the shared wisdom of 1,000 or 2,000+ analysts. The addition of the community function allows SharingAlpha’s analysts to interact, and this helps challenge and enhance fund views. Again, an Algo could observe these interactions and adapt as human judgements become increasingly codified.
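The arithmetic behind that catch-up claim is simple, using the column’s own round numbers (a crude sketch: it holds analyst headcounts flat and treats every analyst-year as an equal unit of ratings experience):

```python
# Back-of-envelope analyst-years, from the figures quoted above.
morningstar_years = 180 * 20   # 180 analysts x ~20 years of ratings
print(morningstar_years)       # → 3600

sharingalpha_per_year = 1_300  # 1,300+ analysts contributing today
years_to_catch_up = morningstar_years / sharingalpha_per_year
print(round(years_to_catch_up, 1))  # → 2.8
```

Hence the roughly three-year horizon: at current headcounts, the crowd accumulates analyst-years faster than the in-house team ever could.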
Tomorrows' fund ratings could well become pure Robo or a platform of crowd research combined with AI. Traditional research will become pinched between the two commercially. Three years after I published NewFundOrder, thirteen years after Ludwig’s paper, this is no longer science fiction. Thanks to Morningstar and SharingAlpha, the New Fund Order has just arrived. I hope to explore the possible with them and others. Will the likes of Lipper and Mercer follow? Thomson Reuters (backed by giant Reuters) arguably has even bigger data than Morningstar to pull from and vast data sets from its Lipper Leaders Ratings. How about a Q rating based on Mercer’s GIMD? More ratings will surely follow and Algos will improve. Transparency belying the black box will also need to improve to build user trust. The Fiduciary question still needs to be tackled. Nonetheless, embrace or compete just became a very real decision for the fund selector. Whether we need quite so many human fund analysts in the future depends on Q.