Playing SharingAlpha like a Game? - A Rater's Tale of the Tape

Since joining as both an active rating member and a member of its advisory board, SharingAlpha ('SA') has become a major focal point for me. First and foremost, my priority is industry reform: to help create the most transparent fund ratings platform in the world. I do this through media promotion and, on the technical side, by helping to ensure the model works well, rewards the right behaviours, generates meaningful data and applies the correct ETF benchmarks to funds. The active-passive debate tells us change is overdue. Secondly, as a fund rater, I use the platform to host my own fund ideas and to be transparent among peers and potential clients, opening myself to scrutiny. I believe in the wisdom of the crowd: that many analysts can identify better funds than the few. For this to happen I encourage raters to rate funds based on their insights, processes and conviction, or the conviction of their peers. They always say to use the hairdresser with the worst haircut, as that person cuts everyone else's hair. The adage works even if my baldy cranium hasn't seen the inside of a groomery for over 10 years. Nonetheless, I probably do a better job of picking funds for other people than for myself. Learn from my mistakes.

Reality Bites when it comes to that Mean Reversion!

It was 8am on a Saturday morning and I had just grabbed myself and Mrs JB a cup of tea. We were looking forward to a day together before I flew out to Asia, having just got back from a London trip the night before. I quickly scanned my fund ratings, as I do almost every day, to check my 'hit score', alpha ranking and Fund of Funds rating. This morning the tone turned sour when I realised that not only had I dropped off the top of the SA leaders table, but my Alpha ranking had fallen from '3 Alpha' to '2 Alpha'. Oh, the shame of it!

Ego is probably the worst enemy of any fund selector, as reputation and pride can overtake objective decision making. Why should any ranking matter? Yet it does: it keys into the very human biases that we all face. Furthermore, as the active-passive debate rages on, professional fund investors are under growing pressure to justify their economic value. Statisticians and pro-passive 'Boglelites' revel in illustrating our collective inability to select winners or to buy managers who can beat the passives persistently.

However, in my day job as a long-term fund investor, I do not buy funds to 'pick winners' or to win awards or ratings. I am paid a fixed salary to buy objectively: to match managers to mandates and customer needs, assess risk and develop the product offering. My modest annual bonus is not related to my fund selection. I search and buy through a rigorous proprietary six-factor process: screening, remote and onsite due diligence, and so on. I rate relatively few new funds each year and spend more time monitoring and engaging the managers already held. That said, I have still met and reviewed in excess of 2,000 managers in my career, but that is only around 1% of the total number of fund managers today. If you then consider the managers who have left the industry over the last 20 years, that percentage falls to a statistically insignificant proportion. Bewildering. Hubris, then, is not your friend here. On reflection, I now realise that I had also compartmentalised my ratings on SharingAlpha from those in my day job, bar some of my highest/newest conviction funds. As if somehow it were easier to rate funds on SA than in my day job. It's not.

Through SharingAlpha I suddenly found myself confronted with a much larger potential universe of funds, and adding more fund ratings had a strong emotive pull. Facing a much larger opportunity set at the touch of a button, I was a kid in the candy shop. After all, many fund raters are ever so slightly paranoid about missing the next big thing, a great new fund manager. I was an early adopter of SA, already an advocate of better fund ratings transparency through my book '#newfundorder' and, as an institutional fund gatekeeper, devoid of my own profit and loss ('P&L'). The opportunity to build a public track record was therefore incredibly enticing: having bought and sold funds for nigh on two decades, here was a chance to demonstrate my experience.

I quickly achieved a stronger and stronger hit score; thus encouraged, I added more funds, all the while deviating further and further from my core ideas. It felt easy via a number of simple gaming approaches:

  1. Downgrade managers that appeared to have obvious franchise problems, had reached capacity or were poorly positioned against the market;

  2. Downgrade funds that had overshot the peer group and looked likely to suffer some reversion;

  3. Remove or downgrade funds with negative hit scores (i.e. survivorship bias);

  4. Rate less familiar funds with a fairly neutral rating close to 3;

  5. Upgrade managers that had good, consistent performance but had recently fallen behind peers, the 2016 'Brexit bounce' being a good recent example; and lastly

  6. Upgrade funds I held with conviction: my core ideas, well researched, long-term holdings or new discoveries.

This is not how I normally select managers; and yet I revelled in my ability to beat the performance-chasers. It worked, and worked well to be honest, particularly when trying to time reversion between winners and losers. Many ratings and awards are heavily based on past performance, which itself has been shown to be a poor predictor of future returns. Indeed, this is part of the modus operandi of SharingAlpha and a core tenet of qualitative over quantitative fund selection. On more than one occasion I topped the ratings table, and I consistently rated within the top 10. This then became its own strong utility; it became a goal, and the competitive urge grew. 'Pride comes before a fall', they say.

At one point I had well over 100 rated funds but hadn't fully thought through how these would impact my own rating going forward, even after I removed them (they became inactive), and of course the SA model itself has continued to evolve. Looking back, I had convinced myself that with such a broad list I had somehow diversified myself, which is not how the hit score works. More likely, I had benefitted from mean reversion in some cases and otherwise got lucky. You do not offset the score of Fund A with Fund B; instead, raters are scored on the accuracy of each fund rating over a period, which makes up a sample. Your hit score is then the weighted average of those samples. That means it is statistically much more difficult to achieve a good hit score over 100 funds than over 10. With 10 funds you can be much more discerning, exploit an anomaly in a particular market or get lucky for a time. Most academics point to the fact that more than 50% of all active fund managers consistently underperform the index. Rating 100 funds without high conviction was, in hindsight, a flawed strategy for me.
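The aggregation described above can be sketched in a few lines. To be clear, SharingAlpha's actual formula is not public; the function name, the equal weights and the sample values below are my own assumptions, purely to illustrate why a broad low-conviction list drags on the average rather than diversifying it away.

```python
# Illustrative sketch of the hit-score aggregation described above.
# The real SharingAlpha formula is not published; the weights and
# accuracy figures here are hypothetical, for demonstration only.

def hit_score(samples):
    """Weighted average of per-fund accuracy samples.

    samples: list of (accuracy, weight) pairs, one per rated fund.
    A poor call on one fund cannot be 'offset' by a good call on
    another -- every sample pulls the average.
    """
    total_weight = sum(w for _, w in samples)
    return sum(a * w for a, w in samples) / total_weight

# Ten high-conviction ratings, mostly accurate:
focused = [(0.8, 1.0)] * 8 + [(0.4, 1.0)] * 2
# One hundred ratings made without conviction, closer to a coin flip:
sprawling = [(0.55, 1.0)] * 100

print(hit_score(focused))    # 0.72 -- a discerning list scores well
print(hit_score(sprawling))  # 0.55 -- a broad, low-conviction list is mediocre
```

The point of the sketch is simply that the average over 100 near-random calls converges on mediocrity, whereas a short list of well-researched calls can sit meaningfully above it.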

Had I gone too far with my ratings? Had I succumbed to biases, to gamification? Most definitely. I was effectively trying to time the past performance of fund managers, and that had overtaken my own process, fed by a healthy new supply of fund events, competitions and fund awards. I had begun to chase the hit score rather than rate managers on their own merits; I had started to look for winners and losers. I was sharing my score through social media. I had begun to compete with my own hit score, and with other raters, which is a fallacy in itself and, in the long term, a zero-sum game. The power of SharingAlpha is building a fund ratings community through events, awards and competitions, but fund raters denigrate the 'wisdom' of the platform by trying to game other fund raters. Trying to time my ratings against funds had added uncertainty to my long-term hit score. Couple this with an ever-increasing workload and I had simply taken my eye off the ball!

What is gamification?

The term "gamification" was coined in 2002 by Nick Pelling, a British-born computer programmer and inventor, but did not gain popularity until 2010. It refers to the incorporation of the social and reward aspects of games into software. Wikipedia defines it as follows: "The gamification techniques are intended to leverage people's natural desires for socialising, learning, mastery, competition, achievement, status, self-expression, altruism, or closure, or simply their response to the framing of a situation as game or play." From a behavioural point of view, concepts such as loss aversion, regret and anchoring should resonate with many fellow fund buyers and behavioural economists alike.

10 Common Behavioural Biases Facing SA raters:

1. Confirmation Bias. We like to think that we carefully gather and evaluate facts and data before coming to a conclusion. Instead, we tend to suffer from confirmation bias: we reach a conclusion first, and only thereafter gather facts and interpret them in a way that supports our pre-conceived conclusions. When a conclusion fits with our desired narrative, so much the better, because narratives are crucial to how we make sense of reality.
2. Optimism Bias. This is a well-established bias in which someone's subjective confidence in their judgments is reliably greater than their objective accuracy. Venture capitalists are wildly overconfident in their estimations of how likely their potential ventures are to succeed or fail. In a finding that pretty well sums things up, 85-90% of people think that the future will be more pleasant and less painful for them than for the average person.
3. Loss Aversion. We are highly loss averse. Empirical estimates find that losses are felt between two and two-and-a-half times as strongly as gains; thus the disutility of losing $100 is at least twice the utility of gaining $100. Loss aversion favours inaction over action and the status quo over any alternatives. As a consequence, we tend to make bold forecasts but timid choices.
4. Self-Serving Bias. Our self-serving bias is related to confirmation bias and optimism bias. It pushes us to see the world such that the good stuff that happens is our own doing.
5. The Planning Fallacy. In his terrific book Thinking, Fast and Slow, Nobel laureate Daniel Kahneman outlines what he calls the "planning fallacy": our tendency to underestimate the time, costs and risks of future actions while overestimating their benefits. Most of us overrate our own capacities and exaggerate our ability to shape the future. It is at least partly why we underestimate bad results, and why the results we achieve are not as good as we expect.
6. Choice Paralysis. Intuitively, the more choices we have the better. The sad truth, however, is that too many choices can lead to decision paralysis through information overload.
7. Herding. We all run in herds, large or small, bullish or bearish. Institutions herd even more than individuals, in that the investments chosen by one institution predict the investment choices of other institutions to a remarkable degree. Even hedge funds seem to buy and sell the same stocks, at the same time, and track each other's investment strategies.
8. We Prefer Stories to Analysis. As noted above, narratives are crucial to how we make sense of reality; they help us to explain, understand and interpret the world around us. Perhaps most significantly, we inherently prefer narrative to data, often to the detriment of our understanding. A corollary to this problem, and to confirmation bias, is what Nassim Taleb calls the "narrative fallacy": looking backward, creating a pattern to fit events, and constructing a story that explains what happened along with what caused it to happen.
9. Recency Bias. We are all prone to recency bias, meaning that we tend to extrapolate recent events into the future indefinitely. As reported by Bespoke, Bloomberg surveys market strategists weekly, asking for their recommended portfolio weightings of stocks, bonds and cash. The peak recommended stock weighting came just after the peak of the internet bubble in early 2001, while the lowest came just after the lows of the financial crisis. That is recency bias.
10. The Bias Blind Spot. Cognitive biases plague us all and make it difficult for us to make good choices. Unfortunately, we also share a "bias blind spot": the inability to recognise that we suffer from the same cognitive distortions that plague other people.


Tale of the Tape!

The recent reversal in my own hit score served as a wake-up call (almost literally), a signal that I was effectively playing a game with my reputation as a fund selector. It was time to take my fund list much more seriously. I have started by reducing it: getting back to core ideas, or new ideas validated by my own work and approach or arising from another rater I respect. I have removed funds that I did not know as well, but I have retained some funds with lower hit rates where I still have high conviction, on the expectation that either my hit rate will improve over time or I was wrong and will therefore learn something about my own biases.

As I have a professional reputation to protect, going forward I am less likely to rate a fund that I haven't first given serious consideration or put through my own due diligence. After all, I am effectively underwriting that manager with my own reputation and should therefore have a good rationale for doing so. Indeed, other raters would presume this to be the case, as I would of them. Buying with more conviction may concentrate my own ratings coverage, but with 500+ raters already active on the platform, the community already has both huge scale and reach, far beyond any other rating platform.

SA includes in its event feedback forms a clear statement that the 'ratings will be used in order to rank the raters', with an option for raters to opt out if they wish. This is reasonable, since funds at awards and events are typically put forward either by past-performance screening or by sponsoring asset managers. Remember, to rate a fund is to share your rating with the rest of the SA community, irrespective of the source. If a rater has no view, then it is better to abstain than to give a cursory rating when prompted. Raters need to retain this discretion.

SA has already noted that new raters did not simply rate all the funds highly at events just because they were chosen by the sponsors, nor was there an obvious negative bias present. This is a good sign as the community continues to grow. Some raters at events tended to rate only some, or none, of the funds while still completing the feedback form. This creates powerful data for SharingAlpha, asset managers and event organisers. It suggests that many raters do understand the implications of their rating; where they feel they don't have enough background on a fund, they simply defer giving a rating. This approach is fairest to raters, event organisers and fund managers, as it cuts down on the probability of false ratings. Going forward, SA also plans to compare the average hit scores of ratings made at events versus other ratings and check whether there is any anomalous difference.

SharingAlpha allows raters to place a rating in an inactive mode. Ratings in inactive mode have no effect on the rater's hit score while inactive; however, when a rating is moved from active to inactive (just like selling a holding in real life), the positive or negative results achieved during the active period still contribute to the hit score. The contribution is calculated using a time-weighted average (based on the holding period) and, as time goes by, its effect on the overall hit score diminishes.

Therefore, when confronted by large lists of new funds coming from awards or events, approach them as you would any fund you rate day to day. Consider that, as raters, we cannot escape our own survivorship bias. Active funds will still contribute to the hit score, their weighted effect gradually reducing over time, just as it would in a portfolio, even if the rating is later changed to inactive status.
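The way a closed-out rating's influence fades can be sketched as follows. SharingAlpha's exact time-weighting is not published; this sketch assumes a simple scheme, invented here for illustration, where each rating is weighted by its holding period relative to the rater's total rated time, so an old miss shrinks as an active rating's holding period grows.

```python
# Hypothetical sketch of a time-weighted hit score with inactive ratings.
# The weighting scheme (holding period / total time) is an assumption,
# not SharingAlpha's published method.

def weighted_hit_score(ratings, now):
    """ratings: list of dicts with 'accuracy', 'start' and 'end'
    (end is None while the rating is still active). Times are in
    arbitrary units, e.g. months."""
    total, weight_sum = 0.0, 0.0
    for r in ratings:
        end = r["end"] if r["end"] is not None else now
        holding = end - r["start"]          # holding period of this rating
        total += r["accuracy"] * holding
        weight_sum += holding
    return total / weight_sum

# An inactive rating keeps contributing, but as time passes and the
# active rating's holding period grows, the old miss carries less weight.
ratings = [
    {"accuracy": 0.3, "start": 0, "end": 12},    # poor call, made inactive after a year
    {"accuracy": 0.8, "start": 12, "end": None}, # strong rating, still active
]
print(weighted_hit_score(ratings, now=24))  # 0.55
print(weighted_hit_score(ratings, now=48))  # 0.675 -- the early miss fades
```

Under these assumptions the early mistake never disappears from the record, it just carries a diminishing share of the average, which matches the behaviour described above.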

This is good, because it forces me (and you) to take my rated list much more seriously than before. SharingAlpha, then, is not a game; it is the next step-change in fund research. It has a great user interface, but good functionality is not gamification, and fund research is too serious for that. Biases, however, are unavoidable. So be rigorous, transparent, logical, lateral, investigative, cognisant, collegiate, creative and curious. It is these traits that make for insightful fund ratings. Buy with conviction, buy what represents value, hold to avoid herding, and sell when the thesis collapses.