Crowd Teaching: Curing Cognitive Dissonance in AI Fund selection programs?

We are seeing a move to automated, self-learning tools to support fund selection decisions. Tools like ‘Morningstar Q ratings’ are just the start. This early adoption of AI could in turn introduce new biases, behaviours and challenges to the wisdom of the crowd of fund buyers. How then best to manage these as a fund selector (using such programs), or indeed as the programmer designing and writing code for fund buyers to use?

To cut to the end: as with the fund buyers themselves, a collective-wisdom approach to self-learning programs can both counter human cognitive dissonance and help accelerate AI.

Okay, how? First let’s step back to understand what cognitive dissonance is. Here’s a 10-second history: Ray Kurzweil wrote on the cognitive dissonance between “intuitive linear” views of technological change (e.g. Moore’s Law) and the “exponential” rate of change. Dissonance can undermine the best of crowd wisdom; it lies at the very heart of the human condition. In psychology it is defined as holding inconsistent thoughts, beliefs or attitudes, especially those relating to behavioural decisions and attitudes to change. Human biases are numerous and can arise both in isolation and through groupthink.

Recognise then that dissonance is bad. It’s a by-product of groupthink as much as of individual biases. Recall that groupthink is the negative decision process linked to peer pressure and herding, the opposite of synergy (which is all unicorns, frickin’ rainbows and positive vibes).

Hod Lipson, a professor at Columbia, first suggested “machine teaching” as a solution to dissonance.

What is “machine teaching”? The simplest example is the self-learning that occurs between two autonomous but interacting programs that react to one another: they learn to adapt, or one program teaches the other.

E.g. imagine the first program is written with a quantitative bias, such as 3-year risk-adjusted performance: it wants to pick the managers that rank highest. A second program has a qualitative bias towards fund managers who have run the same fund for 5 years or more: it wants to select managers with longer tenures. The two programs may advocate different fund managers, and so would need to rank, cross-reference matches, list exceptions and continue to rank until a compromise on the final recommended buy list is found. It’s not dissimilar to a fund buyer with a quant bias versus one who believes in qualitative analysis. A twin screen has merit but is very limited in the depth and range of cognitive biases it addresses; still, you get the idea.
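The twin-screen idea above can be sketched in a few lines of Python. Everything here is hypothetical: the fund names, scores and tenure figures are made up for illustration, and the reconciliation rule (agreed names first, then the best exceptions) is just one simple way to reach the compromise the text describes.

```python
# Hypothetical toy data: each fund has a 3-year risk-adjusted score
# and a manager-tenure figure. All names and numbers are illustrative.
FUNDS = {
    "Alpha Fund":   {"risk_adj_3y": 1.40, "tenure_years": 2},
    "Beta Fund":    {"risk_adj_3y": 1.10, "tenure_years": 9},
    "Gamma Fund":   {"risk_adj_3y": 0.95, "tenure_years": 6},
    "Delta Fund":   {"risk_adj_3y": 1.25, "tenure_years": 7},
    "Epsilon Fund": {"risk_adj_3y": 0.80, "tenure_years": 1},
}

def quant_screen(funds):
    """Program 1: rank purely on 3-year risk-adjusted performance."""
    return sorted(funds, key=lambda f: -funds[f]["risk_adj_3y"])

def qual_screen(funds):
    """Program 2: keep only managers with 5+ years in the fund, then rank."""
    eligible = [f for f in funds if funds[f]["tenure_years"] >= 5]
    return sorted(eligible, key=lambda f: -funds[f]["tenure_years"])

def reconcile(list_a, list_b, shortlist_size=3):
    """Cross-reference the two rankings: matches first, exceptions logged."""
    matches = [f for f in list_a if f in list_b]
    exceptions = [f for f in list_a + list_b if f not in matches]
    # Compromise: fill the buy list with agreed names, then best exceptions.
    buy_list = (matches + exceptions)[:shortlist_size]
    return buy_list, matches, exceptions

quant = quant_screen(FUNDS)
qual = qual_screen(FUNDS)
buy_list, matches, exceptions = reconcile(quant, qual)
print("Quant ranking: ", quant)
print("Qual ranking:  ", qual)
print("Matches:       ", matches)
print("Final buy list:", buy_list)
```

Here the two programs disagree on their top pick, and the reconciliation favours funds both screens endorse, with the one-sided picks surfaced as exceptions for a human to review.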

Human biases can infiltrate fund screens, buy decisions, the media and also AI programs designed to automate some or all of the fund research process. That would be counter-productive. Such biases could be introduced by the programmer or emerge as the consequence of unseen latent biases held in different programs. This is all uncertain, and so the question turns to how to prevent or manage dissonance as it arises.

The solution? ‘The Wisdom of Crowds: Why the Many Are Smarter Than the Few’ by James Surowiecki suggests capturing the ‘collective’ wisdom to solve cognitive problems. Can it be applied to an AI network as well as a human one?

Four conditions apply for AI to synthesise crowd teaching: (a) cognitive diversity; (b) independence of opinion; (c) decentralisation of experience; (d) suitable mechanisms of aggregation. It is on the last condition that data storage offers huge scalability; the first three pose challenges for machine teaching. Surowiecki asserts that collective wisdom can tackle three kinds of problems, and that complexity is itself no barrier:

Cognition problems: these arise when AI can only guess the answer, e.g. the number of beans in a jelly-bean jar, or how to predict the future.

Coordination problems: how AIs coordinate behaviour with one another, e.g. self-driving cars knowing that other AIs are trying to achieve a shared aim: driving safely from A to B.

Cooperation problems: how do we get self-interested, individual programs to work together, even when narrow self-interest would seem to dictate that no individual should take part, as in politics? Such self-interest can filter into AI.
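The jelly-bean jar is the classic cognition problem, and it is easy to simulate why Surowiecki's four conditions matter. The sketch below is a hypothetical simulation, not real data: 500 independent agents (programs or fund buyers), each with its own bias and noise (standing in for diversity and independence), guess the bean count, and a simple aggregation mechanism, the median, is compared against the typical individual.

```python
import random
import statistics

random.seed(42)  # reproducible illustration

# Hypothetical simulation: 500 diverse, independent agents each guess
# the number of beans in a jar. Each agent has its own systematic bias
# plus idiosyncratic noise.
TRUE_BEANS = 1_000
guesses = []
for _ in range(500):
    bias = random.uniform(-0.3, 0.3)   # this agent's systematic bias
    noise = random.gauss(0, 0.15)      # plus one-off noise on this guess
    guesses.append(TRUE_BEANS * (1 + bias + noise))

# The aggregation mechanism: here, simply the median of all guesses.
crowd_estimate = statistics.median(guesses)
crowd_error = abs(crowd_estimate - TRUE_BEANS)

# Compare with the error of the typical individual agent.
individual_errors = [abs(g - TRUE_BEANS) for g in guesses]
typical_individual_error = statistics.median(individual_errors)

print(f"Crowd error:              {crowd_error:.0f} beans")
print(f"Typical individual error: {typical_individual_error:.0f} beans")
```

Because the biases are diverse and independent, they largely cancel in aggregate, so the crowd's error comes out far smaller than the typical individual's. Make the agents share one bias (break independence) and that advantage disappears, which is exactly the dissonance risk for a network of similarly-programmed fund-selection tools.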

Using SharingAlpha as a means to mitigate your own cognitive dissonance and your peers’, as much as to track your hit rates and rankings, offers huge potential to advisers and the wider fund-buyer community. Similarly, for AI programmers designing new fund-buyer tools: consider how you will manage cognitive dissonance in the same way. Your stand-alone AI program is unlikely to be immune. Whichever way you look at it, the future lies in the crowd.