SharingAlpha - The Mechanics of Volatility, Factoring the Nuts and Bolts of Risk

The problem with statistical fund analysis is that it is full of jargon, tricky to explain and deeply inaccessible to investors. Over 20 years I have known advisers and CFAs alike who do not understand basics like standard deviation (the most common measure of volatility). Like many I am no mathematician, and a pretty poor statistician if truth be told. I need to understand concepts by how they work in the real world, the mechanics, not what the formula expects. I therefore ask you to bear with me as we get our hands a little oily in jargon land.

In the U.K., the IA's new volatility-based sectors are the latest instalment of the industry's circular fascination with standard deviation based approaches, which still underpin most mainstream fund ratings today. Volatility even underpins ESMA's own Synthetic Risk and Reward Indicator (SRRI), using 260 weeks of buffered data. Today there is a bewildering array of Sharpe, Stutzer, MRAR, Consistency of Return, Distribution Technology, Citywire Discovery, FE Alpha or Risk scores; most simply portray risk as a function of past variance of returns (with varying mathematical functions, bells and whistles). Expressed most obtusely, they are founded around the CAPM (Capital Asset Pricing Model), a keystone of modern finance. According to this theory-based model, the return required on an investment (and its expected return in an efficient market) is a positive function of an overall risk factor: the market beta. Academics have been critical of most models and point to low persistency rates. The industry needs to decide quickly whether ratings and awards are simply a pat on the back (a media bonus) or a serious guide for investors. Being unclear risks investors assuming the latter.

The foundations of most quantitative approaches are also entrenched in long held actuarial assumptions: that of Geometric Brownian Motion (GBM), whereby future returns are a function of the current price, the previous price and the expected movement between prices. Over the years such assumptions have come under challenge, most notably by Robert Engle's Autoregressive Conditional Heteroskedasticity ('ARCH') model, later generalised by Tim Bollerslev into GARCH, which introduced the killer word 'heteroskedasticity' into the debate. In other words, variance that changes over time: volatility is itself volatile and tends to cluster. In 2001, Engle's paper 'The Use of ARCH/GARCH Models in Applied Econometrics' noted: "Financial decisions are generally based upon the tradeoff between risk and return; the econometric analysis of risk is therefore an integral part of asset pricing, portfolio optimization, option pricing and risk management."
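A minimal sketch may make 'heteroskedasticity' less forbidding. In the GARCH(1,1) recursion, today's variance estimate is a weighted blend of a long-run floor, yesterday's squared return and yesterday's variance; the parameters below are illustrative, not fitted to any real series:

```python
def garch11_variance_path(returns, omega=1e-6, alpha=0.1, beta=0.85):
    """Conditional variance recursion: sigma2[t] = omega + alpha*r[t-1]**2 + beta*sigma2[t-1]."""
    sigma2 = omega / (1 - alpha - beta)  # start at the unconditional (long-run) variance
    path = [sigma2]
    for r in returns[:-1]:
        sigma2 = omega + alpha * r ** 2 + beta * sigma2
        path.append(sigma2)
    return path

# A single 5% shock raises the model's variance estimate, which then decays slowly:
# the clustering behaviour a flat standard deviation simply cannot see.
path = garch11_variance_path([0.0, 0.05, 0.0, 0.0])
```

The point is not the algebra but the behaviour: risk is modelled as conditional on recent events, rather than as one constant number.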

Was adoption the issue going into 2008? Whilst widely adopted by quantitative strategies and trading systems, Engle's GARCH was largely ignored by the asset management industry, certainly by its marketing. That failure has created an information arbitrage between useful data and what is peddled to investors, endorsed by MiFID. The basic assumption of volatility was even carried into risk management through Value at Risk (VaR). Until the latter half of 2007, the VaR model was felt fit for purpose. The subsequent '1 in 113 year' event (say the actuaries) quickly dispelled the robustness of VaR, volatility and traditional risk ratings. More recently we also saw the big challenge for single factor indices: when the 'anomaly' becomes crowded-out, priced-up, then collapses. The 'low vol anomaly' short in 2016 was another great example. It questioned the very essence of allocation, diversification and risk assignation through volatility or Beta. The efficacy of volatility targeting funds, whilst fashionable, is widely disputed. I often say 'you can't eat volatility', a lesson many a Risk-Parity fund learned. This is not strictly true, however; you can buy (or sell) volatility-based derivatives, but few volatility-based funds actually hold much in the way of VIX contracts, options or variance swaps.
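The parametric VaR criticised above is equally simple mechanics, and its fragility sits in a single assumption: normally distributed returns. A hypothetical one-period sketch (the portfolio size and volatility are illustrative):

```python
def parametric_var(portfolio_value, mu, sigma, z=1.645):
    """One-period Value at Risk at 95% confidence (z = 1.645) under a
    normal-returns assumption; returns the loss as a positive number."""
    return portfolio_value * (z * sigma - mu)

# Illustrative: a 1m portfolio, 2% weekly volatility, zero expected return
loss = parametric_var(1_000_000, mu=0.0, sigma=0.02)  # about 32,900
```

The model says losses beyond that figure occur about one week in twenty; fat-tailed markets, as 2008 showed, are not so obliging.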

There are some good exceptions, like Allianz's offering or the innovative fund of structured products at boutique Atlantic. You can also buy funds that seek to capture volatility through range-bound or break-out strategies; BlackRock's Andy Warwick, Newton's Aron Pataki or Insight's Steve Waddington are good examples.

No, the biggest problem with the 2008 credit crisis was not necessarily the magnitude of the dislocation itself (which felt traumatic enough and ultimately saw myself and many others made redundant) but that its severity has been quickly consigned to the history books as an abnormality rather than what it really was: a Minsky Moment, if you will, the result of a very repeatable scenario of over-leverage. Yet it was in the eye of the credit crisis storm that the European joint regulatory committee itself criticised the reliance on 95% confidence, normal distribution Value at Risk controls. This did not stop ESMA introducing the same inherently flawed SRRI a few years later as part of UCITS IV. GARCH too still relies on past data rather than considering current sensitivity to economic factors. If we can agree that volatility is fragile as a proxy for risk, then the modern fund buyer has a few other tools available to them:

  1. Assign Bayesian probabilities to asset class expectations and volatilities, then run them through Monte Carlo simulations. Here the buyer sets initial probabilities, then tests, updates and repeats. It is an iterative process, but one that gives buyers little confidence at the outset.

  2. Approaching the question differently, through the wisdom of the crowd, might lead us to another outcome: a robust second opinion, one less driven by past returns than by the collective view of other fund buyers, such as on SharingAlpha. A fund's risk is then implied where its aggregate ratings are low.

  3. Apply a factor-based approach, both in terms of selection and when constructing portfolios. A factor-based approach is one that measures a fund's risk through its sensitivity to different economic premia. It can trace its roots back to the simple Fama and French 3-factor model of 1993. Factor analysis has evolved considerably and includes tools from Style Research and Risk-Lab, offering factor analysis of funds based on: Beta, Market, Size, Style, Momentum, Macro-factor, Default Spread, Term Spread, Interest Rates, Dividend Yield, Inflation, USD Trade Weight Index and Volatility Risk Premium.

  4. Greek analysis applying other drivers around time decay (Theta), interest rate risk (Rho) and market sensitivity (Delta). Such analysis is popular among hedge fund analysts and arguably works well for funds using options and futures. It can describe a hedge fund by the aggregate of its positions beyond simply volatility (Vega).

  5. Analysing volatility indices like the CBOE VIX to benchmark changes in cross-sector volatility, alongside correlation analysis. This approach is itself fraught with challenges, given such indices are also driven by derivative markets.

  6. Use complexity analysis to identify points of risk. Complexity analysis assumes that the more complexity exists, the more fragile the system, and hence can infer rising risk within a fund. Ontonix, for example, has developed Universal Ratings, though the approach requires a step-change in thinking.

  7. Focussing on scenario, drawdown and stress test analysis that moves away from standard time period measures and observes how funds behave in different real and hypothetical conditions.

  8. Behavioural and technical approaches are widely used that seek to identify shifts in trading behaviour. Such approaches could potentially be automated into AI programs.

  9. Liquidity based analysis is becoming more important as funds grow bigger in size; analysis of trading costs, slippage costs, margin management, ladders, Liquidity at Risk (LaR), probabilities and scenarios are all useful.

  10. Absolute return analysis that deals in monetary amounts (not percentages) against an investor's expected aversion to capital loss is an increasingly common approach in wealth management that eschews most performance analysis. It starts from a contractual approach and is very specific to the investor.
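Option 1 above can be sketched in a few lines: the buyer's prior for an asset class is pushed through a Monte Carlo loop and the full distribution of outcomes, not a single volatility number, is examined. The mean and volatility below are illustrative assumptions, not recommendations:

```python
import random

def monte_carlo_terminal_wealth(mu, sigma, years=10, paths=10_000, seed=42):
    """Simulate terminal wealth of 1 unit invested, drawing annual
    returns from the buyer's prior Normal(mu, sigma)."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(paths):
        wealth = 1.0
        for _ in range(years):
            wealth *= 1.0 + rng.gauss(mu, sigma)
        outcomes.append(wealth)
    return outcomes

# Probability of ending below the starting capital after 10 years:
outcomes = monte_carlo_terminal_wealth(mu=0.05, sigma=0.15)
prob_loss = sum(w < 1.0 for w in outcomes) / len(outcomes)
```

The Bayesian step is the iteration the text describes: as evidence arrives, mu and sigma are revised and the simulation re-run.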

Of the above options, Factor Analysis appears the most conducive to changing how we describe risk. Sometimes described as risk premia, 'Factors' can be considered the economic building blocks that underpin asset markets. If not identified and captured, they can go unseen; however, they are powerful indicators that help explain the causes of market movements, volatility, bubbles and potential corrections. A factor-based approach is one that measures a fund's risk through sensitivity to different economic premia.

Beta Factors: Beta factors arise from the movements, valuations and trading behaviours of equity, credit, commodity or other markets. They typically follow short- to medium-term cycles, lasting 6 to 12 months, as markets grow, peak, contract and recover. Beta factors help fund managers understand their positioning in the business cycle and whether it is moving in or out of favour; they therefore describe whether a fund's market positioning is generally supported by the market or faces headwinds. In short, Beta factors describe the behaviour of investors.

Macro Factors: Macro factors help describe changes in the wider economy, capturing monetary policy, currency, borrowing costs, inflation and volatility. They tend to change as the result of shifts in economic policy and whether economies are growing (expanding) or in recession (contracting). Macro factors typically follow medium- to long-term cycles, often lasting from 12 months to 4 years or more. They help managers understand whether the economy is supportive or obstructive to their strategy, and can inform fund managers when to add or remove risk, or to add or reduce leveraged assets. In short, Macro factors describe the behaviour of governments and of whole economies.
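In the single-factor case, the 'sensitivity' these descriptions refer to reduces to a regression beta: the covariance of fund and factor returns divided by the factor's variance. The return series below are placeholders, not real data:

```python
def factor_beta(fund_returns, factor_returns):
    """Single-factor sensitivity: beta = cov(fund, factor) / var(factor)."""
    n = len(fund_returns)
    mean_f = sum(fund_returns) / n
    mean_x = sum(factor_returns) / n
    cov = sum((f - mean_f) * (x - mean_x)
              for f, x in zip(fund_returns, factor_returns)) / (n - 1)
    var = sum((x - mean_x) ** 2 for x in factor_returns) / (n - 1)
    return cov / var

# A fund moving twice as hard as its factor has a beta of 2:
beta = factor_beta([0.02, -0.04, 0.06, 0.00], [0.01, -0.02, 0.03, 0.00])
```

A full multi-factor model simply repeats this as a multiple regression across each premium, giving the fund a profile of sensitivities rather than one volatility figure.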

The reality is that many institutional buyers, platforms and distributors today are still managing large lists of funds through basic spreadsheets, volatility-driven reporting and conventional performance and attribution tools. Where asset managers continue to market funds based on volatility, or fund buyers choose funds on past/expected volatility 'swim lanes', then we are in danger of again underestimating risk holistically and concentrating assets in the wrong places. The continuing volatility obsession of asset managers, trade bodies and regulators has driven inconsistent outcomes. Fund buyers ideally need to see risks before they translate into volatility. At a time when the risks underlying large free-float indices are not easily understood, the solutions can be found in innovative factor, behavioural and risk software that recognises:

• It's becoming increasingly time consuming and difficult to quickly and easily identify the key characteristics of investment funds.

• Today, the influence of macro-economics is undeniably important and investors want to know how their funds are impacted.

• Traditional optimisation and attribution have become increasingly outdated by non-conventional market cycles, the creation of new risk premia, and the growth in Alternative strategies.

JB Beckett, FTSE100 Fund buyer, Founder and Author of New Fund Order #newfundorder. SA Advisory Board Member. UK Director for the Association of Professional Fund Investors.