[A quick tutorial for lay readers. ‘Smart beta’ is the idea that there are recurring patterns in markets, or ‘factors’, that professionals exploit to gain an advantage: for example value, growth or momentum. By automating the application of these factors, passive investors, perhaps via an ETF, can obtain some of the same advantage. Just let machines do the hard work and voilà! Easy street. That, at least, is the theory.]
New investment techniques are often neatly summarized by old homilies. In that vein I could sum up the conclusion of the paper highlighted today as follows: when something looks too good to be true, you know what?
Shiyang Huang, Yang Song and Hong Xiang, variously of the Universities of Hong Kong and Washington, wanted to see whether ETFs managed with reference to ‘smart-beta’ strategies worked not just on paper but in the real world. And, you know what? As the researchers put it: “The claimed ‘smart’ performance of smart beta seems to be a mirage that only exists in backtests.”
Smart-beta strategies, in other words and in the real world, are actually pants* (*British slang: rubbish), i.e. they don’t work.
The first smart-beta ETF was launched in 2000, and since then this variety of ETF has proven so popular that such funds now account for around 20% of all ETF assets under management. That the strategy doesn’t work is therefore no trivial observation.
The folk who create smart-beta ETFs usually [Always? Ed.] present back-tested examples at launch, demonstrating the magic of their proprietary secret sauce at work, in theory and in the past. The paper shows that, on average, this ‘magic’ amounts to an annual advantage of +2.77% (too good to be true in itself).
However, in the real world, after these funds are launched that advantage evaporates: what investors have actually been getting is a −0.44% annual DISADVANTAGE from buying these types of ETF.
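To see what that gap between promise and delivery means in pounds and pence, here is an illustrative sketch (my own arithmetic, not from the paper): compounding the reported average backtest edge of +2.77% per year and the realized shortfall of −0.44% per year, each relative to a benchmark, over a hypothetical 10-year horizon on a $10,000 stake.

```python
# Illustrative only: compound the two headline figures from the paper
# (+2.77%/yr claimed in backtests, -0.44%/yr actually delivered) relative
# to a benchmark. The 10-year horizon and $10,000 stake are assumptions
# chosen for illustration, not figures from the paper.
stake = 10_000.0
years = 10

backtest_edge = 0.0277   # average annual advantage claimed pre-launch
realized_edge = -0.0044  # average annual disadvantage delivered post-launch

# Value of the stake relative to the benchmark under each scenario
backtest_value = stake * (1 + backtest_edge) ** years
realized_value = stake * (1 + realized_edge) ** years

print(f"Implied by backtests (vs benchmark): ${backtest_value:,.0f}")
print(f"Actually delivered (vs benchmark):   ${realized_value:,.0f}")
```

Roughly $13,100 versus $9,600: the ‘magic’ backtest would have you about a third ahead of the benchmark after a decade, while the realized record leaves you behind it.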
As if that weren’t bad enough: if this huge mismatch between the intimated returns and what is delivered were due to market timing, or to a deterioration in factor profitability (perhaps, as more investors pile in, some factors work less well?), we could have some sympathy. But it isn’t.
The reason for the discrepancy between before and after results, proven in this paper beyond reasonable doubt, is the shameless data-mining that goes on in the back-testing process (the researchers decorously refer to this as ‘data-overexploitation’).
To put it bluntly, historical ‘on-paper’ ‘results’ are created to persuade investors to buy products; but managers can’t deliver this performance in future (and worse, by now they must know it) because the on-paper results have been, de facto, fabricated.
Consider yourself warned if you’ve bought these things in the past or are thinking of acquiring them in future. More than most investment products, smart-beta ETFs don’t do what it says on the tin.
You can access the paper in full via the following link: The Smart Beta Mirage.
Happy Sunday.