Brexit and the rise of Donald Trump are both high-impact anti-establishment events that have shattered powerful ambitions and squandered fortunes. Dazed and confused, the establishment is looking for scapegoats, and some of its wrath is currently focused on prediction markets.
In two sternly worded post-Brexit articles, pundits at The Economist (1) and the Financial Times (2) have accused prediction markets of having « spectacularly failed » and having been « wildly wrong » about both Trump and Brexit. Coming from such highly regarded publications, this hurts.
However, as a long-time prediction market operator, I see no reason to feel « most embarrassed » as the FT’s John Authers suggests I should be. On the contrary, it seems to me that the articles’ authors are embarrassing themselves by revealing their own confusion about the empirical data, the rules of forecasting, and the meaning of probability.
Firstly, there is little basis to the assertion that prediction markets were wrong about Trump’s victorious march to the GOP nomination. As I have documented in a previous post, and as Figure 1 shows, Hypermind and the leading real-money prediction markets (Iowa Electronic Markets, PredictIt, and Betfair) had already anointed Trump the favorite before the first ballot was cast in Iowa. From then on, except for a short week between his Iowa stumble and his New Hampshire comeback in early February, he remained the favorite throughout the campaign until his last rivals finally quit in May. The markets “got” Trump.
What about Brexit? It is true that, on the eve of the vote, UK bookmakers and prediction markets all favored the losing “remain” outcome. Our own Hypermind gave it a probability of 75%. With the benefit of hindsight, the pundits judge that forecast harshly. They write that the polling data argued « strongly » against such confidence, and more specifically:
- that polls always suggested that the referendum was on a « knife edge », that just before the vote the polling average showed a « dead heat », and that our traders succumbed to the dreaded confirmation bias: placing more weight on recently released polls favouring « remain » than on the similar number of surveys backing “leave”.
- that our traders were « fooled by the trend in referendums in Scotland and Quebec for voters to move towards the status quo as voting approached. »
But the pundits have their facts wrong, and their criticism is misguided.
Yes, the polls were close, but there was a clear trend towards “remain” in the last few days. For instance, the final update of the FT’s own poll tracker, including all polls published up to June 22, just before the vote, showed a 2% advantage for “remain” over “leave”.
Furthermore, of the six polls published June 22, four favored “remain”. Of the twelve polls taken after Jo Cox’s murder on June 16, seven favored “remain” while only four favored “leave”, a dramatic trend reversal from the previous period when nine of the last twelve polls favored “leave”. Given this data, prediction traders could be forgiven for thinking that polls showed a small but firm lead for “remain” in the final stretch, where it counts most. Confirmation bias it was not.
The number of undecideds was still relatively high, but based on historical precedent in Scotland (2014) and Quebec (1995) these voters were expected to mostly choose the status-quo “remain” (Figure 2). Even TNS, the only firm whose polls consistently favored “leave”, pointedly noted just before the vote that this historical trend could swing the vote to “remain”. (3) Rather than being “fooled”, traders smartly followed the advice of Nobel Prize winner Daniel Kahneman to consider the “outside view” by taking into account not only the specifics of the Brexit referendum but also previous outcomes in similar consultations. This is simply a best practice in forecasting.
Another reason to believe “remain” had an advantage is that the voters themselves apparently expected it to prevail: an Opinium poll published just before the vote revealed that 46% predicted a “remain” victory, while just 27% expected “leave” to win. Recent research had shown that voter expectations usually forecast an election’s results better than voter intentions. (4)
Prediction traders are not just poll followers. The best of them factor in as much relevant information as they can find before making their most-informed guess about the residual uncertainty. In view of all the evidence available, the consensus probability for “leave” on the eve of the vote ended up around 25% on Hypermind (and a bit lower on other prediction markets and UK bookmakers).
With 20/20 hindsight, the pundits think that probability should have been a lot higher. Their implicit indictment is that because “leave” won, prediction markets should have given it a probability not only higher than 25% but also higher than “remain”, and that failing to do so is some sort of epic fail.
In the comments section to John Authers’ column, several astute readers tried to make the point that the fault lay perhaps not so much with the 25%-chance prediction as with an erroneous interpretation of that probability to mean that “leave” would not happen, hence the surprise. In fact, it had one chance in four, which in the absolute is hardly improbable. As a reader named Kraken elegantly put it: « It’s a feature, not a bug, of prediction markets that apparently unlikely events occur with some probability. »
This is exactly what I have argued in an earlier post on the lessons of Brexit. But if it seems obvious that one should not be astounded every time a 25%-chance outcome occurs, it apparently isn’t to the FT columnist. In his reply to Kraken, he insists that « it’s very unusual for prediction markets not to put more than a 50% chance on the winning outcome in a two-way chance. » It was, therefore « plainly a failure of prediction markets, and a very unusual one. »
But in fact, it isn’t unusual at all. The record shows that market predictions in general – and Hypermind’s in particular – are well calibrated, which means that the estimated probabilities accurately predict the proportion of events that actually occur. From the point of view of a well-calibrated market, a share of unexpected events has to happen, and a share of expected events has to fail to occur. The exact proportion depends on the probability assigned to each event. So it isn’t unusual for a well-calibrated prediction market like Hypermind to assign 25% probability to an event that eventually occurs. In fact, it happens about 25% of the time. The failure lies instead with those who wrongly extrapolate from 25% (unlikely) to 0% (not a chance).
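The point about calibration is easy to demonstrate with a simulation. The sketch below uses entirely hypothetical forecast data (not actual Hypermind numbers): it generates many events whose true chance of occurring matches the probability a well-calibrated market would assign them, then tallies, at each probability level, how often those events actually happened. Events forecast at 25% occur roughly a quarter of the time, exactly as calibration implies.

```python
import random

random.seed(42)

# Hypothetical forecasts from a perfectly calibrated market: each event is
# assigned one of a few probability levels, and then occurs with exactly
# that probability.
levels = [0.10, 0.25, 0.50, 0.75, 0.90]
forecasts = [random.choice(levels) for _ in range(10_000)]
outcomes = [1 if random.random() < p else 0 for p in forecasts]

# Calibration table: for each probability level, the observed frequency of
# occurrence should land close to the level itself.
for p in levels:
    hits = [o for f, o in zip(forecasts, outcomes) if f == p]
    rate = sum(hits) / len(hits)
    print(f"forecast {p:.0%}: occurred {rate:.1%} of the time ({len(hits)} events)")
```

Running this, the 25% bucket comes out near 25% occurrence, not 0%. Being surprised every time such an event happens means misreading "unlikely" as "impossible", which is the reader's error, not the market's.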
Or perhaps the pundit is really saying that prediction markets should always be expected to favor events that eventually happen, and never favor one that doesn’t? That would make them perfectly prescient, which is absurd. It is the Future we are talking about, and nobody but God himself (or herself) could be expected to be right every time. The best mere mortal markets can offer is fine probability calibration, and some solace in Margaret Thatcher’s immortal insight that « in politics, the unexpected happens. » (5)
(1) Polls versus prediction markets: Who said Brexit was a surprise? – by Dan Rosenheck, The Economist, June 24th, 2016
(2) Brexit shows no greater loser than political and market experts – by John Authers, Financial Times, July 1, 2016
(3) In this Daily Express article, dated June 22, Luke Taylor, Head of Social and Political Attitudes at TNS UK is quoted as saying: « Our latest poll suggests that Leave is in a stronger position than Remain but it should be noted that in the Scottish Independence Referendum and the 1995 Quebec Independence Referendum there was a late swing to the status quo and it is possible that the same will happen here. »
(4) Forecasting Elections: Voter Intentions versus Expectations – by David Rothschild & Justin Wolfers, 2013
(5) Thanks to FT reader PeterE for reminding us of this memorable quote.