Nassim Taleb recently posted a mathematical draft on refining election forecasting to his Twitter.

The math isn’t super important for seeing why it’s so cool. His model seems to be that we should try to forecast the election outcome, including uncertainty between now and the end date, rather than build a forecast that takes current poll numbers and implicitly assumes nothing changes.

The mechanism of his model focuses on forming an unbiased time series, formulated with stochastic-calculus methods. The current mainstream approaches, by contrast, are multilevel Bayesian models that estimate how the election would turn out if it were held today.
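
To make the distinction concrete, here is a toy sketch of the two philosophies (my own illustration with made-up volatility numbers, not Taleb's actual construction). A "snapshot" forecast only has to contend with sampling error; a forecast of the end-date outcome also has to integrate over however much opinion can drift before election day:

```python
import math

def norm_cdf(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def forecast_if_held_today(margin, polling_sd):
    """P(candidate ahead) if the election were held today:
    only polling/sampling error matters."""
    return norm_cdf(margin / polling_sd)

def forecast_of_final_outcome(margin, polling_sd, daily_vol, days_left):
    """P(candidate wins on election day), assuming the true margin drifts
    as a random walk, so total uncertainty grows with the time remaining."""
    total_sd = math.sqrt(polling_sd**2 + daily_vol**2 * days_left)
    return norm_cdf(margin / total_sd)

# A 3-point lead looks very different 300 days out vs. on election eve:
print(forecast_if_held_today(3, 2))               # ~0.93
print(forecast_of_final_outcome(3, 2, 0.5, 300))  # ~0.63 -- pulled toward 50-50
print(forecast_of_final_outcome(3, 2, 0.5, 1))    # ~0.93
```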
 
That approach seems to make more sense. While it’s safe to assume a candidate will always want the highest chance of winning, the process by which two candidates interact is highly dynamic and strategic with respect to the election date.

When you stop to think about it, it’s actually remarkable that elections are so incredibly close to 50-50, with a 3-5 point victory generally counting as immense. This closeness reflects an underlying dynamic of political game theory.

(At the more local level this isn’t always true, because of frictions such as incumbency advantage, local party dominance, and strategic funding choices. The point, though, is that when those frictions are ameliorated by the importance of the presidency, the equilibrium tends toward elections very close to 50-50.)

So, back to the mechanism of the model: Taleb borrows a no-arbitrage condition from options pricing to enforce time-varying consistency on the forecast (and hence on its Brier score). The concept is similar to financial options, where you can go bankrupt or make money even before the final event. In Taleb's world, if someone like Nate Silver publishes forecasts that vary wildly over time before the election, it suggests he hasn't put any time-dynamic constraints on his model.
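
To illustrate the no-arbitrage idea (again a toy simulation of my own, not Taleb's derivation): a calibrated forecast series is a martingale, so a bettor trading against the published probabilities cannot profit in expectation. But if the forecaster systematically exaggerates away from 50-50, a mindless "bet toward 50-50 and hold to settlement" rule takes his money:

```python
import math, random

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bet_against_forecaster(steps=100, overconfidence=1.0):
    """A latent opinion margin follows a random walk; the honest forecast of
    'margin > 0 on election day' is norm_cdf(x / sqrt(steps remaining)).
    `overconfidence` > 1 exaggerates the published quote away from 50-50."""
    x = 0.0
    half = steps // 2
    for _ in range(half):                # the race up to its midpoint
        x += random.gauss(0.0, 1.0)
    quote = norm_cdf(overconfidence * x / math.sqrt(steps - half))
    for _ in range(steps - half):        # the rest of the race
        x += random.gauss(0.0, 1.0)
    outcome = 1.0 if x > 0 else 0.0
    side = 1.0 if quote < 0.5 else -1.0  # always bet toward 50-50
    return side * (outcome - quote)      # profit on a contract settling at 0/1

random.seed(0)
for k in (1.0, 1.5):
    pnl = sum(bet_against_forecaster(overconfidence=k) for _ in range(20000)) / 20000
    print(f"overconfidence {k}: mean profit per bet {pnl:+.3f}")
# Calibrated quotes (k=1.0) net ~0; exaggerated quotes (k=1.5) are free money.
```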

The math rests on the assumption that with high uncertainty, far out from the election, the best forecast is 50-50. That assumption would have to be tested empirically. Still, stepping aside from the math, it does feel intuitive that an election forecast with high variation a year away from the event is not worth relying on, and that sticking closer to 50-50 would offer a better full-sample Brier score.
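
For the flavor of why high uncertainty pushes toward 50-50 (the same toy diffusion as in the sketch above, not Taleb's exact formula): if the margin X_t follows a driftless Brownian motion with volatility sigma, then the calibrated forecast and its high-uncertainty limit are

```latex
F_t = \Pr(X_T > 0 \mid X_t) = \Phi\!\left(\frac{X_t}{\sigma\sqrt{T-t}}\right),
\qquad
\lim_{\sigma\sqrt{T-t}\,\to\,\infty} F_t = \Phi(0) = \tfrac{1}{2},
```

so either a long horizon or high volatility drives the best forecast toward a coin flip.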


I'm not familiar enough with the practical modelling to say whether this is feasible. Sometimes the ideal model is too hard to estimate.

I'm interested in hearing any thoughts on this from people who are familiar with forecasting or have an interest in the modelling behind it.

I also have a specific question, to tie this back to a rationality-based framework: When you read Silver (or your preferred reputable election forecaster, I like Andrew Gelman) post their forecasts prior to the election, do you accept them as equal to or better than any estimate you could come up with? Or do you make a mental adjustment or discount based on some factor you think they've left out? Whether it's prediction-market divergence, or adjustments based on perceived changes in nationalism or politician-specific skills. (E.g. Scott Adams claimed to be able to predict that Trump would persuade everyone to vote for him. While it's tempting to write him off as a pundit charlatan, or to claim he doesn't have sufficient proof, we also can't prove his model was wrong.) I'm interested in learning the reasons we may disagree with, or be reasonably skeptical of, the polls, knowing of course that any such reason must be tested to know the true answer.

This is my first LW discussion post -- open to feedback on how it could be improved.


The math isn’t super important for seeing why it’s so cool. His model seems to be that we should try to forecast the election outcome, including uncertainty between now and the end date, rather than build a forecast that takes current poll numbers and implicitly assumes nothing changes.

I don't think the Markov models that Silver uses assume that nothing changes. Two of his three models assumed change.

When you read Silver (or your preferred reputable election forecaster, I like Andrew Gelman) post their forecasts prior to the election, do you accept them as equal to or better than any estimate you could come up with?

(From memory:) A year ago I think I gave Trump 40% on GJOpen, conditional on him winning the Republican nomination. I think two months ago I moved from numbers that were over the GJOpen average to

Scott Adams claimed to be able to predict that Trump would persuade everyone to vote for him. While it's tempting to write him off as a pundit charlatan, or to claim he doesn't have sufficient proof, we also can't prove his model was wrong

Scott Adams predicted a Trump landslide. That scenario didn't happen; Trump lost the popular vote. He wasn't elected because he convinced the majority of the population, but because of Electoral College math.

[-]knb00

Adams also frequently hedged his bets and even changed his prediction once the odds for Trump appeared too long to overcome. This is pretty much what you would expect from a charlatan.

Updating on changed evidence is not the mark of a charlatan but the behavior of a good forecaster.

[-]knb20

I agree, but that isn't what Adams did. Adams first claimed Trump was a master persuader who was virtually certain to win. When Trump was way down in the polls with only weeks left, Adams then switched to predicting a Clinton win, using the Trump controversy du jour as a rationale.

Updating on the evidence would have involved conceding that Trump isn't actually an expert persuader (or conceding that persuasion skills don't actually carry that much weight). In other words, he would have had to admit he was wrong. Instead, he acted like the Trump controversy of the time was something completely shocking and that was the only reason Trump was going to lose.

I want to be careful in how I talk about Adams. He definitely didn't follow the guidelines of methodical forecasting, such as assigning clear numerical probabilities and tracking a Brier (or other chosen) scoring rule.
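
(For readers unfamiliar with it, the Brier score is just the mean squared error of probability forecasts against 0/1 outcomes, lower being better. A minimal sketch with made-up numbers:)

```python
def brier_score(forecasts, outcomes):
    """Mean squared error of probability forecasts vs. 0/1 outcomes.
    0 is perfect; an always-50-50 forecaster scores 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

print(brier_score([0.9, 0.8], [1, 1]))  # 0.025: confident and right
print(brier_score([0.5, 0.5], [1, 1]))  # 0.25:  maximally hedged
print(brier_score([0.9, 0.8], [0, 0]))  # 0.725: confident and wrong
```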

As a result I see two main schools of thought on Adams: the first holds that he's a forecasting oracle; the second, that he's a total charlatan (as far as I can tell the latter is the rationalist viewpoint; I know SSC took this view).

I think the rationalist viewpoint is close to right. If we take the set of all semi-famous people who did or could have speculated on the election (including Adams), and imagine (we don't have the data) that we had tracked all their predictions, knowing that after the fact we would forget everyone who was wrong, Adams doesn't look significantly better than chance.

But if Adams (or an abstracted version of Adams's argument) were correct, it would be because, unlike current polling methods, his approach allows really high-dimensional data to be embedded in the forecast. For now, humans seem much better than computers at getting a 'feel' for a movement, because doing so requires using vast amounts of unrelated, unstructured data, which we specifically evolved to do (I know we don't have great experiments to determine what we did or didn't specifically evolve for, so ignore this point if you want).

So, to that extent, current purely model-based election forecasts are at risk of a severe form of omitted-variable bias.

As an example, while the polls are fairly stable, Marine Le Pen is currently at a huge disadvantage: "According to a BVA poll carried out between Oct. 14 and Oct. 19, Le Pen would win between 25 percent and 29 percent of the vote in next April’s first round. If she faces Bordeaux mayor Alain Juppe -- the favorite to win the Republicans primary -- she’d lose the May 7 run-off by more than 30 percentage points. If it’s former President Nicolas Sarkozy, the margin would be 12 points."*

And yet PredictIt.org has her at ~40%. There is strong prior information from Brexit/Trump that seems important but is absent from the polls. It's almost as if we are predicting how people will change their minds when exposed to a 'treatment effect' of right-wing nationalism.

*http://www.bloomberg.com/news/articles/2016-11-16/french-pollsters-spooked-by-trump-but-still-don-t-see-le-pen-win

So then, to tie this back to the original post: if you have stronger prior information, such as a strong reason to believe races will end up near 50-50, non-uniform priors, or evidence that omitted-variable bias exists, it would make sense to impose structure on the time variation of the forecast. I think this set of reasons is why it feels wrong when we see predictions varying so much far away from an election.
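
A minimal version of what "imposing structure" could look like (my own sketch; the half-life knob is invented for illustration, not taken from Taleb's paper): shrink the poll-implied probability toward a 50-50 prior, weighting the prior more heavily the farther you are from election day:

```python
def shrunk_forecast(poll_prob, days_left, halflife=90):
    """Blend a poll-implied win probability with a 50-50 structural prior.
    `halflife` (in days) controls how quickly the polls earn their weight."""
    w = 0.5 ** (days_left / halflife)  # weight on polls: 1 on election day, ~0 far out
    return w * poll_prob + (1 - w) * 0.5

print(shrunk_forecast(0.85, days_left=300))  # ~0.53 -- barely moves off 50-50
print(shrunk_forecast(0.85, days_left=30))   # ~0.78
print(shrunk_forecast(0.85, days_left=0))    # 0.85 -- trust the polls fully
```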

Don't read too much into small bets.

PredictIt puts Le Pen at 40% (now down to 34%), but the much larger Betfair puts her at 22%. Generally you should quote Betfair because it is larger and doesn't limit individuals. The only advantage of PredictIt is that it is open to Americans, but that is probably only relevant for American elections.

Even Betfair's prices represent only a million dollars' worth of betting. $20k of betting after the American election moved Le Pen up to 40%. I don't know how long it took to correct that, but it was clearly faster on Betfair than on PredictIt. (And I don't know whether the market changed its mind or incorporated the new information of the center-right primary.)

Thanks for the insight on the difference between PredictIt and Betfair -- I wasn't aware of this liquidity difference. Although, so long as there is a reasonable amount of liquidity on PredictIt, it's very strange that the two are not in equilibrium. Do you know if there are any open theories as to why this is?

One thing I notice is that a lot of commenters on PredictIt are alt-right/NRx. It seems unlikely, but I wonder if different ideological priors are pushing different prediction markets away from a common equilibrium probability.

Maybe there isn't a reasonable amount of liquidity on PredictIt. It is now down to 22%, from 34% when I wrote my comment maybe an hour ago.

PredictIt has a time series, but with only daily updates. Betfair has a detailed chart, but without labels on the time axis.

It's just masturbation with math notation.

We have the election estimate F a function of a state variable W, a Wiener process WLOG

That doesn't look like a reasonable starting point to me.

Going back to the OP...

the process by which two candidates interact is highly dynamic and strategic with respect to the election date

Sure, but it's very difficult to model.

it’s actually remarkable that elections are so incredibly close to 50-50

No, it's not. In a two-party system each party adjusts until it can capture close to 50% of the votes. There is a feedback loop.

When you read Silver (or your preferred reputable election forecaster, I like Andrew Gelman) post their forecasts prior to the election, do you accept them as equal to or better than any estimate you could come up with?

I'm an arrogant git, so I accept them as a bit worse :-P To quote an old expression, (historical-)data-driven models are like driving while looking into a rearview mirror. Things will change. In this particular case, the Brexit vote showed that under the right conditions, people who do not normally vote (and so are ignored by historical-data models) will come out of the woodwork.

to know the true answer

Eh, the existence of a "true answer" is doubtful. If you have a random variable, is each instantiation of it a "true answer"? You end up with a lot of true answers...

We have the election estimate F a function of a state variable W, a Wiener process WLOG

That doesn't look like a reasonable starting point to me.

That's fine, actually: if you assume your forecasts are continuous in time, then they're continuous martingales and thus equivalent to some time-changed Wiener process. (EDIT: your forecasts need not be continuous, my bad.) The problem is that he doesn't take into account the time transformation when he claims that you need to weight your signal by 1/sqrt(t).
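
(For reference, the result being invoked here is, I believe, the Dambis-Dubins-Schwarz theorem: a continuous martingale M, started at 0 with quadratic variation growing without bound, is a Brownian motion run on the clock of its own quadratic variation,

```latex
M_t = B_{\langle M \rangle_t},
```

which is why any rescaling of the signal has to track that internal clock rather than calendar time.)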

He also has a typo in his statement of Ito's Lemma which might affect his derivation. I'll check his math later.

Thanks for posting this! I have a longer reply to Taleb's post that I'll post soon. But first:

When you read Silver (or your preferred reputable election forecaster, I like Andrew Gelman) post their forecasts prior to the election, do you accept them as equal to or better than any estimate you could come up with? Or do you make a mental adjustment or discount based on some factor you think they've left out?

I think it depends on the model. First, note that every forecasting model takes into account only a specific set of signals. If there are factors influencing the vote that you're aware of but don't think are reflected in those signals, then you should update the model's forecast to reflect them. For example, because Nate Silver's model was based on polls, which lag behind current events, if you had evidence that a given event was really bad or really good for one of the two candidates, such as the Comey letter or the Trump video, you should have updated for or against a Trump presidency before the event showed up in the polls.

The math rests on the assumption that with high uncertainty, far out from the election, the best forecast is 50-50.

Not really. The key assumption is that your forecasts are a Wiener process - a continuous-time martingale with normally distributed increments. (I find this funny because Taleb spends multiple books railing against normality assumptions.) This is kind of a troubling assumption, as Lumifer points out below. If your forecast is continuous (though it need not be), then it can be thought of as a time-transformed Wiener process, but as far as I can tell he doesn't account for the time transformation.

Everyone agrees that as uncertainty becomes really high, the best forecast is 50-50. Conversely, if you make a confident forecast (say 90-10) and you're properly calibrated, you're also implying that you're unlikely to change your forecast by very much in the future (with high probability, you won't forecast 1-99).
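
One way to make that precise (my gloss via optional stopping, not anything in Taleb's paper): a calibrated forecast series is a martingale, so if your current forecast is p and we stop at the first moment the forecast touches some higher level q (or at settlement),

```latex
p = \mathbb{E}[p_\tau] \ \ge\ q \cdot \Pr(\text{forecast ever reaches } q)
\quad\Longrightarrow\quad
\Pr(\text{forecast ever reaches } q) \ \le\ \frac{p}{q}.
```

So a 10% forecast should later touch 99% at most about 10% of the time; a forecaster who does it routinely isn't calibrated.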

I think the question to ask is - how much volatility should make you doubt a forecast? If someone's forecast varied daily between 1-99 and 99-1, you might learn to just ignore them, for example. Taleb tries to offer one answer to this, but makes some questionable assumptions along the way and I don't really agree with his result.


Not for the first time, a draft paper by Nassim Taleb makes me think, "mathematically, the paper looks fine, and congratulations on knowing calculus, but isn't this waffling past the issue people are actually interested in?"

Many people who trusted poll-based forecasts were indeed shocked on election day, but not so much because the months-long sequence of forecasts wobbled back & forth too much. They were shocked more because the final forecasts failed to predict the final outcome. And the problem there was the difficulty of polling representative samples of people who'd actually vote, a very different problem to the one Taleb identifies.

I also have a specific question, to tie this back to a rationality-based framework: When you read Silver (or your preferred reputable election forecaster, I like Andrew Gelman) post their forecasts prior to the election, do you accept them as equal to or better than any estimate you could come up with? Or do you make a mental adjustment or discount based on some factor you think they've left out?

In the specific case of the US election, I did use Silver's forecasts as a guideline, but I also considered a model based on fundamentals (income growth and dead American soldiers), not polls: Doug Hibbs's Bread & Peace model. Hibbs predicted a 53-54% split in Clinton's favour of the Dem/Rep vote. (That prediction might sound pretty poor with hindsight, but the expected error attached to it was 2%, and Clinton got 51%, so it's within statistical bounds.)

Whether it's prediction-market divergence

I didn't really pay attention to prediction markets. For elections & referendums I don't believe prediction markets add anything significant beyond polls & fundamentals.

or adjustments based on perceived changes in nationalism or politician-specific skills

I didn't consciously adjust on that basis. As far as I know, ordinary, retrospective economic voting played a big role in explaining even "the Extraordinary Election of Adolf Hitler", so I figured I wouldn't bother putting my thumb on the scale because of vigorous nationalism or charisma.

Upvoted your post as encouragement to post more stuff like this. I shrugged my shoulders at Taleb's paper, but the topic's fascinating, and I'd like to see more work along these lines even if the paper itself is unimpressive.


This comment brought to you by sarahconstantin's and AnnaSalamon's Your Wannabe Rationalist Community Blog Needs You! posts. Without them I'd likely have carried on my 11-month streak of not posting.

They were shocked more because the final forecasts failed to predict the final outcome.

Those are the people who don't understand the difference between a point forecast and a distribution forecast.

In a lot of cases, yes, although some of the shocked are presumably guilty merely of e.g. following Sam Wang's forecasts (which gave Clinton 50:1 or better odds) rather than Silver's.

It's relatively easy to come up with models that seem exciting because they have or lack certain features. But the ultimate test of any predictive model is in the results: has anyone 'paper traded' Taleb's model against past elections?