Nassim Taleb recently posted a mathematical draft on refining election forecasting to his Twitter.
(At the more local level this isn't always true, due to issues such as incumbency advantage, local party domination, and strategic funding choices. The point, though, is that when those frictions are ameliorated by the importance of the presidency, the equilibrium tends toward elections very close to 50-50.)
So back to the mechanism of the model: Taleb applies a no-arbitrage condition (borrowed from options pricing) to enforce time consistency on the forecast probabilities, and hence on the Brier score. The analogy is to financial options, where you can go bankrupt or make money even before the final event. In Taleb's framing, if a forecaster like Nate Silver produces forecasts that swing widely over time before the election, that suggests he hasn't put any time-dynamic constraints on his model.
The math rests on the assumption that under high uncertainty, far out from the election, the best forecast is 50-50. That assumption would have to be empirically tested. Still, setting the math aside, it does feel intuitive that an election forecast with high variation a year away from the event is not worth relying on, and that sticking closer to 50-50 would offer a better full-sample Brier score.
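To make the intuition concrete, here is a minimal sketch (not Taleb's actual derivation) of why, if the event's true probability really is near 0.5, a forecast pulled toward 0.5 scores better in expectation under the Brier score. The function name and illustrative probabilities are my own.

```python
def expected_brier(p, true_prob=0.5):
    """Expected Brier score of a probability forecast p for a binary
    event that occurs with probability true_prob.

    The Brier score is (p - outcome)^2, so the expectation is
    true_prob * (p - 1)^2 + (1 - true_prob) * p^2,
    which is minimized at p = true_prob.
    """
    return true_prob * (p - 1) ** 2 + (1 - true_prob) * p ** 2

# If the race is genuinely a coin flip, a confident 0.8 forecast
# scores worse in expectation than sticking at 0.5:
print(expected_brier(0.5))  # 0.25
print(expected_brier(0.8))  # 0.34
```

This only shows that shrinkage toward 0.5 helps *when the true probability is near 0.5*; whether that premise holds a year out from a real election is exactly the empirical question raised above.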
I'm not familiar enough with the practical modelling to say whether this is feasible. Sometimes the ideal models are too hard to estimate.
I'm interested in hearing any thoughts on this from people who are familiar with forecasting or have an interest in the modelling behind it.
I also have a specific question to tie this back to a rationality-based framework: When you read Silver (or your preferred reputable election forecaster; I like Andrew Gelman) post their forecasts prior to the election, do you accept them as equal to or better than any estimate you could come up with? Or do you make a mental adjustment or discount based on some factor you think they've left out, whether that's divergence from prediction markets or perceived changes in nationalism or politician-specific skills? (E.g., Scott Adams claimed to be able to predict that Trump would persuade everyone to vote for him. While it's tempting to write him off as a pundit charlatan, or to claim he lacks sufficient proof, we also can't prove his model was wrong.) I'm interested in learning the reasons we may disagree with or be reasonably skeptical of polls, knowing of course that any such adjustment must be tested to know the true answer.
This is my first LW discussion post -- open to feedback on how it could be improved.
Updating on changed evidence is no sign of a charlatan; it's the behavior of a good forecaster.
I agree, but that isn't what Adams did. Adams first claimed Trump was a master persuader who was virtually certain to win. When Trump was way down in the polls with only weeks left, Adams switched to predicting a Clinton win, using the Trump controversy du jour as his rationale.
Updating on the evidence would have involved conceding that Trump isn't actually an expert persuader (or that persuasion skills don't carry as much weight as claimed). In other words, he would have had to admit he was wrong. Instead, he acted as if the Trump controversy of the moment was completely shocking and the only reason Trump was going to lose.