Consider the following market: 'I roll a d10 once per day. Will I roll a 0 within the first 10 days from when this market starts?'.
Now consider what happens if I don't actually roll a 0:
Day 0, this market's value is ~65%
Day 1, this market's value is ~61%
Day 2, this market's value is ~57%
Day 3, this market's value is ~52%
Day 4, this market's value is ~47%
Day 5, this market's value is ~41%
Day 6, this market's value is ~34%
Day 7, this market's value is ~27%
Day 8, this market's value is ~19%
Day 9, this market's value is ~10%
Day 10, this market's value is ~0%
The market is purely rational, and yet it shows a monotonic decrease over time (effectively, due to survivorship bias). What am I missing that makes this sort of monotonic movement unexpected?
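(For concreteness, here's a minimal sketch of how those numbers come about, plus a check that the expected price is flat, i.e. nothing irrational is going on. Plain Python, nothing assumed beyond the market description above.)

```python
# Conditional on no 0 having been rolled yet, the day-k price is
# P(at least one 0 in the remaining 10 - k rolls) = 1 - 0.9**(10 - k).
# Martingale check: today's price = 0.1 * 1.0 (a 0 is rolled, market
# resolves YES) + 0.9 * tomorrow's no-0-yet price.
for k in range(11):
    price = 1 - 0.9 ** (10 - k)
    print(f"Day {k}: {price:.0%}")
    if k < 10:
        price_tomorrow_if_no_0 = 1 - 0.9 ** (10 - k - 1)
        assert abs(0.1 * 1.0 + 0.9 * price_tomorrow_if_no_0 - price) < 1e-12
```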
As an aside, I'm also surprised that people seem to consider this unexpected for financial markets and stocks.
If a company has an X% chance of ruin per day over a fixed time period, you end up with exactly the same sort of rational monotonic movement, so long as said ruin doesn't happen.
You see this sort of thing with acquisitions. Say company A is currently priced at $100, and company B announces that it's acquiring A for $200 per share. A will jump up to something like $170 per share, and then slowly increase to $200 on the acquisition date. The $30 gap is there because there's some probability that the acquisition will fall through, and that probability decreases over time (unless it actually does fall through, in which case the price drops back down to ~$100).
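To back out the implied probability in examples like this: treat the post-announcement price as a probability-weighted average of the deal price and the standalone price (ignoring discounting and any extra downside on a broken deal). A quick sketch with the illustrative numbers above:

```python
# price = p * deal_price + (1 - p) * standalone_price
# => p = (price - standalone_price) / (deal_price - standalone_price)
def implied_close_probability(price, deal_price=200.0, standalone_price=100.0):
    return (price - standalone_price) / (deal_price - standalone_price)

print(implied_close_probability(170.0))  # 0.7 just after the announcement
print(implied_close_probability(195.0))  # 0.95 as the close date approaches
```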
This looks really cool! And it would be nice to get some version of this (or at least a link to it) on the Forecasting Wiki.
No - I think the probability is the thing that's supposed to be a martingale, but I might be being dumb here.
Just to confirm: Writing $p_t$, the probability of the event $A$ at time $t$, as $p_t = \mathbb{E}[\mathbf{1}_A \mid \mathcal{F}_t]$ (here $\mathcal{F}_t$ is the sigma-algebra at time $t$), we see that $p_t$ must be a martingale via the tower rule.
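Spelling that step out: for $s \le t$,
$$\mathbb{E}[p_t \mid \mathcal{F}_s] \;=\; \mathbb{E}\big[\,\mathbb{E}[\mathbf{1}_A \mid \mathcal{F}_t] \mid \mathcal{F}_s\,\big] \;=\; \mathbb{E}[\mathbf{1}_A \mid \mathcal{F}_s] \;=\; p_s.$$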
The log-odds $L_t = \log\frac{p_t}{1-p_t}$ are not martingales unless $p_t \equiv \tfrac{1}{2}$, because Itô gives us
$$\mathrm{d}L_t \;=\; \frac{\mathrm{d}p_t}{p_t(1-p_t)} \;+\; \frac{2p_t - 1}{2\,p_t^2(1-p_t)^2}\,\mathrm{d}\langle p \rangle_t.$$
So unless $p_t$ is continuous and of bounded variation (⇒ $\langle p \rangle_t \equiv 0$, but this also implies that $p_t$ is constant; the integrand of the drift part only vanishes if $p_t = \tfrac{1}{2}$ for all $t$), the log-odds are not a martingale.
Interesting analysis on log-odds might still be possible (just use $\Sigma$ and $\Delta$ in place of $\int$ and $\mathrm{d}$ for the discrete-time/jump processes we naturally get when working with real data), but it's not obvious to me whether this comes with any advantages over just working with $p_t$ directly.
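A quick discrete-time illustration of that drift term (a hypothetical symmetric $\pm\varepsilon$ step, which is a martingale in $p$): the expected change in log-odds has the sign of $2p - 1$, just as the Itô drift predicts.

```python
import math

def logit(p):
    return math.log(p / (1 - p))

# Symmetric +/- eps step: E[p'] = p (a martingale), but E[logit(p')]
# differs from logit(p) with the sign of 2p - 1, since logit is
# concave on p < 1/2 and convex on p > 1/2.
eps = 0.05
for p in (0.2, 0.5, 0.8):
    drift = 0.5 * logit(p + eps) + 0.5 * logit(p - eps) - logit(p)
    print(f"p={p}: expected log-odds drift per step = {drift:+.4f}")
```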
h/t Eric Neyman for causing me to look into this again
On a recent Mantic Monday, Scott Alexander said:
Personally, this has never particularly bothered me, having watched the odds for many things which behave like this. (Pick any sports game where one side has a large but not unassailable lead and you'll see this pattern.)
That said, I'm also sympathetic to the view that Metaculus forecasts aren't perfect. Whenever I think about how my own forecasts are made, I'm definitely slow to update, especially if it's something I don't follow very often. If a question gets lots of interest and catapults to the front page, I'm liable to update then, and usually it's going to be in the direction of the crowd. Is this enough to make the forecasts predictable? (Which would be bad, as Scott says!)
One metric to look at when deciding if forecasts are predictable is whether the changes in forecasts are correlated from day to day (i.e. if our forecast increased 1% yesterday, is it more likely to increase again tomorrow?).
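A sketch of that check (assuming one pandas Series of daily community predictions per question; the example series is made up for illustration):

```python
import pandas as pd

def change_autocorrelation(forecast: pd.Series, lag: int = 1) -> float:
    # Positive => momentum, negative => mean reversion.
    changes = forecast.diff().dropna()
    return changes.autocorr(lag=lag)

example = pd.Series([0.50, 0.51, 0.50, 0.52, 0.51, 0.53, 0.52])
print(change_autocorrelation(example))  # negative: zig-zagging forecasts
```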
Everything which follows is based on the community prediction (median forecast) which is visible to the public at all times.
Looking across ~1000 binary questions on Metaculus, we actually see the opposite of the "momentum" that Scott talks about. In general, if a question increased 1% yesterday, we should expect it to fall today.
What's going on here? My theory upon seeing this (after checking that I hadn't made any dumb mistakes) was that forecasts are slightly noisy, and that noise makes their changes slightly mean-reverting: an upward blip today tends to be partially undone tomorrow. Looking at some of the most egregious examples, that definitely seemed to be the case.
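A quick simulation of that theory (step and noise sizes are illustrative, nothing Metaculus-specific): take a true forecast that is a martingale, add independent observation noise, and the observed daily changes come out mean-reverting even though the underlying forecast has no momentum.

```python
import numpy as np

rng = np.random.default_rng(0)
true = np.clip(0.5 + np.cumsum(rng.normal(0, 0.01, size=1000)), 0.01, 0.99)
observed = true + rng.normal(0, 0.02, size=true.size)  # noisy community median

changes = np.diff(observed)
lag1 = np.corrcoef(changes[:-1], changes[1:])[0, 1]
print(f"lag-1 autocorrelation of changes: {lag1:.2f}")  # roughly -0.4 here
```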
One way we might be able to check this hypothesis is to look at the "better" forecasts (more predictors, more predictions) and see if they have higher autocorrelation...
... and yes, sure enough that does seem to be the case. Questions with fewer predictions are more likely to show negative autocorrelation (mean-reverting behaviour). The largest questions do seem to have at least some positive autocorrelation. (Eyeballing it, I would guess ~0.1 is a fair estimate.)
To make this concrete (and to find out over what time horizon Metaculus is 'predictable'), I ran the same exercise across 1-day, 2-day, etc. autocorrelations, fitted a regression, and took a point with a 'large' number of predictors. My adjusted autocorrelation chart looks as follows:
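For what it's worth, a sketch of the kind of exercise described (the data layout, the log transform on predictor counts, and the 'large' cutoff of 1000 are all my assumptions, not details from the analysis):

```python
import numpy as np

def adjusted_autocorr(questions, lag, n_large=1000):
    # questions: list of (daily_forecasts_array, n_predictors) pairs.
    xs, ys = [], []
    for forecast, n_predictors in questions:
        changes = np.diff(forecast)
        if len(changes) <= lag + 1:
            continue
        ys.append(np.corrcoef(changes[:-lag], changes[lag:])[0, 1])
        xs.append(np.log(n_predictors))
    slope, intercept = np.polyfit(xs, ys, 1)
    return intercept + slope * np.log(n_large)  # fitted value at a 'large' question
```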
My takeaways from this are:
On the whole, I think this is pretty positive for Metaculus - I had to torture the data to show some very slight momentum, and even then I'm not completely convinced it exists.