When somebody correctly predicts an event, how much more should we trust this person? What if they make two correct predictions in a row?
Counter-intuitively, to answer this question we need to know how many people were making predictions about these events in the first place, and how those predictions are typically distributed. When people try to predict an event, especially a specific date many years away, they very often come up with wildly different estimates: some far too high, others far too low. But if each forecaster has a non-zero probability of landing anywhere between those extremes, then the more forecasters there are, the higher the chance that at least one of them will be roughly correct.
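To make this concrete, here is a minimal simulation. All the numbers are illustrative assumptions, not claims about real forecasters: the true date, the guessing range, the tolerance, and the assumption that everyone guesses uniformly at random are made up.

```python
import random

def p_someone_roughly_right(n_forecasters, trials=20_000, tolerance=2):
    """Estimate the chance that at least one of n forecasters, each
    guessing a year uniformly at random in 2030-2100, lands within
    `tolerance` years of an (assumed) true date of 2055."""
    true_year = 2055
    hits = 0
    for _ in range(trials):
        if any(abs(random.uniform(2030, 2100) - true_year) <= tolerance
               for _ in range(n_forecasters)):
            hits += 1
    return hits / trials

for n in (1, 10, 100):
    print(f"{n:>3} forecasters -> {p_someone_roughly_right(n):.2f}")
# roughly: 1 -> 0.06, 10 -> 0.44, 100 -> 1.00
```

With these toy numbers, a single guesser is roughly correct about 6% of the time, but in a crowd of a hundred, somebody almost certainly is.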
This is especially true when some subset of the forecasters is effectively guessing at random. And of course, if someone's correct predictions came down to luck, we can't extrapolate future forecasting success from them.
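One way to frame the opening question is as a Bayesian update: how much should one or two correct predictions raise our credence that a forecaster is genuinely skilled rather than lucky? The sketch below is purely illustrative; the prior and the two hit rates are assumptions chosen to make the arithmetic visible, not empirical estimates.

```python
def posterior_skilled(k_correct, prior=0.05, p_skilled=0.8, p_lucky=0.1):
    """Bayes' rule: probability a forecaster is genuinely skilled after
    k correct predictions in a row. prior = assumed fraction of skilled
    forecasters; p_skilled / p_lucky = assumed per-prediction hit rates
    for skilled and purely lucky forecasters."""
    num = prior * p_skilled ** k_correct
    den = num + (1 - prior) * p_lucky ** k_correct
    return num / den

for k in (0, 1, 2):
    print(f"{k} correct -> P(skilled) = {posterior_skilled(k):.2f}")
# 0 correct -> 0.05, 1 correct -> 0.30, 2 correct -> 0.77
```

Note how the answer hinges on the prior, which is exactly where the size of the guessing crowd comes in: the more random guessers we think are out there, the lower the prior, and the less even two correct calls in a row should move us.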
For example, some people think that AI is just around the corner, others say outright that it is impossible to achieve, and plenty of people fall somewhere in between. Even in a world where basically everyone is clueless, and all their reasons for arriving at a particular prediction make no sense, some of them are probably still roughly correct.
Note that all of this generalizes directly to sets of predictions: with enough forecasters, even a whole streak of correct calls can happen by chance.
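Under the same kind of toy assumptions (independent predictions, an assumed per-prediction hit rate of 0.1 for a pure guesser), the chance that somebody in a large pool gets a whole set of predictions right by luck is easy to compute:

```python
def p_lucky_streak(n_forecasters, k_predictions, p_single=0.1):
    """Chance that at least one of n purely guessing forecasters gets an
    entire set of k independent predictions right. p_single is the
    assumed per-prediction hit rate of a guesser."""
    p_whole_set = p_single ** k_predictions
    return 1 - (1 - p_whole_set) ** n_forecasters

for k in (1, 3, 5):
    print(f"set of {k} predictions -> {p_lucky_streak(1000, k):.2f}")
# set of 1 -> 1.00, set of 3 -> 0.63, set of 5 -> 0.01
```

So a thousand guessers all but guarantee one correct single prediction, and even a three-prediction streak happens by luck most of the time; only longer sets start to be real evidence.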
In practice, however, this does not seem to be too big of an issue right now, because you can filter on whether somebody presents good arguments for their position, arguments that suggest they actually have a good model of the relevant dynamics. After that filtering, not many people will be left.
However, you need to be careful to filter only on what somebody said before the event occurred. Otherwise they might succumb to hindsight bias and use knowledge that was not available when they made their prediction to justify, after the fact, why they predicted what they did. This can already happen gradually, each time the outcome becomes more obvious as the event draws closer.
If you evaluate past events, you also have to keep in mind that the information you find will be heavily skewed toward the people who actually got their predictions right, or who got them wrong in an interesting way. This means there were probably many more forecasters making predictions than is apparent.
For example, information sources will favor featuring people who got their predictions right, along with people whom you would have expected to get them right but who got them spectacularly wrong.
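A small simulation of this selection effect, again with made-up numbers (the pool size, the chance hit rate, and the "only winners get featured" filter are all assumptions):

```python
import random

def featured_vs_actual(n_forecasters=10_000, chance_hit_rate=0.05, seed=0):
    """Simulate forecasters who are right purely by chance, then apply a
    'media filter' that only features the ones who got it right. All
    numbers here are illustrative assumptions."""
    rng = random.Random(seed)
    correct = [rng.random() < chance_hit_rate for _ in range(n_forecasters)]
    featured = sum(correct)  # only correct forecasters get written about
    print(f"forecasters who actually made a prediction: {n_forecasters}")
    print(f"forecasters you will later read about:      {featured}")
    print(f"apparent hit rate among the featured:       100%")

featured_vs_actual()
```

If you only ever see the few hundred featured forecasters, both the size of the original guessing pool and its dismal overall hit rate are invisible.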