The Metaculus community strikes me as a better starting point for evaluating how different the safety inside view is from a forecasting/outside view. The case for deferring to superforecasters is the same as the case for deferring to the Metaculus community: their track record. What's more, the most relevant comparison I know of scores Metaculus higher on AI predictions. Metaculus as a whole is not self-consistent on AI and extinction forecasting across individual questions (links below). However, I think it is fair to say that Metaculus as a whole has significantly faster timelines and a significantly higher P(doom) than superforecasters.
If we compare the distribution of safety researchers' forecasts to Metaculus (maybe we have to set aside MIRI...), I don't think there will be that much disagreement. I think the remaining disagreement will often come down to safety researchers not being careful about how the letter and the spirit of a question can come apart and produce false negatives. In the one section of the FRI studies linked above that I took a careful look at, the ARA section, I found that there was still huge ambiguity in how the question is operationalized; this alone could explain up to an OOM of disagreement in probabilities.
Some Metaculus links:
- https://www.metaculus.com/questions/578/human-extinction-by-2100/ (admittedly the number on this question is 1%, but compare it to the ones below; also note the forecasts date back as far as 2018)
- https://www.metaculus.com/questions/17735/conditional-human-extinction-by-2100/
- https://www.metaculus.com/questions/9062/time-from-weak-agi-to-superintelligence/ (compare this to the weak-AGI timeline and other questions)
I am far less impressed by the superforecaster track record than you are. [we didn't get into this]
I'm interested in hearing your thoughts on this.
This kind of superficial linear extrapolation of trend lines can be powerful, perhaps more powerful than is usually accepted in many political/social/futurist discussions. In many cases, successful forecasters, by betting on high-level trend lines, outpredict 'experts'.
But it's a very non-gears-level model. I think one should be very careful about using this kind of reasoning for tail events.
E.g., this kind of reasoning could have led one to dismiss the development of nuclear weapons.
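A minimal sketch of what that kind of outside-view extrapolation looks like mechanically (the data and the metric are made up for illustration; the point is that a straight-line fit has no mechanism in it and so can never represent a discontinuity):

```python
# Naive trend extrapolation, illustrative only (made-up data).
import numpy as np

# Hypothetical yearly observations of some capability/adoption metric, in log units.
years = np.array([2015, 2016, 2017, 2018, 2019, 2020, 2021, 2022])
log_metric = np.array([0.0, 0.4, 0.9, 1.3, 1.8, 2.1, 2.6, 3.0])

# Fit a straight line in log space and extrapolate a few years out.
slope, intercept = np.polyfit(years, log_metric, deg=1)
for year in (2025, 2030):
    print(year, np.exp(slope * year + intercept))

# This is the whole model: no gears, no mechanism. By construction it assigns
# negligible weight to any discontinuity (a sudden capability jump, a new
# physical regime), which is exactly why it is risky for tail events.
```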
Agree. In some sense you have to invent all the technology before the stochastic process of technological development looks predictable to you, almost by definition. I'm not sure it is reasonable to ask general "forecasters" about questions that hinge on specific technological change. They're not oracles.
My expectation is that superforecasters weren't able to look into the detailed arguments that represent the x-risk case well, and that they would update after learning more.
I think this proves too much - this would predict that superforecasters would be consistently outperformed by domain experts, when typically the reverse is true.
I think I agree.
For my information, what's your favorite reference for superforecasters outperforming domain experts?
As of two years ago, the evidence for this was sparse. Looked like parity overall, though the pool of "supers" has improved over the last decade as more people got sampled.
There are other reasons to be down on XPT in particular.
I quite enjoyed this conversation, but imo the x-risk side needs to sit down to make a more convincing, forecasting-style prediction to meet forecasters where they are. A large part of it is sorting through the possible base rates and making an argument for which ones are most relevant. Once the whole process is documented, then the two sides can argue on the line items.
Thank you! Glad you liked it. ☺️
LessWrong & EA are inundated with the same old arguments for AI x-risk repeated in a hundred different formats. Could this really be the difference?
Besides, aren't superforecasters supposed to be the Kung Fu masters of doing their own research? ;-)
I agree with you that a crux is base rate relevancy. Since there is no base rate for x-risk, though, I'm unsure how to translate this into superforecaster language.
Well, what base rates can inform the trajectory of AGI?
It would be an interesting exercise to flesh this out.
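As a toy version of that exercise, here is what "sorting through the possible base rates and arguing about which are most relevant" might look like mechanically. Every reference class, rate, and weight below is a hypothetical placeholder, not an actual estimate:

```python
# Toy sketch of weighting candidate base rates (all numbers are placeholders).

candidate_base_rates = {
    # reference class: (annualized event rate, relevance weight 0..1)
    "prior transformative technologies causing catastrophe": (0.001, 0.3),
    "new technologies overturning incumbent expert consensus": (0.010, 0.2),
    "naive extrapolation of capability benchmark trends": (0.005, 0.5),
}

def weighted_mixture(base_rates):
    """Combine candidate base rates as a relevance-weighted mixture."""
    total_weight = sum(w for _, w in base_rates.values())
    return sum(rate * w for rate, w in base_rates.values()) / total_weight

if __name__ == "__main__":
    p = weighted_mixture(candidate_base_rates)
    print(f"Mixture base rate (per year, toy numbers): {p:.4f}")
    # The substantive argument is over which rows belong in the table and with
    # what weights; those are the "line items" the two sides could then debate.
```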