Earlier, you wrote about a change to your AGI timelines.
What about p(doom)? It seems that in recent months there have been reasons for both optimism and pessimism.
It seems a little surprising to me how rarely confident pessimists (p(doom) > 0.9) argue with moderate optimists (p(doom) ≤ 0.5).
I'm not talking about this post specifically. But it would be interesting if people revealed their disagreements more often.
Thanks for the reply. I remembered a recent article by Evans and thought that reasoning models might show different behavior. Sorry if this sounds silly.
Are you planning to test this on reasoning models?
I agree. But people now write about short timelines so often that it seems worth recalling a possible reason for the uncertainty.
There doesn't seem to be a consensus that ASI will be created in the next 5-10 years. If it isn't, today's technology leaders and their promises may be forgotten.
Does anyone else remember Ben Goertzel and Novamente? Or Hugo de Garis?
Yudkowsky may think that the plan 'Avert all creation of superintelligence in the near and medium term; augment human intelligence' has a <5% chance of success, but that your plan has a <<1% chance. Obviously, you and he disagree not only on conclusions but also on models.
I sympathize with this line of thinking, but I've never understood estimates like p(doom) > 0.8.
The analogies with cancer or poison seem a bit odd, because we're trying to estimate the probability of an event that has never happened before, without relying on anything like physical laws and without anything close to consensus. Even among the people who proposed the key ideas of the AI risk discussion, not all were confident pessimists.
We have too many unknowns. We don't know when superintelligence will appear. We can't predict how governments and corporations will treat AI in the coming years. We don't know what will happen if someone tries to use a sufficiently advanced AI for automated safety research. Narrow AI might change the state of the world before superintelligence appears. And our civilization could collapse for any number of reasons.
And I don't think we can say for sure what superintelligence will do to humans.