thomblake comments on 2011 Survey Results - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (513)
This doesn't look very good from the point of view of the Singularity Institute. While 38.5% of respondents have read at least 75% of the Sequences, only 16.5% think that unfriendly AI is the most worrisome existential risk.
Is the issue too hard to grasp for most people or has it so far been badly communicated by the Singularity Institute? Or is it simply the wisdom of crowds?
Don't forget - even if unfriendly AI weren't a major existential risk, Friendly AI would still potentially be the best way to combat other existential risks.
It's probably the best long-term way. But if you estimate it'll take 50 years to get FAI, and that some existential risks have a significant probability of materializing in 10 or 20 years, then you should try to address them without relying on FAI - or you're likely never to reach the FAI stage.
Among 7 billion humans, it's sensible to have some individuals focus on FAI now, since it's a hard problem and we have to start early; but it's also reasonable for the rest of us to work on other ways of mitigating the existential risks we estimate are likely to occur before FAI/uFAI.
How do you imagine a hypothetical world where uFAI is not dangerous enough to kill us, but FAI is powerful enough to save us?
Hypothetically suppose the following (throughout, assume "AI" stands for significantly superhuman artificial general intelligence):
1) if we fail to develop AI before 2100, various non-AI-related problems kill us all in 2100.
2) if we ever develop unFriendly AI before Friendly AI, UFAI kills us.
3) if we develop FAI before UFAI and before 2100, FAI saves us.
4) FAI isn't particularly harder to build than UFAI is.
Given those premises, it's true that UFAI isn't a major existential risk, in that even if we do nothing about it, UFAI won't kill us. But it's also true that FAI is the best (indeed, the only) way to save us.
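The conclusion does follow mechanically from the four premises. Here's a minimal sketch that encodes them and enumerates a few scenarios; the function name, scenario labels, and year choices are illustrative assumptions, not anything from the thread:

```python
def survives(fai_year, ufai_year):
    """Return True iff humanity survives under the four hypothetical premises.

    fai_year / ufai_year: year that kind of AI is built, or None if never.
    """
    # Premise 2: if UFAI ever arrives before FAI, it kills us.
    if ufai_year is not None and (fai_year is None or ufai_year < fai_year):
        return False
    # Premise 3: FAI built first and before 2100 saves us.
    if fai_year is not None and fai_year < 2100:
        return True
    # Premise 1: otherwise, non-AI problems kill us all in 2100.
    return False

# The only surviving scenarios are those where FAI comes first and early enough:
assert survives(2050, None) is True    # FAI in 2050, UFAI never built
assert survives(2050, 2070) is True    # FAI in 2050, UFAI later: premise 3 wins
assert survives(2070, 2050) is False   # UFAI first: premise 2
assert survives(None, None) is False   # no AI at all: premise 1
assert survives(2150, None) is False   # FAI too late: premise 1
```

Note that "do nothing about UFAI" never changes any outcome here; the only lever that matters is whether FAI arrives first and before 2100, which is exactly the claim that FAI is the only way to save us.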
Are those premises internally contradictory in some way I'm not seeing?
No, you're right. thomblake makes the same point. I just wasn't thinking carefully enough. Thanks!
I don't. Just imagine a hypothetical world where lots of other things are much more certain to kill us, much sooner, unless we get FAI to solve them in time.