thomblake comments on 2011 Survey Results - Less Wrong

94 Post author: Yvain 05 December 2011 10:49AM




Comment author: XiXiDu 04 December 2011 08:12:29PM 1 point

Of possible existential risks, the most feared was a bioengineered pandemic, which got 194 votes (17.8%) - a natural pandemic got 89 (8.2%), making pandemics the overwhelming leader.

This doesn't look very good from the point of view of the Singularity Institute. While 38.5% of respondents have read at least 75% of the Sequences, only 16.5% think that unfriendly AI is the most worrisome existential risk.

Is the issue too hard for most people to grasp, or has it so far been badly communicated by the Singularity Institute? Or is it simply the wisdom of crowds?

Comment author: thomblake 05 December 2011 03:55:19PM 4 points

Don't forget - even if unfriendly AI weren't a major existential risk, Friendly AI is still potentially the best way to combat other existential risks.

Comment author: kilobug 05 December 2011 04:24:56PM 3 points

It's the best long-term way, probably. But if you estimate it'll take 50 years to get FAI, and that some existential risks have a significant probability of happening in 10 or 20 years, then you had better try to address them without relying on FAI - or you're likely to never reach the FAI stage.

Among 7 billion humans, it's sane to have some individuals focus on FAI now, since it's a hard problem and we have to start early; but it's also normal for not all of us to focus on FAI, and for some to focus instead on other ways to mitigate the existential risks that we estimate are likely to occur before FAI/uFAI.

Comment author: cousin_it 05 December 2011 03:59:57PM 1 point

How do you imagine a hypothetical world where uFAI is not dangerous enough to kill us, but FAI is powerful enough to save us?

Comment author: TheOtherDave 05 December 2011 04:30:28PM 6 points

Hypothetically suppose the following (throughout, assume "AI" stands for significantly superhuman artificial general intelligence):

1) if we fail to develop AI before 2100, various non-AI-related problems kill us all in 2100.
2) if we ever develop unFriendly AI before Friendly AI, UFAI kills us.
3) if we develop FAI before UFAI and before 2100, FAI saves us.
4) FAI isn't particularly harder to build than UFAI is.

Given those premises, it's true that UFAI isn't a major existential risk, in that even if we do nothing about it, UFAI won't kill us. But it's also true that FAI is the best (indeed, the only) way to save us.

Are those premises internally contradictory in some way I'm not seeing?
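Just to make the logic explicit, here's a toy enumeration of the hypothetical in Python (the scenario flags and the outcome function are my own illustrative restatement of premises 1-4, not anything from the survey or the post):

    # Toy model of premises 1-4 above.
    # fai_first: FAI is built before UFAI and before 2100 (premise 3's condition)
    # ufai_first: UFAI is built before FAI (premise 2's condition)
    def outcome(fai_first, ufai_first):
        if ufai_first:
            return "UFAI kills us"                     # premise 2
        if fai_first:
            return "FAI saves us"                      # premise 3
        return "non-AI problems kill us in 2100"       # premise 1

    # The two flags can't both be true, so only three scenarios exist.
    for fai_first, ufai_first in [(True, False), (False, True), (False, False)]:
        print(fai_first, ufai_first, "->", outcome(fai_first, ufai_first))

The only branch in which we survive is the one where FAI arrives first and in time - which is the "FAI is the only way to save us" half of the conclusion above.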

Comment author: cousin_it 05 December 2011 04:33:29PM 4 points

No, you're right. thomblake makes the same point. I just wasn't thinking carefully enough. Thanks!

Comment author: thomblake 05 December 2011 04:11:51PM 3 points

I don't. Just imagine a hypothetical world where lots of other things are much more certain to kill us much sooner, if we don't get FAI to solve them soon.