
JStewart comments on Video Q&A with Singularity Institute Executive Director - Less Wrong Discussion

42 points | Post author: lukeprog | 10 December 2011 11:27AM




Comment author: JStewart | 11 December 2011 04:52:16AM * | 4 points

As one of the 83.5%, I wish to point out that you're misinterpreting the poll results. The question was: "Which disaster do you think is most likely to wipe out greater than 90% of humanity before the year 2100?" That is not the same question as "Which existential risk is the most worrisome?", so answers to it don't imply that unfriendly AI is the most worrisome existential risk.

I do think that unfriendly AI is the existential risk most likely to wipe out humanity. But I also think an AI singularity is probably farther off than 2100. I voted for an engineered pandemic, because that and nuclear war were the only two risks I considered reasonably likely to occur before 2100, and even for those, a >90% wipeout of humanity is still quite unlikely.

Edit: I should note that I have read the Sequences, and it is because of Eliezer's writing that I consider unfriendly AI the most likely way for humanity to end.