JStewart comments on Video Q&A with Singularity Institute Executive Director - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
As one of the 83.5%, I wish to point out that you're misinterpreting the results of the poll. The question was: "Which disaster do you think is most likely to wipe out greater than 90% of humanity before the year 2100?" This is not the same as "unfriendly AI is the most worrisome existential risk".
I think that unfriendly AI is the existential risk most likely to wipe out humanity. But I also think an AI singularity is likely farther off than 2100. I voted for an engineered pandemic, because it and nuclear war were the only two risks I considered decently likely to occur before 2100 — and even then, a >90% wipeout of humanity is still quite unlikely.
edit: I should note that I have read the Sequences, and it is because of Eliezer's writing that I think unfriendly AI is the most likely way for humanity to end.