
Vladimir_Nesov comments on Q&A with experts on risks from AI #3 - Less Wrong Discussion

Post author: XiXiDu, 12 January 2012 10:45AM




Comment author: Vladimir_Nesov, 15 January 2012 03:10:20PM

Your original question already asked about this particular possibility. If you want to gauge how likely this possibility is considered to be, ask about that directly, without mixing it with the question of value. Previous responses show that the answer is not determined by my variant of the question: three popular responses are "It's going to be fine by default" (wrong), "It's not possible to guarantee the absence of danger, so why bother?" (because of the danger), and "If people worried this much about the absence of danger, they wouldn't have useful things X, Y, Z" (those things weren't existential risks).