Vladimir_Nesov comments on Q&A with experts on risks from AI #3 - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (28)
Your original question already asked about this particular possibility. If you want to gauge how likely this possibility is considered to be, ask about it directly, without mixing in the question of value. And previous responses show that the answer is not determined by my variant of the question: three popular responses are "It's going to be fine by default" (wrong), "It's not possible to guarantee absence of danger, so why bother?" (because of the danger), and "If people had worried this much about absence of danger, they wouldn't have useful things X, Y, Z" (those things weren't existential risks).