multifoliaterose comments on Other Existential Risks - Less Wrong

32 Post author: multifoliaterose 17 August 2010 09:24PM


Comment author: WrongBot 20 August 2010 05:30:39PM 8 points

Hugo de Garis predicts a future war between AI supporters and AI opponents that will cause billions of deaths. That is a highly inflammatory prediction, because it fits neatly with human instincts about ideological conflicts and science-fiction-style technology.

The prediction that AIs will be dangerously indifferent to our existence unless we take great care to make them otherwise does not appeal to human intuitions about conflict or grand causes. Eliezer could talk about uFAI as if it were approximately like Skynet and draw substantially more (useless) attention, while still advocating for his preferred course of research. That he has not done so is evidence that he is more concerned with representing his beliefs accurately than with attracting media attention.