James_Miller comments on Open thread, September 8-14, 2014 - Less Wrong Discussion

5 Post author: polymathwannabe 08 September 2014 12:31PM

Comment author: James_Miller 08 September 2014 05:13:49PM *  3 points [-]

This depends on the solution to the Fermi paradox. An advanced civilization might have decided not to build defenses against a paperclip maximizer because it figured no other civilization would be stupid or evil enough to attempt AI without a mathematical proof that its AI would be friendly. A civilization near our level of development might use the information to accelerate its own AI program. And if a paperclip maximizer beats everything else, an advanced civilization might respond to the warning by moving away from us as fast as possible, taking advantage of the expansion of the universe in the hope of ending up in a different Hubble volume from us.