
Lumifer comments on Open thread, September 8-14, 2014 - Less Wrong Discussion

Post author: polymathwannabe, 08 September 2014 12:31PM




Comment author: Lumifer 08 September 2014 03:43:21PM 5 points

Does mankind have a duty to warn extraterrestrial civilizations that we might someday unintentionally build an unfriendly super-intelligent AI that expands at the speed of light gobbling up everything in its path?

One response to such a warning would be to build a super-intelligent AI that expands at the speed of light gobbling up everything in its path first.

And when the two (or more) collide, it would make a nice SF story :-)

Comment author: solipsist 08 September 2014 10:39:35PM 3 points

This wouldn't be a horrible outcome, because the two civilizations' light-cones would never fully intersect. Neither civilization would fully destroy the other.
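The geometric point here can be made concrete with a toy one-dimensional model (a sketch with illustrative numbers, not anything from the thread): if two fronts start expanding at the same time and the same speed from different origins, each point in space is claimed by whichever front reaches it first, so the two domains meet at the perpendicular bisector between the origins and neither front ever reaches the other's starting point.

```python
# Toy 1-D model: two expansion fronts start simultaneously from origins
# 20 light-years apart, each spreading at the speed of light (units c = 1).
# A point belongs to whichever front arrives first, so the domains split
# at the midpoint and neither origin is ever overrun by the other front.
# (The origins and distance are illustrative assumptions.)

ORIGIN_A = 0.0   # position of civilization A, in light-years
ORIGIN_B = 20.0  # position of civilization B, in light-years

def first_to_reach(x: float) -> str:
    """Return which front arrives first at position x ('boundary' on a tie)."""
    t_a = abs(x - ORIGIN_A)  # A's arrival time in years (speed = c = 1)
    t_b = abs(x - ORIGIN_B)  # B's arrival time in years
    if t_a < t_b:
        return "A"
    if t_b < t_a:
        return "B"
    return "boundary"

print(first_to_reach(5.0))   # closer to A -> "A"
print(first_to_reach(15.0))  # closer to B -> "B"
print(first_to_reach(10.0))  # midpoint   -> "boundary"
```

With equal speeds the split is symmetric; the asymmetric case, where one side expands much more slowly, is exactly the objection raised further down the thread.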

Comment author: James_Miller 08 September 2014 10:57:51PM 10 points

Are you crazy! Think of all the potential paperclips that wouldn't come into being!!

Comment author: Lumifer 09 September 2014 06:40:22PM 2 points

The light cones might not fully intersect, but humans do not expand at anywhere near the speed of light, so the expanding AI doesn't need to engulf our whole light-cone: reaching and destroying the populated planets is enough.