James_Miller comments on Open thread, September 8-14, 2014 - Less Wrong Discussion

5 Post author: polymathwannabe 08 September 2014 12:31PM

Comments (295)

Comment author: James_Miller 08 September 2014 03:22:34PM *  9 points [-]

Does mankind have a duty to warn extraterrestrial civilizations that we might someday unintentionally build an unfriendly super-intelligent AI that expands at the speed of light gobbling up everything in its path? Assuming that the speed of light really is the maximum, our interstellar radio messages would outpace any paperclip maximizer. Obviously any such message would complicate future alien contact events as the aliens would worry that our ambassador was just an agent for a paperclipper. The act of warning others would be a good way to self-signal the dangers of AI.

Comment author: gjm 08 September 2014 04:15:38PM 17 points [-]

I'd have thought any extraterrestrial civilization capable of doing something useful with the information wouldn't need the explicit warning.

Comment author: James_Miller 08 September 2014 05:13:49PM *  3 points [-]

This depends on the solution to the Fermi paradox. An advanced civilization might have decided not to build defenses against a paperclip maximizer because it figured no other civilization would be stupid/evil enough to attempt AI without a mathematical proof that its AI would be friendly. A civilization near our level of development might use the information to accelerate its AI program. If a paperclip maximizer beats everything else, an advanced civilization might respond to the warning by moving away from us as fast as possible, taking advantage of the expansion of the universe to hopefully get into a different Hubble volume from us.

Comment author: Lumifer 08 September 2014 03:43:21PM 5 points [-]

Does mankind have a duty to warn extraterrestrial civilizations that we might someday unintentionally build an unfriendly super-intelligent AI that expands at the speed of light gobbling up everything in its path?

One response to such a warning would be to build a super-intelligent AI that expands at the speed of light gobbling up everything in its path first.

And when the two (or more) collide, it would make a nice SF story :-)

Comment author: solipsist 08 September 2014 10:39:35PM 3 points [-]

This wouldn't be a horrible outcome, because the two civilizations' light-cones would never fully intersect. Neither civilization would fully destroy the other.

Comment author: James_Miller 08 September 2014 10:57:51PM 10 points [-]

Are you crazy? Think of all the potential paperclips that wouldn't come into being!!

Comment author: Lumifer 09 September 2014 06:40:22PM 2 points [-]

The light cones might not fully intersect, but humans do not expand at close to the speed of light. It's enough to be able to destroy the populated planets.

Comment author: solipsist 08 September 2014 11:16:32PM *  3 points [-]

I love this idea! A few thoughts:

  1. What could the alien civilizations do? Suppose SETI decoded "Hi from the Andromeda Galaxy! BTW, nanobots might consume your planet in 23 years, so consider fleeing for your lives." Is there anything humans could do?

  2. The costs might be high. Suppose our message saves an alien civilization one thousand light-years away, but delays a positive singularity by three days. By the time our colonizers reach the alien planet, the opportunity cost would be a three-light-day deep shell of a thousand light-year sphere. Most of the volume of a sphere is close to the surface, so this cost is enormous. Giving the aliens an escape ark when we colonize their planet would be quintillions of times less expensive. Of course, a paperclipper would do no such thing.

  3. It may be presumptuous to warn about AI. Perhaps the correct message to say is something like "If you think of a clever experiment to measure dark energy density, don't do it."

Comment author: James_Miller 09 September 2014 12:48:07AM 4 points [-]
  1. It depends on your stage of development. You might build a defense, flee at close to the speed of light and take advantage of the universe's expansion to get into a separate Hubble volume from mankind, accelerate your AI program, or prepare for the possibility of annihilation.

  2. Good point, and the resources we put into signaling could instead be used to research friendly AI.

  3. The warning should be honest and give our best estimates.

Comment author: Luke_A_Somers 09 September 2014 06:13:31PM 3 points [-]
  1. Quite.

  2. The outer three days of a 1000 Ly sphere account for 0.0025% of its volume.
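
The shell-volume figure above is easy to check: for a thin shell of depth d on a sphere of radius R, the volume fraction is 1 - ((R - d)/R)^3, roughly 3d/R when d is small. A minimal sketch (assuming 1 year = 365.25 days for the light-day/light-year conversion):

```python
# Fraction of a 1000-light-year sphere's volume in its outer 3-light-day shell.
R = 1000.0            # sphere radius in light-years
d = 3.0 / 365.25      # 3 light-days expressed in light-years

exact = 1 - ((R - d) / R) ** 3   # exact shell fraction
approx = 3 * d / R               # thin-shell approximation

print(f"exact:  {exact * 100:.4f}%")   # ~0.0025%
print(f"approx: {approx * 100:.4f}%")  # ~0.0025%
```

Both the exact expression and the thin-shell approximation give about 0.0025%, confirming Luke_A_Somers's correction: a three-light-day delay costs a tiny sliver of the sphere's volume, not most of it.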