
D_Malik comments on What if Strong AI is just not possible? - Less Wrong Discussion

Post author: listic 01 January 2014 05:51PM 7 points



Comment author: D_Malik 02 January 2014 01:02:18AM 12 points

Every strong AI instantly kills everyone, so by anthropic effects your mind ends up in a world where every attempt to build strong AI mysteriously fails.

Comment author: mwengler 02 January 2014 08:12:51PM 1 point

This looks like gibberish to me. Does it refer to something that someone could explain and/or link to? Or was it merely meant as an unlabeled story idea?

Comment author: TylerJay 02 January 2014 09:46:45PM 4 points

It's actually pretty clever. We're taking the assertion "Every strong AI instantly kills everyone" as a premise, meaning that on any planet where Strong AI has ever been created or ever will be created, that AI always ends up killing everyone.

Anthropic reasoning is a way of answering questions about why our little piece of the universe is so well suited for human life. For example: "Why do we find ourselves on a planet in the habitable zone of a star, with an atmosphere that blocks most radiation, where gravity is neither too low nor too high, and where the temperature is right for liquid water to exist?"

The answer is known as the Anthropic Principle: "We find ourselves here BECAUSE our environment is tuned in a way that allows life to exist." Even though it's unlikely for all of these factors to come together, the places where they do are the only places where life exists. So any lifeform that looks around at its surroundings will find an environment with all the right factors aligned to allow it to exist. It seems obvious once you spell it out, but it does have some explanatory power for why we find ourselves where we do.
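The selection effect described here can be shown with a toy simulation (a sketch only; the individual probabilities are made-up illustrative numbers, not anything from the comment): habitable planets are rare unconditionally, but every observer, by construction, finds itself on one.

```python
import random

random.seed(0)

def random_planet():
    """A planet with several independent, individually unlikely factors."""
    return {
        "habitable_zone": random.random() < 0.1,  # illustrative probabilities
        "atmosphere":     random.random() < 0.2,
        "right_gravity":  random.random() < 0.3,
        "liquid_water":   random.random() < 0.2,
    }

def supports_life(planet):
    return all(planet.values())

planets = [random_planet() for _ in range(100_000)]
life_bearing = [p for p in planets if supports_life(p)]

# Unconditionally, a habitable planet is rare (about 0.1 * 0.2 * 0.3 * 0.2)...
print(f"fraction habitable: {len(life_bearing) / len(planets):.4f}")

# ...but conditioned on containing an observer, every planet the observer
# examines has all factors aligned -- that's the selection effect.
assert all(supports_life(p) for p in life_bearing)
```

D_Malik's comment applies the same conditioning with "no strong AI yet" added as one more factor an observer must find satisfied.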

The suggestion by D_Malik is that the absence of strong AI is a necessary condition for life to exist (since strong AI kills everyone right away if you build it). So the very fact that there is life on a planet to write a story about implies either that strong AI hasn't been built yet or that its creation failed for some reason.

Comment author: mwengler 03 January 2014 01:30:21AM 0 points

It seems like a weak premise, in that human intelligence is just Strong NI (Strong Natural Intelligence). What would it be about strong AI that makes it kill everything, when Strong NI does not? A stronger premise would be more fundamental: a premise about something more basic about AI vs. NI that would explain how strong AI came to kill everything when Strong NI obviously does not.

But OK, it's a premise for a story.

Comment author: [deleted] 02 January 2014 11:58:45PM 0 points

That doesn't explain why the universe isn't filled with strong AIs, however...

Comment author: James_Miller 04 January 2014 01:17:38AM 0 points

Most of it probably is (under the assumption), but observers such as us exist only in the part free of strong AI. If strong AI spreads out at the speed of light, observers such as us won't be able to detect it.

Comment author: [deleted] 04 January 2014 02:01:45AM 1 point

Still doesn't address the underlying problem. The Milky Way is about 100,000 light years across, but billions of years old. It is extremely unlikely that some non-terrestrial strong AI just happened to come into existence at the exact same time that modern humans evolved, and is spreading throughout the universe at near the speed of light but just hasn't reached us yet.

Note that "moving at the speed of light" is not the issue here. Even predictions of how long it would take to colonize the galaxy with procreating humans and 20th-century technology say that the galaxy should have been completely tiled eons ago.
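The timescale argument is easy to check with back-of-the-envelope arithmetic (a sketch; the wavefront speeds and the order-of-magnitude galactic age are illustrative assumptions, not figures from the comment): even a very slow colonization wave crosses the galaxy in a time that is small compared with its age.

```python
GALAXY_DIAMETER_LY = 100_000   # from the comment above
GALAXY_AGE_YEARS = 10e9        # order-of-magnitude galactic age (assumption)

def crossing_time_years(speed_fraction_of_c):
    """Years for a wavefront at the given fraction of light speed to
    traverse the galactic diameter (distance in ly / speed in c = years)."""
    return GALAXY_DIAMETER_LY / speed_fraction_of_c

for v in (1.0, 0.01, 0.001):   # light speed, 1% of c, 0.1% of c
    t = crossing_time_years(v)
    print(f"at {v} c: {t:,.0f} years "
          f"({t / GALAXY_AGE_YEARS:.2%} of galactic age)")
```

Even at 0.1% of light speed the crossing takes about 100 million years, roughly 1% of the galaxy's age, which is the sense in which the galaxy "should have been completely tiled eons ago."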

Comment author: James_Miller 04 January 2014 02:46:32AM 0 points

Imagine that 99.9999999999999% of the universe (and 100% of most galaxies) is under the control of strong AIs, and they expand at the speed of light. Observers such as us would live in the part of the universe not under their control and would see no evidence of strong AIs.

> It is extremely unlikely that some non-terrestrial strong AI just happened to come into existence at the exact same time that modern humans evolved, and is spreading throughout the universe at near the speed of light but just hasn't reached us yet.

The universe (not necessarily just the observable universe) is very big, so I don't agree. It would be true if you wrote "galaxy" instead of "universe."

Comment author: TylerJay 03 January 2014 12:26:47AM 0 points

True, but given the assumptions, it would be evidence that none have come into physical contact with the story-world (or else everyone there would be dead).

Comment author: shminux 04 January 2014 03:04:21AM -1 points

The anthropic principle selects certain universes out of all possible ones. In this case, we can only exist in the subset that admits humans but prohibits strong AI. You have to subscribe to a version of many-worlds to apply it; I'm not sure whether you do. Whether the idea of anthropic selection is a useful one remains to be seen.

Comment author: [deleted] 06 January 2014 04:39:38PM 1 point

My point is more that expansion of the strong AI would not occur at the speed of light, so there should be very distant but observable galactic-level civilizations of AIs changing the very nature of the regions they reside in, in ways that would be spectrally observable. Or, in those multiverses where a local AI respects some sort of prime directive, we may be left alone, but our immediate stellar neighborhood should nevertheless contain signs of extraterrestrial resource usage. So where are they?

Comment author: shminux 06 January 2014 05:18:30PM -1 points

> My point is more that expansion of the strong AI would not occur at the speed of light

How do you know that? Or why do you think it's a reasonable assumption?

> so there should be very distant but observable galactic-level civilizations of AIs changing the very nature of the regions they reside in, in ways that would be spectrally observable.

How would we tell if a phenomenon is natural or artificial?

> in those multiverses where a local AI respects some sort of prime directive, we may be left alone, but our immediate stellar neighborhood should nevertheless contain signs of extraterrestrial resource usage

It would not be a good implementation of the prime directive if the signs of superior intelligences were obvious.