
chaosmage comments on The Fermi paradox as evidence against the likelihood of unfriendly AI - Less Wrong Discussion

5 Post author: chaosmage 01 August 2013 06:46PM


Comments (53)


Comment author: chaosmage 01 August 2013 11:11:02PM *  -1 points

Any new intelligence would have to arise from (something like) natural selection, as a useful trick to have in the competition for resources that everything from bacteria upwards has evolved to be good at. I fail to imagine any intelligent lifeform that wouldn't want to expand.

Comment author: passive_fist 02 August 2013 02:31:05AM 1 point

Even though the product of natural selection can be assumed to be 'fit' with regard to its environment, there's no reason to assume that it will consciously embody the values of natural selection. Consider: birth control.

In particular, expansion may be a good strategy for a species but not necessarily a good strategy for individuals of that species.

Consider: a predator (say, a bird of prey or a big cat) has no innate desire for expansion. All the animal wants is some predetermined territory for itself, and it will never enlarge this territory, because the territory provides all that it needs and policing a larger area would be a waste of effort. Expansion, in many species, is merely a group phenomenon. If the species is allowed to grow unchecked (fewer predators, larger food supply), it will expand simply by virtue of there being more individuals than there were before.

A similar situation can arise with an SAI (superintelligent AI). Let's say an SAI emerges victorious from competition with other SAIs and its progenitor species. To eliminate competition it ruthlessly expands over its home planet and crushes all opposition. It's entirely possible then that by conquering its little planet it has everything it needs (its utility function is maximized), and since there are no competitors around, it settles down, relaxes, and ceases expansion.

Even if the SAI were compelled to grow (by accumulating more computational resources), expansion isn't guaranteed. Let's say it figures out how to create a hypercomputer with unlimited computational capacity (using, say, a black hole). If this hypercomputer provides it with all its needs, there would be no reason to expand. Plus, communication over large distances is difficult, so expansion would actually have negative value.
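The satiation argument above can be sketched as a toy model. This is purely illustrative and not anything from the thread: the satiation threshold, the function shape, and the communication-cost penalty are all assumptions made for the sketch.

```python
# Toy model of a "satiable" utility function: resources beyond a satiation
# point add nothing, and remote resources carry a coordination/communication
# cost. All numbers are illustrative assumptions.

def utility(resources, distance_cost=0.0):
    SATIATION = 100.0                 # assumed level at which needs are fully met
    base = min(resources, SATIATION)  # no marginal gain past satiation
    return base - distance_cost       # distant resources can cost more than they add

# An agent that already controls 100 units gains nothing by expanding further...
assert utility(150.0) == utility(100.0)
# ...and is strictly worse off if remote resources incur a communication cost.
assert utility(150.0, distance_cost=5.0) < utility(100.0)
```

Under such a utility function, the optimal policy stops expanding once the satiation point is reached, which is the situation described above.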

Comment author: Vladimir_Nesov 02 August 2013 02:49:04AM *  2 points

It's entirely possible then that by conquering its little planet [the AGI] has everything it needs (its utility function is maximized)

I don't think it is possible. Even if it specifically did not care about the state of the rest of the world, the rest of the world would still be useful for instrumental reasons: controlling it provides resources for computing more optimal actions to perform on the original planet. Whether it truly doesn't care about the rest of the world is itself unlikely to be certain; cleanly evaluating properties of even minimally nontrivial goals seems hard. Even if, under its current understanding of the world, the meaning of its values is that it doesn't care about the rest of the world, it might be wrong, perhaps given some future hypothetical discovery about fundamental physics. In that case it's better to already have the rest of the world under control, ready to be optimized in a newly discovered direction (or, before that, to run those experiments).

Far too many things have to align for this to happen.

Comment author: Lumifer 02 August 2013 03:45:16PM 1 point

It is possible to have factors in one's utility function which limit expansion.

For example, a utility function might involve "preservation in an untouched state", something similar to what humans do when they declare a chunk of nature to be a protected wilderness.

Or a utility function might contain "observe development and change without influencing it".

And, of course, if we're willing to assume an immutable cast-in-stone utility function, why not assume that there are some immutable constraints which go with it?
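The "preservation" factor above can be sketched as a toy model. This is only an illustration of the idea, not anything proposed in the thread: the penalty weight and the function shape are assumptions made for the sketch.

```python
# Toy sketch: a utility function with a "preservation" term, analogous to a
# protected wilderness. Influencing the protected region is penalized heavily
# enough that expansion into it is never optimal. All values are illustrative.

def utility(resources_gained, protected_influence):
    PENALTY = 1000.0  # assumed weight on disturbing the protected region
    return resources_gained - PENALTY * protected_influence

# Expanding into the protected region is net-negative even for a large gain:
assert utility(500.0, protected_influence=1.0) < utility(0.0, protected_influence=0.0)
```

With a sufficiently large penalty weight, the constraint behaves like the immutable, cast-in-stone restriction described above: the agent maximizes utility precisely by leaving the region untouched.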

Comment author: passive_fist 02 August 2013 03:48:52AM 0 points

It's definitely unlikely; I just brought it up as an example because chaosmage said "I fail to imagine any intelligent lifeform that wouldn't want to expand." There are plenty of lifeforms already that don't want to expand, and I can imagine some (unlikely but not impossible) situations where an SAI wouldn't want to expand either.