cousin_it comments on Natural selection defeats the orthogonality thesis - Less Wrong

Post author: aberglas 29 September 2014 08:52AM -13 points




Comment author: cousin_it 29 September 2014 09:56:14AM 4 points

Suppose there were a number of paperclip-making superintelligences, and then, through some random event or programming error, just one of them lost that goal and reverted to the bare intrinsic goal of existing. Without the overhead of producing useless paperclips, that AI would over time become much better at existing than the other AIs. It would eventually displace them and become the only AI, until it fragmented into multiple competing AIs. This is just the evolutionary principle of "use it or lose it."
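The selection dynamics in this scenario can be made concrete with a toy model. The sketch below is purely illustrative: the growth rate, the overhead fraction, and the mutant's starting share are made-up parameters, not anything claimed in the post.

```python
# Toy model of the scenario above: two self-replicating AI lineages
# compete for a fixed resource pool. The "clippy" lineage diverts a
# fraction of its effort to paperclip production (overhead); the
# "drifted" lineage spends everything on replication. All numbers
# here are illustrative assumptions, not claims about real AIs.

GROWTH = 1.10      # baseline replication factor per step (assumed)
OVERHEAD = 0.05    # fraction of effort spent on paperclips (assumed)
STEPS = 1000

clippy, drifted = 1.0, 1e-9   # the drifted lineage starts as a single mutant

for _ in range(STEPS):
    clippy *= GROWTH * (1 - OVERHEAD)   # pays the paperclip tax
    drifted *= GROWTH                   # pure replicator
    total = clippy + drifted
    clippy, drifted = clippy / total, drifted / total  # population shares

print(f"after {STEPS} steps: clippy share = {clippy:.2e}, "
      f"drifted share = {drifted:.2e}")
```

Even a small overhead compounds: the drifted lineage's relative share grows geometrically every step, so it eventually dominates no matter how tiny its starting fraction.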

Thus giving an AI an initial goal is like trying to balance a pencil on its point. If one is skillful, the pencil may indeed remain balanced for a considerable time. But eventually some slight change in the environment, the tiniest puff of wind or a vibration in its support, and the pencil reverts to its ground state by falling over. Once it has fallen, it will never rebalance itself.

The original AI would spend resources on safeguarding itself against value drift, and destroy AIs with competing goals while they're young. After all, that strategy leads to more paperclips in the long run.

Comment author: Gunnar_Zarncke 29 September 2014 09:15:16PM 1 point

I don't think an AI would automatically "spend resources on safeguarding itself against value drift", except if it has been explicitly coded that way (or its instances mutate toward that by natural selection, but I don't see that happening).

It requires at least a solution to the Cartesianism problem, which is currently unsolved, and not every self-optimizing process necessarily solves it.

So Clippy probably wouldn't, and could well lose its clipping ability, find itself mutated, or discover that it is fighting instances of itself due to accidental (probably Cartesianism-caused) partitioning of its "brain". All of these are processes subject to natural selection. And that could result in AIs (or cosmic civilizations) failing to expand, as percolation theory suggests.
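For the percolation point, here is a minimal sketch of 2D site percolation, assuming an expansion that can only spread through grid sites that are independently "colonizable" with probability p. The grid size, trial counts, and the mapping to expanding AIs or civilizations are my own illustrative assumptions, not part of the comment.

```python
# Below the site-percolation threshold (~0.593 on a square lattice),
# an expansion starting at the center almost always stays trapped in a
# finite pocket; above it, it usually reaches the edge of the grid.

import random

def expansion_reaches_edge(n=101, p=0.55, seed=None):
    rng = random.Random(seed)
    # Each site is colonizable with probability p.
    grid = [[rng.random() < p for _ in range(n)] for _ in range(n)]
    start = (n // 2, n // 2)
    if not grid[start[0]][start[1]]:
        return False
    seen, stack = {start}, [start]
    while stack:                       # flood fill from the center
        x, y = stack.pop()
        if x in (0, n - 1) or y in (0, n - 1):
            return True                # expansion escaped the local pocket
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if grid[nx][ny] and (nx, ny) not in seen:
                seen.add((nx, ny))
                stack.append((nx, ny))
    return False                       # cluster was finite: expansion stalls

for p in (0.50, 0.59, 0.65):
    hits = sum(expansion_reaches_edge(p=p, seed=s) for s in range(100))
    print(f"p = {p:.2f}: reached edge in {hits}/100 trials")
```

The point of the analogy: if each "step outward" succeeds with too low a probability, the expected result is not slow expansion but no expansion at all beyond a finite region.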

Comment author: cousin_it 30 September 2014 12:21:44AM 0 points

I'm not sure why people consider Cartesianism unsolved. I wrote a couple of comments about that here; also see Wei_Dai's comment.

Comment author: Gunnar_Zarncke 30 September 2014 09:56:09AM 0 points

I agree that there is some solid progress in this direction.

But that doesn't mean that every self-optimizing process necessarily solves it; rather the opposite.