JGWeissman comments on Only humans can have human values - Less Wrong

34 Post author: PhilGoetz 26 April 2010 06:57PM




Comment author: NancyLebovitz 27 April 2010 04:25:18PM 0 points [-]

It kills everyone, burns the cosmic commons to whatever extent necessary to eliminate any potential threat, and then it goes about turning whatever is left into paperclips.

That's where Clippy might fail at viability-- unless it's the only maximizer around, that "kill everyone" strategy might catch the notice of entities capable of stopping it-- entities that wouldn't move against a friendlier AI.

A while ago, there was some discussion of AIs which cooperated by sharing permission to view source code. Did that discussion come to any conclusions?

Assuming it's possible to verify that the real source code is being seen, I don't think a paperclipper is going to get very far unless the other AIs also happen to be paperclippers.

Comment author: JGWeissman 27 April 2010 07:06:29PM 3 points [-]

That's where Clippy might fail at viability-- unless it's the only maximizer around, that "kill everyone" strategy might catch the notice of entities capable of stopping it-- entities that wouldn't move against a friendlier AI.

An Earth-originating paperclipper that gets squashed by superintelligences from somewhere else would still be very bad for humans.

Though I don't see why a paperclipper couldn't compromise and cooperate with competing superintelligences just as well as other superintelligences with different goals could. If other AIs are a problem for Clippy, they are also a problem for AIs that are Friendly towards humans but not necessarily friendly towards alien superintelligences.