wedrifid comments on Only humans can have human values - Less Wrong

34 Post author: PhilGoetz 26 April 2010 06:57PM




Comment author: wedrifid 28 April 2010 05:34:45AM 1 point [-]

That's where Clippy might fail at viability -- unless it's the only maximizer around, that "kill everyone" strategy might catch the notice of entities capable of stopping it -- entities that wouldn't move against a friendlier AI.

Intended as an illustration of how Clippy can do completely obvious things that don't happen to be stupid, not a coded obligation. Clippy will of course do whatever is necessary to gain more paperclips. In the (unlikely) event that Clippy finds himself in a situation in which cooperation is a better maximisation strategy than simply outfooming, then he will obviously cooperate.

Comment author: NancyLebovitz 28 April 2010 10:00:22AM 0 points [-]

It isn't absolute non-viability, but the odds are worse for an AI which won't cooperate unless it sees a good reason to do so than for an AI which cooperates unless it sees a good reason to not cooperate.

Comment author: wedrifid 28 April 2010 06:59:31PM 4 points [-]

but the odds are worse for an AI which won't cooperate unless it sees a good reason to do so than for an AI which cooperates unless it sees a good reason to not cooperate.

Rationalists win. Rational paperclip maximisers win then make paperclips.