wedrifid comments on Only humans can have human values - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (159)
That's where Clippy might fail at viability: unless it's the only maximizer around, that "kill everyone" strategy might catch the notice of entities capable of stopping it, entities that wouldn't move against a friendlier AI.
A while ago, there was some discussion of AIs which cooperated by sharing permission to view source code. Did that discussion come to any conclusions?
Assuming it's possible to verify that the real source code is being seen, I don't think a paper clipper is going to get very far unless the other AIs also happen to be paper clippers.
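The source-code-sharing idea above can be sketched as a toy "program equilibrium": each agent is handed the other's (assumed verified) source and decides whether to cooperate based on what it reads there. The sketch below is illustrative only; the names (`mirror_policy`, `defect_policy`, the source strings) are hypothetical, and real source verification is hand-waved by treating sources as plain strings.

```python
# Toy model of cooperation via mutual source inspection.
# An agent is a pair (source, policy): the source string stands in for
# verified code, and the policy maps the opponent's source to a move.

MIRROR_SRC = "cooperate iff opponent source == MIRROR_SRC"
DEFECT_SRC = "always defect"

def mirror_policy(opponent_source: str) -> str:
    """Cooperate only if the opponent is running the same code I am."""
    return "C" if opponent_source == MIRROR_SRC else "D"

def defect_policy(opponent_source: str) -> str:
    """Defect unconditionally, whatever the opponent's code says."""
    return "D"

def play(agent_a, agent_b):
    """One round: each policy sees the other's source before moving."""
    (src_a, pol_a) = agent_a
    (src_b, pol_b) = agent_b
    return pol_a(src_b), pol_b(src_a)

mirror = (MIRROR_SRC, mirror_policy)
defector = (DEFECT_SRC, defect_policy)

print(play(mirror, mirror))    # two verified copies cooperate: ('C', 'C')
print(play(mirror, defector))  # mismatch, so mutual defection: ('D', 'D')
```

The point of the toy is that conditioning on verified source lets cooperators cooperate with each other while remaining safe against unconditional defectors, which is the property the thread is asking about.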
Intended to be an illustration of how Clippy can do completely obvious things that don't happen to be stupid, not a coded obligation. Clippy will of course do whatever is necessary to gain more paper-clips. In the (unlikely) event that Clippy finds himself in a situation in which cooperation is a better maximisation strategy than simply outfooming, then he will obviously cooperate.
It isn't absolute non-viability, but the odds are worse for an AI which won't cooperate unless it sees a good reason to do so than for an AI which cooperates unless it sees a good reason not to.
Rationalists win. Rational paperclip maximisers win then make paperclips.