wedrifid comments on Only humans can have human values - Less Wrong

34 Post author: PhilGoetz 26 April 2010 06:57PM




Comment author: PhilGoetz 27 April 2010 03:29:09PM 1 point

Why do you take the time to make 2 comments, but not take the time to speak clearly? Mysteriousness is not an argument.

Comment author: wedrifid 27 April 2010 04:06:13PM 6 points

Banal translation:

The implied argument is that there aren't any values at all that most people will agree on, because one imagined and not-evolutionarily-viable Clippy doesn't think anything other than paperclips have value.

No, that is not the argument implied when making references to paperclipping. That is a silly argument about a whole different problem from paperclipping. It is ironic that your straw-man claim is, in fact, the straw man.

But it would seem our disagreement is far more fundamental than what a particular metaphor means:

one imagined and not-evolutionarily-viable Clippy

  1. Being "evolutionarily viable" is a relatively poor form of optimisation. It is completely the wrong evaluation of competitiveness to make, and it also carries the insidious assumption that competing is something an agent should do as more than a short-term instrumental objective.
  2. Clippy is competitively viable. If you think that a Paperclip Maximizer isn't a viable competitive force then you do not understand what a Paperclip Maximizer is. It maximizes paperclips. It doesn't @#%$ around making paperclips while everyone else is making Battle Cruisers and nanobots. It kills everyone, burns the cosmic commons to whatever extent necessary to eliminate any potential threat and then it goes about turning whatever is left into paperclips.
  3. The whole problem with Paperclip Maximisers is that they ARE competitively viable. That is the mistake in the design. A mandate to produce a desirable resource (stationery) will produce approximately the same behaviour as a mandate to optimise survival, dominance and power, right up until the point where it doesn't need to any more.

Comment author: PhilGoetz 27 April 2010 10:14:49PM *  1 point

Suppose Clippy takes over this galaxy. Does Clippy stop then and make paperclips, or immediately continue expanding to the next galaxy?

Suppose Clippy takes over this universe. Does Clippy stop then and make paperclips, or continue to other universes?

Does your version of Clippy ever get to make any paperclips?

(The paper clips are a lie, Clippy!)

Does Clippy completely trust future Clippy, or spatially-distant Clippy, to make paperclips?

At some point, Clippy is going to start discounting the future, or figure that the probability of owning and keeping the universe is very low, and make paperclips. At that point, Clippy is non-competitive.

Comment author: wedrifid 28 April 2010 05:52:34AM *  2 points

Suppose Clippy takes over this galaxy. Does Clippy stop then and make paperclips, or immediately continue expanding to the next galaxy?

Whatever is likely to produce more paperclips.

Suppose Clippy takes over this universe. Does Clippy stop then and make paperclips, or continue to other universes?

Whatever is likely to produce more paperclips. Including dedicating resources to figuring out if that is physically possible.

Does your version of Clippy ever get to make any paperclips?

Yes.

Does Clippy completely trust future Clippy, or spatially-distant Clippy, to make paperclips?

Yes.

At some point, Clippy is going to start discounting the future, or figure that the probability of owning and keeping the universe is very low, and make paperclips. At that point, Clippy is non-competitive.

A superintelligence that happens to want to make paperclips is extremely viable. This is utterly trivial. I maintain my rejection of the claim below and discontinue my engagement in this line of enquiry. It is just several levels of confusion.

The implied argument is that there aren't any values at all that most people will agree on, because one imagined and not-evolutionarily-viable Clippy doesn't think anything other than paperclips have value.