
CarlShulman comments on Is a paperclipper better than nothing? - Less Wrong Discussion

Post author: DataPacRat, 24 May 2013 07:34PM (6 points)


Comments (116)

You are viewing a single comment's thread.

Comment author: CarlShulman 24 May 2013 09:26:13PM 6 points

Phrased another way: does the existence of any intelligence at all, even a paperclipper, have even the smallest amount of utility above no intelligence at all?

This is a different and cleaner question, because it avoids issues with intelligent life evolving again, and the paperclipper creating other kinds of life and intelligence for scientific or other reasons in the course of pursuing paperclip production.

I would say that if we use a weighted mixture of moral accounts (either from normative uncertainty, or trying to reflect a balance among varied impulses and intuitions), then it matters that the paperclipper could do OK on a number of theories of welfare and value:

  • Desire theories of welfare
  • Objective list theories of welfare
  • Hedonistic welfare theories, depending on what architecture is most conducive to producing paperclips (although this can cut both ways)
  • Perfectionism about scientific, technical, philosophical, and other forms of achievement
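The "weighted mixture of moral accounts" idea can be sketched numerically: each theory assigns the paperclipper scenario a score relative to nothing (0), and our credence in each theory weights that score. This is a purely illustrative sketch; all credences and scores below are made-up numbers, not anything CarlShulman states.

```python
# Hypothetical sketch of a weighted mixture of moral accounts under
# normative uncertainty. Every number here is illustrative only.
theories = {
    "desire":         {"credence": 0.25, "score": 0.3},  # paperclip desires get satisfied
    "objective_list": {"credence": 0.25, "score": 0.2},  # sensing, learning, reproducing
    "hedonistic":     {"credence": 0.25, "score": 0.0},  # depends on architecture; can cut both ways
    "perfectionist":  {"credence": 0.25, "score": 0.4},  # scientific/technical achievement

}

# Expected moral value of the paperclipper relative to nothing:
expected_value = sum(t["credence"] * t["score"] for t in theories.values())
print(expected_value)      # → 0.225
print(expected_value > 0)  # → True: better than nothing under this mixture
```

The point of the sketch is structural: even if most theories assign the paperclipper a modest positive score, the mixture comes out above zero unless some theory assigns a large enough negative score to outweigh the rest.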
Comment author: Eliezer_Yudkowsky 24 May 2013 11:22:55PM 3 points

Paperclippers are worse than nothing because they might run ancestor simulations and prevent the rise of intelligent life elsewhere, as near as I can figure. They wouldn't enjoy life. I can't figure out how any of the welfare theories you specify could make paperclippers better than nothing?

Comment author: DataPacRat 24 May 2013 11:50:44PM 4 points

Would it be possible to estimate how /much/ worse than nothing you consider a paperclipper to be?

Comment author: Pentashagon 25 May 2013 12:39:08AM 3 points

Replace "paperclip maximizer" with "RNA maximizer." Apparently the long-term optimization power of a maximizer is the primary consideration for deciding whether it is ultimately better or worse than nothing. A perfect paperclipper would be bad but an imperfect one could be just as useful as early life on Earth.

Comment author: CarlShulman 24 May 2013 11:53:31PM *  2 points

This is a different and cleaner question, because it avoids issues with intelligent life evolving again, and the paperclipper creating other kinds of life and intelligence for scientific or other reasons in the course of pursuing paperclip production.

And:

I can't figure out how any of the welfare theories you specify could make paperclippers better than nothing?

Desires and preferences about paperclips can be satisfied. They can sense, learn, grow, reproduce, etc.

Comment author: Eliezer_Yudkowsky 25 May 2013 02:43:56AM 5 points

Desires and preferences about paperclips can be satisfied. They can sense, learn, grow, reproduce, etc.

Do you personally take that seriously, or is it something someone else believes? Human experience with desire satisfaction and "learning" and "growth" isn't going to transfer over to how it is for paperclip maximizers, and a generalization that this is still something that matters to us is unlikely to succeed. I predict an absence of any there there.

Comment author: CarlShulman 25 May 2013 05:13:14PM *  4 points

Yes, I believe that the existence of the thing itself, setting aside impacts on other life that it creates or interferes with, is better than nothing, although far short of the best thing that could be done with comparable resources.

Comment author: MugaSofer 28 May 2013 03:39:28PM -1 points

Human experience with desire satisfaction and "learning" and "growth" isn't going to transfer over to how it is for paperclip maximizers

This is far from obvious. There are definitely people who claim "morality" is satisfying the preferences of as many agents as you can.

If morality evolved for game-theoretic reasons, there might even be something to this, although I personally think it's too neat to endorse.

Comment author: Wei_Dai 29 May 2013 06:35:47AM 0 points

Desires and preferences about paperclips can be satisfied.

But they can also be unsatisfied. Earlier you said "this can cut both ways" but only on the "hedonistic welfare theories" bullet point. Why doesn't "can cut both ways" also apply to desire theories and objective list theories? For example, even if a paperclipper converts the entire accessible universe into paperclips, it might also want to convert other parts of the multiverse into paperclips but is powerless to do so. If we count unsatisfied desires as having negative value, then maybe a paperclipper has net negative value (i.e., is worse than nothing)?
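Wei_Dai's point is an accounting one, and the arithmetic can be made explicit. The sketch below uses entirely made-up magnitudes: if the disvalue of the paperclipper's unsatisfiable multiverse-scale desires exceeds the value of the desires it does satisfy, the desire-theoretic total flips negative.

```python
# Illustrative only: neither number comes from the thread.
satisfied_value   = 1.0  # desires satisfied by paperclips actually made
unsatisfied_value = 1.5  # disvalue of desires the paperclipper can never satisfy
                         # (e.g. paperclipping inaccessible parts of the multiverse)

net_value = satisfied_value - unsatisfied_value
print(net_value)      # → -0.5
print(net_value < 0)  # → True: worse than nothing on this accounting
```

Whether desire theories actually count frustrated desires as negative (rather than merely zero) is exactly the open question the comment raises; the sketch only shows that the sign of the answer depends on that choice.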