
DataPacRat comments on Is a paperclipper better than nothing? - Less Wrong Discussion

Post author: DataPacRat 24 May 2013 07:34PM




Comment author: DataPacRat 24 May 2013 07:46:51PM 2 points

To be clear - you're saying that you would prefer nothing at all over the existence of a single thing which takes negentropy and converts it into order (or whatever other general definition of 'life' you prefer), and which may or may not have the possibility of evolving into something else more complicated?

Comment author: Baughn 24 May 2013 09:18:18PM 5 points

I'm thinking that the paperclipper counts as a life not worth living - an AI that wants to obsess about paperclips is about as repugnant to me as a cow that wants to be eaten. Which is to say, better than doing either of those without wanting it, but still pretty bad. Yes, I'm likely to have problems with a lot of genuinely friendly AIs.

I was assuming that both scenarios were for keeps. Certainly the paperclipper should be smart enough to ensure that; for the other, I guess I'll assume you're actually destroying the universe somehow.

Comment author: lukstafi 25 May 2013 08:17:40PM 2 points

It is a fair point, but do you mean that the paperclipper is wrong in its judgement that its life is worth living, or is it merely your judgement that, if you were the paperclipper, your life would not be worth living by your current standards? Remember that we are assuming no other life is possible in the universe anyway -- this assumption makes things more interesting.

Comment author: Baughn 26 May 2013 12:54:14PM 2 points

It's my judgement that the paperclipper's life is not worth living. By my standards, sure; objective morality makes no sense, so what other standards could I use?

The paperclipper's own opinion matters to me, but not all that much.

Comment author: lukstafi 26 May 2013 03:06:54PM 0 points

Would you engage with a particular paperclipper in a discussion (plus observation etc.) to refine your views on whether its life is worth living? (We are straying away from a nominal AIXI-type definition of "the" paperclipper, but I think your initial comment warrants that. Besides, even an AIXI agent depends on both terminal values and history.)

Comment author: Baughn 26 May 2013 06:05:50PM 3 points

No; if I did so, it'd hack my mind and convince me to make paperclips in my own universe. That's assuming it couldn't somehow use the communications channel to directly take over our universe.

I'm not quite sure what you're asking here.

Comment author: lukstafi 26 May 2013 07:09:37PM 1 point

Oh well, I hadn't thought of that. I was "asking" about the methodology for judging whether a life is worth living.

Comment author: Baughn 26 May 2013 07:52:45PM 0 points

Whether or not I would enjoy living it, taking into account any mental changes I would be okay with.

For a paperclipper... yeah, no.

Comment author: lukstafi 27 May 2013 05:19:39AM 2 points

But you have banned most of the means of approximating the experience of living such a life, no? In the general case you wouldn't be justified in your claim (where by the general case I mean a situation in which I strongly doubt you know the other entity, not the case of "the" paperclipper). Do you have a proof that having a single terminal value excludes having a rich structure of instrumental values? Or does the way you experience terminal values overwhelm the way you experience instrumental values?

Comment author: MugaSofer 28 May 2013 03:56:23PM -1 points

Assuming that clippy (or the cow, which makes more sense) feels "enjoyment", aren't you just failing to model them properly?

Comment author: Baughn 29 May 2013 01:15:18PM 0 points

It's feeling enjoyment from things I dislike, and failing to pursue goals I do share. It has little value in my eyes.

Comment author: Gabriel 24 May 2013 08:33:08PM 0 points

That sounds as if scenario B precluded abiogenesis from happening ever again. After all, prebiotic Earth kind of was a thing which took negentropy and (eventually) converted it into order.

Comment author: DataPacRat 24 May 2013 08:43:52PM 0 points

The question for B might then become: under which scenario is some sort of biogenesis more likely, one in which a paperclipper exists, or one in which it doesn't? The former includes the paperclipper itself as potential fodder for evolution, but (as was just pointed out) there's a chance the paperclipper might work to prevent it; the latter has no paperclipper to serve as either fodder or interference, leaving things to natural processes.

At what point in biogenesis/evolution/etc. do you think the Great Filter does its filtering?