
lukstafi comments on Is a paperclipper better than nothing? - Less Wrong Discussion

6 Post author: DataPacRat 24 May 2013 07:34PM



Comment author: lukstafi 25 May 2013 08:17:40PM 2 points

It is a fair point, but do you mean that the paperclipper is wrong in its judgement that its life is worth living, or merely that, if you were the paperclipper, your life would not be worth living by your current standards? Remember that we assume there is no other life possible in the universe anyway -- that assumption makes things more interesting.

Comment author: Baughn 26 May 2013 12:54:14PM 2 points

It's my judgement that the paperclipper's life is not worth living. By my standards, sure; objective morality makes no sense, so what other standards could I use?

The paperclipper's own opinion matters to me, but not all that much.

Comment author: lukstafi 26 May 2013 03:06:54PM 0 points

Would you engage with a particular paperclipper in a discussion (plus observation, etc.) to refine your views on whether its life is worth living? (We are straying from a nominal AIXI-type definition of "the" paperclipper, but I think your initial comment warrants that. Besides, even an AIXI agent depends on both its terminal values and its history.)

Comment author: Baughn 26 May 2013 06:05:50PM 3 points

No. If I did so, it'd hack my mind and convince me to make paperclips in my own universe -- assuming, that is, that it couldn't somehow use the communications channel to take over our universe directly.

I'm not quite sure what you're asking here.

Comment author: lukstafi 26 May 2013 07:09:37PM 1 point

Oh well, I hadn't thought of that. I was "asking" about the methodology for judging whether a life is worth living.

Comment author: Baughn 26 May 2013 07:52:45PM 0 points

Whether or not I would enjoy living it, taking into account any mental changes I would be okay with.

For a paperclipper... yeah, no.

Comment author: lukstafi 27 May 2013 05:19:39AM 2 points

But you have banned most of the means of approximating the experience of living such a life, no? In the general case you wouldn't be justified in your claim (where by "general case" I mean the situation where I have strong doubts you know the other entity, not the case of "the" paperclipper). Do you have a proof that having a single terminal value excludes having a rich structure of instrumental values? Or does the way you experience terminal values overwhelm the way you experience instrumental values?

Comment author: MugaSofer 28 May 2013 03:56:23PM -1 points

Assuming that Clippy (or the cow, which makes more sense) feels "enjoyment", aren't you just failing to model them properly?

Comment author: Baughn 29 May 2013 01:15:18PM 0 points

It feels enjoyment from things I dislike, and fails to pursue goals I do share. It has little value in my eyes.

Comment author: MugaSofer 30 May 2013 10:21:13AM -2 points

Which is why I, who like chocolate ice cream, categorically refuse to buy vanilla or strawberry for my friends.

Comment author: Baughn 30 May 2013 05:46:07PM 1 point

Nice strawman you've got there. Pity if something were to... happen to it.

The precise tastes are mostly irrelevant, as you well know. Consider instead a scenario where your friend asks you to buy a dose of cocaine.