
ArisKatsaris comments on Newcomb versus dust specks - Less Wrong Discussion

-1 Post author: ike 12 May 2016 03:02AM



Comment author: ArisKatsaris 12 May 2016 07:04:39AM 2 points [-]

Well I personally don't want to be tortured, so I choose the dust speck.

Even if I weren't personally involved, and were deciding on morality alone rather than personal interest, average utilitarianism tells me that I should choose the dust speck. (Better that 100% of all people suffer a dust speck than that 100% of all people suffer torture.)

Comment author: gjm 12 May 2016 11:05:14AM -1 points [-]

Do you generally endorse average utilitarianism? E.g., if you can press a button to create a new world, completely isolated from all others, containing 10^10 people 10x happier than typical present-day Americans, do you press it if what currently exists is a world with 10^10 people only 9x happier than typical present-day Americans and refrain from pressing it if it's 11x instead?
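gjm's button scenario can be put in numbers. The sketch below is illustrative only: it flattens "Nx happier than typical present-day Americans" into a utility of N per person, which is an assumption of the sketch, not something the thread specifies.

```python
# Hedged sketch of the button scenario under average utilitarianism.
# Assumption: "Nx happier" is treated as utility N per person.

def average_utility(populations):
    """populations: list of (number_of_people, utility_per_person)."""
    total = sum(n * u for n, u in populations)
    count = sum(n for n, _ in populations)
    return total / count

new_world = (10**10, 10.0)  # the world the button would create

for baseline in (9.0, 11.0):
    existing = [(10**10, baseline)]
    before = average_utility(existing)
    after = average_utility(existing + [new_world])
    print(baseline, "press" if after > before else "refrain")

# With a 9x baseline the average rises (9.0 -> 9.5), so press;
# with an 11x baseline it falls (11.0 -> 10.5), so refrain.
```

Total utilitarianism, by contrast, would press in both cases, since the new world adds positive utility either way; that asymmetry is the point of gjm's question.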

Comment author: ArisKatsaris 13 May 2016 07:45:20PM *  0 points [-]

The answer is complex:

  • First of all, the creation of people is a complex moral decision. Whether you espouse average utilitarianism, total utilitarianism, or any other ethical theory, if you ask someone "Would you press a button that would create a person?", they'd normally be HESITANT, no matter whether you said it would be a very happy person or a moderately happy person. We tend to think of creating people as a big deal that brings a big responsibility.

  • Secondly, my average utilitarianism is about the satisfaction of preferences, not happiness. This may seem a nitpick, though.

  • Thirdly, I can't help but notice that you're using the example of creating a world that in reality would increase average utility, while stipulating a hypothetical in which that particular creation would decrease it. This feels like a scenario designed to confuse the moral intuition into giving the wrong answer.

So using the current reality instead (rather than the one where people are 9x happier): Would I choose to create another universe happier than this one? In general, yes. Would I choose to create another universe half as happy as this one? In general, no, not unless the presence of that universe would provide some additional value to us, enough to make up for the loss in average utility.

Comment author: gjm 13 May 2016 11:31:54PM -1 points [-]

the creation of people is a complex moral decision

True enough. But it seems to me that hesitation in such cases is usually due to uncertainty, either about whether the new people would really have good lives or about their effect on others around them. In the scenarios I described, everyone involved gets a good life when all their interactions with others are taken into account. So yes, creating lives is complex, but I don't see that that invalidates my question at all.

preferences, not happiness

That happens to be my, er, preference too. I agree it's a nitpick; we can just take "10x happier" as a sort of shorthand for some corresponding statement about preferences.

designed to confuse the moral intuition

I promise I had absolutely no such intention. I took the levels higher than typical ones in our world to avoid distracting digressions about whether the typical life in our world is in fact better than nothing. (Note that this isn't the same question as whether it's worth continuing such a life once it's already in progress.)

Your example of a world half as happy as this seems like it has a similar but opposite problem: depending on what "half as happy" actually means, you might be describing a change that would be rejected by total utilitarianism as well as average. That's the problem I was trying to avoid.

Comment author: Jiro 13 May 2016 08:51:55PM 1 point [-]

Would I choose to create another universe happier than this one? In general, yes.

Okay, now I reveal that just yesterday we discovered yet another universe, one that already exists and is a lot happier than the one you would choose to create. In fact, it's so much happier that creating your universe would now drive the average down instead of up.

If you're using average utility, then whether this discovery has been made affects whether you want to create that other universe. Is that correct?

Comment author: ArisKatsaris 14 May 2016 02:52:12PM 0 points [-]

If you're using average utility, then whether this discovery has been made affects whether you want to create that other universe. Is that correct?

With the standard caveats, yes, that seems reasonable. Given the existence of that ultrahappy universe, an average human life is more likely to be lived in happy circumstances than it would be in the multiversal reality I'd create by adding that less-than-averagely-happy universe.
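The flip Jiro describes can be sketched with toy numbers (illustrative utilities chosen for the example, not taken from the thread):

```python
# Hedged sketch of the discovery scenario: the same act of creation
# raises the average before the discovery and lowers it afterwards.

def average_utility(populations):
    """populations: list of (number_of_people, utility_per_person)."""
    total = sum(n * u for n, u in populations)
    count = sum(n for n, _ in populations)
    return total / count

ours = (10**10, 1.0)        # our universe (baseline utility 1)
candidate = (10**10, 2.0)   # the happier universe we could create
discovered = (10**10, 10.0) # the ultrahappy universe just discovered

# Before the discovery, creating the candidate raises the average:
assert average_utility([ours, candidate]) > average_utility([ours])  # 1.5 > 1.0

# After the discovery, the very same creation lowers it:
assert average_utility([ours, discovered, candidate]) < \
       average_utility([ours, discovered])  # ~4.33 < 5.5
```

Nothing about the candidate universe changed; only the rest of the multiverse did, which is exactly the dependence on external facts that Jiro's question probes.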

In the same way, I wouldn't take 20% of actually existing happy people and force them to live less happy lives.

Think of all sentient lives as if they were part of a single mind, called "Sentience". We design portions of Sentience's life. We want as large a proportion of Sentience's existence as possible to be happy, satisfying Sentience's preferences.