Jiro comments on Newcomb versus dust specks - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (104)
The answer is complex
First of all, the creation of people is a complex moral decision. Whether you espouse average utilitarianism, total utilitarianism, or some other moral theory, if you ask someone "Would you press a button that would create a person?", they'd normally be HESITANT, regardless of whether you said it would be a very happy person or a moderately happy person. We tend to think of creating people as a big deal, one that brings a big responsibility.
Secondly, my average utilitarianism is about the satisfaction of preferences, not happiness. This may seem like a nitpick, though.
Thirdly, I can't help but notice that you're using the example of creating a world that in reality would increase average utility, even as your hypothetical stipulates that in this particular case it would decrease average utility. This feels like a scenario designed to confuse the moral intuition into giving the wrong answer.
So using the current reality instead (rather than the one where people are 9x happier): Would I choose to create another universe happier than this one? In general, yes. Would I choose to create another universe, half as happy as this one? In general, no, not unless the presence of that universe would provide some additional value to us, enough to make up for the loss in average utility.
Okay. Now I reveal that just yesterday we discovered yet another universe, one which already exists and is a lot happier than the one you would choose to create. In fact, it's so much happier that creating your universe would now drive the average down instead of up.
If you're using average utility, then whether this discovery has been made affects whether you want to create that other universe. Is that correct?
With the standard caveats, yes, that seems reasonable. Given the existence of that ultrahappy universe, an average sentient life will be more likely to exist in happier circumstances than it would in the multiversal reality I'd create by adding that less-than-averagely-happy universe.
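The sign flip in the exchange above is just weighted-average arithmetic. A minimal sketch with made-up numbers (the populations and happiness levels are purely illustrative assumptions, not from the discussion): a candidate universe happier than ours raises the average on its own, but once a much happier universe is known to exist, adding the same candidate drags the average down.

```python
def average_happiness(universes):
    """Population-weighted average happiness across a list of
    (population, per-person happiness) pairs."""
    total_pop = sum(pop for pop, _ in universes)
    total_happiness = sum(pop * h for pop, h in universes)
    return total_happiness / total_pop

# Hypothetical numbers for illustration only.
ours = (100, 10)        # our universe: 100 people, happiness 10 each
candidate = (100, 20)   # the universe we could create: happier than ours
ultrahappy = (100, 90)  # the newly discovered, far happier universe

# Before the discovery, creating the candidate raises the average:
before = average_happiness([ours])                     # 10.0
with_candidate = average_happiness([ours, candidate])  # 15.0

# After the discovery, the baseline is higher, so the same act
# now lowers the average instead:
after = average_happiness([ours, ultrahappy])                            # 50.0
after_with_candidate = average_happiness([ours, ultrahappy, candidate])  # 40.0
```

Note that nothing about the candidate universe itself changes between the two cases; only the background against which the average is computed does, which is exactly the feature of average utilitarianism the question probes.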
In the same way, I wouldn't take 20% of actually existing happy people and force them to live less happy lives.
Think of all sentient lives as parts of a single mind, called "Sentience". We design portions of Sentience's life. We want as much of Sentience's existence as possible to be as happy as possible, satisfying Sentience's preferences.