taw comments on Open Thread: August 2009 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
A number of days ago I was arguing with AngryParsley about how to value future actions; I thought it was obvious one should maximize the total utility over all people the action affected, while he thought it equally self-evident that maximizing average utility was the better choice. When I went to look, I couldn't see any posts on LW or OB on this topic.
(I pointed out that this view would favor worlds ruled by a solitary, but happy, dictator over populous messy worlds whose average just happens to work out to be a little less than a dictator's might be; he pointed out that if total was all that mattered, we might wind up favoring worlds where everyone is just 2 utilons away from committing suicide.)
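The disagreement above is just arithmetic; a minimal sketch (with made-up utility numbers) shows how the two criteria can rank the same pair of worlds in opposite orders:

```python
# Hypothetical numbers illustrating the dictator-vs-populous-world example:
dictator_world = [100]           # one person at utility 100
populous_world = [90] * 1000     # a thousand people at utility 90 each

def total(world):
    return sum(world)

def average(world):
    return sum(world) / len(world)

# Total-utility maximization prefers the populous world (90000 > 100)...
assert total(populous_world) > total(dictator_world)
# ...while average-utility maximization prefers the dictator (100 > 90).
assert average(dictator_world) > average(populous_world)
```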
Have we really never discussed this topic?
Total utility has an obvious problem - it's only meaningful to talk about relative utilities, so where do we put zero? (The choice is completely arbitrary.)
None of the three make any sense whatsoever.
You've already decided where to put zero when you say this:
That means that zero is the utility of not existing. Granted, it's a lot easier to compare two different possible lives than it is to compare a possible life to that life not coming into existence, but by saying "kill anyone whose utility is less than zero" you're defining zero utility as the utility of a dead person.
Also,
does not make sense to me. Utility is relative, yes, but it's relative to states of the universe, not to other people. If average utility is currently zero, and then, let's say, I recover from an illness that has been causing me distress, then my personal utility has increased, and average utility is no longer zero. Other people don't magically lose utility when I happen to gain some. Total utility doesn't renormalize in the way you seem to think it does.
The repugnant conclusion certainly is worth discussing, but the other two:
I think it would be a very bad idea to have a utility function such that the utility of an empty universe is higher than the utility of a populated non-dystopia; so any utility function for the universe that I might approve should have a pretty hefty negative value for empty universes. I don't think that's too awful of a requirement.
This looks like a total non sequitur to me. What do you mean?
He means that if utility is measured in such a way that average utility is always zero, then total utility is always zero too, average utility being total utility divided by the number of agents.
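That relationship is easy to check directly. A minimal sketch (with made-up numbers): if you renormalize utilities so their average is zero, the total is pinned to zero as well, since total = average * number of agents.

```python
utilities = [3.0, -1.0, 5.0]
mean = sum(utilities) / len(utilities)
normalized = [u - mean for u in utilities]  # renormalize so average -> 0

avg = sum(normalized) / len(normalized)
tot = sum(normalized)
# Both are zero (up to floating-point rounding): total = avg * len.
assert abs(avg) < 1e-9 and abs(tot) < 1e-9
```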
Well, that's not a very good utility function then, and taw's three possibilities are nowhere near exhausting the range of possibilities.
So where do you put zero? With this one completely arbitrary decision you can collapse total utility maximization into any one of these cases.
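The mechanism behind this point can be sketched with made-up numbers: shifting the zero point by a constant c changes each world's total by c times its population, so worlds of different sizes can swap rank under total-utility maximization.

```python
small_world = [5, 5]       # 2 people, raw total 10
large_world = [1] * 20     # 20 people, raw total 20

def total(world, zero_offset=0.0):
    # Re-zeroing subtracts zero_offset * len(world) from the total.
    return sum(u - zero_offset for u in world)

# With zero at 0, the large world wins (20 > 10)...
assert total(large_world) > total(small_world)
# ...but move zero up to 0.9 and the ranking flips (8.2 > 2.0).
assert total(small_world, 0.9) > total(large_world, 0.9)
```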
It gets far worse when you try to apply it to animals.
As for zero being very high, I've actually heard this argument many times about the existence of farm animals, which supposedly suffer so much that it would be better if they didn't exist. It can as easily be applied to wild animals, even though it's far less common to do so.
With the animal zero very low, total utility maximization turns us into a paperclip-maximizer for insects, or whatever the simplest utility-positive life form is.
If non-existent beings have exactly zero utility - that is, if any being with less than zero utility ought not to have come into existence - then the choice of where to put zero is clearly not arbitrary.