Nornagest comments on Open Thread, May 1-14, 2013 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I had a small thought the other day. Average utilitarianism appeals to me most of the various utilitarianisms I have seen, but it has the obvious drawback of allowing average utility to be raised simply by destroying beings with below-average utility.
My thought was that maybe this could be solved by making the individual utility functions permanent in some sense, i.e., killing someone with low utility would still cause average utility to decrease if they would have wanted to live. This seems to match my intuitions about morality better than any other utilitarianism I have seen.
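The proposal above can be sketched numerically. Everything here is an illustrative assumption (the names, the utility numbers, and the choice to represent a frustrated preference to live as a lowered utility value are all mine, not the comment's):

```python
# Minimal sketch: under the "permanent" variant, average utility is taken
# over every agent who has ever existed, not just the living, so removing
# a low-utility agent who preferred to live still drags the average down.

def average_utility(agents):
    """Average utility over a list of (name, utility) pairs."""
    return sum(u for _, u in agents) / len(agents)

# Three agents; Carol's utility is below the average.
alive = [("Alice", 10.0), ("Bob", 8.0), ("Carol", 2.0)]

# Standard average utilitarianism: deleting Carol raises the average.
print(average_utility(alive))        # average over everyone
print(average_utility(alive[:2]))    # higher once Carol is destroyed

# "Permanent" variant: Carol stays on the ledger, and frustrating her
# preference to live is represented as a lower (here negative) utility,
# so killing her lowers the average instead of raising it.
permanent = [("Alice", 10.0), ("Bob", 8.0), ("Carol", -5.0)]
print(average_utility(permanent))
```

The exact penalty for a frustrated preference is doing all the work here, which is part of why the idea only makes sense on a preference-utilitarian reading.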
One strange thing is that the preferences of our ancestors would still count just as much as any other person's, but I had already been updating in this direction after reading an essay by gwern called "The Narrowing Circle". I wasn't able to think of anything else too weird, but I haven't thought too much about this yet.
Anyway, I was wondering if anyone else has explored this idea already, or if anyone has any thoughts about it.
That's even less tractable a problem than summing over the utility functions of all existing agents, but that's not necessarily a game-changer. There are some other odd features of this idea, though:
Yeah, this only makes sense for preference utilitarianism; I should have mentioned that.
It is strange, to be sure. I wonder what the aggregated preferences of humanity would look like. I wouldn't be too surprised if it ended up being really similar to the aggregated preferences of current humans. Also, adding some sort of EV to this would probably make any issue here go away. But in any case, how to choose the starting set of utility functions in a moral way seems to be an open problem. Once things were running, it might work pretty well, especially once death is solved.
Why not just plan for whatever the current set of utility functions is? In the context of an FAI, it probably wouldn't want the aggregate utility function to change anyway. But again, deciding which functions to aggregate seems to be unsolved.
Aren't utility functions kind of... invariant to scaling and addition of a constant value?
That is, you can say "I would like A more than B", but not "having A makes me happier than having it would make you". Nor "I'm neither happy nor unhappy, so me not existing wouldn't change anything". It's just not defined.
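A quick way to see the invariance point: a positive rescaling plus a constant shift of one agent's utility function leaves every pairwise preference unchanged, but it can flip the result of a naive interpersonal sum. All the numbers below are invented for illustration:

```python
# One agent's utilities over outcomes, and the same agent's utilities
# after an arbitrary positive affine rescaling (x -> 100x + 7).
u = {"A": 1.0, "B": 0.0, "C": -2.0}
v = {k: 100 * x + 7 for k, x in u.items()}

# The agent's preferences are identical under both representations...
prefers = lambda util, x, y: util[x] > util[y]
assert all(prefers(u, x, y) == prefers(v, x, y) for x in u for y in u)

# ...but naively summing with a second agent's utilities gives a
# different "best" outcome depending on which scaling we happened
# to pick, which is exactly why the sum is not well defined.
w = {"A": 0.0, "B": 0.5, "C": 3.5}
total_u = {k: u[k] + w[k] for k in u}
total_v = {k: v[k] + w[k] for k in v}
print(max(total_u, key=total_u.get))  # one winner under one scaling
print(max(total_v, key=total_v.get))  # a different winner under the other
```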
Actually, the only place different people's utility functions can be added up is in a single person's mind, that is, "I value seeing X and Y both feeling well twice as much as just X being in such a state". So "destroying beings with less than average utility" would appeal to those who tend to average utilities instead of summing them. And, of course, it also depends on what they think of those utility functions.
(that is, do we count the utility function of the person before or after giving them antidepressants?)
Of course, the additional problem is that no one sums up utility functions the same way, but there seems to be just enough correlation between individual results that we can start debates over the "right way of summing utility functions".
It's hard to do utilitarian ethics without commensurate utility functions, and so utilitarian ethical calculations, in the comparatively rare cases where they're implemented with actual numbers, often use a notion of cardinal utility. (The Wikipedia article's kind of a mess, unfortunately.) As far as I can tell this has nothing to do with cardinal numbers in mathematics, but it does provide for commensurate utility scales; in this case, you'd probably be mapping preference orderings over possible world-states onto the reals in some way.
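One common trick for making scales commensurate (this is my own illustration, not something the comment commits to) is range normalization: map each agent's worst outcome to 0 and best to 1, then aggregate the normalized values. It is only one of several conventions, and it has its own well-known problems:

```python
# Hedged sketch of range normalization as a way to compare utility
# scales across agents. The agents and numbers are invented.

def normalize(util):
    """Rescale a dict of outcome -> utility into [0, 1]."""
    lo, hi = min(util.values()), max(util.values())
    return {k: (v - lo) / (hi - lo) for k, v in util.items()}

agents = [
    {"A": 3.0, "B": 1.0, "C": 0.0},     # agent 1's raw utilities
    {"A": -10.0, "B": 40.0, "C": 5.0},  # agent 2, on a wildly different scale
]

# After normalization the scales are comparable, so a sum is at least
# well defined (whether it is *morally* right is another question).
norm = [normalize(u) for u in agents]
scores = {k: sum(n[k] for n in norm) for k in norm[0]}
print(max(scores, key=scores.get))
```

Note that the output now depends on which outcomes are included in each agent's range, which is one of the standard objections to this kind of normalization.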
There do seem to be some interesting things you could do with pure preference orderings, analogous to decision criteria for ranked-choice voting in politics. As far as I know, though, they haven't received much attention in the ethics world.
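As a concrete instance of the ranked-choice analogy: the Borda count aggregates pure preference orderings without ever assigning interpersonal utility numbers. The comment only gestures at this family of methods; the orderings below are invented:

```python
# Borda count over pure preference orderings: each outcome earns points
# equal to the number of outcomes ranked below it, summed across agents.

def borda(rankings):
    """rankings: list of lists, each listing outcomes best-first."""
    n = len(rankings[0])
    scores = {}
    for ranking in rankings:
        for position, outcome in enumerate(ranking):
            scores[outcome] = scores.get(outcome, 0) + (n - 1 - position)
    return scores

orderings = [
    ["A", "B", "C"],  # agent 1 prefers A to B to C
    ["B", "C", "A"],  # agent 2
    ["B", "A", "C"],  # agent 3
]
scores = borda(orderings)
print(max(scores, key=scores.get))  # the social choice under Borda
```

Like any rule in this family it runs into Arrow-style impossibility results, so it sidesteps the commensurability problem at the cost of other pathologies.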