Nornagest comments on Open Thread, May 1-14, 2013 - Less Wrong

3 Post author: whpearson 01 May 2013 10:28PM


Comment author: Adele_L 02 May 2013 12:21:52AM 3 points

I had a small thought the other day. Average utilitarianism appeals to me most of the various utilitarianisms I have seen, but it has the obvious drawback of allowing average utility to be raised simply by destroying beings with less than average utility.

My thought was that maybe this could be solved by making the individual utility functions permanent in some sense, i.e., killing someone with low utility would still cause average utility to decrease if they would have wanted to live. This seems to match my intuitions about morality better than any other utilitarianism I have seen.
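As a toy illustration of the difference (my construction, with made-up numbers; the -5 utility Carol assigns to being killed is an arbitrary assumption):

```python
# Toy comparison (illustrative numbers only): plain average utilitarianism
# vs. a "permanent" variant where a dead agent's utility function stays
# in the average, frozen at the value they assigned to being killed.

def plain_average(agents):
    """Average utility over currently existing agents only."""
    return sum(u for _, u in agents) / len(agents)

def permanent_average(living, dead):
    """Average over everyone who has ever existed; the dead keep
    contributing the utility they assigned to how they died."""
    everyone = living + dead
    return sum(u for _, u in everyone) / len(everyone)

# Carol is below the current average but would prefer to live.
population = [("alice", 10.0), ("bob", 8.0), ("carol", 2.0)]
before = plain_average(population)            # 20/3, about 6.67

# Plain averaging rewards destroying the below-average agent:
after_plain = plain_average(population[:2])   # 9.0 > 6.67

# The permanent variant keeps Carol in the denominator, with the
# (hypothetical) utility of -5 she assigns to being killed:
after_permanent = permanent_average(population[:2], [("carol", -5.0)])
# 13/3, about 4.33 < 6.67: killing her now lowers average utility.
```

Under plain averaging the killing looks like an improvement; with permanent utility functions it is penalized, which is the intuition above.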

One strange thing is that the preferences of our ancestors would still count just as much as any other person's, but I had already been updating in this direction after reading an essay by gwern called "The Narrowing Circle". I wasn't able to think of anything else too weird, but I haven't thought too much about this yet.

Anyway, I was wondering if anyone else has explored this idea already, or if anyone has any thoughts about it.

Comment author: Nornagest 02 May 2013 01:08:27AM 3 points

That's even less tractable a problem than summing over the utility functions of all existing agents, but that's not necessarily a game-changer. There are some other odd features of this idea, though:

  • It only seems to work with preference utilitarianism; pleasure/pain utilitarianism would still treat the painless death of an agent with neutral expected utility as neutral. Fair enough; preference utilitarianism seems less broken than conventional utilitarianism anyway.
  • Contingent on using preference utilitarianism, certain ways of doing the summing lead to odd features regarding changing cultural values: if future preferences are unbounded in time, a big enough stack of dead ancestors with strong enough preferences could render arbitrary social changes unethical. This could be avoided by summing only over potential lifespan, time-discounting in some way, or using some kind of nonstandard aggregation function that takes new information into account.
  • Let's say we're now at a point in time t₀. We can plan for t₀ using only the preferences of existing or previous agents; all very intuitive so far. But let's say we consider a time t₁ further in the future. New agents will have been introduced between t₀ and t₁, and there's no obvious way to take their preferences into account; every option gives us potential inconsistencies between optimal actions planned at t₀ and optimal actions taken at time t₁. The least bad option seems to be doing a probability-weighted average over agents extant in all possible futures, but (besides being just ridiculously intractable) that seems to introduce some weird acausal effects that I'm not sure I want to deal with. Taking the average at least avoids some of the crazier possible consequences, like the utilitarian "go forth and multiply" that I'm sure you've thought of already.
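The time-discounting option from the second bullet could be sketched like this (the exponential form and the 100-year half-life are arbitrary illustrative choices of mine, not anything the comment commits to):

```python
# Sketch of time-discounted preference aggregation: a dead agent's
# preferences lose half their weight every `half_life` years.

def discount(years_since_death, half_life=100.0):
    """Weight on an agent's preferences; living agents pass
    years_since_death=0 and get full weight."""
    return 0.5 ** (years_since_death / half_life)

def aggregate(preferences):
    """preferences: (strength, years_since_death) pairs; positive
    strength favours a proposed social change, negative opposes it."""
    return sum(s * discount(t) for s, t in preferences)

# One living reformer vs. ten ancestors dead for 500 years:
living = [(1.0, 0.0)]
ancestors = [(-1.0, 500.0)] * 10        # each weighted 0.5**5 = 1/32

total = aggregate(living + ancestors)   # 1.0 - 10/32 = 0.6875
# Undiscounted, the ancestors would contribute -10 and veto the change;
# discounted, the living preference prevails.
```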
Comment author: Adele_L 02 May 2013 02:48:57AM 0 points

Yeah, this only makes sense for preference utilitarianism; I should have mentioned that.

It is strange, to be sure. I wonder what the aggregated preferences of humanity would look like. I wouldn't be too surprised if it ended up being really similar to the aggregated preferences of current humans. Also, adding some sort of EV to this would probably make any issue here go away. But in any case, it seems to be an open problem how to choose the starting set of utility functions in a moral way. Once things were running, it might work pretty well, especially once death is solved.

Why not just plan for whatever the current set of utility functions is? In the context of an FAI, it probably wouldn't want the aggregate utility function to change anyway. But again, deciding which functions to aggregate seems to be unsolved.

Comment author: latanius 02 May 2013 03:39:53AM 0 points

Aren't utility functions kind of... invariant to scaling and addition of a constant value?

That is, you can say "I would like A more than B" but not "having A makes me happier than you would be having it". Nor can you say "I'm neither happy nor unhappy, so me not existing wouldn't change anything". It's just not defined.
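This invariance point can be made concrete: a utility function is only defined up to positive affine transformation, so rescaling it never changes the individual's own choices, but it can change any interpersonal sum (a sketch with made-up numbers):

```python
# A utility function rescaled by u -> a*u + b (with a > 0) represents
# the same preferences, but interpersonal sums are not invariant.

outcomes = ["A", "B", "C"]
u_alice = {"A": 1.0, "B": 0.0, "C": 0.4}
u_bob   = {"A": 0.0, "B": 1.0, "C": 0.9}

def rescale(u, a, b):
    """Positive affine transformation (a > 0) of a utility function."""
    return {o: a * v + b for o, v in u.items()}

u_alice_scaled = rescale(u_alice, a=100.0, b=7.0)

# Alice's own choice is unaffected by the rescaling:
best = max(outcomes, key=u_alice.get)
assert best == max(outcomes, key=u_alice_scaled.get)  # "A" either way

# But the "social" choice made by summing utilities flips:
social_1 = max(outcomes, key=lambda o: u_alice[o] + u_bob[o])         # "C"
social_2 = max(outcomes, key=lambda o: u_alice_scaled[o] + u_bob[o])  # "A"
```

So the summed-utility verdict depends entirely on an arbitrary choice of scale, which is exactly why the sum is "just not defined" without some extra normalization assumption.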

Actually, the only place different people's utility functions can be added up is in a single person's mind, that is, "I value seeing X and Y both feeling well twice as much as just X being in such a state". So "destroying beings with less than average utility" would appeal to those who tend to average utilities instead of summing them. And, of course, it also depends on what they think of those utility functions.

(that is, do we count the utility function of the person before or after giving them antidepressants?)

Of course, the additional problem is that no one sums up utility functions the same way, but there seems to be just enough correlation between individual results that we can start debates over the "right way of summing utility functions".

Comment author: Nornagest 02 May 2013 06:00:53AM 1 point

It's hard to do utilitarian ethics without commensurate utility functions, and so utilitarian ethical calculations, in the comparatively rare cases where they're implemented with actual numbers, often use a notion of cardinal utility. (The Wikipedia article's kind of a mess, unfortunately.) As far as I can tell this has nothing to do with cardinal numbers in mathematics, but it does provide for commensurate utility scales; in this case, you'd probably be mapping preference orderings over possible world-states onto the reals in some way.

There do seem to be some interesting things you could do with pure preference orderings, analogous to decision criteria for ranked-choice voting in politics. As far as I know, though, they haven't received much attention in the ethics world.
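One concrete example of such a criterion (my choice of illustration; the comment doesn't name a specific rule) is a Borda count, which aggregates pure preference orderings with no cardinal scale anywhere:

```python
def borda(rankings):
    """Borda count: in a ranking of n outcomes, the k-th favourite
    (0-indexed) earns n - 1 - k points; points sum across agents."""
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for k, outcome in enumerate(ranking):
            scores[outcome] = scores.get(outcome, 0) + (n - 1 - k)
    return scores

# Three agents rank three world-states; only orderings, no utilities.
rankings = [
    ["peace", "trade", "war"],
    ["trade", "peace", "war"],
    ["peace", "war", "trade"],
]
scores = borda(rankings)               # {'peace': 5, 'trade': 3, 'war': 1}
winner = max(scores, key=scores.get)   # 'peace'
```

Like any rule for aggregating orderings, this runs into Arrow-style impossibility results, which may be part of why the ethics literature hasn't leaned on such criteria much.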