Kindly comments on Brief Question about FAI approaches - Less Wrong

Post author: Dolores1984 19 September 2012 06:05AM




Comment author: Pentashagon 20 September 2012 12:28:02AM -1 points

> Ah, I see. You're assuming agents have bounded utility. Well, in that case, yes, there is a canonical way to compare utilities. However, that by itself doesn't justify adopting that particular way of comparing them. Suppose you have two agents, A and B, with identical preferences except that agent A strongly prefers there to be an odd number of stars in the Milky Way. As long as effecting that desire is impractical, A and B will exhibit the same preferences; but normalizing their utilities to fit the range (-1, 1) will mean that you treat A as a utility monster.

Is bounded utility truly necessary for normalization? So long as the utility function never actually returns infinity in practice, normalization will work. What would a world state with infinite utility look like, anyway, and would it be reachable from any world state with finite utility? Reductionism implies that some single physical change would have to cause a discontinuous jump in utility from finite to infinite, and that seems to break the utility function itself. Another way to look at it: the utility function is unbounded only because it depends on the world state; if the world state were allowed to be infinite, then an infinite utility could result. However, we are fairly certain that we will only ever have access to a finite amount of energy and matter in this universe. If that turns out not to be true, then I imagine utilitarianism will cease to be useful as a result.
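A minimal sketch of what that normalization could look like, assuming the utility function is bounded with known extremes (the function name and the bounds here are hypothetical, chosen only for illustration):

```python
def normalize(u, u_min, u_max):
    """Affinely rescale a bounded utility value u from [u_min, u_max] to [-1, 1].

    An affine rescaling preserves the agent's preference ordering and its
    expected-utility comparisons; it only fails if u_max is infinite.
    """
    return 2 * (u - u_min) / (u_max - u_min) - 1

# Illustrative bounds: utilities assumed to lie in [0, 100].
print(normalize(0.0, 0.0, 100.0))    # -1.0
print(normalize(50.0, 0.0, 100.0))   # 0.0
print(normalize(100.0, 0.0, 100.0))  # 1.0
```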

I'm failing to understand your reasoning about treating A as a utility monster (normalizing would make its utilities slightly lower than B's for the same things, right?). I don't really see this as a problem, though. If "an odd number of stars in the Milky Way" has utility 1 for A, then A really, really wants an odd number of stars, at the expense of everything else. All other things being equal, you might think it wise to split an ice cream cone evenly between A and B, but B will be happy with half an ice cream cone, while A will be happy with half an ice cream cone except for the nagging desire for an odd number of stars in the galaxy. If you've ever tried to enjoy an ice cream cone while stressed out, you may understand the feeling. If nothing can be done to assuage A's burning desire, which ruins the utility of everything else for it, then why not give more of those things to B? If, instead, you meant that if A values odd stars at utility 1 we should pursue that over all of B's goals, then I don't think that follows. If it's just A and B, the fair thing would be to spend half the available resources on confirming an odd number of stars (or destroying one star) and the other half on B's highest preference.

> I think calibrating utility functions by their extreme values is weird because outcomes of extreme utility are exotic and don't occur in practice. If one really wants to compare decision-theoretic utilities between people, perhaps a better approach is choosing some basket of familiar outcomes to calibrate on. This would be interesting to see and I'm not sure if anyone has thought about that approach.

I thought it was similarly weird to allow any agent to, for instance, obtain 3^^^3 utilons for some trivially satisfiable desire. Isn't that essentially what allows the utility monster in the first place? I see existential risk and the happiness of future humans as similar problems: if existential risk is incredibly negative, then we should do nothing but alleviate existential risk. If the happiness of future humans is so incredibly positive, then we should become future-human-happiness maximizers (and by extension each of those future humans should also become future-human-happiness maximizers).

The market has done a fairly good job of assigning costs to common outcomes. We can compare outcomes by what people are willing to pay for them (or pay to avoid them), assuming they have the same economic means at their disposal.

Another idea I have had is to use instant-runoff voting over world states: each agent ranks the world states according to its preferences, and the first world state to achieve a 50% majority in the runoff process is the most ethical world state.
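A minimal sketch of that runoff procedure (the ballots and the tie-breaking rule are illustrative assumptions, not part of the proposal):

```python
from collections import Counter

def instant_runoff(ballots):
    """Instant-runoff voting over world states.

    ballots: list of rankings, each a list of world states ordered from
    most to least preferred. Repeatedly eliminate the state with the
    fewest first-choice votes until one state holds a strict majority.
    Ties for elimination are broken arbitrarily (first state seen).
    """
    ballots = [list(b) for b in ballots]
    while True:
        tally = Counter(b[0] for b in ballots if b)
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > sum(tally.values()):
            return leader
        loser = min(tally, key=tally.get)
        ballots = [[s for s in b if s != loser] for b in ballots]

# Hypothetical world states ranked by five agents.
ballots = [
    ["peace", "growth", "stasis"],
    ["peace", "growth", "stasis"],
    ["stasis", "peace", "growth"],
    ["growth", "stasis", "peace"],
    ["growth", "stasis", "peace"],
]
# "stasis" has the fewest first-choice votes and is eliminated; its
# supporter's vote transfers to "peace", which then has a 3-of-5 majority.
print(instant_runoff(ballots))  # peace
```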

Comment author: Kindly 20 September 2012 01:46:37AM 1 point

> Is bounded utility truly necessary to normalize it? So long as the utility function never actually returns infinity in practice, normalization will work.

Huh?

Suppose my utility function is unbounded and linear in kittens (for any finite number of kittens I am aware of, that number is the output of my utility function). How do you normalize this utility to [-1,1] (or any other interval) while preserving the property that I'm indifferent between 1 kitten and a 1/N chance of N kittens?
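Kindly's point can be made concrete with a hypothetical squashing function: any map from an unbounded linear utility into a bounded interval must be non-linear, and a non-linear rescaling changes which lotteries the agent is indifferent between (tanh here is just one illustrative choice):

```python
import math

def squashed(kittens):
    # tanh maps the unbounded linear kitten-utility into (-1, 1)
    return math.tanh(kittens)

N = 100
certain = squashed(1)            # utility of 1 kitten for sure
lottery = (1 / N) * squashed(N)  # expected utility of a 1/N chance of N kittens

# Under the original linear utility these were equal; after squashing,
# the sure kitten is worth far more than the lottery.
print(certain, lottery)  # tanh(1) ~ 0.76 versus ~ 0.01
```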

Comment author: Pentashagon 20 September 2012 11:08:35PM 0 points

Is the number of possible kittens bounded? That's the point I was missing earlier.

If the number of kittens is bounded by M, your maximum utility u_max is M times the constant per-kitten utility (u_max = M * u_kitten). Normalizing so that u_max = 1 therefore bounds u_kitten by 1/M.
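A sketch of that resolution in the same terms (the bound M and the lottery size N are illustrative assumptions): with a finite bound, the rescaling can stay linear, so the indifference Kindly asked about survives. Exact rational arithmetic is used to avoid floating-point noise.

```python
from fractions import Fraction

M = 1_000_000  # assumed finite bound on the number of possible kittens

def u(kittens):
    """Utility linear in kittens, rescaled into [0, 1] by the bound M."""
    return Fraction(kittens, M)

# Indifference between 1 kitten and a 1/N chance of N kittens survives,
# because dividing by the constant M is a linear rescaling.
N = 1000
certain = u(1)
lottery = Fraction(1, N) * u(N)
print(certain == lottery)  # True
```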

Comment author: Cloppy 21 September 2012 02:05:09AM 1 point

In future, consider expressing these arguments in terms of ponies. Why make a point using hypothetical utility functions, when you can make the same point by talking about what we really value?