Kindly comments on Brief Question about FAI approaches - Less Wrong
Comments (42)
Is bounded utility truly necessary to normalize it? So long as the utility function never actually returns infinity in practice, normalization will work. What would a world state with infinite utility look like, anyway, and would it be reachable from any world state with finite utility? Reductionism implies that some single physical change would have to cause a discontinuous jump in utility from finite to infinite, and that seems to break the utility function itself. Another way to look at it is that the utility function is unbounded because it depends on the world state; if the world state were allowed to be infinite, then an infinite utility could result. However, we are fairly certain that we will only ever have access to a finite amount of energy and matter in this universe. If that turns out not to be true, then I imagine utilitarianism will cease to be useful as a result.
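To make that concrete, here is a minimal sketch (the world states and utility values are made up purely for illustration): as long as every utility is finite, dividing by the largest absolute value maps everything into [-1, 1].

```python
# Sketch: normalizing a finite-valued utility function into [-1, 1].
# The world states and utilities below are invented for illustration.

raw_utility = {
    "status_quo": 0.0,
    "cure_disease": 500.0,
    "global_catastrophe": -10_000.0,
}

# As long as no state has infinite utility, this scale factor is finite.
scale = max(abs(u) for u in raw_utility.values())

normalized = {state: u / scale for state, u in raw_utility.items()}

for state, u in normalized.items():
    assert -1.0 <= u <= 1.0
    print(f"{state}: {u:+.4f}")
```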
I'm failing to understand your reasoning about treating A as a utility monster (normalizing would make its utilities slightly lower than B's for the same things, right?). I suppose I don't really see this as a problem, though. If "odd number of stars in the Milky Way" has utility 1 for A, then that means A really, really wants an odd number of stars in the Milky Way, at the expense of everything else. All other things being equal, you might think it wise to split an ice cream cone evenly between A and B, but B will be happy with half an ice cream cone, while A will be happy with half an ice cream cone except for the nagging desire for an odd number of stars in the galaxy. If you've ever tried to enjoy an ice cream cone while stressed out, you may understand the feeling. If nothing can be done to assuage A's burning desire, and it ruins the utility of other things for A, then why not give more of those things to B? If, instead, you meant that if A values odd stars with utility 1 we should pursue that over all of B's goals, then I don't think that follows. If it's just A and B, the fair thing would be to spend half the available resources on confirming an odd number of stars (or destroying one star) and the other half on B's highest preference.
I thought it was similarly weird to allow any agent to, for instance, obtain 3^^^3 utilons for some trivially satisfiable desire. Isn't that essentially what allows the utility monster in the first place? I see existential risk and the happiness of future humans as similar problems: if existential risk is incredibly negative, then we should do nothing but alleviate existential risk. If the happiness of future humans is so incredibly positive, then we should become future-human-happiness maximizers (and by extension each of those future humans should also become a future-human-happiness maximizer).
The market has done a fairly good job of assigning costs to common outcomes. We can compare outcomes by what people are willing to pay for them (or pay to avoid them), assuming they have the same economic means at their disposal.
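As a toy illustration of that comparison (all numbers invented): divide each person's willingness to pay by their wealth, so we compare fractions of means rather than raw dollars.

```python
# Toy illustration: comparing how much an outcome matters to different people
# by willingness to pay, adjusted for economic means. All numbers are invented.

people = {
    "alice": {"wealth": 50_000, "wtp_for_clean_air": 2_000},
    "bob":   {"wealth": 500_000, "wtp_for_clean_air": 5_000},
}

for name, p in people.items():
    # Fraction of wealth offered is a rough proxy for strength of preference
    # once differences in means are controlled for.
    print(f"{name}: {p['wtp_for_clean_air'] / p['wealth']:.1%} of wealth")
```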
Another idea I have had is to use instant run-off voting over world states: each agent's utility function ranks the world states according to its preferences, and the first world state to achieve a 50% majority of votes in the run-off process is the most ethical world state.
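Here is a rough sketch of what I have in mind; the agents, world states, and rankings are hypothetical.

```python
# Rough sketch of instant run-off voting over world states.
from collections import Counter

# Each agent ranks world states from most to least preferred.
ballots = [
    ["w_utopia", "w_status_quo", "w_dystopia"],
    ["w_status_quo", "w_utopia", "w_dystopia"],
    ["w_status_quo", "w_dystopia", "w_utopia"],
    ["w_utopia", "w_dystopia", "w_status_quo"],
    ["w_dystopia", "w_status_quo", "w_utopia"],
]

def instant_runoff(ballots):
    remaining = {state for ballot in ballots for state in ballot}
    while True:
        # Count each ballot toward its highest-ranked remaining state.
        tallies = Counter(
            next(s for s in ballot if s in remaining) for ballot in ballots
        )
        top_state, top_votes = tallies.most_common(1)[0]
        # A state wins once it holds a strict 50% majority of ballots.
        if top_votes * 2 > len(ballots) or len(remaining) == 1:
            return top_state
        # No majority yet: eliminate the state with the fewest votes and rerun.
        remaining.discard(min(remaining, key=lambda s: tallies.get(s, 0)))

print(instant_runoff(ballots))  # the "most ethical" world state under this scheme
```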
Huh?
Suppose my utility function is unbounded and linear in kittens (for any finite number of kittens I am aware of, that number is the output of my utility function). How do you normalize this utility to [-1,1] (or any other interval) while preserving the property that I'm indifferent between 1 kitten and a 1/N chance of N kittens?
Is the number of possible kittens bounded? That's the point I was missing earlier.
If the number of kittens is bounded by M, your maximum utility is at most M times the constant utility of a kitten, u_max = M * u_kitten. Rescale so that u_max = 1 and the per-kitten utility becomes u_kitten = 1/M; since the rescaling is linear, you remain indifferent between 1 kitten and a 1/N chance of N kittens.
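A quick numerical check of that argument, with arbitrary values of M and N: after rescaling so that M kittens has utility 1, a guaranteed kitten and a 1/N chance of N kittens still have equal expected utility, because the rescaling is linear.

```python
# Numerical check of the normalization argument. M and N are arbitrary.
M = 10**6          # assumed bound on the number of possible kittens
u_kitten = 1 / M   # normalized per-kitten utility, so M kittens -> utility 1.0

def utility(kittens):
    return kittens * u_kitten   # still linear in kittens, now bounded by 1

N = 1000
sure_thing = utility(1)         # 1 kitten for certain
gamble = (1 / N) * utility(N)   # 1/N chance of N kittens

assert abs(sure_thing - gamble) < 1e-15   # linear rescaling preserves indifference
print(sure_thing, gamble)
```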
In future, consider expressing these arguments in terms of ponies. Why make a point using hypothetical utility functions, when you can make the same point by talking about what we really value?