Sure. That has no bearing on what I'm saying.
Did you even read the next paragraph where I tried to explain why it does have a bearing on what you're saying? Do you have a response?
Not at all. You can multiply each probability by a different constant if you do that.
Fair. I assumed a positive constant. I shouldn't have.
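The standard fact presumably at issue here (my gloss, not stated explicitly in the thread): expected-utility rankings are preserved only by positive affine transformations of the utility function.

```latex
% If u'(x) = a u(x) + b with a > 0, then by linearity of expectation
\[
  \mathbb{E}[u'] = a\,\mathbb{E}[u] + b ,
\]
% so maximizing E[u'] ranks lotteries identically to maximizing E[u].
% With a < 0 the ranking reverses, which is why the sign of the
% constant matters above.
```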
Did you even read the next paragraph where I tried to explain why it does have a bearing on what you're saying? Do you have a response?
I read it. I don't understand why you keep bringing up "u". You use u to represent the utility function on a possible world, but we don't care what is inside that utility function for the purposes of this argument. And you can't get out of taking the expected value of your utility function by transforming it into another utility function; then you just have to take the expected value of that new utility function.
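The point above can be made concrete with a toy lottery (function names here are illustrative, not from the thread): applying any transform f to your utility u before taking expectations is just expected-utility maximization with the composed utility f∘u.

```python
import math

# Toy lottery: two outcomes, equally likely.
outcomes = [100.0, 25.0]
probs = [0.5, 0.5]

u = lambda x: x              # original utility on possible worlds
f = lambda v: math.sqrt(v)   # some transform of utility (arbitrary choice)

# "Transformed" criterion: expectation of f applied to u.
eu_transformed = sum(p * f(u(x)) for p, x in zip(probs, outcomes))

# But this is identical to plain expected utility under g = f o u,
# so the transform hasn't escaped expected-value maximization.
g = lambda x: f(u(x))
eu_composed = sum(p * g(x) for p, x in zip(probs, outcomes))

assert eu_transformed == eu_composed  # both equal 7.5 here
```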
Read steven0461's comment above. He has it spot on.
I said this in a comment on Real-life entropic weirdness, but it's getting off-topic there, so I'm posting it here.
My original writeup was confusing, because I used some non-standard terminology, and because I wasn't familiar with the crucial theorem. We cleared up the terminological confusion (thanks esp. to conchis and Vladimir Nesov), but the question remains. I rewrote the title yet again, and have here a restatement that I hope is clearer.
Some problems with average utilitarianism from the Stanford Encyclopedia of Philosophy:
(If you assign different weights to the utilities of different people, we could probably get the same result by considering a person with weight W to be equivalent to W copies of a person with weight 1.)
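A quick check of the parenthetical claim, for integer weights (function names are mine, for illustration only): the weighted average utility equals the unweighted average over a population where each weight-W person is replaced by W copies.

```python
def weighted_average_utility(utilities, weights):
    """Average utility where person i has weight weights[i]."""
    return sum(u * w for u, w in zip(utilities, weights)) / sum(weights)

def average_utility_with_copies(utilities, weights):
    """Same population, but each person with integer weight W is
    replaced by W unweighted copies of themselves."""
    expanded = [u for u, w in zip(utilities, weights) for _ in range(w)]
    return sum(expanded) / len(expanded)

utilities = [10.0, 4.0, 7.0]
weights = [3, 1, 2]

# Both ways of counting give the same average utility.
assert weighted_average_utility(utilities, weights) == \
       average_utility_with_copies(utilities, weights)
```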