Why do we think it's reasonable to say that we should maximize average utility across all our possible future selves?
Because that's what we want, even if our future selves don't. If I know I have a 50/50 chance of becoming a werewolf (permanently, to make things simple) and eating a bunch of tasty campers on the next full moon, then I can increase loqi's expected utility by passing out silver bullets at the campsite ahead of time, at the expense of wereloqi's utility.
In other words, one can attempt to improve one's expected utility, as defined by one's current utility function, by anticipating situations in which one will no longer implement that function.
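Here's a toy numeric version of the silver-bullet case. All of the numbers are made up for illustration, and u_loqi / u_wereloqi are just labels for the current and hypothetical future utility functions:

```python
# Toy version of the silver-bullet example. Numbers are invented;
# u_loqi scores outcomes by loqi's current preferences, u_wereloqi
# by the hypothetical camper-eating future self's preferences.

p_werewolf = 0.5  # assumed 50/50 chance of turning on the next full moon

# Outcome utilities, keyed by (what I become, whether the campers
# were given silver bullets ahead of time).
u_loqi = {
    ("werewolf", "bullets"):    -10,   # turned, but the campers can defend themselves
    ("werewolf", "no bullets"): -100,  # turned, campers get eaten
    ("human",    "bullets"):      0,
    ("human",    "no bullets"):   0,
}
u_wereloqi = {
    ("werewolf", "bullets"):    -50,   # armed campers are bad news for a werewolf
    ("werewolf", "no bullets"):  100,  # tasty campers
    ("human",    "bullets"):      0,
    ("human",    "no bullets"):   0,
}

def expected_utility(u, bullets):
    key = "bullets" if bullets else "no bullets"
    return p_werewolf * u[("werewolf", key)] + (1 - p_werewolf) * u[("human", key)]

# Handing out bullets maximizes expected utility under the current function...
print(expected_utility(u_loqi, bullets=True))    # -5.0
print(expected_utility(u_loqi, bullets=False))   # -50.0
# ...while making the werewolf future self strictly worse off by its own lights.
print(u_wereloqi[("werewolf", "bullets")])       # -50
print(u_wereloqi[("werewolf", "no bullets")])    # 100
```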
I'm not asking questions about identity. I'm pointing out that almost everyone considers equitable distributions of utility better than inequitable ones. So why don't we consider equitable distributions of utility among our future selves better than inequitable ones?
I said this in a comment on Real-life entropic weirdness, but it's getting off-topic there, so I'm posting it here.
My original writeup was confusing, because I used some non-standard terminology, and because I wasn't familiar with the crucial theorem. We cleared up the terminological confusion (thanks esp. to conchis and Vladimir Nesov), but the question remains. I rewrote the title yet again, and have here a restatement that I hope is clearer.
Some problems with average utilitarianism from the Stanford Encyclopedia of Philosophy:
(If you assign different weights to the utilities of different people, you could probably get the same result by treating a person with weight W as equivalent to W copies of a person with weight 1.)
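For integer weights, at least, the weighted average and the "copies" average come out the same; a quick sanity check with made-up numbers:

```python
# Check that a person with (integer) weight W contributes to the average
# utility exactly like W copies of a weight-1 person. Numbers are made up.
utilities = [10, 4]   # person A, person B
weights   = [3, 1]    # A's utility counts three times as much as B's

weighted_average = sum(w * u for w, u in zip(weights, utilities)) / sum(weights)

copies = [u for w, u in zip(weights, utilities) for _ in range(w)]
copies_average = sum(copies) / len(copies)

print(weighted_average, copies_average)  # 8.5 8.5
```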