In passing, I said:
From a statistical standpoint, lottery winners don't exist - you would never encounter one in your lifetime, if it weren't for the selective reporting.
And lo, CronoDAS said:
Well... one of my grandmothers' neighbors, whose son I played with as a child, did indeed win the lottery. (AFAIK, it was a relatively modest jackpot, but he did win!)
To which I replied:
Well, yes, some of the modest jackpots are statistically almost possible, in the sense that on a large enough web forum, someone else's grandmother's neighbor will have won it. Just not your own grandmother's neighbor.
Sorry about your statistical anomalatude, CronoDAS - it had to happen to someone, just not me.
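Here's a toy sketch of the aggregation at work - the jackpot odds, acquaintance count, and forum size below are all illustrative assumptions of mine, not real figures:

```python
# Toy aggregation: why *someone's* grandmother's neighbor wins.
# All numbers are illustrative assumptions, not measured figures.
p_win = 1e-6           # assumed odds that any one player wins a modest jackpot
acquaintances = 1_000  # people reachable via "grandmother's neighbor" links
readers = 100_000      # size of a large web forum

# Chance that at least one acquaintance of at least one reader has won:
p_none = (1 - p_win) ** (acquaintances * readers)
print(1 - p_none)                         # ~1.0: near-certain for the forum...
print(1 - (1 - p_win) ** acquaintances)   # ~0.001: ...but not for *you*
```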
There's a certain resemblance here - though not an actual analogy - to the strange position your friend ends up in, after you test the Quantum Theory of Immortality.
For those unfamiliar with QTI, it's a simple simultaneous test of many-worlds plus a particular interpretation of anthropic observer-selection effects: You put a gun to your head and wire up the trigger to a quantum coinflipper, so that the gun fires whenever the coin comes up tails. After flipping a million coins, if the gun still hasn't gone off, you can be pretty sure of the simultaneous truth of MWI+QTI.
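To put a number on "pretty sure": under a single-world model, surviving the full run is a 2^-1,000,000 event. A minimal sketch of the arithmetic:

```python
import math

# Under a single-world model, each quantum coinflip fires the gun with
# probability 1/2, so surviving all n flips has probability 2^-n.
n_flips = 1_000_000
log10_p_survive = -n_flips * math.log10(2)
print(f"P(survive) = 10^{log10_p_survive:.0f}")  # ~10^-301030
```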
But what is your watching friend supposed to think? Though his predicament is perfectly predictable to you - that is, you expected before starting the experiment to see his confusion - from his perspective it is just a pure 100% unexplained miracle. What you have reason to believe and what he has reason to believe would now seem separated by an uncrossable gap, which no amount of explanation can bridge. This is the main plausible exception I know of to Aumann's Agreement Theorem.
Pity those poor folk who actually win the lottery! If the hypothesis "this world is a holodeck" is normatively assigned a calibrated confidence well above 10^-8, the lottery winner now has incommunicable good reason to believe they are in a holodeck. (I.e. to believe that the universe is such that most conscious observers observe ridiculously improbable positive events.)
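A back-of-the-envelope Bayes update makes the winner's predicament concrete. The 10^-8 base rate is the winning odds from above; the one-in-a-million holodeck prior, and the assumption that a holodeck makes the win near-certain, are illustrative numbers of mine:

```python
def posterior_holodeck(prior, p_win_holodeck=1.0, p_win_normal=1e-8):
    # Bayes' rule on the evidence "I just won the lottery".
    joint_holo = prior * p_win_holodeck
    joint_normal = (1 - prior) * p_win_normal
    return joint_holo / (joint_holo + joint_normal)

# An illustrative one-in-a-million prior swamps the 10^-8 base rate:
print(posterior_holodeck(1e-6))  # ~0.99
```

Any prior appreciably above the 10^-8 base rate gets promoted to near-certainty by the win - and that is exactly the reason the winner can't communicate.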
It's a sad situation to be in - but don't worry: it will always happen to someone else, not you.
Broken intuition pump. The fact that money isn't utility (has diminishing returns) is actually very important here. I, for one, don't think I can envision pooling and redistributing actual utility, at least not well enough to draw any conclusions whatsoever.
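To see why diminishing returns matters, here's a toy example with an assumed log-shaped utility of money (the log form and the dollar figures are my own illustrative choices): redistributing the same total dollars changes total utility, so dollars can't stand in for utility.

```python
import math

u = math.log  # toy diminishing-returns utility of money

rich, poor = 1_000_000, 10_000   # two people's wealth in dollars
print(u(rich) + u(poor))         # total utility as-is:  ~23.03
print(2 * u((rich + poor) / 2))  # after an equal split: ~26.27
```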
Also, a utility function might not be defined over selves at particular times, but over 4D universal histories, or even over the entire multiverse. (This is also relevant to your happiness vs. utility distinction, I think.)
What I'm getting at is that the decision society makes about how to distribute utility across different people is very similar to the decision you make about how to distribute utility across your possible future selves.
Why do we think it's reasonable to say that we should maximize average utility across all our possible future selves, when no one I know would say that we should maximize average utility across all living people?
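For concreteness, "maximize average utility across your possible future selves" is just ordinary expected-utility maximization - a probability-weighted average over outcomes. A minimal sketch with made-up numbers:

```python
# (probability, utility) pairs for possible future selves -- made-up numbers
futures = [(0.5, 10.0), (0.3, 4.0), (0.2, -2.0)]
assert abs(sum(p for p, _ in futures) - 1.0) < 1e-9

expected_utility = sum(p * u for p, u in futures)
print(expected_utility)  # 5.8 -- the quantity an expected-utility maximizer maximizes
```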