In passing, I said:
From a statistical standpoint, lottery winners don't exist - you would never encounter one in your lifetime, if it weren't for the selective reporting.
And lo, CronoDAS said:
Well... one of my grandmother's neighbors, whose son I played with as a child, did indeed win the lottery. (AFAIK, it was a relatively modest jackpot, but he did win!)
To which I replied:
Well, yes, some of the modest jackpots are statistically almost possible, in the sense that on a large enough web forum, someone else's grandmother's neighbor will have won it. Just not your own grandmother's neighbor.
Sorry about your statistical anomalatude, CronoDAS - it had to happen to someone, just not me.
There's a certain resemblance here - though not an actual analogy - to the strange position your friend ends up in, after you test the Quantum Theory of Immortality.
For those unfamiliar with QTI, it's a simple simultaneous test of many-worlds plus a particular interpretation of anthropic observer-selection effects: You put a gun to your head and wire up the trigger to a quantum coinflipper. After flipping a million coins, if the gun still hasn't gone off, you can be pretty sure of the simultaneous truth of MWI+QTI.
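Just how sure "pretty sure" is can be sketched with a quick calculation. Assuming each quantum coinflip independently fires the gun with probability 1/2 (the post doesn't specify the wiring, so this is an illustrative reading), the chance of surviving all million flips by ordinary luck is 2^-1,000,000:

```python
import math

# Survival probability under the assumption (illustrative) that each
# of the million quantum coinflips fires the gun with probability 1/2.
# 2^-1,000,000 underflows a float, so work with the base-10 logarithm.
n_flips = 1_000_000
log10_survival = n_flips * math.log10(0.5)

print(f"P(survive by luck) = 10^{log10_survival:.0f}")
```

That exponent, roughly -301,030, is why the surviving experimenter takes the result as strong evidence rather than a fluke.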
But what is your watching friend supposed to think? Though his predicament is perfectly predictable to you - that is, you expected before starting the experiment to see his confusion - from his perspective it is just a pure 100% unexplained miracle. What you have reason to believe and what he has reason to believe would now seem separated by an uncrossable gap, which no amount of explanation can bridge. This is the main plausible exception I know to Aumann's Agreement Theorem.
Pity those poor folk who actually win the lottery! If the hypothesis "this world is a holodeck" is normatively assigned a calibrated confidence well above 10^-8, the lottery winner now has incommunicable good reason to believe they are in a holodeck. (I.e. to believe that the universe is such that most conscious observers observe ridiculously improbable positive events.)
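The arithmetic behind this can be sketched as an odds-form Bayes update. The specific numbers here are illustrative assumptions: a holodeck prior of exactly 10^-8, lottery odds of 1 in 10^8, and a holodeck that guarantees its occupant an improbable win:

```python
# Odds-form Bayesian update for a lottery winner weighing the
# holodeck hypothesis. All three numbers are illustrative:
prior_holodeck = 1e-8        # assumed prior for "this is a holodeck"
p_win_if_holodeck = 1.0      # assumed: holodecks arrange improbable wins
p_win_if_normal = 1e-8       # assumed odds of a big jackpot

prior_odds = prior_holodeck / (1 - prior_holodeck)
likelihood_ratio = p_win_if_holodeck / p_win_if_normal
posterior_odds = prior_odds * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)

print(f"P(holodeck | won) = {posterior:.2f}")  # about even odds
```

With a prior at exactly the 10^-8 threshold, winning pushes the winner to roughly even odds; any prior "well above" it tips them past 50%, and there is no way to hand that evidence to anyone who didn't win.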
It's a sad situation to be in - but don't worry: it will always happen to someone else, not you.
When you said that, it seemed to me you were claiming that you shouldn't play the lottery even if the expected payoff - or even the expected utility - were positive, because the payoff would occur so rarely.
Does that mean you have a formulation for rational behavior that maximizes something other than expected utility? Some nonlinear way of summing the utility from all possible worlds?
If someone suggested that everyone in the world should pool their money together, and give it to one person selected at random (pretend for the sake of argument that utility = money), people would think that was crazy. Yet the idea of maximizing expected utility over all possible worlds assumes that an uneven distribution of utility to all your possible future selves is as good as an equitable distribution among them. So there's something wrong with maximizing expected utility.
Broken intuition pump. The fact that money isn't utility (has diminishing returns) is actually very important here. I, for one, don't think I can envision pooling and redistributing actual utility, at least not well enough to draw any conclusions whatsoever.
Also, a utility function might not be defined over selves at particular times, but over 4D universal histories, or even over the entire multiverse. (This is also relevant to your happiness vs. utility distinction, I think.)