That's wrong. If I found myself stuck in one, I would prefer to live; that's why I need a very strong precommitment, enforced by something I can't turn off.
I mean, you_now would prefer to kill you_then.
As for your last paragraph, the framing was from a global point of view, and probability in this case is the deterministic, Quantum-Measure-based sort.
I mean, you_now would prefer to kill you_then.
Not really. I prefer to kill my future self only because I anticipate living on in other selves; this can't accurately be described as "you really, REALLY don't care about the case where you lose, to the point that you want to not experience those branches at all, to the point that you'd kill yourself if you find yourself stuck in them."
I do care; what I don't care about is my measure between two measures of the same cardinality. If there were a chance of my being stuck in one world and not living on a...
A self-modifying AI is built to serve humanity. The builders know, of course, that this is much riskier than it seems, because its success would render their own observations extremely rare. To solve the problem, they direct the AI to create billions of simulated humanities, in the hope that this will serve as a Schelling point for them and make their own universe almost certainly simulated.
Plausible?
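For the sense in which "billions of simulated humanities" makes the builders' universe "almost certainly simulated", here is a minimal sketch of the self-locating arithmetic, assuming each simulated humanity is weighted equally with the one unsimulated world (that equal weighting is an assumption, not something the scenario specifies):

$$
P(\text{unsimulated}) = \frac{1}{N+1}, \qquad P(\text{simulated}) = \frac{N}{N+1} \approx 1 - 10^{-9} \quad \text{for } N \approx 10^{9}.
$$

So with a billion or so simulations, the builders' credence in being the original falls to roughly one in a billion; whether equal weighting is the right measure is, of course, exactly the kind of question the rest of this thread is arguing about.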