I am confused.
I think, depending on how unlikely a world containing anybody who finds such an unlikely chain of throws is, and on how his/her priors look, the gambler might want to update to near-certainty on quantum realism or one of the Tegmark multiverses; i.e., the fundamental improbability of a finite world containing this outcome implies that reality is such that such an observation can be expected.
At least if I'm not completely confused. Wait, is that what the "unloaded gun" was supposed to emulate? But then it comes down to whether you're optimizing over total universes or surviving universes: if you don't care about the branches where you die, the game is neutral to you (assuming you're certain you're in a quantum-realism universe) - you can play or leave, and it makes no difference. The probability number would be irrelevant to your decision-making. I suppose this is one reason we should expect quantum immortality as a philosophy to weed itself out of the observable universe.
If you look back at 1000 heads in a quantum-immortality mode where you ignore dead branches, you anticipate observing that series with probability 0.5 (the initial throw) in the loaded universe, but only 0.5/2^1000 in the unloaded universe. So the strategy "after round 1000, if you got 1000 heads, assume you're in the loaded half" will succeed in 0.5/total anticipated cases and fail in (0.5/2^1000)/total anticipated cases. But then again, in a quantum-immortality mode the entire game is irrelevant, and you can quit or play however you like. So this is somewhat confusing to me. Excuse me for rambling.
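The anticipation bookkeeping above can be checked with exact arithmetic. This is only a sketch of the survival-conditioned counting described in the comment; the variable names are my own:

```python
from fractions import Fraction

N = 1000  # rounds of observed "heads"

# Survival-conditioned ("quantum immortality") anticipation of seeing N heads:
# loaded universe: only all-heads branches survive, so the entire 0.5 prior
# mass of "loaded" flows into the all-heads observation.
loaded = Fraction(1, 2)
# unloaded universe: every branch survives, and the all-heads history
# has measure 1/2^N of that half.
unloaded = Fraction(1, 2) / 2**N

total = loaded + unloaded
p_success = loaded / total    # "assume loaded" strategy turns out right
p_failure = unloaded / total  # "assume loaded" strategy turns out wrong

print(float(p_failure))  # astronomically small
```

Under this way of counting, the "assume loaded" strategy fails in only about one in 2^1000 anticipated cases, which is the comment's point: conditioned on survival, the inference looks overwhelmingly reliable, even though (as argued below) the unconditioned posterior never moves.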
Closely related to: How Many LHC Failures Is Too Many?
Consider the following thought experiment. At the start, an "original" coin is tossed, but not shown. If it came up "tails", a gun is loaded; otherwise it is not. After that, you are offered a large number of decision rounds, in each of which you can either quit the game or toss a coin of your own. If your coin falls "tails", the gun gets triggered, and depending on how the original coin fell (that is, on whether the gun was loaded), you either get shot or not (if the gun doesn't fire, i.e. if the original coin was "heads", you are free to go). If your coin falls "heads", you survive the round. If you quit the game, you get shot at the exit with probability 75%, independently of what happened during the game (and of the original coin). The question is: should you keep playing, or quit, if you observe, say, 1000 "heads" in a row?
Intuitively, it seems as if 1000 "heads" is "anthropic evidence" for the original coin being "tails": the long sequence of "heads" can only be explained by the fact that "tails" would have killed you. If you knew that the original coin was "tails", then to keep playing would be to face the certainty of eventually tossing "tails" and getting shot, which is worse than quitting, with its mere 75% chance of death. Thus, it seems preferable to quit.
On the other hand, each "heads" you observe doesn't distinguish the hypothetical where the original coin was "heads" from the one where it was "tails". The first round can be modeled by a 4-element finite probability space consisting of the options {HH, HT, TH, TT}, where HH and HT correspond to the original coin being "heads", and HH and TH to the coin-for-the-round being "heads". Observing "heads" is the event {HH, TH}, which leaves the same 50% posterior probabilities for "heads" and "tails" of the original coin. Thus, each round that ends in "heads" doesn't change your knowledge about the original coin, even after 1000 rounds of this type. And since you only get shot if the original coin was "tails", your probability of dying only approaches 50% as the game continues, which is better than the 75% from quitting the game.
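This posterior argument can be verified directly with Bayes' theorem. A sketch with exact arithmetic; the function name is my own. The key point is that the likelihood of surviving n "heads" rounds is (1/2)^n under either state of the original coin:

```python
from fractions import Fraction

def posterior_loaded(n_heads):
    """P(original coin was "tails", i.e. gun loaded | observed n_heads heads).

    A "heads" round is survived and observed with probability 1/2
    regardless of whether the gun is loaded, so the likelihoods are equal.
    """
    prior = Fraction(1, 2)
    like_loaded = Fraction(1, 2) ** n_heads    # P(n heads | loaded)
    like_unloaded = Fraction(1, 2) ** n_heads  # P(n heads | unloaded)
    return (prior * like_loaded
            / (prior * like_loaded + prior * like_unloaded))

print(posterior_loaded(1000))  # 1/2 - the observations carry no evidence
```

Because the two likelihoods are identical, they cancel in the ratio, and the posterior equals the prior for every n.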
(See also the comments by simon2 and Benja Fallenstein on the LHC post, and this thought experiment by Benja Fallenstein.)
The result of this exercise could be generalized by saying that the counterfactual possibility of dying doesn't in itself influence the conclusions that can be drawn from observations that happened within the hypotheticals where one didn't die. Only if the possibility of dying influences the probability of the observations that did take place would it be possible to detect that possibility. For example, if in the above exercise a loaded gun caused the coin to become biased in a known way, only then would it be possible to detect the state of the gun (1000 "heads" would then imply either that the gun is likely loaded, or that it's likely not, depending on the direction of the bias).
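The biased-coin variant can be illustrated numerically. This is a hypothetical sketch with numbers of my own choosing: suppose a loaded gun biases your per-round coin to land "heads" with probability 3/5 instead of 1/2. Now the likelihoods differ, so a long run of "heads" does carry evidence:

```python
from fractions import Fraction

def posterior_loaded_biased(n_heads,
                            p_heads_loaded=Fraction(3, 5),
                            p_heads_unloaded=Fraction(1, 2)):
    """P(gun loaded | n heads) when loading biases the per-round coin."""
    prior = Fraction(1, 2)
    like_loaded = p_heads_loaded ** n_heads
    like_unloaded = p_heads_unloaded ** n_heads
    return (prior * like_loaded
            / (prior * like_loaded + prior * like_unloaded))

print(float(posterior_loaded_biased(10)))   # already about 0.86
print(float(posterior_loaded_biased(100)))  # very close to 1
```

With a bias toward "heads", a long run of "heads" pushes the posterior toward "loaded"; with a bias toward "tails", the same run would push it toward "unloaded", which is the "either likely loaded, or likely not" remark above.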