Recently the Large Hadron Collider was damaged by a mechanical failure. This requires the collider to be warmed up, repaired, and then cooled down again, so we're looking at a two-month delay.
Inevitably, many commenters said, "Anthropic principle! If the LHC had worked, it would have produced a black hole or strangelet or vacuum failure, and we wouldn't be here!"
This remark may be somewhat premature, since I don't think we're yet at the point in time when the LHC would have started producing collisions if not for this malfunction. However, a few weeks(?) from now, the "Anthropic!" hypothesis will start to make sense, assuming it can make sense at all. (Does this mean we can foresee executing a future probability update, but can't go ahead and update now?)
As you know, I don't spend much time worrying about the Large Hadron Collider when I've got much larger existential-risk-fish to fry. However, there's an exercise in probability theory (which I first picked up from E.T. Jaynes) along the lines of, "How many times does a coin have to come up heads before you believe the coin is fixed?" This tells you how low your prior probability is for the hypothesis. If a coin comes up heads only twice, that's definitely not a good reason to believe it's fixed, unless you already suspected it from the beginning. But if it comes up heads 100 times, you're taking too long to notice.
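To make Jaynes's exercise concrete, here is a minimal sketch in Python. The prior values and the 0.5 belief threshold are illustrative assumptions of mine, not numbers from Jaynes: each head is one doubling of the odds in favor of a "two-headed coin" over a fair coin, so the number of heads you demand before believing reveals roughly how many bits of prior improbability you assigned to the hypothesis.

```python
from math import ceil, log2

def posterior_fixed(prior: float, n_heads: int) -> float:
    """Posterior probability that the coin is two-headed (always heads),
    after observing n_heads consecutive heads.
    Likelihood ratio per head: P(H | fixed) / P(H | fair) = 1 / 0.5 = 2."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * 2 ** n_heads
    return posterior_odds / (1 + posterior_odds)

def heads_needed(prior: float, threshold: float = 0.5) -> int:
    """Smallest number of consecutive heads that pushes the posterior
    for 'the coin is fixed' above threshold (0.5 = more likely than not)."""
    prior_odds = prior / (1 - prior)
    target_odds = threshold / (1 - threshold)
    return ceil(log2(target_odds / prior_odds))

# Illustrative priors only -- the point is that your answer reveals your prior.
for prior in (1e-2, 1e-4, 1e-6):
    print(f"prior {prior:g}: {heads_needed(prior)} heads to cross 50%")
# prior 0.01:   7 heads
# prior 0.0001: 14 heads
# prior 1e-06:  20 heads
```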
So - taking into account the previous cancellation of the Superconducting Supercollider (SSC) - how many times does the LHC have to fail before you'll start considering an anthropic explanation? 10? 20? 50?
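The same arithmetic can be pointed at that question, under two loudly hypothetical assumptions: a mundane per-attempt failure probability (10% here, purely for illustration) and whatever prior you put on "anthropic shielding," under which every attempt we survive to observe has failed, so each observed failure is a Bayes factor of 1/p_fail in its favor.

```python
from math import ceil, log

def failures_to_suspect(prior: float, p_fail: float, threshold: float = 0.5) -> int:
    """How many consecutive observed failures before the posterior for the
    anthropic hypothesis exceeds threshold, assuming (hypothetically) that
    under it every observed attempt fails, while under the mundane hypothesis
    each attempt independently fails with probability p_fail."""
    prior_odds = prior / (1 - prior)
    target_odds = threshold / (1 - threshold)
    bayes_factor_per_failure = 1.0 / p_fail
    return ceil(log(target_odds / prior_odds) / log(bayes_factor_per_failure))

# Made-up numbers: one-in-a-million prior, 10% ordinary failure chance per attempt.
print(failures_to_suspect(prior=1e-6, p_fail=0.10))  # 6 failures
print(failures_to_suspect(prior=1e-9, p_fail=0.10))  # 9 failures
```

With numbers like these, even a short run of unexplained failures moves the posterior quickly - which is exactly why the prior you start from is doing all the work.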
After observing empirically that the LHC had failed 100 times in a row, would you endorse a policy of keeping the LHC powered up, but trying to fire it again only in the event of, say, nuclear terrorism or a global economic crash?
I'd rather say that people who find quantum suicide desirable have a utility function that does not decompose into a linear combination of individual utility functions for their individual Everett branches. Surely everybody here would find an outcome undesirable where all of their future Everett branches wink out of existence - even if, say, they otherwise had to deal with a terrorist attack in all of these branches. So if somebody prefers one Everett branch winking out and one continuing to exist to both continuing to exist, you can only describe their utility function by looking at all the branches together, not by looking at each branch individually. (Did that make sense?)
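To spell that out with a toy calculation (assuming, just for illustration, two equal-weight branches and the labels "alive" and "gone," which aren't part of the argument above): if the utility of a pair of branches really were the branch-wise average, the quantum-suicide preference would force a contradiction.

```latex
% Assume (hypothetically) a branch-separable utility over two equal-weight branches:
%   U(b_1, b_2) = (1/2) u(b_1) + (1/2) u(b_2)
\begin{align*}
U(\text{alive},\text{gone}) &> U(\text{alive},\text{alive})
  && \text{(the quantum-suicide preference)}\\
\tfrac{1}{2}u(\text{alive}) + \tfrac{1}{2}u(\text{gone}) &> u(\text{alive})
  && \Longrightarrow\; u(\text{gone}) > u(\text{alive})\\
U(\text{gone},\text{gone}) = u(\text{gone}) &> u(\text{alive}) = U(\text{alive},\text{alive})
  && \text{(all branches winking out preferred)}
\end{align*}
```

The last line says that, on the same decomposition, one should also prefer all branches winking out to all branches continuing - contradicting the first premise. So the preference can only be captured by a utility function evaluated over the whole set of branches.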
------
I like your explanation, Benja. There is no particular reason why a utility function needs to treat a 'branch winking out of existence' with the same simplicity with which it evaluates more mundane catastrophes. For example, consider the practice of thousands of gamers out there: "That start sucks! Restart!" I can give no mathematical reason why this preference ought to be dismissed.