Oops, I fail! I thought F >= S meant "F is larger than S". But looking at the definitions of terms, Fail >= Survival must mean "Fail subset_of Survival". (I do protest that this is an odd symbol to use.)
Okay, looking back at the original argument, and going back to definitions...
If you've got two sets of universes side-by-side, one where the LHC destroys the world, and one where it doesn't, then indeed observing a long string of failures doesn't help tell you which universe you're in. However, after a while, nearly all the observers will be concentrated into the non-dangerous universe. In other words, if you're going to start running the LHC, then, conditioning on your own survival, you are nearly certain to be in the non-dangerous universe. Then further conditioning on the long string of failures, you are equally likely to be in either universe. If you start out by conditioning on the long string of failures, then conditioning on your own survival indeed doesn't tell you anything more.
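The observer-counting argument above can be sketched as a Monte Carlo simulation. The failure rate and number of attempts below are made-up illustrative numbers, not anything from the actual LHC:

```python
import random

random.seed(0)

N = 200_000     # hypothetical universes, half dangerous, half not
ATTEMPTS = 6    # times the LHC is fired
P_FAIL = 0.5    # illustrative chance of a mundane failure per attempt

survivors = []  # (dangerous?, every_attempt_failed?)
for _ in range(N):
    dangerous = random.random() < 0.5
    all_failed = all(random.random() < P_FAIL for _ in range(ATTEMPTS))
    # In a dangerous universe, any successful firing destroys the world,
    # so observers remain only if every attempt failed.
    if all_failed or not dangerous:
        survivors.append((dangerous, all_failed))

# Conditioning only on survival: almost all observers are in safe universes.
p_danger_given_s = sum(d for d, _ in survivors) / len(survivors)

# Conditioning on survival AND the string of failures: back to ~50/50.
failed = [(d, f) for d, f in survivors if f]
p_danger_given_s_f = sum(d for d, _ in failed) / len(failed)

print(p_danger_given_s)    # small
print(p_danger_given_s_f)  # roughly 0.5
```

This matches the verbal argument: survival alone points strongly to the non-dangerous universe, and the string of failures then pulls you back to even odds.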
But under anthropic reasoning, the argument doesn't play out like this; the way anthropic reasoning works, particularly under the Quantum Suicide or Quantum Immortality versions, is something along the lines of, "You are never surprised by your own survival".
From the above, we can see that we need something like:
Initial probability of Danger: 50%
Initial probability of subjective Survival: 100%
Probability of Failure given Danger and Survival: 100%
Probability of Failure given ~Danger and Survival: 1%
Probability of Danger given Survival and Failure: ~99%
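Plugging these conditional probabilities into Bayes' theorem shows how strongly, under the anthropic premises, a long string of failures points toward Danger:

```python
# Bayes' theorem with the numbers above. Everything here is already
# conditioned on subjective Survival, which the quantum-immortality
# assumption sets to probability 1.
p_danger = 0.5                 # prior P(Danger)
p_fail_given_danger = 1.00     # P(Failure | Danger, Survival)
p_fail_given_safe = 0.01       # P(Failure | ~Danger, Survival)

p_fail = (p_fail_given_danger * p_danger
          + p_fail_given_safe * (1 - p_danger))
p_danger_given_fail = p_fail_given_danger * p_danger / p_fail

print(round(p_danger_given_fail, 4))  # 0.9901
```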
So to comment through Simon's logic vs. anthropic logic step by step:
First thing to note is that since F => S, we have P(W|F) = P(W|F,S), so we can just work out P(W|F).
Still technically true.
Bayes: P(W|F) = P(F|W)P(W)/P(F)
Still technically true; but once you condition on survival, as anthropics does in effect require, then P(Fail|Danger) is very high.
Note that none of these probabilities are conditional on survival. So unless, in the absence of any selection effects, the probability of failure still depends on whether the LHC would destroy Earth, P(F|W) = P(F), and thus P(W|F) = P(W).
Here we depart from anthropic reasoning. As you might expect, quantum suicide says that P(Fail|Danger) != P(Fail). That's the whole point of raising the question: "Given that the LHC might destroy the world, how unusual that it seems to have failed 50 times in a row."
In effect what Eliezer and many commenters are doing is substituting P(F|W,S) for P(F|W). These probabilities are not the same and so this substitution is illegitimate.
...but as stated originally, conditioning on the existence of "observers" is what anthropics is all about. It's not that we're substituting, but just that all our calculations were conditioned on survival in the first place.
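The disagreement can be made concrete with a toy model (the 1% failure rate is purely illustrative): the mundane failure mechanism is independent of whether the LHC is dangerous, so P(F|W) = P(F); but once you condition on survival, failure in a dangerous world becomes certain.

```python
# Toy model of Simon's step vs. the anthropic substitution.
# W = the LHC would destroy the world if it ran successfully
# F = the LHC fails for mundane mechanical reasons
# S = there are observers afterwards
p_w = 0.5
p_f = 0.01  # mechanical failure, independent of W

# Joint probabilities over (W, F, S). Observers exist unless the
# LHC is both dangerous and actually runs.
p_w_f_s = p_w * p_f                 # dangerous, failed, survived
p_w_nf_s = 0.0                      # dangerous, ran -> no observers
p_nw_f_s = (1 - p_w) * p_f          # safe, failed
p_nw_nf_s = (1 - p_w) * (1 - p_f)   # safe, ran fine

# Simon's step: P(F|W) = P(F), so failure alone says nothing about W.
p_f_given_w = p_f

# Anthropic version: condition on survival first.
p_f_given_w_s = p_w_f_s / (p_w_f_s + p_w_nf_s)

print(p_f_given_w)    # 0.01
print(p_f_given_w_s)  # 1.0
```

Substituting P(F|W,S) = 1.0 where Simon's derivation uses P(F|W) = 0.01 is exactly the move in dispute.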
Recently the Large Hadron Collider was damaged by a mechanical failure. This requires the collider to be warmed up, repaired, and then cooled down again, so we're looking at a two-month delay.
Inevitably, many commenters said, "Anthropic principle! If the LHC had worked, it would have produced a black hole or strangelet or vacuum failure, and we wouldn't be here!"
This remark may be somewhat premature, since I don't think we're yet at the point in time when the LHC would have started producing collisions if not for this malfunction. However, a few weeks(?) from now, the "Anthropic!" hypothesis will start to make sense, assuming it can make sense at all. (Does this mean we can foresee executing a future probability update, but can't go ahead and update now?)
As you know, I don't spend much time worrying about the Large Hadron Collider when I've got much larger existential-risk-fish to fry. However, there's an exercise in probability theory (which I first picked up from E.T. Jaynes) along the lines of, "How many times does a coin have to come up heads before you believe the coin is fixed?" This tells you how low your prior probability is for the hypothesis. If a coin comes up heads only twice, that's definitely not a good reason to believe it's fixed, unless you already suspected it from the beginning. But if it comes up heads 100 times, it's taking you too long to notice.
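Jaynes's exercise can be turned into arithmetic. Assuming "fixed" means the coin always lands heads, each head doubles the odds ratio in favor of the fixed hypothesis, so the number of heads needed to reach even odds reveals your prior (the priors below are illustrative):

```python
import math

def heads_to_even_odds(prior_fixed: float) -> int:
    """Heads in a row needed before P(fixed | evidence) exceeds 50%,
    assuming a fixed coin always lands heads and a fair one is 50/50.
    Each observed head doubles the odds ratio P(fixed)/P(fair)."""
    odds = prior_fixed / (1 - prior_fixed)
    return math.ceil(math.log2(1 / odds))

for prior in (0.01, 1e-6, 1e-9):
    print(prior, heads_to_even_odds(prior))
# A prior of 1-in-a-million needs about 20 heads; 1-in-a-billion, about 30.
```

Run the inference in reverse: the number of heads (or LHC failures) at which you'd start taking the hypothesis seriously measures how improbable you thought it was to begin with.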
So - taking into account the previous cancellation of the Superconducting Supercollider (SSC) - how many times does the LHC have to fail before you'll start considering an anthropic explanation? 10? 20? 50?
After observing empirically that the LHC had failed 100 times in a row, would you endorse a policy of keeping the LHC powered up, but trying to fire it again only in the event of, say, nuclear terrorism or a global economic crash?