While I'm happy to have Richard's confidence, I thought my last comment could use a little improvement.
What we want to know is P(W|F,S), where W = "the LHC would have destroyed the world had it run", F = "the LHC failed", and S = "we survive".
As I pointed out, F => S, so P(W|F,S) = P(W|F): the event "F and S" is just the event F, so conditioning on S adds nothing once we've conditioned on F.
We can legitimately calculate P(W|F,S) in at least two ways (all four formulas below are checked numerically in the sketch after this list):
1. P(W|F,S) = P(W|F) = P(F|W)P(W)/P(F) <- the easy way
2. P(W|F,S) = P(F|W,S)P(W|S)/P(F|S) <- harder, but still works
There are also ways you can get it wrong, such as:
3. P(W|F,S) != P(F|W,S)P(W)/P(F) <- what I said other people were doing last post
4. P(W|F,S) != P(F|W,S)P(W)/P(F|S) <- what other people are probably actually doing
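Here is a brute-force check of all four methods in a toy model of my own devising (the priors are arbitrary, and I stipulate that we survive iff the LHC fails or is harmless, which makes F => S hold by construction):

    # Toy model (my own construction, numbers arbitrary):
    # W = "a working LHC would destroy the world", F = "mechanical failure",
    # and we survive (S) iff the LHC failed or was harmless, so F => S holds.
    p_w, p_f = 1e-6, 0.01  # illustrative priors; F is independent of W

    # Joint probability of each (W, F) world; S is determined as (F or not W).
    worlds = [(w, f, f or not w,
               (p_w if w else 1 - p_w) * (p_f if f else 1 - p_f))
              for w in (True, False) for f in (True, False)]

    def prob(event):                        # P(event)
        return sum(p for (w, f, s, p) in worlds if event(w, f, s))

    def cond(event, given):                 # P(event | given)
        both = lambda w, f, s: event(w, f, s) and given(w, f, s)
        return prob(both) / prob(given)

    W  = lambda w, f, s: w
    F  = lambda w, f, s: f
    S  = lambda w, f, s: s
    FS = lambda w, f, s: f and s
    WS = lambda w, f, s: w and s

    truth = cond(W, FS)                              # P(W|F,S) by enumeration
    m1 = cond(W, F)                                  # method 1: P(W|F)
    m2 = cond(F, WS) * cond(W, S) / cond(F, S)       # method 2
    m3 = cond(F, WS) * prob(W) / prob(F)             # method 3 (inconsistent)
    m4 = cond(F, WS) * prob(W) / cond(F, S)          # method 4 (inconsistent)

    print(truth, m1, m2)  # all three agree: failure is not evidence of danger
    print(m3, m4)         # inflated by ~1/P(F)

In this model the failure tells you nothing about W, and the two inconsistent formulas overstate P(W|F,S) by a factor of roughly 1/P(F), which is exactly the spurious "failure is evidence of danger" effect.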
In my first comment in this thread, I said it was a simple application of Bayes' rule (method 1), but then said that Eliezer's failure was in not applying the anthropic principle enough (i.e., I told him to update from method 4 to method 2). Sorry if anyone was confused by that, or by subsequent posts where I did not make it clear.
Allan: your intuition is wrong here too. Notice that if Zeus had independently created a zillion people in a green room, it would change your estimate of the probability, despite that act being completely unrelated.
Eliezer: so the claim is that F => S does not imply P(X|F) = P(X|F,S)?
All right, give me an example. (If F => S, then the event "F and S" is identical to F, so P(X|F,S) = P(X|F) for any X whenever P(F) > 0; I don't see how a counterexample could exist.)
And yeah, anthropic reasoning is all about conditioning on survival, but you have to do it consistently. Conditioning on survival in some terms but not others = fail.
Richard: your first criticism has too small an effect on the probability to be significant. I was of course aware that humanity could be wiped out in other ways, but incorrectly assumed that commenters here would be smart enough to understand that it was a justifiable simplification. The second criticism is wrong: the probabilities without conditioning on S are "God's eye view" probabilities, and really are independent of selection effects.
Recently the Large Hadron Collider was damaged by a mechanical failure. This requires the collider to be warmed up, repaired, and then cooled down again, so we're looking at a two-month delay.
Inevitably, many commenters said, "Anthropic principle! If the LHC had worked, it would have produced a black hole or strangelet or vacuum failure, and we wouldn't be here!"
This remark may be somewhat premature, since I don't think we've yet reached the point in time when the LHC would have started producing collisions if not for this malfunction. However, a few weeks(?) from now, the "Anthropic!" hypothesis will start to make sense, assuming it can make sense at all. (Does this mean we can foresee executing a future probability update, but can't go ahead and update now?)
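One reason to be suspicious of that parenthetical: by conservation of expected evidence, the probability-weighted average of the posteriors you might end up with must equal your current prior, so you cannot foresee updating in a particular direction. A quick check, with arbitrary numbers of my own:

    # Conservation of expected evidence, checked numerically.
    # All numbers are arbitrary illustrations, not from the post.
    p_h = 0.3                    # prior P(H)
    p_e_h, p_e_nh = 0.9, 0.2     # likelihoods P(E|H) and P(E|~H)

    p_e = p_e_h * p_h + p_e_nh * (1 - p_h)       # P(E)
    post_e = p_e_h * p_h / p_e                   # P(H|E), by Bayes
    post_ne = (1 - p_e_h) * p_h / (1 - p_e)      # P(H|~E)

    expected = post_e * p_e + post_ne * (1 - p_e)
    assert abs(expected - p_h) < 1e-12           # equals the prior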
As you know, I don't spend much time worrying about the Large Hadron Collider when I've got much larger existential-risk-fish to fry. However, there's an exercise in probability theory (which I first picked up from E.T. Jaynes) along the lines of, "How many times does a coin have to come up heads before you believe the coin is fixed?" The answer tells you how low your prior probability for that hypothesis is. If a coin comes up heads only twice, that's definitely not a good reason to believe it's fixed, unless you already suspected it from the beginning. But if it comes up heads 100 times, you're taking too long to notice.
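To put numbers on the exercise: assuming for simplicity that "fixed" means the coin always lands heads, each observed head doubles the odds in favor of "fixed", so the number of heads it takes to convince you reveals the log of your prior odds. A minimal sketch, with made-up priors:

    # Jaynes-style coin exercise: a "fixed" coin always lands heads, so
    # each head is a likelihood ratio of 2 in favor of "fixed".
    from math import log2

    def heads_needed(prior_fixed):
        """Heads in a row before P(fixed | evidence) passes 1/2."""
        prior_odds = prior_fixed / (1 - prior_fixed)
        return log2(1 / prior_odds)

    for prior in (0.01, 1e-6, 1e-30):
        print(prior, round(heads_needed(prior), 1))
    # 0.01 -> ~6.6 heads; 1e-6 -> ~19.9; 1e-30 -> ~99.7

On these assumptions, still being unconvinced after 100 heads corresponds to a prior of roughly 10^-30.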
So - taking into account the previous cancellation of the Superconducting Supercollider (SSC) - how many times does the LHC have to fail before you'll start considering an anthropic explanation? 10? 20? 50?
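The same bookkeeping can be applied to failures instead of heads, if each ordinary failure has probability p_fail and the dangerous-LHC hypothesis anthropically forces every firing attempt to fail. Note that the comments above argue this is exactly the inconsistent conditioning of methods 3 and 4; the sketch below, with made-up numbers, only shows the arithmetic the question presupposes.

    # Naive anthropic update for repeated LHC failures (made-up numbers).
    from math import log

    def failures_needed(prior_w, p_fail):
        """Failures before posterior odds of "the LHC is dangerous" pass 1."""
        prior_odds = prior_w / (1 - prior_w)
        # Each observed failure multiplies the odds by 1/p_fail.
        return log(1 / prior_odds) / log(1 / p_fail)

    print(failures_needed(1e-20, 0.1))   # ~20 failures
    print(failures_needed(1e-20, 0.5))   # ~66 failures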
After observing empirically that the LHC had failed 100 times in a row, would you endorse a policy of keeping the LHC powered up, but trying to fire it again only in the event of, say, nuclear terrorism or a global economic crash?