Recently the Large Hadron Collider was damaged by a mechanical failure. This requires the collider to be warmed up, repaired, and then cooled down again, so we're looking at a two-month delay.
Inevitably, many commenters said, "Anthropic principle! If the LHC had worked, it would have produced a black hole or strangelet or vacuum failure, and we wouldn't be here!"
This remark may be somewhat premature, since I don't think we're yet at the point in time when the LHC would have started producing collisions if not for this malfunction. However, a few weeks(?) from now, the "Anthropic!" hypothesis will start to make sense, assuming it can make sense at all. (Does this mean we can foresee executing a future probability update, but can't go ahead and update now?)
As you know, I don't spend much time worrying about the Large Hadron Collider when I've got much larger existential-risk-fish to fry. However, there's an exercise in probability theory (which I first picked up from E.T. Jaynes) along the lines of, "How many times does a coin have to come up heads before you believe the coin is fixed?" This tells you how low your prior probability is for the hypothesis. If a coin comes up heads only twice, that's definitely not a good reason to believe it's fixed, unless you already suspected from the beginning. But if it comes up heads 100 times, it's taking you too long to notice.
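To make that concrete with some made-up numbers (this is just an illustration, not Jaynes's actual calculation): if we assume the "fixed" coin always lands heads, each head doubles the odds in favor of the fix, so the number of heads it takes to convince you is roughly the log of your prior odds against it. A minimal sketch:

```python
def heads_needed(prior_fixed, p_heads_if_fixed=1.0, p_heads_if_fair=0.5):
    """How many consecutive heads until the posterior favors "fixed"?

    Assumes a fixed coin always lands heads; the priors below are
    made up purely for illustration.
    """
    odds = prior_fixed / (1.0 - prior_fixed)    # prior odds for "fixed"
    ratio = p_heads_if_fixed / p_heads_if_fair  # likelihood ratio per head
    n = 0
    while odds <= 1.0:                          # stop once posterior > 50%
        odds *= ratio
        n += 1
    return n

for prior in (1e-3, 1e-6, 1e-9):
    print(f"prior {prior:g}: {heads_needed(prior)} heads to reach even odds")
```

So agreeing to be convinced after about 10, 20, or 30 heads amounts to admitting a prior of roughly 10^-3, 10^-6, or 10^-9.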
So - taking into account the previous cancellation of the Superconducting Supercollider (SSC) - how many times does the LHC have to fail before you'll start considering an anthropic explanation? 10? 20? 50?
After observing empirically that the LHC had failed 100 times in a row, would you endorse a policy of keeping the LHC powered up, but trying to fire it again only in the event of, say, nuclear terrorism or a global economic crash?
Allan: I don't believe that's what I've been saying; the question is whether the LHC failing is evidence for the LHC being dangerous, not whether surviving is evidence for the LHC having failed.
I was trying to restate, in different terms, your argument for why the failures should be considered evidence.
For "observer" I substituted "surviving observer," because when doing the math I find it more helpful to consider all potential observers and then say that some of them are dead and thus can't observe anything. So my "surviving observer" is the same as your "observer," right?
So I read your argument as: If the LHC is benign, and you're a random (surviving) observer, then it's amazing if (i.e., there is a low probability that) you find yourself in one of the few worlds where the LHC keeps failing. If the LHC is dangerous, and you're a random observer, then it's non-amazing if (i.e., there is a high probability that) you find yourself in a world where the LHC keeps failing. Therefore, if you're a random observer and you find yourself in a world where the LHC keeps failing, then the LHC is probably dangerous (because then we don't need to assume anything amazing is going on). Am I misunderstanding something?
If I understand you right, what I'm saying is that both of the "if" statements are clearly correct, but I believe that the "therefore" doesn't follow.
To me, the problem is essentially the same as the following: You are one of 10,000 people who have been taken to a prison. Nobody has explained why. Every morning, the guards randomly select 9/10 of the remaining prisoners and take them away, without explanation. Among the prisoners, there are two theories: one faction thinks that the people taken away are set free. The other faction thinks that they are getting executed.
It is the fourth morning. You're still in prison. The nine other people who remained have just been taken away. Now, if the other people have been executed, then you are the only remaining observer, so if you're a random observer, it's not surprising that you should find yourself in prison. But if the other people have been set free, then they're still alive, so if you're a random observer, there is only a 1/10,000 chance that you are still in prison. Both of these statements are correct if you are a random (surviving) observer. But it doesn't follow that you should conclude that the other people are getting shot, does it? (Clearly you learned nothing about that, because whether or not they get shot does not affect anything you're able to observe.)
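To put numbers on the setup (the code below is just an illustration of the scenario, nothing more): exactly one prisoner is left after the fourth morning no matter which theory is true, your chance of being that prisoner is 1/10,000 either way, and whether the people taken away are freed or shot never enters the process at all.

```python
import random

def one_run(n_prisoners=10_000, mornings=4):
    """One run of the prison scenario: each morning the guards keep a random
    1/10 of the remaining prisoners and take the other 9/10 away.  Whether
    the people taken away are freed or shot never enters the process."""
    remaining = list(range(n_prisoners))
    for _ in range(mornings):
        remaining = random.sample(remaining, len(remaining) // 10)
    return remaining  # exactly one prisoner is left after the fourth morning

print("Prisoner left after four mornings:", one_run()[0])

# Your own chance of being that last prisoner: you stay on a given morning
# only if you are among the random 1/10 kept, so over four mornings it is
# (1/10)^4 = 1/10,000, the same number under either theory.
print("P(you are the one left) =", 0.1 ** 4)
```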
Now, I get that you probably think something makes this line of reasoning not apply when the anthropic principle is involved (although I think you'd be wrong about that :)). But my point is that, unless I'm missing something, the probabilistic reasoning is the same as in my restatement of your argument, so if the laws of probability don't make the conclusion follow in this scenario, they don't make it follow in your argument, either.
I should say that I don't reject "the" anthropic principle. I wholeheartedly embrace the version of it that I can derive from the kind of reasoning above. For example: If our theory of evolution seems to suggest that there is one very improbable step in the evolution of intelligent life -- so improbable that it's not likely to have happened even a single time in the history of the universe -- should we then take that as a reason to conclude that something is wrong with our theory? If we are pretty sure that there is only a single universe, yes. If we have independent evidence that all possible Everett branches exist, no. (If something like mangled worlds is true, maybe -- but let's not get into that now...)
Why should we reject our theory in a single universe, but not if all Everett branches exist? Consider again the prison analogy. You observed how the guards chose the prisoners to take away, and it sure looked random. But now you are the only surviving prisoner. Should you conclude that the guards' selection process wasn't really random? There's no reason to: If the guards used a random process, one prisoner had to remain on the fourth day, and this may just as well have been you -- nothing surprising going on. This corresponds to the scenario where all possible Everett branches exist.
But suppose that you were the only prisoner to begin with (and you know this), and every morning the guards threw a ten-sided die which is marked "keep in prison" on one side and "take away" on the nine others -- and it came up "keep in prison" every morning. In this case, it seems to me that you do have a reason to start suspecting that the die is fixed (i.e., that your original theory, that the "keep in prison" outcome had only a 10% chance of happening, was wrong). This corresponds to the scenario where there is only a single universe.
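To put a number on that suspicion (the 1% prior below is an assumption I'm making up purely for illustration): four "keep in prison" rolls in a row have probability 0.1^4 = 1/10,000 on a fair die, so even a modest prior suspicion that the die is fixed ends up dominating the posterior.

```python
# Posterior probability that the die is fixed to "keep in prison",
# after it comes up "keep in prison" four mornings in a row.
# The 1% prior is an illustrative assumption, not part of the scenario.
p_fixed_prior = 0.01
p_keep_if_fixed = 1.0   # a fixed die always says "keep in prison"
p_keep_if_fair = 0.1    # one face out of ten
mornings = 4

likelihood_fixed = p_keep_if_fixed ** mornings   # 1.0
likelihood_fair = p_keep_if_fair ** mornings     # 1e-4
posterior = (p_fixed_prior * likelihood_fixed) / (
    p_fixed_prior * likelihood_fixed + (1 - p_fixed_prior) * likelihood_fair
)
print("P(die is fixed | four 'keep in prison' rolls) =", round(posterior, 3))  # about 0.99
```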
This is how I always understood the anthropic principle when reading about it, and this version of it I embrace. The other version I'm pretty sure is wrong.
That said, if you have the energy to do so, please do keep arguing with me! :-) I don't really understand this "other anthropic principle," and I'm rejecting it simply because it disagrees with my calculations and I'm really pretty sure that I'm applying my probability theory right here. If I'm wrong, that will be humbling, but I would still rather know than not know, please :-)