Recently the Large Hadron Collider was damaged by a mechanical failure. This requires the collider to be warmed up, repaired, and then cooled down again, so we're looking at a two-month delay.
Inevitably, many commenters said, "Anthropic principle! If the LHC had worked, it would have produced a black hole or strangelet or vacuum failure, and we wouldn't be here!"
This remark may be somewhat premature, since I don't think we're yet at the point in time when the LHC would have started producing collisions if not for this malfunction. However, a few weeks(?) from now, the "Anthropic!" hypothesis will start to make sense, assuming it can make sense at all. (Does this mean we can foresee executing a future probability update, but can't go ahead and update now?)
As you know, I don't spend much time worrying about the Large Hadron Collider when I've got much larger existential-risk-fish to fry. However, there's an exercise in probability theory (which I first picked up from E.T. Jaynes) along the lines of, "How many times does a coin have to come up heads before you believe the coin is fixed?" Your answer tells you how low your prior probability for that hypothesis is. If a coin comes up heads only twice, that's definitely not a good reason to believe it's fixed, unless you already suspected it from the beginning. But if it comes up heads 100 times, it's taking you too long to notice.
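To make the exercise concrete, here is a minimal sketch of the underlying odds calculation (the priors are my own illustrative placeholders, not figures from anything above): it compares "the coin is fixed to land heads" against "the coin is fair" and asks how many consecutive heads it takes before the posterior odds favor the fixed-coin hypothesis.

```python
def heads_needed(prior_fixed):
    """Smallest number of consecutive heads that pushes the posterior odds of
    'this coin is fixed to land heads' above 1:1, starting from prior_fixed."""
    odds = prior_fixed / (1 - prior_fixed)
    n = 0
    while odds <= 1:
        # Each head multiplies the odds by P(heads|fixed) / P(heads|fair) = 1 / 0.5 = 2.
        odds *= 2
        n += 1
    return n

for prior in (1e-2, 1e-6, 1e-30):
    print(f"prior {prior:g}: convinced after {heads_needed(prior)} heads")
# prior 0.01: 7 heads; prior 1e-06: 20 heads; prior 1e-30: 100 heads
```

Read backwards, this is Jaynes's point: the number of heads you would demand reveals roughly where your prior sat. Demanding 100 heads corresponds to a prior somewhere around 2^-100.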
So - taking into account the previous cancellation of the Superconducting Supercollider (SSC) - how many times does the LHC have to fail before you'll start considering an anthropic explanation? 10? 20? 50?
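The same arithmetic can be applied to that question, as a hedged sketch only: assume each failure has some ordinary-engineering probability per attempt (p_ordinary, say 0.5) under the mundane hypothesis, and probability of roughly 1 under the anthropic hypothesis, since on that story we only find ourselves in branches where the collider never fired. The prior and both probabilities below are placeholder assumptions, and this naive treatment ignores the update-timing puzzle raised above.

```python
def posterior_anthropic(prior, n_failures, p_ordinary=0.5):
    """Posterior probability of the anthropic hypothesis after n_failures consecutive
    failures, assuming each failure has probability p_ordinary under mundane engineering
    and ~1 under the anthropic hypothesis (both figures are illustrative assumptions)."""
    odds = prior / (1 - prior) * (1 / p_ordinary) ** n_failures
    return odds / (1 + odds)

for n in (10, 20, 50):
    print(f"{n} failures -> P(anthropic) ~ {posterior_anthropic(1e-6, n):.3g}")
# 10 failures -> ~0.001; 20 failures -> ~0.51; 50 failures -> ~1.0
```

At those particular assumptions, 10 failures barely move a one-in-a-million prior, 20 make it roughly a coin flip, and 50 make it nearly certain; how many you personally require just reads off your prior and your estimate of how often big colliders break for boring reasons.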
After observing empirically that the LHC had failed 100 times in a row, would you endorse a policy of keeping the LHC powered up, but trying to fire it again only in the event of, say, nuclear terrorism or a global economic crash?
Richard, Cameron Taylor has still not advocated quantum suicide. That straw man is already dead.
I assign quantum suicide a utility of "(utility(death) + utility(alternative))/2 - time wasted - risk of accidentally killing yourself while making the death machine". That is to say, I think it is bloody stupid.
What I do assert is that anyone answering 'yes' to Eliezer's proposal to destroy the universe with an LHC to avert terrorism would also be expected to use the same mechanism to achieve any other goal whose utility exceeds the cost of creating an LHC. For E, that would mean his FAI. The question seems to logically imply one of:
- Eliezer sees some relevant difference between LHC death and cyanide death.
- There are some really messed up utility functions out there.
- The question is simply utterly trivial, barely worth asking.