Followup to: When (Not) To Use Probabilities, How Many LHC Failures Is Too Many?
While trying to answer my own question on "How Many LHC Failures Is Too Many?" I realized that I'm horrendously inconsistent with respect to my stated beliefs about disaster risks from the Large Hadron Collider.
First, I thought that stating a "one-in-a-million" probability for the Large Hadron Collider destroying the world was too high, in the sense that I would much rather run the Large Hadron Collider than press a button with a known 1/1,000,000 probability of destroying the world.
But if you asked me whether I could make one million statements of authority equal to "The Large Hadron Collider will not destroy the world", and be wrong, on average, around once, then I would have to say no.
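The calibration claim above is quantitative, and the arithmetic is worth making explicit: if each statement really carried a one-in-a-million chance of being wrong, then across a million such statements the expected number of errors is exactly one, and the chance of at least one error is about 63%. A quick check (the variable names here are just illustrative):

```python
import math

p = 1e-6        # claimed per-statement error probability
n = 1_000_000   # number of statements of equal authority

# Expected number of wrong statements: n * p
expected_errors = n * p

# Probability of being wrong at least once, assuming independence;
# for small p this is very close to 1 - e^-1
p_at_least_one = 1 - (1 - p) ** n

print(expected_errors)           # 1.0
print(round(p_at_least_one, 3))  # 0.632
```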
Unknown pointed out that this turns me into a money pump. Given a portfolio of a million existential risks to which I had assigned a "less than one in a million probability", I would rather press the button on the fixed-probability device than run a random risk from this portfolio; but would rather take any particular risk in this portfolio than press the button.
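The money-pump structure can be made concrete. The sketch below (a hypothetical agent and fee, not anything from the original discussion) encodes the two preferences described above: any particular named risk is preferred to the button, but the button is preferred to an unrevealed random draw from the portfolio. A trader who charges a small fee per preferred swap extracts money on every trip around the cycle:

```python
import random

PORTFOLIO = [f"risk_{i}" for i in range(1_000_000)]  # hypothetical existential risks
FEE = 0.01  # small fee the agent willingly pays for each preferred swap

def prefers(current, offered):
    """The intransitive preferences described in the text:
    - any *particular* named risk is preferred to the button;
    - the button is preferred to an *unrevealed* random draw."""
    if current == "button" and offered.startswith("risk_"):
        return True   # would pay to swap button -> particular risk
    if current == "random_draw" and offered == "button":
        return True   # would pay to swap random draw -> button
    return False

holding = "button"
extracted = 0.0
for _ in range(10):  # ten trips around the cycle
    risk = random.choice(PORTFOLIO)
    if prefers(holding, risk):      # swap button -> this particular risk
        holding, extracted = risk, extracted + FEE
    holding = "random_draw"         # the risk is shuffled back into the portfolio
    if prefers(holding, "button"):  # swap random draw -> button
        holding, extracted = "button", extracted + FEE

print(round(extracted, 2))  # 0.2 -- two fees per cycle, ten cycles
```

The agent ends each cycle holding exactly what it started with, minus two fees, which is the operational meaning of "money pump."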
Then, I considered the question of how many mysterious failures at the LHC it would take to make me question whether it might destroy the world/universe somehow, and what this revealed about my prior probability.
If each failure had a known 50% probability of occurring from natural causes, like a quantum coin or some such... then I suspect that if I actually saw that coin come up heads 20 times in a row, I would feel a strong impulse to bet on it coming up heads the next time around. (And that's taking into account my uncertainty about whether the anthropic principle really works that way.)
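The two numbers in this post line up closely, which is worth checking: twenty fair-coin heads in a row has probability 2^-20, which is already slightly rarer than one in a million.

```python
# Probability of 20 consecutive heads from a fair coin
p_20_heads = 0.5 ** 20

print(p_20_heads)         # 9.5367431640625e-07
print(p_20_heads < 1e-6)  # True: rarer than one-in-a-million
```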
Even having noticed this triple inconsistency, I'm not sure in which direction to resolve it!
(But I still maintain my resolve that the LHC is not worth expending political capital, financial capital, or our time to shut down; compared with using the same capital to worry about superhuman intelligence or nanotechnology.)
One reason I dislike many precautionary arguments is that they seem to undervalue what we learn by doing things. Very often in science, when we chase down a new phenomenon, we detect it through relatively small effects before the effects get big enough to be dangerous. For potentially dangerous phenomena, what we learn by exploring around the edges of the pit can easily be more valuable than the risk of inadvertently falling into the pit in some early step, before we knew it was there. Among other things, what we learn from poking around the edges of the pit may protect us from dangers we didn't know about, dangers that would have threatened us even if we had never poked around. One consequence of decades of focus on the physics of radiation and radioisotopes is that we understand hazards like radon poisoning better than before. One consequence of all our recombinant DNA experimentation is that we understand the risks of nature's own often-mindboggling recombinant DNA work much better than we did before.
The main examples I can think of where the first thing you learn, upon tickling the tail enough to notice the tail exists, is that Tigers Exist And Completely Outclass You And Oops You Are Dead, involve (generalized) arms races of some sort. E.g., it was by blind luck that the Europeans started from the epidemiological-cesspool side of the Atlantic. (Here the arms race is the microbiological/immunological one.) If history had been a little different, merely discovering that diseases were wildly different on the two sides of the ocean could easily have coincided with losing 90+% of the European population. (And of course, as it happened, the outcome was equally horrendous for the American population, but the American population wasn't in a position to apply the precautionary principle to prevent that.) So should the Europeans have used a precautionary principle? I think not. Even in a family of alternate histories where the Europeans always start from the clean side, in many alternate subhistories of that family it is still better for the Europeans to explore the Atlantic, learn about the problem early, and prepare ways to cope with it. Thus, even in this case, where the tiger really is incredibly dangerous, the precautionary principle doesn't look so good.