Followup to: When (Not) To Use Probabilities, How Many LHC Failures Is Too Many?

While trying to answer my own question on "How Many LHC Failures Is Too Many?" I realized that I'm horrendously inconsistent with respect to my stated beliefs about disaster risks from the Large Hadron Collider.

First, I thought that stating a "one-in-a-million" probability for the Large Hadron Collider destroying the world was too high, in the sense that I would much rather run the Large Hadron Collider than press a button with a known 1/1,000,000 probability of destroying the world.

But if you asked me whether I could make one million statements of authority equal to "The Large Hadron Collider will not destroy the world", and be wrong, on average, around once, then I would have to say no.

Unknown pointed out that this turns me into a money pump.  Given a portfolio of a million existential risks to which I had assigned a "less than one in a million probability", I would rather press the button on the fixed-probability device than run a random risk from this portfolio; but would rather take any particular risk in this portfolio than press the button.

Then, I considered the question of how many mysterious failures at the LHC it would take to make me question whether it might destroy the world/universe somehow, and what this revealed about my prior probability.

If each failure had a known 50% probability of occurring from natural causes, like a quantum coin or some such... then I suspect that if I actually saw that coin come up heads 20 times in a row, I would feel a strong impulse to bet on it coming up heads the next time around.  (And that's taking into account my uncertainty about whether the anthropic principle really works that way.)

Even having noticed this triple inconsistency, I'm not sure in which direction to resolve it!

(But I still maintain my resolve that the LHC is not worth expending political capital, financial capital, or our time to shut down, compared with using the same capital to worry about superhuman intelligence or nanotechnology.)


You have another inconsistency as well. As you should have noticed in the "How many" thread, the assumptions that lead you to believe that failures of the LHC are evidence that it would destroy Earth are the same ones that lead you to believe that annihilational threats are irrelevant (after all, if P(W|S) = P(W), then Bayes' rule leads to P(S|W) = P(S)).
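(Spelling that step out: Bayes' rule gives P(S|W) = P(W|S)P(S)/P(W); substituting the assumption P(W|S) = P(W) cancels the P(W) factors, leaving P(S|W) = P(S).)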

Thus, given that you believe that failures are evidence of the LHC being dangerous, you shouldn't care. Unless you've changed to a new set of incorrect assumptions, of course.

Simon, anthropic probabilities are not necessarily the same probabilities you plug into the expected utility formula. When anthropic games are being played, it can be consistent to have a ~1 subjective probability of getting a cookie whether a coin comes up heads or tails, while valuing the tails outcome twice as much. E.g., a computer duplicates you if the coin comes up tails, so two copies of you get cookies instead of one. Either way you expect to get a cookie, but in the second case, twice as much utility occurs from the standpoint of a third-party onlooker... at least under some assumptions.

I admit that, to the extent I believe in anthropics at all, I sometimes try to do a sum over the personal subjective probabilities of observers. This leads to paradoxes, but so does everything else I try when people are being copied (and possibly merged).

Regardless, the questions of what we expect to see when the world-crusher is turned on, and how much utility we assign to that, are distinct at least conceptually.

And if turning on the LHC or other world-smasher causes other probabilities to behave oddly, we care a great deal even if we survive.

The World-Crusher. CERN should copyright that. In fact, I might buy the domain worldcrusher.com and have it redirect to the LHC site.

And even if it doesn't come up with any new physics, it's definitely proving to be worth its weight in thought experiments.

"if turning on the LHC or other world-smasher causes other probabilities to behave oddly"

How can it possibly do so, except in the plain old sense of causal interaction, which is emphatically not what this discussion is about?

Let's think about what observer selection effects actually involve.

Suppose that there is a sort of multiverse (whether it is a multiverse of actualities or just a multiverse of possibilities does not matter for this analysis). At some level it consists of elementary "events" or "states of affairs" which are connected to each other by elementary causal relations. At a slightly higher level these elementary entities form distinct "worlds" (whether these worlds are strictly causally disjoint, or do interact after all, does not matter for this analysis). At some intermediate level are most of the complex entities and events with which we are habitually concerned, such as the activation of the LHC and the destruction of the Earth.

Regarding these intermediate events, we can ask questions like, what is the relative frequency with which event B occurs in a world given that event A has occurred elsewhere in that world, or even, what is the relative frequency with which event B is causally downstream of event A, throughout the multiverse? (Whether the first question is always a form of the second, in a multiverse which is combinatorially exhaustive with respect to the elementary causal relations constituting the individual worlds, I'm not sure.)

So far, so straightforward. I could almost be talking about statistical analysis of a corpus of documents, rather than of an ensemble of worlds, so far.

Now what are observer selection effects about? Basically, we are making event A something like "the existence of an observer". When we condition on that, we find that some Bs become significantly more or less frequent, than they are when we just ask "how often does B happen, across the multiverse?".

But suppose my event A is something like, "the existence of an observer who reads a blog called Overcoming Bias and shares a world with a physics apparatus called the LHC". Well, so what? It's just another complicated state of affairs on that intermediate level, and it will shift the B-frequencies from their unconditioned values in some complicated way. Even if there are subjective duplicates and their multiplicities change in some strange way, as in an interacting many-worlds theory with splitting and merging... it's complicated, but it's not mysterious.

So finally, what is the scenario we are being asked to entertain? State of affairs A: The existence of an observer who shares a world with an LHC which repeatedly breaks down. And we are asked to estimate how this affects the probability of state of affairs B: An LHC which, if it worked, would destroy the Earth.

Well, let's look at it the other way around. State of affairs A': An LHC which, if it worked, would destroy the Earth. State of affairs B': An LHC which keeps malfunctioning whenever it is switched on.

From this angle, there is no question of anthropics, because we are just talking about a physics experiment. All else being equal, the fact that something is a bomb does not in any way make it less likely to explode.

If we then switch back to the original situation, we are in effect being asked this question: if a device keeps breaking down for unlikely reasons, does that make it more likely to be a bomb?

The sensible answer is certainly no. Now maybe someone can come up with a strange many-worlds physics, in which observer-duplicate multiplicities vary in such a way that the answer is yes, on account of observer selection effects. It would certainly be interesting to see such an argument.

In fact I think this whole line of thought originates with the fallacy of conditioning on survival of observers rather than conditioning on existence of observers. (Even if you die, you existed, and that uses up the opportunity for anthropic reasoning in the classic form.)

Nonetheless, some wacky form of observer-physics might exist in which this generic conclusion is true after all. But even if it could be found, you would then have to weigh up the probability that this, and only this, is the true physics of the multiverse. (And here we hit one of the truly fundamental problems: how do we justify our ideas about the extent of the possible? But that's too big a question for this comment.) If only one little corner of the multiverse behaves in this way, then the answer to the question will still be no, because A and B will also occur elsewhere.

So, to sum up a long comment: This whole idea probably derives from a specific fallacy, and it should not be taken seriously unless someone can exhibit an observer-selection argument for a breakdowns-implies-lethality effect, and even then such an effect is probably contingent on a peculiar form of observer-physics.
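A minimal Monte Carlo sketch of that conditioning-on-existence point (toy numbers, not physics; PRIOR_BOMB, FAIL_P, ATTEMPTS, and TRIALS are all illustrative assumptions, chosen so the conditioned-on event shows up in a feasible number of samples): a "bomb" LHC destroys all observers on any run that doesn't fail, a safe LHC merely fails at the same per-run rate, and the posterior on "bomb" among observers who have seen nothing but failures comes out equal to the prior.

```python
import random

# Toy assumptions (illustrative only): prior probability the LHC is a
# world-destroyer, per-attempt probability of a mundane failure, and the
# number of observed switch-on attempts.
PRIOR_BOMB = 0.1
FAIL_P = 0.5
ATTEMPTS = 5
TRIALS = 1_000_000

bomb_observed = 0   # bomb worlds where observers survive and see only failures
safe_observed = 0   # safe worlds where observers see only failures

for _ in range(TRIALS):
    bomb = random.random() < PRIOR_BOMB
    failures = sum(random.random() < FAIL_P for _ in range(ATTEMPTS))
    # Observers who have seen ATTEMPTS failures exist in a bomb world only if
    # every attempt failed; in a safe world they see that same run of failures
    # with exactly the same probability, FAIL_P ** ATTEMPTS.
    if failures == ATTEMPTS:
        if bomb:
            bomb_observed += 1
        else:
            safe_observed += 1

posterior = bomb_observed / (bomb_observed + safe_observed)
print(f"P(bomb | saw {ATTEMPTS} failures) ~= {posterior:.3f}  (prior: {PRIOR_BOMB})")
```

Both branches reach the all-failures event with the same probability, FAIL_P ** ATTEMPTS, so it cancels out of the posterior: under these assumptions repeated breakdowns are no evidence of lethality, which is the "sensible answer is certainly no" above.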

roko, given that at least some physicists have come up with vaguely plausible mechanisms for stable micro black hole creation, you should think about outrageous or outspoken claims made in the past by a small minority of scientists. How often has the majority view been overturned? I suspect that something like 1/1000 is a good rough guess for the probability of the LHC destroying us.

This reasoning gives the probability 1/1000 for any conceivable minority hypothesis, which is inconsistent. In general, I think this debate only illustrates the fact that people are not good at all at guessing extremely low or extremely high probabilities, and usually end up in some sort of inconsistency.

This reasoning gives the probability 1/1000 for any conceivable minority hypothesis, which is inconsistent.

Inconsistent with what? Inconsistent is a 2-place predicate.

It gives us different probabilities for different hypotheses, depending on the minority. The idea that global warming is not caused by human activity is currently believed by about 1-2% of climatologists.

If you have a hard time finding a theory that you can, by this criterion, say is true with more than 999/1000 probability, I'd say that's a feature, not a bug.

I am not sure what I had in mind when I wrote the reply, but I guess it was somehow related to the existence of more than a thousand mutually exclusive hypotheses supposing destruction of the Earth, each of which should, if the given reasoning is correct, have probability 1/1000 or more.

If you have a hard time finding a theory that you can, by this criterion, say is true with more than 999/1000 probability, I'd say that's a feature, not a bug.

A full complex theory, maybe, but there should be plenty of hypotheses similar to "the Earth will not be destroyed by the LHC" with far greater certainty than 0.999. What about "the sun will rise tomorrow"?

Prase: "This reasoning gives the probability 1/1000 for any conceivable minority hypothesis, which is inconsistent."

Sure; for example, if you applied this kind of "rough guesstimate" reasoning to, say, 1001 mutually exclusive minority views, you would end up with a total probability greater than 1. But I would not apply this reasoning in all cases: there may be some specific cases where I would modify the starting guess, for example if it led to inconsistency.

I think that this illustrates how hard it is to lay down hard and fast rules for useful heuristics. I think you'd agree that assigning a probability somewhere between 1/5000 and 1/200 to the hypothesis that the scientific community is mistaken about the safety of some particular process is a reasonable heuristic to go around with, even if overzealous application of such heuristics leads to inconsistencies. The answer, of course, is not to be overzealous.

And, of course, a better answer than the one I originally gave would be to look into the past history of major disasters that were predicted by some minority view within the scientific community, and get some actual numbers. How many times has a small group of outspoken doomsayers been proven right? How many times not? If I had the time I'd do it. Perhaps this would be a useful exercise for the FHI to undertake.

Basically, everyone knows that the probability of the LHC destroying the earth is greater than one in a million, but no one would do anything to stop the thing from running, for the same reason that no one would pay Pascal's Mugger. (My interests evidently haven't changed much!)

I like Roko's suggestion that we should look at how many doomsayers actually predicted a danger (and how early). We should also look at how many dangers occurred with no prediction at all (the Cameroon lake eruptions come to mind).

Overall, the human error rate is pretty high: http://panko.shidler.hawaii.edu/HumanErr/ Getting the error rate under 0.5% per statement/action seems very unlikely, unless one deliberately puts it into a system that forces several iterations of checking and correction (Panko's data suggests that error checking typically finds about 80% of the errors). For scientific papers/arguments, one bad one per thousand is probably conservative. (My friend Mikael claimed the number of erroneous maths papers is far less than this because of the peculiarities of the field, but I wonder how many orders of magnitude that can buy.)
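To illustrate the iteration arithmetic with made-up but plausible numbers: start from a 5% per-statement error rate, and let each checking pass catch 80% of what remains; three passes leave 0.05 × 0.2^3 = 4×10^-4, roughly one error per 2,500 statements, still nowhere near one in a million.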

At least to me this seems to suggest that in the absence of any other evidence, assigning a prior probability much less than 1/1000 to any event we regard as extremely unlikely is overconfident. Of course, as soon as we have a bit of evidence (cosmic rays, knowledge of physics) we can start using smaller priors. But uninformative priors are always going to be odd and silly.

One reason I dislike many precautionary arguments is that they seem to undervalue what we learn by doing things. Very often in science, when we have chased down a new phenomenon, we detect it by relatively small effects before the effects get big enough to be dangerous. For potentially dangerous phenomena, what we learn by exploring around the edges of the pit can easily be more valuable than the risk we faced of inadvertently landing in the pit in some early step before we knew it was there. Among other things, what we learn from poking around the edges of the pit may protect us from stuff there that we didn't know about that was dangerous even if we didn't poke around the pit. One of the consequences of decades of focus on the physics of radiation and radioisotopes is that we understand hazards like radon poisoning better than before. One of the consequences of all of our recombinant DNA experimentation is that we understand risks of nature's own often-mindboggling recombinant DNA work much better than we did before.

The main examples that I can think of where the first thing you learn, when you tickle the tail enough to notice the tail exists, is that Tigers Exist And Completely Outclass You And Oops You Are Dead, involve (generalized) arms races of some sort. E.g., it was by blind luck that the Europeans started from the epidemiological cesspool side of the Atlantic. (Here the arms race is the microbiological/immunological one.) If history had been a little different, just discovering the possibility that diseases were wildly different on both sides could easily have coincided with losing 90+% of the European population. (And of course as it happened, the outcome was equally horrendous for the American population, but the American population wasn't in a position to apply the precautionary principle to prevent that.) So should the Europeans have used a precautionary principle? I think not. Even in a family of alternate histories where the Europeans always start from the clean side, in many alternate subhistories of that family, it is still better for the Europeans to explore the Atlantic, learn early about the problem, and prepare ways to cope with it. Thus, even in this case where the tiger really is incredibly dangerous, the precautionary principle doesn't look so good.

The problem with looking at how many doomsayers were successful in history is that it completely overlooks the particular hypothesis in question. Doomsday prophecies are not all equally probable. If we constrain our attention to prophecies of the destruction of the whole Earth (which seems most relevant for this case), the rate of success is obviously 0.

@ prase: well, we have to get our information from somewhere... Sure, past predictions of minor disasters due to scientific error are not in exactly the same league as this particular prediction. But where else are we to look?

@anders: interesting. So presumably you think that the evidence from cosmic rays makes the probability of an LHC disaster much less than 1 in 1000? Actually, how likely do you think it is that the LHC will destroy the planet?

"Then, I considered the question of how many mysterious failures at the LHC it would take to make me question whether it might destroy the world/universe somehow, and what this revealed about my prior probability..."

From the previous thread:

"Inevitably, many commenters said, "Anthropic principle! If the LHC had worked, it would have produced a black hole or strangelet or vacuum failure, and we wouldn't be here!"... After observing empirically that the LHC had failed 100 times in a row, would you endorse a policy of keeping the LHC powered up, but trying to fire it again only in the event of, say, nuclear terrorism or a global economic crash?"

If the LHC fails repeatedly, it can only be because of ordinary, comprehensible engineering flaws. In fact, the complexity of the engineering makes it easier for people to attribute failures to unseen, unnatural forces.

If a marble rolling down an incline could destroy the universe, the unnaturalness of the failures could not be hidden. Any incline you approached with marble-y intent would crumble to dust. Or instead of rolling down an incline, the marble would hover in midair.

If the LHC is a machine based on known physics and mechanics, then it would require causality-defying forces to stop it from working, just as it would take supernatural forces to stop anyone from simply rolling a marble down a slope.

And if this is the case, why should the LHC's supernatural stop-gaps appear as they do, as comprehensible engineering flaws? Why not something more unambiguously causality-defying, like the LHC floating into the air and then disappearing into the void in an exciting flash of lights? Or why not something more efficient? Why should the machine even be built up to this point, only to be blocked by thousands of suspiciously impish last-minute flaws, when a million reasonable legislative, cooperative, or cognitive events could have kept the machine from ever even being considered in the first place?

More importantly, if the reasoning here is that some epic force puts the automatic smack-down on any kind of universe-destroying event, then, obviously, repeated probability-defying failures of the LHC more logically reduce the probability that it will destroy the universe (by lending increasing support to the existence of this benign universe-preserving force). They don't increase the probability of destruction, by the argument's own internal logic.

The argument for stopping the LHC then could only be economic, not self-preservational. So nuclear terrorism would actually be the worst time to start it up, since the energy and manpower would be needed more for pressing survival goals than to operate a worthless machine the universe won't allow us to use.

Very belated note: the human brain is a much more volatile system than millions of tons of material. I suspect that if the LHC could have ended up destroying the universe, the idea to build one would never have occurred to us.

Related: The wacky "science" of "Unusual Events" and "Mysterious Circumstances":

If an accelerator potentially existed that could generate a large number of Higgs particles and if the parameters were so that such an accelerator would indeed give a large positive contribution, then such a machine should practically never be realized! We consider this to be an interesting example and weak experimental evidence for our model because the great Higgs-particle-producing accelerator SSC [17], in spite of the tunnel being a quarter built, was canceled by Congress! Such a cancellation after a huge investment is already in itself an unusual event that should not happen too often. We might take this event as experimental evidence for our model in which an accelerator with the luminosity and beam energy of the SSC will not be built (because in our model, SI will become too large, i.e., less negative, if such an accelerator was built) [17]. Since the LHC has a performance approaching the SSC, it suggests that also the LHC may be in danger of being closed under mysterious circumstances.

http://arxiv.org/abs/0802.2991 http://arxiv.org/pdf/0802.2991v2

I'm pretty sure that given the time to learn the calibration, you could make a million largely independent true predictions with a single error, and that having done so the unlikely statements would be less silly than "the LHC will destroy the world". Of course, "independent" is a weasel word; almost any set of true observations will fail to be independent in some sense.

"If each failure had a known 50% probability of occurring from natural causes, like a quantum coin or some such... then I suspect that if I actually saw that coin come up heads 20 times in a row, I would feel a strong impulse to bet on it coming up heads the next time around."

I would feel such an urge but would override it, just as I override the urge to see supernatural forces or the dark lords of the matrix in varied coincidences with similarly low individual probabilities. (I once rolled a six with a die 8 times out of nine, probability about one in 2 million. I once lost with a full house in poker, probability a bit under one in a hundred thousand.)

The huge problem with any probabilistic estimates is the assumption that repeated failures of the LHC are independent (like coin tosses) and infrequent. Why, in an immensely complex bit of novel engineering, with a vast number of components interacting, would you assume that? How many varieties of filament failed when Edison was trying to build a light bulb? Thousands: but that did not prove that the carbon filament light bulb was a danger to humanity, only that it was a very difficult problem to solve. It has taken more than twenty years to build the LHC: if it takes several years to get it working properly, would that be surprising?

@michael vassar

For the probability of a die coming up "6" eight times out of nine, I get about 1 in 200 thousand, not 1 in 2 million. If the die coming up anything (e.g., "1" or "3") eight times out of nine would have been similarly notable, I get 1 in 37 thousand.
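For anyone who wants to check the arithmetic, a throwaway sketch (treating "eight times out of nine" as at least eight of nine rolls of a fair die):

```python
from math import comb

# Chance that a fair die shows one particular face at least 8 times in 9 rolls.
p_specific = comb(9, 8) * (1/6)**8 * (5/6) + (1/6)**9
print(f"specific face: 1 in {1/p_specific:,.0f}")  # about 1 in 219,000

# If any of the six faces repeating like that would have been equally notable
# (the six events are mutually exclusive, since 8 + 8 > 9):
p_any = 6 * p_specific
print(f"any face:      1 in {1/p_any:,.0f}")       # about 1 in 37,000
```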

Why do you override the urge to see dark lords of the matrix in these sorts of coincidences? Calculations of how many such coincidences one would expect, given confirmation bias etc.? Belief that coincidence-detectable dark lords of the matrix are sufficiently unlikely that such calculations aren't worth making? A desire not to look or be crazy?

1e-6? You claim to be rationalists who project an expected 6,000+ deaths, or a one in a million chance of losing all human potential forever. I assume this is enough motivation to gather some evidence, or at least read the official LHC safety report.

What overwhelming evidence or glaring flaws in their arguments leaves you convinced at 1e-6 that the LHC is an existential risk?


William makes a good point!

One reason I dislike many precautionary arguments is that they seem to undervalue what we learn by doing things. Very often in science, when we have chased down a new phenomenon, we detect it by relatively small effects before the effects get big enough to be dangerous. For potentially dangerous phenomena, what we learn by exploring around the edges of the pit can easily be more valuable than the risk we faced of inadvertently landing in the pit in some early step before we knew it was there. Among other things, what we learn from poking around the edges of the pit may protect us from stuff there that we didn't know about that was dangerous even if we didn't poke around the pit.

There would definitely be benefit to be had in working out what things cause universe destruction. For example, suppose someone created a form of engine that relied on high-energy particles, and the device had a component whose failure allowed universe-destroying consequences to occur. We would find that in practice the component never failed, ever. Yet, in the course of just years, the Everett branches in which we lived would become sparse indeed!

How many LHC catastrophes would be worth enduring for that sort of knowledge? I'll leave it to someone far more experienced than I to make that judgement.

Personally, I would be entirely prepared to make a million statements with equal confidence to "The LHC will not destroy the world" without expecting to be wrong one or more times. The earth has experienced many particle collisions of energy equal to or much higher than those that would be produced by the LHC. Speculating on whether the LHC might destroy the earth seems similar to privileging the hypothesis that if you made a brick out of a particular type of clay and dropped it, it would smash the planet. High energy particle collisions may sound like a class with which we have too little experience to make such strong predictions, but in this case I think it's fair to say that isn't so.

Having only read the abstract of that paper, and not the full text, it seems to me that it does not suggest there is any reason to believe that the LHC would create stable black holes where ultra-high-energy cosmic ray particle collisions would not, only that if such black holes were created, they still could not destroy the earth within a stellar lifetime. Am I mistaken about this?

The issue is that, as the paper's authors explain but don't particularly emphasize, the argument you gave, which is the argument usually given, is flawed: if these black holes from cosmic rays don't Hawking-radiate but do lose their electric charge, then they have enough momentum to pass harmlessly through the Earth and Sun, which is not the case for some of the black holes that would be created by the LHC. Giddings and Mangano are some of the people assigned by CERN to study LHC safety, so this isn't something a crackpot made up. It turns out in the paper that there's an argument for safety that isn't (as far as I know) flawed, involving cosmic rays hitting neutron stars and white dwarfs, but this is a different (and far more involved-looking) argument than the one you based your extreme confidence on.

Okay, that makes sense. I was not aware of any mechanism by which the black hole would lose its charge, which would greatly increase its likelihood of passing entirely through a body such as the earth or sun. On reviewing the paper further, though, I note that they state that there is no known consistent set of physical laws that would lead the black hole to lose its charge but not release Hawking radiation. So even without the analysis of whether the particles should be able to pass through white dwarfs or neutron stars, I would be inclined to assign quite a low probability that such a consistent set of laws exists and is actually true, although not necessarily as low as one in a million.

Of course, I didn't have that data until reading the arguments on why such a set of laws should still not lead to the LHC being destructive, so it was never a major factor in my probability assessment that the LHC would be dangerous, but it makes sense that physicists who were aware of the principles involved would afford the proposition an extremely low probability.

I'm not sure the probability they arrive at is any higher than the standard, more ignorant one. It depends on how complicated our model of the universe gets when you can (basically) selectively ignore quantum mechanics (and odd things happen to general relativity too), and then you throw in the probability of the LHC producing a black hole moving slower than escape velocity (tiny already).

the probability of the LHC producing a black hole moving slower than escape velocity (tiny already)

The calculation is in appendix F of the paper. Apparently the probability is tiny for some values of the black hole mass and large for others, so if those others are at all plausible the total probability isn't tiny (all this being conditional on black holes being created in the first place).

Anyway, I've said my bit and since we all agree this scenario is too improbable to be a concrete worry, I'm going to bow out of the discussion.

But if you asked me whether I could make one million statements of authority equal to "The Large Hadron Collider will not destroy the world", and be wrong, on average, around once, then I would have to say no.

That's not terribly hard: 1) the first twenty bits in http://www.random.org/files/2012/2012-05-10.bin are not 1011 1001 0011 1011 0100; 2) the 21st to 40th bit in http://www.random.org/files/2012/2012-05-10.bin are not 0110 0100 1001 1110 0101; 3) the 41st to 60th bit in http://www.random.org/files/2012/2012-05-10.bin are not 1101 0010 1010 0110 1111; etc. :-)
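For concreteness, a sketch of how one might check such a list mechanically (assuming a local copy of the random.org file named above, and that it holds at least 20 million bits, i.e. 2.5 MB; if not, several days' files would do). Each statement is true with probability 1 - 2^-20, so a million of them should be wrong about once:

```python
# Sketch of the scheme above: a million statements of the form
# "bits 20k..20k+19 of the file are not <pattern>". Any pattern fixed in
# advance works; each statement fails with probability 2**-20.
PATTERN = "10111001001110110100"  # an arbitrary pre-committed 20-bit guess

with open("2012-05-10.bin", "rb") as f:   # local copy of the random.org file
    bits = "".join(f"{byte:08b}" for byte in f.read())

assert len(bits) >= 20_000_000, "need at least 2.5 MB of random bytes"

wrong = sum(bits[20 * k : 20 * k + 20] == PATTERN for k in range(1_000_000))
print(f"wrong statements: {wrong}")  # expected value is about 0.95
```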

If you tried to spell that out, the odds you'd make a mistake wouldn't be incredibly low.

Right -- though most of the mistakes I can think of would make the statement they're in more likely to be correct (with the exception of omitting the word “not”).