saturn comments on Exterminating life is rational - Less Wrong

Post author: PhilGoetz 06 August 2009 04:17PM


Comment author: TitaniumDragon 27 May 2014 10:46:04PM

I was directed here from FIMFiction.

Because of survivorship bias (https://en.wikipedia.org/wiki/Survivorship_bias), we really can't know the odds of doing something that ends up wiping out all life on the planet; nothing we have tried thus far has come close, or even really had the capability of doing so. Even global thermonuclear war, terrible as it would be, wouldn't end all life on Earth, and probably wouldn't even end human civilization (though it would be decidedly unpleasant, and hundreds of millions of people would die).

Some people thought the first nuclear bomb would ignite the atmosphere, but plenty of people didn't, and as for that three-in-a-million chance... I don't even know how they got at it, but it sounds like a typical wild guess to me. How would you even arrive at that figure?

Indeed, there is good reason to believe the atmosphere has survived comparable events before, in the form of impacts. This is the same reasoning by which we knew the LHC was safe: cosmic rays had already subjected us to considerably more energetic collisions. Some people claimed the LHC might destroy the universe, but the odds were actually 0 - it simply lacked the ability to do so, because if it were going to cause a vacuum collapse, the universe would already have been destroyed by such an event somewhere else. Meanwhile, the physics of small black holes means they're not a threat: they would decay almost instantly and would lack the gravity necessary to cause any real problems.

And thus far, if we actually look at what we've got, everything we have tried has had p = 0 of destroying civilization in reality (that is, the universe we -actually- live in), meaning that p = 3 x 10^-6 was hopelessly pessimistic. Just because someone can assign arbitrary odds to something doesn't mean they're right. In fact, it usually means they're bullshitting.
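(For what it's worth, there is a standard way to turn "we've tried n times and never seen it happen" into a number: the "rule of three", which says that after n failure-free independent trials, a 95% upper confidence bound on the per-trial probability is roughly 3/n. A minimal sketch, with the exact binomial bound for comparison; the trial count in the final comment is just an illustrative round number:)

```python
# Rule of three: after n independent trials with zero occurrences, an
# approximate 95% upper confidence bound on the per-trial probability
# is 3/n. The exact binomial bound solves (1 - p)**n = alpha for p.

def rule_of_three(n):
    """Approximate 95% upper bound after n failure-free trials."""
    return 3.0 / n

def exact_upper_bound(n, alpha=0.05):
    """Exact binomial upper bound: the p at which seeing zero events
    in n trials would still have probability alpha."""
    return 1.0 - alpha ** (1.0 / n)

# Illustrative: roughly 2000 nuclear tests with no atmosphere ignition
# would only justify an upper bound near 1.5e-3 -- far looser than 3e-6.
```

Note that this bounds the probability from above; observed zero failures never establishes p = 0, only that p is probably small.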

Remember NASA making up its odds of an individual bolt failing as one in 10^8? That's the sort of made-up number we're looking at here.

And that's the sort of made-up number I always see in these situations: people simply make something up, then pretend to justify it with math, when in reality it is just a guess. Statistics used as a lamppost - for support, not illumination.

And this is the biggest problem with all existential threats - the greatest existential threat to humanity is, in all probability, being smacked by a large meteorite, which is something we KNOW, for certain, happens every once in a while. And if we detected one early enough, we could actually prevent the impact.

Everything else is pretty much entirely made up guesswork, based on faulty assumptions, or very possibly both.

Of the "humans kill us all" scenarios, the most likely is some horribly transmissible genetically engineered disease, deliberately spread by madmen intent on global destruction. Even here, there are tons of barriers. First, and perhaps largest: crazy people have trouble pulling this sort of thing off; it requires a level of organization that tends to be beyond them. Second, it requires knowledge we currently lack - knowledge which, once we obtain it, may make containing such an outbreak relatively trivial. You speak of offense being easier than defense, but in the end a lot of technological systems are easier to break than to make, and understanding how to make something like this may well require understanding how to break it in the process (indeed, it may well be derived from figuring out how to break it). Third, we already have measures that require no technology at all - quarantines - which could stop such a thing from killing too many people. Even if you released it in a bunch of places simultaneously, you'd probably still fail to wipe out humanity, simply because there are too many people, spread too widely, for it to succeed. And fourth, you'd probably need to test it, and that would put you at enormous risk of discovery. I have my doubts about this scenario, but it is by far the likeliest sort of technological disaster.

Of course, if we have sentient non-human intelligences, they'd likely be immune to such a disease. And given our improvements in automation, controlling plague-swept areas is probably only going to get easier over time: why use soldiers who can be infected when we can patrol with drones?

Comment author: Vaniver 27 May 2014 11:47:23PM

I don't even know how they got at it, but it sounds like a typical wild guess to me. How would you even arrive at that figure?

Here is a contemporary paper discussing the risk, which doesn't seem to come up with the 3e-6 number, and here are some of Hamming's reflections. An excerpt from the second link:

Shortly before the first field test (you realize that no small scale experiment can be done--either you have a critical mass or you do not), a man asked me to check some arithmetic he had done, and I agreed, thinking to fob it off on some subordinate. When I asked what it was, he said, "It is the probability that the test bomb will ignite the whole atmosphere." I decided I would check it myself!

Compton claims (in an interview with Pearl Buck that I cannot easily find online) that 3e-6 was actually the decision criterion: if the estimate had come in higher than that, they were going to shut down the project as more dangerous than the Nazis. The estimate came in lower, and so they went ahead.

In modern reactors, they try to come up with a failure probability by putting distributions on unknown variables during potential events, simulating those events, and then figuring out what portion of the joint input distribution will lead to a catastrophic failure. One could do the same with unknown parameters like the cross-section of nitrogen at various temperatures; "this is what we think it could be, and we only need to be worried if it's over here."
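That reactor-style procedure can be sketched in a few lines of Python: draw each uncertain input from an assumed distribution, run a (here, toy) model of the event, and report the fraction of draws that land in the failure region. The distributions and the load-versus-strength failure model below are purely illustrative assumptions, not anyone's actual reactor or cross-section model:

```python
import random

def estimate_failure_probability(n_trials=100_000, seed=0):
    """Toy Monte Carlo risk estimate: sample uncertain parameters from
    assumed distributions, simulate the event, and return the fraction
    of draws that end in catastrophic failure."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_trials):
        # Hypothetical uncertain inputs: the stress the event puts on
        # the system, and the threshold the system can withstand.
        load = rng.lognormvariate(mu=0.0, sigma=0.5)
        strength = rng.normalvariate(mu=3.0, sigma=0.3)
        if load > strength:  # the "catastrophic" region of input space
            failures += 1
    return failures / n_trials

print(estimate_failure_probability())
```

With better-grounded input distributions, the same loop directly yields the "what portion of the joint input distribution leads to failure" number, and tightening any one input distribution shows how much that parameter's uncertainty actually matters.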