“Where is my Doomsday?” asks a prepper on a conspiracy site. “I spent thousands of dollars on ammunition and ten years waiting, and still nothing. My ammo is rusting!”

There is a general problem with predicting the end of the world: it keeps not happening. There are many reasons for this, but one is purely mathematical: if something hasn’t happened for a long time, that is strong evidence it will not happen any time soon. If we have had no nuclear war for 70 years, its probability tomorrow is very small, no matter how tense international relations look.

The first to observe this was Laplace with the “sunrise problem”. He asked: what is the probability that the Sun will not rise tomorrow, given that it has risen every day for the last 5000 years? He derived an equation: the probability of no sunrise is roughly 1/N (more precisely 1/(N+2)), where N is the number of days on which the Sun has risen. This is known as the rule of succession, and Laplace gave an even more general equation for it, which can account for a situation where the Sun had missed several sunrises.
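As a minimal sketch, the rule of succession can be written out directly. The function name and the Python framing are illustrative, not anything from Laplace; the only assumption is a uniform prior over the unknown chance of the event, and the 5000-year figure is taken from the sunrise example above:

```python
from fractions import Fraction

def rule_of_succession(successes: int, trials: int) -> Fraction:
    # Laplace's rule of succession: probability that the next trial succeeds,
    # given `successes` successes in `trials` trials, under a uniform prior
    # on the unknown success probability.
    return Fraction(successes + 1, trials + 2)

# Sunrise problem: the Sun has risen on every one of N observed days.
N = 5000 * 365
p_no_sunrise = 1 - rule_of_succession(N, N)  # exactly 1 / (N + 2)
print(float(p_no_sunrise))  # about 5.5e-7, i.e. roughly 1/N for large N
```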

The fact that something hasn’t happened for a long time is evidence that some unknown causal mechanism provides stability for the observed system, even if all visible causal mechanisms point to “the end is nigh”.

“You see, the end of the US is near: the dollar debt pyramid is unsustainable, growing by more than a trillion dollars every year,” a prepper would say. But the dollar has been a fiat currency for decades, and it is very unlikely to fail tomorrow.

The same rule of succession can be used to get a rough prediction of the end times. If there has been no nuclear war for 70 years, there is roughly a 50 per cent chance that one will happen in the next 70 years. This is the Doomsday argument in J. R. Gott’s version.
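A rough sketch of Gott’s delta-t reasoning, under the assumption that we observe the system at a random point of its total lifetime (the function name and interval form are my illustration, not Gott’s notation):

```python
def gott_interval(t_past: float, confidence: float = 0.5) -> tuple[float, float]:
    # Gott's delta-t argument: if the present moment is a random point of the
    # system's total lifetime, then with probability `confidence` the remaining
    # lifetime lies between t_past*(1-c)/(1+c) and t_past*(1+c)/(1-c).
    c = confidence
    return t_past * (1 - c) / (1 + c), t_past * (1 + c) / (1 - c)

# 70 years without nuclear war: the symmetric 50% interval for the remaining
# "peaceful" period runs from about 23 to 210 years.
print(gott_interval(70, 0.5))  # (23.33..., 210.0)
```

Note that under this assumption the median remaining duration equals the past duration, which is the “50 per cent chance in the next 70 years” figure above.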

Surely, something bad will happen within decades. But your ammo will rust first. However, at the civilizational level, we should invest in preventing global risks even if they have a small probability, as in the long run this ensures our survival.

This could be called the “Reverse Doomsday Argument”, as it claims that doomsday is unlikely to be very near. In AI safety, it is a (relatively weak) argument against near-term AI risk, that is, against the claim that dangerous AI is less than 5 years away.


Comments (2):
If your ammo's rusting after only a decade, you didn't store it well enough - better to learn that now than 10 years after the collapse. In any case, you should have two types of stored items: short-term supplies, maybe a year or two's worth, and long-term sustainable supplies, for indefinite use.

The short-term stuff you should cycle through as you use it for practice, daily eating, etc., so it never gets more than a year or two old anyway. The long-term stuff is things you expect to maintain, repair, make parts for, etc. This inspection and maintenance needs to be part of your routine as well.

Or, acknowledge that you're not willing to put the effort in for long-term preparations, and just keep a few weeks' stash. This is all short-term, and should be used/inspected/replaced appropriately.

I don't actually think Laplace's formulas work for chaotic systems like global ecologies, economies, or cultures. Induction fails when you can't map future events very well. If you flip a coin 100 times, what can that tell you about an upcoming die roll? You can't step in the same river twice, and it's very hard to find the right categories of past events to predict future ones.

If you're willing to go back more than 70 years, in the US at least, the math suggests prepping is a good strategy:

https://medium.com/s/story/the-surprisingly-solid-mathematical-case-of-the-tin-foil-hat-gun-prepper-15fce7d10437