Quantum versus logical bombs
Child, I'm sorry to tell you that the world is about to end. Most likely. You see, this madwoman has designed a doomsday machine that will end all life as we know it - painlessly and immediately. It is attached to a supercomputer that will calculate the 10^100th digit of pi: if that digit is zero, we're safe. If not, we're doomed and dead.
However, there is one thing you are allowed to do: switch out the logical trigger and replace it with a quantum trigger, which instead generates a quantum event that will prevent the bomb from triggering with measure squared 1/10 (in the other branches, the bomb goes off). Are you ok with paying €5 to replace the trigger like this?
If you treat quantum measure squared exactly as probability, then you shouldn't see any reason to replace the trigger. But if you believe in many-worlds quantum mechanics (or think that MWI is correct with non-zero probability), you might be tempted to accept the deal - after all, everyone will survive in one branch. Strict total utilitarians may still reject the deal, though. Unless they refuse to treat quantum measure as akin to probability in the first place (meaning they would accept all quantum suicide arguments), they tend to see a universe with a tenth of the measure squared as exactly as valuable as a 10% chance of a universe with full measure. They'd even do the reverse, and replace a quantum trigger with a logical one, if you paid them €5 to do so.
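To make the total utilitarian's indifference concrete, here's a minimal sketch in Python (the utility numbers are illustrative assumptions, not part of the thought experiment):

```python
# Illustrative utilities (assumptions for this sketch, in arbitrary units).
U_ALIVE = 1.0  # a universe where humanity survives, at full measure
U_DEAD = 0.0   # a universe where the bomb goes off

# Logical trigger: a 1/10 *probability* that the 10^100th digit of pi is zero.
eu_logical = 0.1 * U_ALIVE + 0.9 * U_DEAD

# Quantum trigger, treating measure squared exactly as probability:
# a surviving branch of measure squared 1/10 counts for a tenth of U_ALIVE.
eu_quantum = 0.1 * U_ALIVE + 0.9 * U_DEAD

assert eu_logical == eu_quantum  # the strict total utilitarian is indifferent

# A quantum-immortality-style valuation instead gives the surviving
# branch (nearly) full weight, so the quantum trigger looks much better:
eu_quantum_mwi = 1.0 * U_ALIVE
```

On the first accounting the €5 buys nothing; the deal only becomes attractive once surviving branches are weighted by something other than their measure squared.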
Still, most people, in practice, would choose to swap the logical bomb for a quantum bomb, if only because they are slightly uncertain about their total utilitarian values. It seems self-evident that risking the total destruction of humanity is much worse than reducing its measure by a factor of 10 - a process that would be undetectable to everyone.
Of course, once you agree with that, we can start squeezing. What if the quantum trigger only has a 1/20 measure-squared "chance" of saving us? 1/1000? 1/10000? If you don't want to fully accept the quantum immortality arguments, you need to stop somewhere - but at what point?
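One way to see why the squeeze has no natural stopping point: suppose you assign credence c > 0 to the branch-counting valuation above, and the quantum trigger saves us with measure squared m. A toy calculation, under those assumptions (c and m are free parameters, not values from the post):

```python
def eu_logical(m, c, u=1.0):
    # Both valuations agree on the logical trigger: survival has probability m.
    return m * u

def eu_quantum(m, c, u=1.0):
    # With credence c, a surviving branch counts at full value;
    # with credence 1 - c, it counts by its measure squared m.
    return c * u + (1 - c) * m * u

# The gap is c * (1 - m) * u: positive for ANY c > 0, however small m gets.
for m in [1/10, 1/20, 1/1000, 1/10000]:
    print(m, eu_quantum(m, c=0.01) - eu_logical(m, c=0.01))
```

On this toy accounting, any non-zero credence keeps recommending the quantum trigger no matter how small m becomes - which is the full quantum immortality conclusion. Stopping earlier means the cutoff has to come from somewhere outside the expected-value calculation.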
Caught in the glare of two anthropic shadows
This article consists of original new research, so it would not get published on Wikipedia!
The previous post introduced the concept of the anthropic shadow: certain large and devastating disasters cannot be observed in the historical record, because if they had happened, we wouldn't be around to observe them. This absence forms the "anthropic shadow".
But that was the result for a single category of disasters. What would happen if we consider two independent classes of disasters? Would we see a double shadow, or would one ‘overshadow’ the other?
To answer that question, we’re going to have to analyse the anthropic shadow in more detail, and see that there are two separate components to it:
- The first is the standard effect: humanity cannot have developed a technological civilization if there were large catastrophes in the recent past.
- The second is the lineage effect: humanity cannot have developed a technological civilization if there was another technological civilization in the recent past that survived to today (or at least, we couldn't have developed the way we did).
To illustrate the difference between the two, consider the following model. Segment time into arbitrary "eras". In a given era, a large disaster may hit with probability q, and a small disaster may independently hit with probability q (hence with probability q², there will be both a large and a small disaster). A small disaster will prevent a technological civilization from developing during that era; a large one will prevent such a civilization from developing in that era or the next one.
If it is possible for a technological civilization to develop (no small disaster that era, no large one in the preceding era, and no previous civilization), then one will do so with probability p. We will assume p constant: our model will only span a time frame where p is unchanging (maybe it's the time period after the rise of big mammals?).
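To see both effects in one place, here's a quick Monte Carlo sketch of the model (the values of p and q are illustrative assumptions). Conditioning on being the first civilization forces a large-disaster-free era just before us - the standard shadow - while the lineage effect skews the record further back:

```python
import random

def simulate(n_eras, p, q):
    """One sampled history: per-era large disasters, plus the era in
    which the first technological civilization arises (None if never)."""
    small = [random.random() < q for _ in range(n_eras)]
    large = [random.random() < q for _ in range(n_eras)]
    for era in range(n_eras):
        blocked = small[era] or large[era] or (era > 0 and large[era - 1])
        if not blocked and random.random() < p:
            return large, era  # lineage effect: later civilizations are precluded
    return large, None

# Illustrative parameters -- assumptions for this sketch, not from the post.
P, Q, N_ERAS, RUNS = 0.1, 0.2, 50, 50_000
random.seed(0)

lag1 = lag2 = observers = 0
for _ in range(RUNS):
    large, era = simulate(N_ERAS, P, Q)
    if era is not None and era >= 2:
        observers += 1
        lag1 += large[era - 1]  # large disaster in the era just before ours
        lag2 += large[era - 2]  # large disaster two eras before ours

print(lag1 / observers)  # exactly 0.0: the standard anthropic shadow
print(lag2 / observers)  # not q: shifted by the lineage effect
```

In a long enough run, the frequency of large disasters one era back is exactly zero, while the frequency two eras back sits above the base rate q in this parameter regime: eras blocked by disasters are over-represented among the pasts in which no earlier civilization beat us to it.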
[Link]: Anthropic shadow, or the dark dusk of disaster
From a paper by Milan M. Ćirković, Anders Sandberg, and Nick Bostrom:
We describe a significant practical consequence of taking anthropic biases into account in deriving predictions for rare stochastic catastrophic events. The risks associated with catastrophes such as asteroidal/cometary impacts, supervolcanic episodes, and explosions of supernovae/gamma-ray bursts are based on their observed frequencies. As a result, the frequencies of catastrophes that destroy or are otherwise incompatible with the existence of observers are systematically underestimated. We describe the consequences of this anthropic bias for estimation of catastrophic risks, and suggest some directions for future work.
There cannot have been a large disaster on Earth in the last millennium, or we wouldn't be around to see it. There can't have been a very large disaster on Earth in the last ten thousand years, or we wouldn't be around to see it. There can't have been a huge disaster on Earth in the last million years, or we wouldn't be around to see it. There can't have been a planet-destroying disaster on Earth... ever.
Thus the fact that we exist precludes us seeing certain types of disasters in the historical record; as we get closer and closer to the present day, the magnitude of the disasters we can see goes down. These missing disasters form the "anthropic shadow", somewhat visible in the top right of this diagram:

[Figure: disaster magnitude versus time before present, with the anthropic shadow in the top right]
Hence even though it looks like the risk is going down (the magnitude is diminishing as we approach the present), we can't rely on this being true: it could be a purely anthropic effect.
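The abstract's underestimation claim is easy to reproduce in a toy simulation (all parameters here are illustrative assumptions): if each catastrophe has some chance of eliminating the observers, then the historical records that anyone survives to read contain systematically fewer catastrophes than the true rate implies.

```python
import random

random.seed(1)
TRUE_RATE = 0.01   # true per-century chance of a catastrophe (assumed)
CENTURIES = 1_000  # length of the historical record
SURVIVAL = 0.5     # chance observers survive each catastrophe (assumed)

observed = []
for _ in range(20_000):
    count, alive = 0, True
    for _ in range(CENTURIES):
        if random.random() < TRUE_RATE:
            count += 1
            if random.random() > SURVIVAL:
                alive = False  # no observers left to read the record
                break
    if alive:
        observed.append(count / CENTURIES)

# Averages roughly TRUE_RATE * SURVIVAL here -- well below the true rate.
print(sum(observed) / len(observed))
```

Setting SURVIVAL to zero for a given disaster class pushes its observed frequency all the way to zero, which is the hard shadow described above.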