TL;DR: "crazy" predictions based on anthropic reasoning seem crazy only because they contradict our exaggerated expectations of an excessively good future.
Let’s look at two statements:
- I am in the middle of the set of all humans ranked by birth rank, and therefore a large human civilization will exist for at most a few more millennia.
- I am at the very beginning of human history, and humanity will exist for billions of years and will colonize the whole galaxy.
The first statement is the Doomsday argument (DA) in a nutshell, and it is generally regarded as wrong. It could be wrong for two reasons: either its conclusion is false, or its logic is flawed.
Often someone argues that the DA's logic must be flawed because its conclusion is false, which presumes that the optimistic statement 2 is true. That is: we will survive for billions of years; therefore, we will not die out in the next few millennia; and thus the logic of the DA is wrong.
However, the optimistic statement 2 looks plausible only to a person who is deep into transhumanism, nanotech, etc., but ignores x-risks. For many people the doomy statement 1 is more probable, especially for those who are deep into climate change, nuclear war risks, and so on.
For a techno-optimist, the conclusion of the DA seems wrong not because the argument is inherently flawed, but because it contradicts our best hopes for a great future – so rejecting the DA is wishful thinking.
Everything adds up to normality. Since the DA uses mediocrity reasoning, it is "normal" by definition, in a tautological sense: I am typical, therefore I am in the middle, therefore the end is as far away as the beginning. The DA does not say that the end is very near.
But the end becomes surprisingly near if we use birth rank for the calculation, yet real clock time for the timing of the end. As I said in "Each reference class has its own end," we should define "the end" in the same terms in which we define the reference class. Therefore, being in the middle of the birth ranks is neither surprising nor especially bad. It only says that there will be tens of billions of births in the future. It does not even say that there will be a catastrophe at the end.
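To make the birth-rank arithmetic concrete, here is a minimal Gott-style sketch. The figure of ~110 billion humans born so far is a rough, commonly cited estimate and an assumption of this sketch, not a claim from the text:

```python
# Gott-style mediocrity arithmetic: if my fractional position f = n / N
# among all N humans who will ever be born is uniform on (0, 1), then:
#   median:  f = 0.5  ->  N = 2n   ->  about n more births remain
#   95%:     f > 0.05 ->  N < 20n  ->  fewer than 19n births remain

n = 110e9  # births so far; rough, commonly cited estimate (assumption)

median_remaining = 2 * n - n        # remaining births at the median
upper_remaining_95 = 20 * n - n     # 95% upper bound on remaining births

print(f"median remaining births:       {median_remaining:.3g}")
print(f"95% upper bound on remaining:  {upper_remaining_95:.3g}")
```

Note that the output is stated purely in births: nothing here mentions calendar time or a catastrophe, which is exactly the point of the paragraph above.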
The DA prediction of being in the middle of the birth ranks becomes bad and surprising only when we combine it with the expected exponential growth of the population (or a very high plateau). In that case, all these billions of births will happen within the next few millennia. That suggests an abrupt end to the exponential growth, which is interpreted as a global catastrophe. But there are obviously other possible population scenarios: the population could slowly decline without extinction, or everyone could become immortal while the birth rate declines (as is now happening in rich countries).
Anyway, the DA becomes surprising only when it is applied to our optimistic expectation that the human population will remain very high.
You are at a bus stop, and have been waiting for a bus for 5 min. The "doomsday logic" says that you are expected to wait another 5 min. 5 min later without a bus you are expected to wait another 10 min. If you look at the reference class of all bus stop waits, some of them have a bus coming in the next minute, some in 10, some in an hour, some next day, some never (because the route changed). You can't even estimate the expected value of the bus wait time until you narrow the reference class to a subset where "expected value" is even meaningful, let alone finite. To do that, you need extra data other than the time passed. Without it you get literally ZERO information about when the bus is coming. You are stuck in Knightian uncertainty. So it's best not to fret about the Doomsday argument as is, and focus on collecting extra data, like what x-risks are there, what the resolution to the Fermi paradox might be, etc.
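The dependence on the reference class can be seen in a small simulation. For a memoryless (exponential) wait distribution, having already waited tells you nothing about the remaining wait; for a heavy-tailed (Pareto) one, the longer you have waited, the longer you should expect to keep waiting — the "waited 5 min, expect 5 more" pattern. The distributions and parameters here are illustrative assumptions:

```python
import random

random.seed(0)

def mean_remaining(draw, waited, trials=200_000):
    """Monte Carlo estimate of E[T - waited | T > waited]."""
    total, count = 0.0, 0
    for _ in range(trials):
        t = draw()
        if t > waited:
            total += t - waited
            count += 1
    return total / count

exp_wait = lambda: random.expovariate(1 / 5)       # mean 5 min, memoryless
pareto_wait = lambda: 5 * random.paretovariate(2)  # heavy tail, alpha = 2

for waited in (5, 10):
    print(f"waited {waited} min: "
          f"exponential -> ~{mean_remaining(exp_wait, waited):.1f} more min, "
          f"Pareto -> ~{mean_remaining(pareto_wait, waited):.1f} more min")
```

The exponential walker always expects about 5 more minutes no matter how long they have waited, while the Pareto walker's expected remaining wait grows in proportion to the wait so far. Without knowing which class of distribution the bus route belongs to, the elapsed time alone cannot tell you which behavior applies.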
While the setup of the sunrise problem is somewhat contrived, the bus-waiting problem is ubiquitous. For example, I may be waiting for some process on my computer to terminate, or for a file to start downloading. The rule of thumb is that if it has not terminated within a few minutes, it will not terminate soon, and it is better to kill the process.
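That rule of thumb is easy to automate: give the process a deadline up front and kill it when the deadline passes, rather than waiting indefinitely. A minimal sketch using Python's standard subprocess module (the command and timeout values are illustrative):

```python
import subprocess
import sys

def run_with_deadline(cmd, timeout_s):
    """Run cmd; if it does not finish within timeout_s seconds, kill it.
    Returns (finished, returncode_or_None)."""
    try:
        result = subprocess.run(cmd, timeout=timeout_s)
        return True, result.returncode
    except subprocess.TimeoutExpired:
        return False, None  # deadline passed; subprocess.run killed the child

# A short task that finishes well within its deadline:
print(run_with_deadline([sys.executable, "-c", "import time; time.sleep(0.1)"], 10))
```

Picking the deadline is, of course, the same reference-class problem in miniature: a sensible timeout comes from what you know about this kind of task, not from how long you have already been staring at it.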
Leslie, in "The End of the World," suggested a version of the DA which is independent of assumptions about the probability distributions of events. He suggested that if we assume a deterministic universe without world-branching, then any process has u...