The doomsday argument, in its simplest form, claims that since two thirds of all humans will be among the final two thirds of all humans, we should conclude it is more likely that we are in the final two thirds of all humans who have ever lived than in the first third. Given our current quasi-exponential population growth, this would mean that we are likely very close to the end of humanity. The argument gets somewhat more sophisticated than that, but that's it in a nutshell.
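To put very rough numbers on it (an illustration of the scale involved, not part of the argument as usually stated): something like 100 billion humans are commonly estimated to have been born so far. If you are in the final two thirds, then those born up to now make up at least a third of everyone who will ever live, so the all-time total is at most around 300 billion, leaving at most around 200 billion future births; at anything like today's birth rate of very roughly 130 million a year, that is on the order of a thousand or two thousand years, and far less if the population keeps growing.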
There are many immediate rebuttals that spring to mind; there is something about the doomsday argument that brings out a certainty in most people that it must be wrong. But nearly all of those supposed rebuttals are erroneous (see Nick Bostrom's book Anthropic Bias: Observation Selection Effects in Science and Philosophy). Essentially the only consistent low-level rebuttal to the doomsday argument is to use the self-indication assumption (SIA).
The non-intuitive form of SIA simply says that since you exist, it is more likely that your universe contains many observers, rather than few; the more intuitive formulation is that you should consider yourself as a random observer drawn from the space of possible observers (weighted according to the probability of that observer existing).
Even in that form, it may seem counter-intuitive; but I came up with a series of small steps leading from a generally accepted result straight to the SIA. This clinched the argument for me. The starting point is:
A - A hundred people are created in a hundred rooms. Room 1 has a red door (on the outside), the outsides of all other doors are blue. You wake up in a room, fully aware of these facts; what probability should you put on being inside a room with a blue door?
Here, the probability is certainly 99%. But now consider the situation:
B - same as before, but an hour after you wake up, it is announced that a coin will be flipped, and if it comes up heads, the guy behind the red door will be killed, and if it comes up tails, everyone behind a blue door will be killed. A few minutes later, it is announced that whoever was to be killed has been killed. What are your odds of being blue-doored now?
There should be no difference from A: since your odds of dying are exactly fifty-fifty whether you are blue-doored or red-doored, learning that the killing has taken place should not change your estimate. The further modifications are then:
C - same as B, except the coin is flipped before you are created (the killing still happens later).
D - same as C, except that you are only made aware of the rules of the set-up after the people to be killed have already been killed.
E - same as C, except the people to be killed are killed before awakening.
F - same as C, except the people to be killed are simply not created in the first place.
I see no justification for changing your odds as you move from A to F; but 99% odds of being blue-doored at F is precisely the SIA: you are saying that a universe with 99 people in it is 99 times more probable than a universe with a single person in it.
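To make the endpoint F concrete, here is a minimal simulation sketch (hypothetical code; the function name run_scenario_f and the trial count are my own, not part of the argument). It models "you" as a uniform draw from all observers created across many independent runs of F, which builds in exactly the SIA-style weighting, so it illustrates what the 99% claim amounts to rather than proving it.

```python
import random

# A minimal sketch (not from the post): scenario F, where the coin is
# flipped before anyone is created.  Heads -> only the 99 blue-door people
# are created; tails -> only the single red-door person is created.
# "You" are modelled as a uniform draw from every observer created across
# many independent runs -- which is precisely the SIA-style weighting.

def run_scenario_f(trials=100_000, seed=0):
    rng = random.Random(seed)
    blue_observers = 0
    total_observers = 0
    for _ in range(trials):
        if rng.random() < 0.5:   # heads: the red-door person is never created
            blue_observers += 99
            total_observers += 99
        else:                    # tails: the blue-door people are never created
            total_observers += 1
    return blue_observers / total_observers

print(f"P(blue door | I exist) ~= {run_scenario_f():.3f}")  # ~0.99
```

Pooling observers across runs is, of course, the very step that someone who rejects the SIA would refuse to take in a genuinely one-shot world.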
If you can't see any flaw in the chain either, then you can rest easy, knowing the human race is no more likely to vanish than objective factors indicate (ok, maybe you won't rest that easy, in fact...)
(Apologies if this post is preaching to the choir of flogged dead horses along well beaten tracks: I was unable to keep up with Less Wrong these past few months, so may be going over points already dealt with!)
EDIT: Corrected the language in the presentation of the SIA, after
Sure.
Wrong. Your epistemic situation is no longer the same after the announcement.
In a single-run (one-small-world) scenario, the coin has a 50% chance to come up heads and a 50% chance to come up tails. (In MWI, or a large universe containing many similar situations, it comes up both ways, which changes the result; the MWI predictions match yours, but they don't back the SIA.) Here I assume the single-run case.
The prior for the coin result is 0.5 for heads, 0.5 for tails.
Before the killing, P(red|heads) = P(red|tails) = 0.01 and P(blue|heads) = P(blue|tails) = 0.99. So far we agree.
P(red|before) = 0.5 (0.01) + 0.5 (0.01) = 0.01
Afterwards, P'(red|heads) = 0, P'(red|tails) = 1, P'(blue|heads) = 1, P'(blue|tails) = 0.
P(red|after) = 0.5 (0) + 0.5 (1) = 0.5
So after the killing, you should expect either color door to be 50% likely.
This, of course, is exactly what the SIA denies. The SIA is obviously false.
So why does the result seem counterintuitive? Because in practice, and certainly when we evolved and were trained, single-shot situations didn't occur.
So let's look at the MWI case. Heads and tails both occur, but each with 50% of the original measure.
Before the killing, we again have P(heads) = P(tails) = 0.5
and P(red|heads) = P(red|tails) = 0.01 and P(blue|heads) = P(blue|tails) = 0.99.
Afterwards, P'(red|heads) = 0, P'(red|tails) = 1, P'(blue|heads) = 1, P'(blue|tails) = 0.
Huh? Didn't I say it was different? It sure is, because afterwards we no longer have P(heads) = P(tails) = 0.5. On the contrary, most of the conscious measure (number of people) now resides behind the blue doors: each branch keeps 50% of the original measure, but the heads branch contains 99 surviving observers and the tails branch only 1, so weighting by observers gives effective probabilities P(heads) = 0.99, P(tails) = 0.01.
P(red|after) = 0.99 (0) + 0.01 (1) = 0.01
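A quick frequency sketch of the contrast being drawn here (hypothetical code, not part of the comment): run the experiment many times and tally the result two ways, once per run and once per surviving observer pooled across runs. The first tally reproduces the single-run answer of 0.5, the second the measure-weighted answer of 0.01; which tally corresponds to a single one-shot "I" is the point actually in dispute.

```python
import random

# Hypothetical sketch of the contrast drawn in this comment, for the
# setup of B/C: 1 person behind a red door, 99 behind blue doors,
# a fair coin, heads kills red, tails kills the blues.
# We tally the outcome two ways: by counting runs (the single-run
# calculation) and by counting surviving observers pooled across runs
# (the measure-weighted, MWI-style calculation).

def simulate(trials=100_000, seed=0):
    rng = random.Random(seed)
    red_runs = 0          # runs in which the survivor is the red-door person
    red_survivors = 0     # surviving people behind red doors, pooled over runs
    total_survivors = 0
    for _ in range(trials):
        heads = rng.random() < 0.5
        if heads:                      # red person killed; 99 blues survive
            surv_red, surv_blue = 0, 99
        else:                          # blues killed; 1 red survives
            surv_red, surv_blue = 1, 0
        red_runs += 1 if surv_red else 0
        red_survivors += surv_red
        total_survivors += surv_red + surv_blue
    return red_runs / trials, red_survivors / total_survivors

per_run, per_observer = simulate()
print(f"fraction of runs whose survivors are red:    {per_run:.3f}")       # ~0.50
print(f"fraction of pooled surviving observers red:  {per_observer:.3f}")  # ~0.01
```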
No; you need to apply Bayes' theorem here. Intuitively, before the killing you are 99% sure you're behind a blue door, and if you survive you should take that as evidence that (yay!) the coin did not land tails and kill the blues. Mathematically, you just have to remember to use your old posteriors as your new priors:
P(red|survival) = P(red)·P(survival|red)/P(survival) = 0.01·(0.5)/(0.5) = 0.01
So SIA + Bayesian updating happens t...
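The Bayes computation above is easy to check by brute force (a hypothetical sketch, not part of the comment; the room numbering and function name are my own): fix one particular person before the killing, repeat the experiment many times, and look only at the runs in which that person survives.

```python
import random

# Brute-force check of the Bayes computation above (hypothetical sketch).
# Fix one particular person before the killing, repeat the experiment,
# and keep only the runs in which that person survives.

def p_red_given_survival(trials=200_000, seed=0):
    rng = random.Random(seed)
    survived = 0
    survived_and_red = 0
    for _ in range(trials):
        my_room = rng.randrange(100)     # room 0 has the red door
        i_am_red = (my_room == 0)
        heads = rng.random() < 0.5       # heads kills red, tails kills the blues
        i_survive = (not i_am_red) if heads else i_am_red
        if i_survive:
            survived += 1
            if i_am_red:
                survived_and_red += 1
    return survived_and_red / survived

print(f"P(red | I survived) ~= {p_red_given_survival():.3f}")  # ~0.01
```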