The doomsday argument says I have only a 10% chance of being within the first 10% of humans ever born, which gives nonzero information about when humanity will end. The argument has some problems with the choice of reference class; my favorite formulation (my own invention; I don't know whether it's already well known) is to use the recursive reference class of "all people who are considering the doomsday argument with regard to humanity". But this is not the issue I want to discuss right now.
Imagine your prior says the universe can contain 10, 1000, or 1000000 humans, with probabilities assigned arbitrarily to these three options. Then you learn that you're the 50th human ever born. As far as I can understand, after receiving this information you're certain to be among the first 10% of humans ever born, because that's true in every possible universe where you receive such information. Also, learning your index doesn't seem to tell you much about the date of doomsday: it doesn't change the relative probabilities of doomsday dates that are consistent with your existence. (This last sentence is true for any prior, not just the one I gave.) Is there something I'm missing?
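Here's a minimal sketch of the update I have in mind, using the prior from my example. The two likelihood rules are just my labels for the two possible readings: a flat likelihood over every population size consistent with my existence (the reading I'm describing), versus the self-sampling 1/N rule that the doomsday argument seems to need.

```python
# Sketch only: the prior and the two likelihood rules below are illustrative.
prior = {10: 1/3, 1000: 1/3, 1000000: 1/3}  # prior over the total number of humans N
k = 50  # evidence: my birth rank is 50

def posterior(likelihood):
    """Bayes update of the prior on N given the evidence 'my birth rank is k'."""
    unnorm = {n: prior[n] * likelihood(n) for n in prior}
    total = sum(unnorm.values())
    return {n: p / total for n, p in unnorm.items()}

# Reading 1 (mine): every N consistent with my existence gets the same likelihood,
# so the relative probabilities of the surviving options don't change.
flat = posterior(lambda n: 1.0 if n >= k else 0.0)

# Reading 2 (the self-sampling step the doomsday argument relies on): given N humans
# in total, the chance that my rank is exactly k is 1/N, which shifts weight
# toward small N.
ssa = posterior(lambda n: 1.0 / n if n >= k else 0.0)

print(flat)  # {10: 0.0, 1000: 0.5, 1000000: 0.5}        -- ratios among survivors preserved
print(ssa)   # {10: 0.0, 1000: ~0.999, 1000000: ~0.001}  -- the doomsday shift
```

The whole disagreement seems to live in which of those two likelihoods is the right one to use.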
I've been meaning to post about the Doomsday Argument for a while. I have a strong sense that it's wrong, but I've had a hell of a time trying to put my finger on how it fails. The best I can come up with is as follows: Aumann's agreement theorem says that two rational agents cannot disagree in the long run. In particular, two rational agents presented with the same evidence should update their probability distributions in the same direction. Suppose I learn that I am the 50th human, and I am led to conclude that it is far more likely that only 1000 humans will ever live than that 100,000 will. But suppose I go tell Bob that I'm the 50th human; it would be senseless for him to come to the same conclusion that I have. Formally, it looks something like this:
P(1000 humans | I am human #50) > P(1000 humans)
but
P(1000 humans | Skatche is human #50) = P(1000 humans)
where the right-hand sides are the prior probabilities. The same information has been conveyed in each case, yet very different conclusions have been reached. Since this cannot be, I conclude that the Doomsday Argument is mistaken. This could perhaps be adapted into an argument against anthropic reasoning more generally.
Why do you say that?
Suppose you have an urn containing consecutively numbered balls, but you don't know how many. Draw one ball from the urn and update your probabilities regarding the number of balls. Draw a second ball, and update again.
Two friends each draw one ball and then share information. I don't see why the ball you drew yourself should be privileged.
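A rough sketch of what I mean, with an illustrative prior over the number of balls (the specific numbers are just for the example, and I'm treating the draws as independent for simplicity):

```python
# Rough sketch; the prior and the with-replacement simplification are mine.
def update(prior, drawn):
    """Posterior over the number of balls N after learning that some uniformly
    drawn ball is numbered `drawn` (likelihood 1/N, or 0 if drawn > N)."""
    unnorm = {n: p * (1.0 / n if drawn <= n else 0.0) for n, p in prior.items()}
    total = sum(unnorm.values())
    return {n: q / total for n, q in unnorm.items()}

# Illustrative prior: the urn holds 10, 100 or 1000 balls, equally likely.
prior = {10: 1/3, 100: 1/3, 1000: 1/3}

after_mine = update(prior, 7)            # I draw ball #7
after_friends = update(after_mine, 62)   # my friend draws ball #62 and tells me

# The second step uses exactly the same rule as the first: the likelihood of a
# reported draw depends only on the number drawn and on N, not on who drew it.
print(after_mine)
print(after_friends)
```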
Two variants of this urn problem that may offer some insight into the Doomsday Argument:
The balls are not numbered