People sometimes think that the doomsday argument is implausible because it always says we are more likely to die out sooner than our other reasoning suggests, regardless of the situation. There’s something dubious about an argument that reaches the same conclusion about the world regardless of any evidence about it. Nick Bostrom paraphrases the objection: “But isn’t the probability that I will have any given rank always lower the more persons there will have been? I must be unusual in some respects, and any particular rank number would be highly improbable; but surely that cannot be used as an argument to show that there are probably only a few persons?” (He does not himself endorse this objection.)

That this reasoning is wrong is no new insight. Bostrom explains, for instance, that in any given comparison of futures of different lengths, the doomsday reasoning doesn’t always give you the same outcome: you might have learned that your birth rank ruled out the shorter future. It remains the case, though, that the shift from whatever you currently believe to what the doomsday argument tells you to believe is always toward shorter futures. I think it is this that seems fishy to people.
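
To make the direction of that shift concrete, here is a minimal sketch in Python of the Bayesian update behind the doomsday argument. The numbers are made up for illustration: a “short” future totalling 200 billion people, a “long” one totalling 200 trillion, a 50/50 prior between them, and a birth rank of roughly 100 billion. Under the Self-Sampling Assumption your rank is treated as a uniform draw from everyone who will ever live, so any rank compatible with both futures shifts belief toward the shorter one; only a rank too high to have come from the short future pushes the other way.

```python
# A toy version of the doomsday-style update, with assumed numbers.

def posterior_over_futures(rank, futures, priors):
    """Update beliefs over total-population hypotheses given a birth rank,
    treating the rank as a uniform draw from all people who will ever live."""
    likelihoods = [1.0 / n if rank <= n else 0.0 for n in futures]
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)
    return [j / total for j in joint]

futures = [200e9, 200e12]   # total people in the short vs the long future
priors = [0.5, 0.5]         # whatever you believed before hearing the argument

# A rank like ours (~100 billionth person) fits either future, so the update
# shifts belief strongly toward the short one.
print(posterior_over_futures(100e9, futures, priors))   # ~[0.999, 0.001]

# A rank above 200 billion would instead rule the short future out entirely.
print(posterior_over_futures(300e9, futures, priors))   # [0.0, 1.0]
```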

I maintain that the predictable direction of the argument’s conclusion is not a problem at all, and I would like to make this vivid.

Once a farmer owned a herd of cows. He would diligently count them, to ensure none had escaped and to discover whether there were any new calves. He would count them by lining them up and running his tape measure along the edge of the line.

“One thousand cows,” he exclaimed one day. “Fifty new calves!”

His neighbour heard him from a nearby field and asked what he was talking about. The farmer held out his tape measure. The incredulous neighbour explained that since cows are more than an inch long, the figures would need some recalculation. Since the cows were about five feet long on average, or sixty inches, the neighbour guessed the farmer would need to divide his number by 60. But the farmer quickly saw that this argument must be bogus. If his neighbour were right, then whatever number of cows he counted, the argument would say he had fewer. What kind of argument would that be?

A similar one to the doomsday argument’s claim that the future should always be shorter than we otherwise think. In such cases the claim is that your usual method of dealing with evidence is biased, not that there is some particular piece of uncommon evidence you didn’t know about.

Similarly, the Self-Indication Assumption’s ‘bias’ toward larger worlds is taken as a reason against it. Yet it is just a claim that our usual method is biased toward small worlds.
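
For contrast, here is the same toy calculation with an SIA-style prior, again with made-up numbers. Weighting each hypothesis by how many observers it contains, before applying the same birth-rank update, exactly cancels the doomsday shift, which is why the two ‘biases’ mirror each other.

```python
# The same assumed numbers, but with the prior reweighted SIA-style:
# each hypothesis gets weight proportional to how many observers it contains.
futures = [200e9, 200e12]
priors = [0.5, 0.5]
rank = 100e9

sia_weights = [p * n for p, n in zip(priors, futures)]
sia_priors = [w / sum(sia_weights) for w in sia_weights]   # ~[0.001, 0.999]

# Applying the same birth-rank (SSA-style) update to the SIA-weighted prior:
likelihoods = [1.0 / n if rank <= n else 0.0 for n in futures]
joint = [p * l for p, l in zip(sia_priors, likelihoods)]
posterior = [j / sum(joint) for j in joint]
print(posterior)   # [0.5, 0.5]: the doomsday shift is exactly cancelled
```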

