The Doomsday argument is an anthropic argument for why we might expect doom to come reasonably soon. It's known that the Doomsday argument works under SSA, but not under SIA.
Ok, but since different anthropic probability theories are correct answers to different questions, what questions correspond to the Doomsday argument, and is the original claim correct?
No Doomsday on birth rank
Simplify the model by assuming there are two possible universes, each with probability 1/2: a large one (no Doomsday any time soon) with many, many future humans, and a small one (a Doomsday reasonably soon - within the next 200 billion people, say). In order to think in terms of frequencies, which comes more naturally to humans, we can imagine running the universe many, many times, each run independently having that 50-50 chance of Doomsday.
Roughly 108.5 billion humans have ever lived. So, asking:
- What proportion of people with birth rank 108.5 billion live in a small universe (with a Doomsday reasonably soon)?
The answer to that question converges to 1/2, the SIA probability. Half of the people with that birth rank live in small universes, half in large universes.
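To make the frequency picture concrete, here is a minimal simulation sketch of that question (the 10,000 billion figure for the large universe is an arbitrary placeholder of mine, not part of the argument):

```python
import random

# A minimal sketch (not from the post): run the toy model many times.
# Each run is a "universe", independently small (Doomsday soon, 200 billion
# people in total) or large (10,000 billion people, an arbitrary stand-in
# for "many, many future humans") with probability 1/2 each.
SMALL, LARGE = 200, 10_000      # total populations, in billions
RANK = 108.5                    # our birth rank, in billions
RUNS = 100_000

in_small = 0
total = 0
for _ in range(RUNS):
    size = SMALL if random.random() < 0.5 else LARGE
    if size >= RANK:            # this universe contains a person of our birth rank
        total += 1
        if size == SMALL:
            in_small += 1

# Every universe, small or large, contains exactly one person with birth rank
# 108.5 billion, so the fraction converges to 1/2 -- the SIA answer.
print(in_small / total)
```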
Doomsday for time travellers
To get an SSA version of the problem, we can ask:
- What proportion of universes, where a randomly selected human has a birth rank of 108.5 billion, will be small (with a Doomsday reasonably soon)?
This will give an answer close to 1, as it converges on the SSA probability.
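Spelling this out with the same toy numbers as before (again a sketch of mine, with the 10,000 billion "large" population as an arbitrary placeholder):

```python
# In each run, one human is selected uniformly at random from the whole
# universe; we then ask how often the universes where that selection lands
# on birth rank 108.5 billion turn out to be small.
SMALL, LARGE = 200e9, 10_000e9          # total populations

p_hit_given_small = 1 / SMALL           # chance the random pick has our rank
p_hit_given_large = 1 / LARGE

# Equal prior over the two universe types, conditioned on the pick having our rank:
p_small = p_hit_given_small / (p_hit_given_small + p_hit_given_large)
print(p_small)   # ~0.98 here, and -> 1 as the large universe grows: the SSA answer
```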
But note that this is generally not the question that the Doomsday argument is posing. If there is a time traveller who is choosing people at random from amongst all of space and time - then if they happen to choose you, that is a bad sign for the future (and yet another reason you should go with them). Note that this is consistent with conservation of expected evidence: if the time traveller is out there but doesn't choose you, then this is a (very mild) update towards no Doomsday.
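To spell out the conservation of expected evidence in the toy model (writing $N_s$ and $N_l \gg N_s$ for the small and large populations, my own shorthand), if the time traveller picks one person uniformly from everyone who ever lives:

$$
P(\text{small} \mid \text{picked}) = \frac{\tfrac12 \cdot \tfrac{1}{N_s}}{\tfrac12 \cdot \tfrac{1}{N_s} + \tfrac12 \cdot \tfrac{1}{N_l}} = \frac{N_l}{N_l + N_s} \approx 1,
\qquad
P(\text{small} \mid \text{not picked}) = \frac{1 - \tfrac{1}{N_s}}{\left(1 - \tfrac{1}{N_s}\right) + \left(1 - \tfrac{1}{N_l}\right)} \lesssim \tfrac12,
$$

and the two posteriors, weighted by the probabilities of being picked or not, average back to the prior of $\tfrac12$.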
But for the classical non-time-travel situation, the Doomsday argument fails.
Thanks again for the useful response.
My initial argument was really a question “Is there any approach to anthropic reasoning that allows us to do basic scientific inference, but does not lead to Doomsday conclusions?” So far I’m skeptical.
The best response you’ve got is, I think, twofold.
That might perhaps work, but it does look horribly convoluted. To me it does seem like determining the conclusion in advance (you want SIA to favour universes 1 and 2 over 3 and 4, but not favour 1 over 2) and then hacking around with SIA until it gives that result.
Incidentally, I think you’re still not out of the woods with a volume cutoff. If it is very large in the time dimension, then SIA is going to start favouring universes which have Boltzmann Brains in the very far future over universes whose physics don’t ever allow Boltzmann Brains. And then SIA is going to suggest that not only are we probably in a universe with lots of BBs, but that we most likely are BBs ourselves (because almost all observers with exactly our experiences are BBs). So SIA calls for further surgery, either to remove BBs from consideration or to apply the 4-volume cutoff in a way that doesn’t lead to lots of Boltzmann Brains.
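In odds form, the SIA update I have in mind is (with $n_X$ as my shorthand for the number of observers with exactly our experiences in universe $X$):

$$
\frac{P_{\mathrm{SIA}}(\text{BB universe} \mid \text{our experiences})}{P_{\mathrm{SIA}}(\text{no-BB universe} \mid \text{our experiences})}
= \frac{P(\text{BB universe})}{P(\text{no-BB universe})} \cdot \frac{n_{\mathrm{BB}}}{n_{\mathrm{no\text{-}BB}}},
$$

so once a long-enough time cutoff makes $n_{\mathrm{BB}} \gg n_{\mathrm{no\text{-}BB}}$, the BB-friendly universe dominates, and within it almost all of those observers are themselves Boltzmann Brains.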
The problem with this is that ADT with unbounded utility functions doesn’t lead to stable conclusions. So you have to bound or truncate the utility function.
But then ADT is going to pay the most attention to universes whose utility is close to the cutoff ... namely versions of universes 1, 2, 3, 4 which have utility at or near the maximum. For the reasons I’ve already discussed above, that’s not in general going to give the same results as applying a volume cutoff. If the utility scales with the total number of observers (or observers like me), then ADT is not going to say “Make decisions as if you were in universe 1 or 2 ... but with no preference between these ... rather than as if you were in universe 3 or 4”.
I think the most workable utility function you’ve come up with is the one based on subjective bubbles of order galactic volume or thereabouts i.e. the utility function scales roughly linearly with the number of observers in the volume surrounding you, but doesn’t care about what happens outside that region (or in any simulations, if they are of different regions). Using that is roughly equivalent to applying a volume truncation using regular astronomical volumes (rather than much larger volumes).
However, the hack to avoid simulations looks a bit unnatural to me (why wouldn’t I care about simulations which happen to be in the same local volume?). Also, I think this utility function might then tend to favour “zoo” hypotheses or “planetarium” hypotheses (i.e. decisions are made as if in a universe densely packed with planetaria containing human-level civilisations, rather than simulations of such civilisations).
More worryingly, I doubt if anyone really has a utility function that looks like this, i.e. one that cares about observers 1 million light years away just as much as it cares about observers here on Earth, but then stops caring if they happen to be 1 trillion light years away...
So again I think this looks rather like assuming the right answer, and then hacking around with ADT until it gives the answer you were looking for.