Here is a simplified version of the Doomsday Argument in Anthropic Decision Theory (ADT), intended to make the intuitions easier to grasp.

Assume a single agent A exists: an average utilitarian with utility linear in money. Their species survives with 50% probability; denote this event by S. If the species survives, there will be 100 people in total; otherwise A is the only member of its species. An independent coin lands heads with 50% probability; denote this event by H.

Agent A must price a coupon CS that pays out €1 on S, and a coupon CH that pays out €1 on H. The coupon CS pays out only on S, so its reward only exists in a world with a hundred people; to the average utilitarian, a payout of €1 in such a world raises the average by only (€1)/100, so that is what the coupon is worth there. Hence the expected worth of CS is (€1)/200 = (€2)/400.

But H is independent of S, so (H,S) and (H,¬S) both have probability 25%. In (H,S), there are a hundred people, so CH is worth (€1)/100. In (H,¬S), there is one person, so CH is worth (€1)/1 = €1. Thus the expected value of CH is (€1)/400 + (€100)/400 = (€101)/400, more than 50 times the value of CS.

Note that C¬S, the coupon that pays out €1 on doom, has an even higher expected value: its reward always arrives in a world with a single person, so it is worth (€1)/2 = (€200)/400.
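
For concreteness, here is a minimal Python sketch (not part of the original argument) that reproduces these numbers by summing probability × (€1 / population) over the worlds in which each coupon pays out:

```python
from fractions import Fraction

# The four equiprobable worlds (S, H), with population 100 if the species
# survives (S) and 1 otherwise.
worlds = [
    {"S": True,  "H": True,  "pop": 100, "prob": Fraction(1, 4)},
    {"S": True,  "H": False, "pop": 100, "prob": Fraction(1, 4)},
    {"S": False, "H": True,  "pop": 1,   "prob": Fraction(1, 4)},
    {"S": False, "H": False, "pop": 1,   "prob": Fraction(1, 4)},
]

def coupon_value(pays_out):
    """Expected average-utilitarian value of a coupon paying €1 whenever pays_out(world) holds."""
    return sum(w["prob"] * Fraction(1, w["pop"]) for w in worlds if pays_out(w))

print(coupon_value(lambda w: w["S"]))       # CS:  1/200  (= 2/400)
print(coupon_value(lambda w: w["H"]))       # CH:  101/400
print(coupon_value(lambda w: not w["S"]))   # C¬S: 1/2    (= 200/400)
```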

So, H and S have identical probability, but A assigns CS and CH different expected utilities, with a higher value for CH, simply because S is correlated with survival and H is independent of it (and A assigns an even higher value to C¬S, which is anti-correlated with survival). This is a phrasing of the Doomsday Argument in ADT.

I still think that this explanation fails the criterion "explain it as if I am 5". I copy below my comment, in which I try to construct a clearer example of ADT reasoning for a civilization at risk of extinction, and which you said is, in fact, a presumptuous philosopher variant (I hope to create an example which is applicable to our world's situation):

Imagine that there are 1000 civilizations in the Universe, and 999 of them will go extinct at an early stage. The one civilization that will not go extinct can survive only if it spends billions of billions on a large prevention project. Each civilization independently develops the DA at its early stage and concludes that the probability of Doom is almost 1. Each civilization has two options at this early stage:

1) Start partying, trying to get as much utility as possible before the inevitable catastrophe.
2) Ignore the anthropic update and go all in on a desperate attempt to prevent the catastrophe.

If we choose option 1, then all other agents with decision processes similar to ours will come to the same conclusion, so even the civilization that was able to survive will not attempt to, and as a result all intelligent life in the universe will die off.

If we choose 2, we will most likely fail anyway, but one of the civilizations will survive.

The choice depends on our utilitarian perspective: if we are interested only in our own civilization's well-being, option 1 gives us higher utility, but if we care about the survival of other civilizations, we should choose option 2, even if we believe the odds are against us.
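
Here is a rough Python sketch of that comparison; all the specific numbers (party utility, prevention cost, survival utility) are purely illustrative assumptions of mine, not taken from the comment:

```python
# Illustrative assumptions: 1000 civilizations, exactly one of which can
# survive, and only if it goes all in on prevention. All civilizations run
# the same decision procedure, so they all pick the same option.
N = 1000                  # number of civilizations
PARTY_UTILITY = 1         # utility from partying before the catastrophe
PREVENTION_COST = 5       # cost of the all-in prevention project
SURVIVAL_UTILITY = 1000   # utility of actually surviving

def selfish_expected_utility(option):
    """Expected utility for one civilization that does not know whether it is the survivable one."""
    if option == "party":
        return PARTY_UTILITY
    # "prevent": pay the cost for sure; with probability 1/N the project
    # actually buys this civilization's survival.
    return -PREVENTION_COST + SURVIVAL_UTILITY / N

def surviving_civilizations(option):
    """How many civilizations are left if every civilization picks this option."""
    return 0 if option == "party" else 1

for option in ("party", "prevent"):
    print(option, selfish_expected_utility(option), surviving_civilizations(option))
# With these numbers "party" wins on selfish expected utility (1 vs -4),
# but only "prevent" leaves any intelligent life in the universe.
```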

in which I try to construct a clearer example of ADT reasoning for a civilization at risk of extinction, and which you said is, in fact, a presumptuous philosopher variant (I hope to create an example which is applicable to our world's situation)

I do not think a sensible ADT DA can be constructed for reasonable civilizations. In ADT, only agents with weird utilities, such as average utilitarians, have a DA.

SSA has a DA. ADT has a SSAish like agent, which is the average utilitarian. Therefore, ADT must have a DA. I constructed it. And it turns out the ADT DA via this has no real doom aspect to it; it has behaviour that looks like avoiding doom, but only for agents with strange preferences. ADT does not have a DA with teeth.