
turchin comments on The Doomsday argument in anthropic decision theory - Less Wrong Discussion

Post author: Stuart_Armstrong 31 August 2017 01:44PM




Comment author: turchin 02 September 2017 03:26:29PM

I constructed a practical example that, I think, demonstrates the correctness of your point of view as I understand it.

Imagine that there are 1000 civilizations in the Universe and 999 of them will go extinct in their early stage. The one civilization that will not go extinct can survive only if it spends billions upon billions on a large prevention project.
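(A minimal sketch of the base rates this setup implies; the numbers are the ones above, and the little script below is only an illustrative restatement, not part of the argument.)

```python
# Base rates implied by the setup above (numbers from the example;
# nothing here goes beyond restating them).
N = 1000                 # civilizations in the Universe
doomed = 999             # go extinct in their early stage no matter what
p_doom = doomed / N      # objective frequency of doom per civilization
p_savable = 1 - p_doom   # chance of being the one civilization that can be saved

print(f"P(doom)    = {p_doom}")     # 0.999 -- "almost 1"
print(f"P(savable) = {p_savable}")  # 0.001 -- and only if it pays the huge cost
```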

Each civilization independently develops the DA argument in its early stage and concludes that the probability of Doom is almost 1. Each civilization then has two options in its early stage:

1) Start partying, trying to get as much utility as possible before the inevitable catastrophe.

2) Ignore the anthropic update and go all in on a desperate attempt at catastrophe prevention.

If we choose option 1, then all other agents with decision processes similar to ours will come to the same conclusion; even the civilization that could have survived will not attempt to, and as a result all intelligent life in the universe will die off.

If we choose option 2, we will most likely fail anyway, but one of the civilizations will survive.
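A minimal sketch of the two linked policies, under the assumption (mine, for illustration only) that exactly one randomly placed civilization is savable and that every civilization follows the same policy; the `survivors` helper is hypothetical:

```python
import random

N = 1000  # total civilizations; exactly one can be saved, and only by prevention

def survivors(policy: str) -> int:
    """Count surviving civilizations when all N of them follow the same policy."""
    savable = random.randrange(N)  # no civilization knows in advance if it is this one
    count = 0
    for i in range(N):
        if policy == "prevent" and i == savable:
            count += 1  # prevention works only for the savable civilization
        # under "party", or for the other 999, the civilization goes extinct
    return count

print("All party:   survivors =", survivors("party"))    # always 0
print("All prevent: survivors =", survivors("prevent"))  # always 1

# From any single civilization's viewpoint, its own prevention attempt succeeds
# with probability 1/N = 0.001 -- "we will most likely fail anyway" -- yet the
# linked policy of attempting is the only one under which anyone survives.
```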

The choice depends on our utilitarian perspective: if we are interested only in our own civilization's well-being, option 1 gives us higher expected utility, but if we care about the survival of other civilizations, we should choose option 2, even if we believe the probability is against us.

Is this example correct from the point of view of ADT?

Comment author: Stuart_Armstrong 02 September 2017 06:32:04PM

This is a good illustration of anthropic reasoning, but it's an illustration of the presumptuous philosopher, not of the DA (though they are symmetric in a sense). Here we have people saying "I expect to fail, but I will do it anyway because I hope others will succeed, and we all make the same decision". Hence it's the total utilitarian (who is the "SIAish" agent) who is acting against what seem to be the objective probabilities.

http://lesswrong.com/lw/8bw/anthropic_decision_theory_vi_applying_adt_to/