DanArmak comments on Avoiding doomsday: a "proof" of the self-indication assumption - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I don't see how the SIA refutes the complete DA (Doomsday Argument).
The SIA shows that a universe with more observers in your reference class is more likely. This is the set used when "considering myself as a random observer drawn from the space of all possible observers" - it's not really all possible observers.
How small is this set? Well, if we rely on just the argument given here for SIA, it's very small indeed. Suppose the experimenter stipulates an additional rule: he flips a second coin; if it comes up heads, he creates 10^10 extra copies of you; if tails, he does nothing. However, these extra copies are not created inside rooms at all. You know you're not one of them, because you're in one of the rooms. The outcome of the second coin flip is made known to you. But it clearly doesn't influence your bet on the doors' colors, even though it increases the number of observers in your universe by a factor of 10^10, and even though these extra observers are complete copies of your life up to this point, who are only placed in a different situation from you in the last second.
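The arithmetic behind this can be made explicit. Under SIA, each hypothesis is weighted by its prior times the number of observers in *your reference class*; observers you could never mistake yourself for (the extra copies outside the rooms) contribute nothing. A minimal sketch, with illustrative counts that are not from the original thought experiment:

```python
def sia_posterior(priors, observer_counts):
    """Posterior over hypotheses under SIA: prior times the number of
    observers in your reference class, renormalized."""
    weighted = [p * n for p, n in zip(priors, observer_counts)]
    total = sum(weighted)
    return [w / total for w in weighted]

# First coin: heads -> 1 observer in a room, tails -> 100 observers in rooms.
# SIA shifts you strongly toward the tails (many-observer) world.
print(sia_posterior([0.5, 0.5], [1, 100]))

# Second coin: 10^10 extra copies created *outside* the rooms. You know
# you're in a room, so they are outside your reference class and add 0 to
# the relevant counts -- the posterior on the door bet is unchanged.
print(sia_posterior([0.5, 0.5], [1 + 0, 100 + 0]))
```

Both calls return the same distribution, which is the point: only observers you could actually be get counted, so SIA's pull is limited to a narrow reference class.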
Now, the DA can be reformulated: instead of the set of all humans ever to live, consider the set of all humans (or groups of humans) who would never confuse themselves with one another. In this set the SIA doesn't apply (we don't predict that a bigger set is more likely). The DA does apply, because humans from different eras are dissimilar and can be indexed as the DA requires. To illustrate, I expect that if I were taken at any point in my life and instantly placed at some point of Leonardo da Vinci's life, I would very quickly realize something was wrong.
Presumed conclusion: if humanity does not become totally extinct, expect other humans to be more and more similar to yourself as time passes, until you survive only in a universe inhabited by a Huge Number of Clones.
It also appears that I should assign very high probability to the chance that a non-Friendly super-intelligent AI destroys the rest of humanity to tile the universe with copies of myself in tiny life-support bubbles. Or with simulators running my life up to then in a loop forever.