CarlShulman comments on If life is unlikely, SIA and SSA expectations are similar - Less Wrong

3 Post author: Stuart_Armstrong 15 November 2011 04:45PM


Comment author: CarlShulman 15 November 2011 09:45:25PM 4 points

This is kind of irrelevant to normal applications of SIA to estimates of the frequency of civilizations: you're assuming we know this model with infinite certainty, and restricting maximum populations to ludicrously low levels. But in reality we'll also have uncertainty about the model, e.g. about whether life is unlikely or not, and populations could be immense. If we assign even a little weight to other models on which life is likely, then SIA will strongly update us towards them.

The example in this post is similar to saying "assume a fair coin, which comes up Heads for its first trillion flips; what is the probability that the next flip will be Heads?" Yes, given the wacky assumption of infinite certainty in the fair coin model the probability for the next flip is 0.5, but in fact one should assign some prior credence to other models, and the trillion-Heads streak should give a strong update towards them.
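The update in the coin analogy can be spelled out as a toy Bayesian calculation. The specific prior split below (0.999 on the fair coin, 0.001 on an always-Heads coin) is an illustrative assumption, not a figure from the comment:

```python
import math

# Prior log-odds favoring the fair-coin model over an always-Heads model.
# The 0.999 / 0.001 prior split is an illustrative assumption.
prior_log_odds_fair = math.log(0.999) - math.log(0.001)

n = 10**12  # a trillion observed Heads

# Each observed Heads has likelihood 0.5 under the fair coin and 1.0 under
# the always-Heads model, so each flip shifts the log-odds by log(0.5).
posterior_log_odds_fair = prior_log_odds_fair + n * math.log(0.5)

# The posterior log-odds are astronomically negative: even a tiny prior
# weight on the always-Heads model makes it dominate after the streak.
print(posterior_log_odds_fair)
```

Under infinite certainty in the fair-coin model the first term is infinite and the streak changes nothing; with any nonzero weight on the rival model, the second term swamps the first.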

Comment author: Stuart_Armstrong 16 November 2011 11:14:22AM 1 point

This is kind of irrelevant to normal applications of SIA to estimates of the frequency of civilizations

Agreed. I'm not making much of a point here, just noting that some models make little distinction between SIA and SSA - this may be relevant, for instance, to the presumptuous philosopher. If presumptuous philosophers are unlikely, then Anthropic Decision Theory may push even selfless philosophers towards SSA.