Katja Grace has just presented an ingenious model, claiming that SIA combined with the great filter generates its own variant of the doomsday argument. Robin echoed this on Overcoming Bias. We met soon after Katja had come up with the model, and I endorsed it, saying that I could see no flaw in the argument.
Unfortunately, I erred. The argument does not work in the form presented.
First of all, there is the issue of time dependence. We are not just a human level civilization drifting through the void in blissful ignorance about our position in the universe. We know (approximately) the age of our galaxy, and the time elapsed since the big bang.
How is this relevant? It is relevant because all arguments about the great filter are time-dependent. Imagine we had just reached consciousness and human-level civilization, by some fluke, two thousand years after the creation of our galaxy, by an evolutionary process that took two thousand years. We see no aliens around us. In this situation, we have no reason to suspect any great filter; if we asked ourselves "are we likely to be the first civilization to reach this stage?" then the answer is probably yes. No evidence for a filter.
Imagine, instead, that we had reached consciousness a trillion years into the life of our galaxy, again via an evolutionary process that took two thousand years, and we see no aliens or traces of aliens. Then the evidence for a filter is overwhelming; something must have stopped all those previous likely civilizations from emerging into the galactic plane.
So neither of these civilizations can be included in our reference class (indeed, the second one can only exist if we ourselves are filtered!). So the correct reference class to use is not "the class of all potential civilizations in our galaxy that have reached our level of technological advancement and seen no aliens", but "the class of all potential civilizations in our galaxy that have reached our level of technological advancement at around the same time as us and seen no aliens". Indeed, SIA, once we update on the present, cannot tell us anything about the future.
But there's more. Let us lay aside, for the moment, the issue of time dependence. Let us instead consider the diagrams in Katja's post as if the vertical axis were time: all potential civilizations start at the same point, and progress at the same rate. Is there still a role for SIA?
The answer is... it depends. It depends entirely on your choice of prior. To illustrate this, consider this pair of early-filter worlds:
To simplify, I've flattened the diagram, and now consider only two states: human civilizations and basic lifeforms. And here are some late filter worlds:
Assign an equal prior of 1/4 to each of these worlds. Then the prior probability of living in a late filter world is (1/4+1/4)=1/2, and the same holds for early filter worlds.
Let us now apply SIA. This boosts the probabilities of Y and B at the expense of X and A. Y and B each end up with a probability of 1/3, while X and A each end up with a probability of 1/6. The posterior probability of living in a late filter world is (1/3+1/6)=1/2, and the same goes for early filter worlds. Applying SIA has not changed the odds of late versus early filters.
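To make the arithmetic explicit, here is a minimal sketch of the SIA update in Python (the function name and observer counts are my own illustrative choices, not anything from Katja's post): each world's prior is multiplied by the number of observers in our reference class that the world contains, and the results are renormalised.

```python
from fractions import Fraction

def sia_posterior(priors, observers):
    """SIA update: weight each world's prior by the number of
    observers in our reference class it contains, then renormalise."""
    weighted = {w: p * observers[w] for w, p in priors.items()}
    total = sum(weighted.values())
    return {w: p / total for w, p in weighted.items()}

# Equal priors of 1/4; Y and B contain twice as many human-level
# civilizations as X and A, so SIA weights them by a factor of two.
priors = {w: Fraction(1, 4) for w in "XYAB"}
observers = {"X": 1, "Y": 2, "A": 1, "B": 2}

post = sia_posterior(priors, observers)
print(post)                   # X: 1/6, Y: 1/3, A: 1/6, B: 1/3
print(post["A"] + post["B"])  # late filter: 1/2 - unchanged by SIA
```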
But people might feel this is unfair; that I have loaded the dice, especially by giving world Y the same prior as the others. It has too many primitive lifeforms; it's too unlikely. Fine then; let us give prior probabilities as follows:
X | Y | A | B |
---|---|---|---|
2/30 | 1/30 | 18/30 | 9/30 |
This prior does not exactly over-weight the chance of human survival! The prior probability of a late filter is (18/30+9/30)=9/10, while that of an early filter is 1/10. But now let us consider how SIA changes those odds: Y and B are weighted by a factor of two, while X and A are weighted by a factor of one. The posterior probabilities are thus:
X | Y | A | B |
---|---|---|---|
1/20 | 1/20 | 9/20 | 9/20 |
The posterior probability of a late filter is (9/20+9/20)=9/10, the same as before: again, SIA has not changed the probability of where the filter is. But it gets worse; if, for instance, we had started with the priors:
X | Y | A | B |
---|---|---|---|
1/30 | 2/30 | 18/30 | 9/30 |
This is the same as before, but with the priors of X and Y swapped. The early filter still has only one chance in ten, a priori. But now if we apply SIA, the posterior probabilities of X and Y are 1/41 and 4/41, totalling 5/41 > 1/10. Here, applying SIA has increased our chances of survival!
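Running the same sketch (reusing sia_posterior and observers from above) reproduces both of these calculations: the skewed prior leaves the late filter at 9/10, while swapping X and Y pushes the early filter above 1/10.

```python
# Skewed priors from the first table: SIA leaves the late filter at 9/10.
priors = {"X": Fraction(2, 30), "Y": Fraction(1, 30),
          "A": Fraction(18, 30), "B": Fraction(9, 30)}
post = sia_posterior(priors, observers)
print(post["A"] + post["B"])  # 9/10 - unchanged

# Swap the priors of X and Y: SIA now raises the chance of an early filter.
priors["X"], priors["Y"] = priors["Y"], priors["X"]
post = sia_posterior(priors, observers)
print(post["X"] + post["Y"])  # 5/41 > 1/10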
In general, there are a lot of reasonable priors over possible worlds where SIA makes little or no difference to the odds of the great filter, either way.
Conclusion: Do I believe that this has demonstrated that the SIA/great filter argument is nonsense? No, not at all. I think there is a lot to be gained from analysing the argument, and I hope that Katja or Robin or someone else - maybe myself, when I get some spare time, one of these centuries - sits down and goes through various scenarios, looks at classes of reasonable priors and evidence, and comes up with a conclusion about what exactly SIA says about the great filter, the strength of the effect, and how sensitive it is to prior changes. I suspect that when the dust settles, SIA will still slightly increase the chance of doom, but that the effect will be minor.
Having just saved humanity, I will now return to more relaxing pursuits.
That's why you don't see any worlds where the filter happens after our current stage - those worlds are not in our reference class (to use outdated SSA terminology). We can't use SIA on them.
There still is a way of combining SIA with the filter argument; it goes something like:
1) Use SIA on the present time to show there are lots of civilizations at our level around now.
2) Use a distribution over possible universes to argue that 1) implies there were also lots of civilizations at our level in the past.
3) From 2), argue that the filter is in our future.
The problem is 2). There are universes in which there is no great filter, but whose probability is boosted by SIA - say, slow-start simultaneous worlds, where it takes several billion years for life to get going, but life is never filtered at all, and now the galaxy is filled with civilizations at approximately our level. This world is very unlikely - but SIA boosts its probability!
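As a toy illustration (with numbers invented purely for the example, again reusing sia_posterior from above): give the slow-start, no-filter world a prior of 1/100 but a thousand present-day civilizations, against a typical filtered world with one.

```python
# Toy numbers: a low-prior, slow-start world with no filter but many
# present-day civilizations is strongly boosted by SIA.
priors = {"slow_start_no_filter": Fraction(1, 100),
          "typical_filtered": Fraction(99, 100)}
observers = {"slow_start_no_filter": 1000, "typical_filtered": 1}
post = sia_posterior(priors, observers)
print(post["slow_start_no_filter"])  # 1000/1099, roughly 0.91
```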
So until we have some sensible distributions over possible worlds with filters, we can't assert SIA+great filter => DOOM. I feel it's intuitively likely that SIA does increase doom somewhat, but that's not a proof.
This dispute about 2) seems a little desperate to me as a way out of doom.
Surely there is a high prior probability for universes whose density of civilizations does NOT rise dramatically at a crucial time close to our own (such that at around our time t_0 ~ 13 billion years, the density of civilizations at our level is high, whereas at times very slightly before t_0 in cosmological terms, the density is very low)? If you assume that, with high probability, lots of civilizations now implies lots of civilizations a million years ago (but still none of them expa...