Do you have any thoughts on the interaction between a great filter and anthropics? For example, maybe the reason we live in our universe is that the great filter is early: by virtue of existing now, we have already passed through a great filter that eliminated other, faster-growing civilizations that would have overrun us. And if quantum immortality seems likely, maybe the existence of a late filter is irrelevant, because we are unlikely to find ourselves in universes where we don't pass through a late great filter.
A bit of a tangent to the post, I know, but seems a decent place to bring it up.
I think of anthropics as an issue of decision, not probability: https://www.fhi.ox.ac.uk/wp-content/uploads/Anthropic_Decision_Theory_Tech_Report.pdf
The quantum suicide/immortality argument is a bit harder to parse; it's clearest when there's an immediate death/survival quantum event, and less clear in the run-up to potentially destructive events, when the amplitudes for the observer are the same on either branch.
QI will help not the whole civilisation to survive, but only a single observer. He will be either the last human in a post-apocalyptic world, or an AI.
That's only true if there exists a quantum future in which the observer can survive alone indefinitely. It could be that, absent a sufficient number of survivors, there are thermodynamic limits on how long an observer can survive, in which case there might be branches where a lone survivor lives out 100 or even 1000 years, but I wouldn't really call that quantum immortality.
The only such future is one where he will be able to constantly upgrade himself and become a posthuman or an AI upload. After that stage, he will be able to create new beings. Basically, it means that the remote QI future is very positive.
I would add that the most uncertain thing is the interaction between different x-risks, as it seems that most of them could happen in a very short period of time, like the 10-20 years just before the creation of powerful AI. I call this epoch "oscillations before the Singularity", and for me the main question is whether we will be able to survive it.
A post suggested by James Miller's presentation at the Existential Risk to Humanity conference in Gothenburg.
Seeing the emptiness of the night sky, we can dwell upon the Fermi paradox: where are all the alien civilizations that simple probability estimates imply we should be seeing?
Especially given the ease of moving within and between galaxies, the cosmic emptiness implies a Great Filter: something that prevents planets from giving birth to star-spanning civilizations. One worrying possibility is that advanced civilizations end up destroying themselves before they reach the stars.
The Great Filter as an Outside View
In a sense, the Great Filter can be seen as an ultimate example of the Outside View: we might have all the data and estimates we believe we could ever need for our models, but if those models predict that the galaxy should be teeming with visible life, then it doesn't matter how reliable they seem: they must be wrong.
In particular, if you fear a late Great Filter - if you fear that civilizations are likely to destroy themselves - then you should increase your fear, even if "objectively" everything seems to be going all right. After all, presumably the other civilizations that destroyed themselves also thought everything was going all right. You can then adjust your actions using your knowledge of the Great Filter - but presumably other civilizations also thought of the Great Filter and adjusted their actions, and that didn't save them. So maybe you need to try something different again, or maybe you can do something that breaks the symmetry from a timeless decision theory perspective, like sending a massive signal to the galaxy...
The Great Filter isn't magic
It can all get very headache-inducing. But, just as the Outside View isn't magic, the Great Filter isn't magic either. If advanced civilizations destroy themselves before becoming space-faring or leaving an imprint on the galaxy, then there is some phenomenon that causes this. What can we say if we look analytically at the Great Filter argument?
First of all, suppose we had three theories: an early Great Filter (technological civilizations are rare), a late Great Filter (technological civilizations destroy themselves before becoming space-faring), or no Great Filter. Then we look up at the empty skies and notice no aliens. This rules out the third theory, but leaves the relative probabilities of the other two intact.
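As a minimal sketch of that update (the priors here are made up, and the empty sky is treated as certain under either filter hypothesis and impossible without one):

```python
# Made-up priors over the three theories.
priors = {"early filter": 0.3, "late filter": 0.3, "no filter": 0.4}
# Likelihood of seeing an empty sky under each theory (a deliberate simplification).
likelihood_empty_sky = {"early filter": 1.0, "late filter": 1.0, "no filter": 0.0}

unnormalised = {h: priors[h] * likelihood_empty_sky[h] for h in priors}
total = sum(unnormalised.values())
posterior = {h: p / total for h, p in unnormalised.items()}

print(posterior)  # {'early filter': 0.5, 'late filter': 0.5, 'no filter': 0.0}
# The early:late ratio (here 1:1) is exactly what it was before the observation.
```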
Then we can look at objective evidence. Is human technological civilization likely to end in a nuclear war? Possibly, but are the odds in the 99.999% range that would be needed to explain the Fermi Paradox? Every year that has gone by has reduced the likelihood that nuclear war is very very very very likely. So a late Great Filter may have seemed quite probable compared with an early one, but much of the evidence we see is against it (especially if we assume that AI - which is not a Great Filter! - might have been developed by now). Million-to-one prior odds can be overcome by merely 20 bits of information.
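As a quick check of that last claim (the prior odds and the direction of the update are purely illustrative):

```python
# Each bit of evidence doubles the odds, so 20 bits multiply them by 2**20 ~= a million.
prior_odds = 1e-6                   # million-to-one against (illustrative)
likelihood_ratio = 2 ** 20          # 20 bits of evidence ~= 1,048,576
posterior_odds = prior_odds * likelihood_ratio
print(posterior_odds)               # ~1.05, i.e. roughly even odds
```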
And what about the argument that prior civilizations would also have known of the Great Filter, and thus we need to do more than they would have? In your estimation, is the world currently run by people taking the Great Filter argument seriously? What is the probability that the world will be run by people who take the Great Filter argument seriously? If this probability is low, we don't need to worry about the recursive aspect; the ideal situation would be one where we achieve:
1. Powerful people taking the Great Filter argument seriously.
2. Evidence that it was hard to make powerful people take the argument seriously.
Of course, successfully achieving 1 is evidence against 2, but the Great Filter doesn't work by magic. If it looks like we achieved something really hard, then that's some evidence that it is hard. Every time we find something that is unlikely under a late Great Filter, that shifts some of the probability mass away from the late Great Filter and onto alternative hypotheses (early Great Filter, zoo hypothesis, ...).
Variance and error of x-risk estimates
But let's focus narrowly on the probability of the late Great Filter.
Current estimates for the risk of nuclear war are uncertain, but let's arbitrarily assume that the risk is 10% (overall, not per year). Suppose one of two papers comes out:
Paper A shows that current estimates of nuclear war risk have not accounted for a lot of key facts; when these facts are added in, the risk of nuclear war drops to 5%.
Paper B is a massive model of international relations, with a ton of data, excellent predictors, and multiple lines of evidence, all pointing towards the real risk being 20%.
What would either paper mean from the Great Filter perspective? Well, counter-intuitively, papers like A typically increase the probability of nuclear war being a Great Filter, while papers like B decrease it. This is because none of 5%, 10%, or 20% is large enough to account for the Great Filter, which requires probabilities in the 99.99% range. And, though paper A decreases the probability of nuclear war, it also leaves more room for uncertainties - we've seen that a lot of key facts were missing from previous papers, so it's plausible that there are key facts still missing from this one. On the other hand, though paper B increases the probability, it makes it unlikely that the probability will be raised any further.
So if we fear the Great Filter, we should not look at risks whose probabilities are high, but at risks whose uncertainty is high - where the probability of us making an error is high. If we consider our future probability estimates as random variables, then the one whose variance is higher is the one to fear. So a late Great Filter would make biotech risks even worse (current estimates of the risk are poor), while not really changing asteroid impact risks (current estimates of the risk are good).
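To make this concrete, here is a minimal sketch of the paper A/paper B comparison, under a toy assumption of my own (not part of the post): each paper's point estimate is either roughly right, or the underlying model is simply wrong, in which case we know nothing and treat the true risk as uniform on [0, 1]. The model-error probabilities of 30% and 5% are purely illustrative.

```python
# Toy mixture model: "model right" vs "model wrong, risk uniform on [0, 1]".
FILTER_LEVEL = 0.9999  # the sort of probability a late Great Filter would need

def p_filter_level(point_estimate: float, p_model_wrong: float) -> float:
    """Probability that the true risk is at Great Filter level."""
    # If the model is right, a 5%, 10% or 20% estimate is nowhere near 99.99%,
    # so it contributes essentially nothing.
    p_if_model_right = 0.0 if point_estimate < FILTER_LEVEL else 1.0
    # If the model is wrong, assume total ignorance: uniform over [0, 1].
    p_if_model_wrong = 1.0 - FILTER_LEVEL
    return (1 - p_model_wrong) * p_if_model_right + p_model_wrong * p_if_model_wrong

# Paper A: lower estimate (5%), but previous papers missed key facts,
# so the chance that the current model is still wrong is high (say 30%).
paper_a = p_filter_level(0.05, 0.30)
# Paper B: higher estimate (20%), but many independent lines of evidence,
# so the chance that the model is wrong is low (say 5%).
paper_b = p_filter_level(0.20, 0.05)

print(f"Paper A implies P(filter-level risk) ~ {paper_a:.1e}")  # ~3e-05
print(f"Paper B implies P(filter-level risk) ~ {paper_b:.1e}")  # ~5e-06
```

Despite paper A's lower point estimate, its larger chance of model error dominates the probability of filter-level risk - which is exactly what the variance argument is pointing at.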