Our universe might be fine-tuned for life because there are a huge number of universes, each with different laws of physics, and only under a tiny set of these laws can sentient life exist - so we shouldn't be surprised to find ourselves in one of these fine-tuned universes.
I was assuming Solomonoff Induction over the full space of computable universes, which is a more principled take on fine-tuning and selection effects. We should expect to find ourselves in the universe described by the simplest theory (TOE) which explains our observations.
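To make that concrete: Solomonoff induction weights each candidate theory by its description length. In its standard textbook form (nothing here is specific to my post),

$$P(T) \propto 2^{-K(T)}$$

where K(T) is the length of the shortest program that computes theory T; conditioning on our observations then concentrates probability on the shortest consistent TOE.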
Our universe might also be fine-tuned for the Fermi paradox, especially if advanced civilizations often create paperclip maximizers.
Paperclip maximizers are a specific absurdity with probability near zero, and I find that discussing them sucks insight out of the discussion.
Perhaps if you look at the subset of all possible laws of physics under which sentient life can exist,
This full set is infinite, complex, and irrelevant - for all we know much of this space could have life radically different from our own. It is more productive to focus on the subset of the multiverse with physics like ours - compatible with our observations. In other words - we are hardly random observers sampled from the full set of 'sentient life'.
in a tiny subset of these you will get a Fermi paradox because, say, some quirk in the laws of physics makes interstellar travel very hard or creates a trap that destroys all civilizations before they become spacefaring.
Interstellar travel does look pretty hard, but not hard enough to prevent slow colonization. For this argument to apply to our section of the multiverse, it would need to involve new unknown physics. This is one possibility - but it seems low probability compared to other options as discussed in my post.
creates a trap that destroys all civilizations before they become spacefaring. Civilizations such as ours will constantly arise in these universes.
But then they are destroyed. I didn't describe this in my post, but from an observational selection standpoint, it is crucial to consider the effect of deep simulations. The universes that produce lots of deep simulations with sentient observers will tend to swamp out all other possibilities, such as those where civilizations arise but do not produce deep simulations.
The models I discussed in my post all tend to produce enormous amounts of computation applied to deep simulation - which creates enormous numbers of observers such as ourselves and vastly outweighs universes where civilizations are destroyed.
In contrast, imagine that in universes fine-tuned for life but not the Fermi paradox civilizations often create some kind of paperclip maximizer that spreads at the maximum possible speed making the development of further life impossible.
Right - we don't live in that kind of universe.
Paperclip maximizers are a specific absurdity with probability near zero, and I find that discussing them sucks insight out of the discussion.
Strongly disagree, and in general it's dangerous to dismiss an argument by asserting that it's stupid and that merely discussing it is bad.
This full set [of all possible laws of physics under which sentient life can exist] is infinite,
How can you be so sure of this?
Our sun appears to be a typical star: unremarkable in age, composition, galactic orbit, or even in its possession of many planets. Billions of other stars in the Milky Way have similar general parameters and orbits that place them in the galactic habitable zone. Extrapolations of recent exoplanet surveys reveal that most stars have planets, removing yet another potential unique dimension for a great filter in the past.
According to Google, there are 20 billion Earth-like planets in the galaxy.
A paradox indicates a flaw in our reasoning or our knowledge which, upon resolution, may cause some large update in our beliefs.
Ideally we could resolve this through massive multiscale Monte Carlo computer simulations to approximate Solomonoff Induction on our current observational data. If we survive and create superintelligence, we will probably do just that.
In the meantime, we are limited to constrained simulations, Fermi estimates, and other shortcuts to approximate the ideal Bayesian inference.
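As a toy illustration of what I mean by a Fermi estimate shortcut, here is a minimal Drake-style calculation; every parameter value below is an illustrative assumption, not an output of the models discussed here:

```python
# Toy Drake-style Fermi estimate of detectable civilizations in the galaxy.
# Every parameter value is an illustrative assumption, not a measurement.

star_formation_rate = 1.5     # new stars per year in the Milky Way (rough)
frac_with_planets   = 1.0     # exoplanet surveys suggest planets are the norm
habitable_per_star  = 0.2     # Earth-like planets per star (order of magnitude)
p_life              = 0.1     # assumed: life arises on a habitable planet
p_intelligence      = 0.01    # assumed: life yields a technological species
civ_lifetime_years  = 1e6     # assumed: duration of a detectable civilization

n_civs = (star_formation_rate * frac_with_planets * habitable_per_star
          * p_life * p_intelligence * civ_lifetime_years)

print(f"Expected detectable civilizations: {n_civs:.0f}")  # -> 300 here
# The output swings over many orders of magnitude as the assumed
# probabilities vary, which is exactly why the paradox has force.
```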
The Past
While there is still obvious uncertainty concerning the likelihood of the series of transitions along the path from the formation of an Earth-like planet around a Sol-like star up to an early tech civilization, the general direction of recent evidence favours a strong Mediocrity Principle.
Here are a few highlight developments from the last few decades relating to an early filter:
The Future(s)
When modelling the future development of civilization, we must recognize that the future is a vast cloud of uncertainty compared to the past. The best approach is to focus on the key general features of future postbiological civilizations, categorize the full space of models, and then update on our observations to determine which ranges of the parameter space are excluded and which regions remain open.
An abridged taxonomy of future civilization trajectories:
Collapse/Extinction:
Civilization is wiped out due to an existential catastrophe that sterilizes the planet sufficiently to kill most large multicellular organisms, essentially resetting the evolutionary clock by a billion years. Given the potential dangers of nanotech/AI/nuclear weapons - and then aliens - I believe this possibility is significant: i.e. in the 1% to 50% range.
Biological/Mixed Civilization:
This is the old-skool sci-fi scenario. Humans or our biological descendants expand into space. AI is developed but limited to human intelligence, like C-3PO. No or limited uploading.
This leads eventually to slow colonization, terraforming, and perhaps Dyson spheres, etc.
This scenario is almost not worth mentioning: prior < 1%. Unfortunately SETI in its current form is still predicated on a world model that assigns a high prior to these futures.
PostBiological Warm-tech AI Civilization:
This is Kurzweil/Moravec's sci-fi scenario. Humans become postbiological, merging with AI through uploading. We become a computational civilization that then spreads out at some fraction of the speed of light to turn the galaxy into computronium. This particular scenario is based on the assumption that energy is a key constraint, and that civilizations are essentially stellavores which harvest the energy of stars.
One of the very few reasonable assumptions we can make about any superintelligent postbiological civilization is that higher intelligence involves increased computational efficiency. Advanced civs will upgrade into physical configurations that maximize computation capabilities given the local resources.
Thus to understand the physical form of future civs, we need to understand the physical limits of computation.
One key constraint is the Landauer Limit, which states that the erasure (or cloning) of one bit of information requires a minimum of kT ln 2 joules. At room temperature (293 K), this corresponds to a minimum of 0.017 eV to erase one bit. Minimum is however the keyword here: according to the principle, the probability of the erasure succeeding is only 50% at the limit. Reliable erasure requires some multiple of the minimal expenditure - a reasonable estimate being about 40 kT, or roughly 1 eV, for bit erasures at today's levels of reliability.
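A quick sanity check on these numbers (the ~40 kT reliability multiplier is the rough estimate from above, not a derived constant):

```python
import math

k_B = 1.380649e-23     # Boltzmann constant, J/K
eV  = 1.602176634e-19  # joules per electronvolt

T = 293.0  # room temperature, K
landauer_J  = k_B * T * math.log(2)  # minimum energy to erase one bit
landauer_eV = landauer_J / eV

print(f"Landauer limit at {T:.0f} K: {landauer_J:.2e} J = {landauer_eV:.3f} eV")
# -> ~2.80e-21 J, ~0.017 eV per bit erased

reliable_eV = 40 * k_B * T / eV  # ~40 kT: the reliability estimate above
print(f"Reliable erasure (~40 kT assumption): {reliable_eV:.2f} eV")  # ~1 eV
```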
Now, the second key consideration is that Landauer's Limit does not include the cost of interconnect, which already dominates the energy cost in modern computing. Just moving bits around dissipates energy.
Moore's Law is approaching its asymptotic end in a decade or so due to these hard physical energy constraints and the related miniaturization limits.
I assign a prior to the warm-tech scenario that is about the same as my estimate of the probability that the more advanced cold-tech (reversible quantum computing, described next) is impossible: < 10%.
From Warm-tech to Cold-tech
There is a way forward to vastly increased energy efficiency, but it requires reversible computing (to increase the ratio of computations per bit erasure), and fully superconducting interconnect to reduce interconnect losses to near zero.
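Here is a minimal sketch of how those two levers combine, reusing the Landauer scaling from above; the erasure ratio, temperatures, and overhead multiplier are all illustrative assumptions:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def energy_per_op(temp_K, erasures_per_op, overhead=40.0):
    """Landauer-style energy per logical op: overhead * kT ln 2 per erased bit.
    Assumes superconducting interconnect, so wire losses are ignored."""
    return erasures_per_op * overhead * k_B * temp_K * math.log(2)

warm = energy_per_op(temp_K=300.0, erasures_per_op=1.0)   # irreversible, warm
cold = energy_per_op(temp_K=1.0,   erasures_per_op=1e-3)  # mostly reversible, cryo

print(f"warm-tech: {warm:.1e} J/op, cold-tech: {cold:.1e} J/op")
print(f"advantage: ~{warm / cold:.0e}x")  # ~300x from temperature times ~1000x from reversibility
```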
The path to enormously more powerful computational systems necessarily involves transitioning to very low temperatures, and the lower the better, for several key reasons:
Assuming large-scale quantum computing is possible, the ultimate computer is thus a reversible, massively entangled quantum device operating at absolute zero. Unfortunately, such a device would be delicate to a degree that is hard to imagine - even a single misplaced high-energy particle could cause enormous damage.
Stellar Escape Trajectories
The Great Game
If two civs both discover each other's locations around the same time, then MAD (mutually assured destruction) dynamics take over and cooperation has stronger benefits. The vast distances involved suggest that one-sided discoveries are more likely.
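A toy payoff table makes the asymmetry explicit; the numbers are purely illustrative:

```python
# Toy decision for a civ that has just located a rival.
# mutual_knowledge: the rival also knows our location, so retaliation is possible.
# Payoff numbers are purely illustrative.

def best_move(mutual_knowledge: bool) -> str:
    if mutual_knowledge:
        payoffs = {"strike": -100, "cooperate": 10}  # MAD: a strike invites retaliation
    else:
        payoffs = {"strike": 50, "cooperate": 10}    # one-sided: no retaliation risk
    return max(payoffs, key=payoffs.get)

print(best_move(mutual_knowledge=True))   # -> cooperate
print(best_move(mutual_knowledge=False))  # -> strike
```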
Spheres of Influence
Conditioning on our Observational Data
Observational Selection Effects
All advanced civs will have strong instrumental reasons to employ deep simulations to understand and model developmental trajectories for the galaxy as a whole and for civilizations in particular. A very likely consequence is the production of large numbers of simulated conscious observers, a la the Simulation Argument. Universes with the more advanced low-temperature reversible/quantum computing civilizations will tend to produce many more simulated observer moments and are thus intrinsically more likely than one would otherwise expect - perhaps massively so.
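A minimal sketch of this reweighting, treating it as a simple anthropic update; the priors and observer counts are placeholder assumptions, not estimates from the post:

```python
# Anthropic reweighting: P(we observe universe class U) is proportional to
# prior(U) * observers(U). All numbers below are placeholder assumptions.

classes = {
    # class: (prior probability, sentient observer-moments produced)
    "civs destroyed early":      (0.50, 1e10),
    "warm-tech expansion":       (0.40, 1e20),
    "cold-tech deep simulation": (0.10, 1e30),
}

total = sum(p * n for p, n in classes.values())
for name, (p, n) in classes.items():
    print(f"{name}: {p * n / total:.2e}")
# The deep-simulation class dominates despite its small prior, because it
# produces overwhelmingly more observers - the swamping effect described above.
```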
Rogue Planets
Although the error range is still large, it appears that free floating planets outnumber planets bound to stars, and perhaps by a rather large margin.
Assuming the galaxy is colonized: It could be that rogue planets form naturally outside of stars and then are colonized. It could be they form around stars and then are ejected naturally (and colonized). Artificial ejection - even if true - may be a rare event. Or not. But at least a few of these options could potentially be differentiated with future observations - for example if we find an interesting discrepancy between the rogue planet distribution predicted by simulations (which obviously do not yet include aliens!) and actual observations.
Also: if rogue planets outnumber stars by a large margin, then it follows that rogue planet flybys are more common in proportion.
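A back-of-envelope version of that proportionality (densities, velocities, and the flyby radius are rough assumptions):

```python
import math

# Encounter rate ~ n * sigma * v: linear in the number density n of rogue
# planets, so doubling their abundance doubles the flyby rate. Rough inputs.

pc_to_m = 3.086e16
n_stars = 0.1 / pc_to_m**3        # ~0.1 stars per cubic parsec locally
rogue_ratio = 2.0                 # assumed: rogue planets per star
n_rogue = rogue_ratio * n_stars

v = 30e3                          # typical relative velocity, m/s
r_flyby = 1e4 * 1.496e11          # flyby radius: 10,000 AU in meters
sigma = math.pi * r_flyby**2      # geometric cross-section (no gravitational focusing)

rate_per_s = n_rogue * sigma * v
per_Gyr = rate_per_s * 3.156e16   # seconds per billion years
print(f"rogue-planet flybys within 10,000 AU: ~{per_Gyr:.0f} per Gyr")  # ~45
```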
Conclusion
SETI to date allows us to exclude some regions of the parameter space for alien civs, but the regions excluded correspond to low prior probability models anyway, based on the postbiological perspective on the future of life. The most interesting regions of the parameter space probably involve advanced stealthy aliens in the form of small compact cold objects floating in the interstellar medium.
The upcoming WFIRST telescope should shed more light on dark matter and enhance our microlensing detection abilities significantly. Sadly, its planned launch date isn't until 2024. Space development is slow.