There's a valid argument here, but the conclusion sounds too strong. I think the level of proof required for "easily colonise the universe" is much higher in this context than in the context of your other post (which is about best-guess scenarios), because if there is a Great Filter then something surprising happens somewhere. So we should consider whether even quite unlikely-sounding events like "we've misunderstood astrophysics" might be possible.
I'm still highly skeptical of the existence of the "Great Filter". It's one possible explanation of "why don't we see any hint of anyone else's existence", but not the only one.
The most likely explanation to me is that intelligent life is just so damn rare. Life is probably frequent enough - we know there are a lot of exoplanets, many have the conditions for life, and simple life seems relatively easy to get started. But intelligent life? It seems to me it required a great deal of luck to arise on Earth, and it does seem somewhat likely that it's rar...
This degree of insight density is why I love LW.
Someone who is just scanning your headline might get the wrong idea, though: It initially read (to me) as two alternate possible titles, implying that the filter is early and AI is hard and these two facts have a common explanation (when the actual content seems to be "at least one of these is true, because otherwise the universe doesn't make sense").
Once AI is developed, it could "easily" colonise the universe.
I dispute this assumption. I think it is vanishingly unlikely for anything self-replicating (biological, technological, or otherwise) to survive trips from one island-of-clement-conditions (~ 'star system') to another.
6 hours of the sun's energy, or 15 billion years' worth of current human energy use (or only a few trillion years' worth at early-first-millennium rates of energy use; growth really was not exponential until the 19th/20th century, and these days it's more linear). The only way you get energy levels that high is with truly enormous stellar-scale engineering projects like Dyson clouds, which we see no evidence of when we look out into the universe in infrared - those are something we would actually be able to see. Again, if things of that sheer scale are something that intelligent systems don't get around to doing for one reason or another, then this sort of project would never happen.
Additionally, the papers referenced there posit gram-scale 'seed' masses sent to other GALAXIES, with black-box arbitrary control over matter and the capacity to last megayears in the awful environment of space. Pardon me if I don't take that possibility very seriously, and adjust the energy figures up accordingly.
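A rough arithmetic check of the solar-energy comparison above (a minimal sketch; the solar luminosity and world power figures are standard round values I'm assuming, not taken from the comment):

```python
# Rough check of "6 hours of the sun's output ~ 15 billion years of current
# human energy use". Both figures below are assumed round values.

SOLAR_LUMINOSITY_W = 3.8e26   # total solar output, watts
WORLD_POWER_W = 18e12         # current human consumption, roughly 18 TW

six_hours_of_sun_J = SOLAR_LUMINOSITY_W * 6 * 3600
one_year_of_human_use_J = WORLD_POWER_W * 365.25 * 24 * 3600

print(six_hours_of_sun_J / one_year_of_human_use_J)
# ~1.4e10, i.e. on the order of the 15 billion years quoted above
```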
I think it's quite unlikely, yes.
It seems like a natural class of explanations for the Fermi paradox, one which I am always surprised doesn't get proposed more often. Most people pile into 'intelligent systems almost never appear' or 'intelligent systems have extremely short lifespans'. Why not 'intelligent systems find it vanishingly difficult to spread beyond small islands'? It seems more reasonable to me than either of the previous two, because it is something we haven't yet seen intelligent systems do (whereas we are an example of one both arising and sticking around for a long time).
If I must give more justification than that, I would immediately point to:
1 - All but one of our ships BUILT for space travel that have reached escape velocity have failed after a few decades and less than 100 AU. Space is a hard place to survive in.
2 - All self-replicating systems on Earth live in a veritable bath of materials and energy they can draw on; a long-haul spaceship has to either use literally astronomical energy at the source and destination to change velocity, or 'live' off only catabolizing itself in an incredibly hostile environment for millennia at least whil...
Because to creatures such as us that have only been looking for a hundred years with limited equipment, a relatively 'full' galaxy would look no different from an empty one.
Consider the possibility that there are about 10,000 intelligent systems that can use radio-type effects in our galaxy (a number I suspect is a wild over-estimate, judging by the BS numbers I occasionally half-jokingly calculate from what I know of the evolutionary history of life on Earth, cosmology, and astronomy, but it's just an example). That puts each one, on average, in an otherwise 'empty' cube 900 light years on a side containing millions of stars. EDIT: if you up it to a million intelligent systems, the cube only goes down to about 200 light years wide with just under half a million stars; I just chose 10,000 because then the cube is about the thickness of the galaxy's disc and the calculation was easy. (A rough version of this arithmetic is sketched below.)
We would be unable to detect Earth's own omnidirectional radio leakage from less than a light year away, according to figures I have seen, and since omnidirectional signals fall off with the square of distance, to be seen even 10 light years away you would need hundreds of times as much power. Seein...
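A minimal sketch of the arithmetic above, assuming a Milky Way disc roughly 100,000 light years across and 1,000 light years thick with a few hundred billion stars (round figures of mine, not the commenter's):

```python
import math

# Back-of-the-envelope version of the "empty cube" figures above.
# Galaxy numbers below are rough assumed values.

DISK_RADIUS_LY = 50_000      # Milky Way disc radius, ~50,000 light years
DISK_THICKNESS_LY = 1_000    # disc thickness, ~1,000 light years
N_STARS = 4e11               # ~400 billion stars (upper end of common estimates)

disk_volume_ly3 = math.pi * DISK_RADIUS_LY**2 * DISK_THICKNESS_LY  # ~7.9e12 ly^3

for n_civs in (10_000, 1_000_000):
    cube_side_ly = (disk_volume_ly3 / n_civs) ** (1 / 3)
    stars_per_civ = N_STARS / n_civs
    print(f"{n_civs:>9,} civs -> cube ~{cube_side_ly:,.0f} ly wide, ~{stars_per_civ:,.0f} stars each")

# Inverse-square scaling: a leak barely detectable at 1 light year needs
# (10/1)**2 = 100x the transmit power to be detectable at 10 light years.
print("power factor for 10 ly vs 1 ly:", (10 / 1) ** 2)
```

This reproduces the ~900 and ~200 light year cube widths, with tens of millions and a few hundred thousand stars per cube respectively.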
Tongue-in-cheek thought that just popped into my head: there is no Great Filter, and we are actually seeing intelligence everywhere, because it turns out dark matter is just a really advanced form of computronium.
Or, the simulation is running other star systems at a lower fidelity. Or, Cosmic Spirits from Beyond Time imposed life on our planet via external means, and abiogenesis is actually impossible. The application of sufficiently advanced intelligence may be indistinguishable from reality.
It's also possible that AI used to be hard but no longer is, because something in the universe recently changed. Although this seems extremely unlikely, the Fermi paradox implies that something very unlikely is indeed occurring.
You are missing at least two options.
First, our knowledge of physics is far from complete, and there may be reasons that make interstellar colonization simply impossible.
Second, consider this: our building technology is vastly better than it was a few thousand years ago, and our economic capabilities are much greater. Yet no ruler of the last century was buried in a tomb comparable to the Egyptian pyramids. The common reply is that it takes only one expansionist civilization to take over the universe. But the number of civilizations is finite, and colonization may be so unattractive that the number of expansionists is zero.
Has anybody suggested that the great filter may be that AIs are negative utilitarians that destroy life on their planet? My prior on this is not very high but it's a neat solution to the puzzle.
So the Great Filter must predate us, unless AI is hard.
There's a 3rd possibility: AI is not super hard, say 50 yrs away, but species tend to get filtered when they are right on the verge of developing AI. Which points to human extinction in the next 50 years or so.
This seems a little unlikely. A filter that only appeared on the verge of AI would likely be something technology-related. But different civs explore the tech tree differently. This only feels like a strong filter if the destructive tech was directly before superintelligence on the tree. ...
Once AI is developed, it could "easily" colonise the universe.
I was wondering about that. I agree with the "could", but is there a discussion of how likely it is that it would decide to do that?
Let’s take it as a given that successful development of FAI will eventually lead to lots of colonization. But what about non-FAI? It seems like the most “common” cases of UFAI are mistakes in trying to create an FAI. (In a species with similar psychology to ours, a contender might also be mistakes trying to create military AI, and intentional creation by...
Or, conversely, the Great Filter doesn't prevent civilizations from colonising galaxies, and we were colonised a long time ago. Hail Our Alien Overlords!
And I'm serious here. The zoo hypothesis seems very conspiracy-theory-y, but generalised curiosity is one of the requirements for developing a civ capable of galaxy colonisation, a powerful enough civ can sacrifice a few star systems for research purposes, and it seems that the most efficient way of simulating biological evolution or civ development is actually having a planet develop on its own.
I'd like to repeat the comment I made at "outside in" on the same topic, the Great Filter.
I think our knowledge at all levels (physics, chemistry, biology, praxeology, sociology) is nowhere near the point where we should be worrying too much about the Fermi paradox.
Our physics has openly acknowledged broad gaps in our knowledge by postulating dark matter, dark energy, and a bunch of stuff that is essentially filler for "I don't know". We don't have physics theories that explain everything from the smallest scales to the largest.
Coming to chemistry and biology, w...
Alternatively, the only stable AGI has a morality that doesn't lead it to simply colonise the whole universe.
What if there is something that can destroy the entire universe, and a sufficiently advanced civilization eventually does it?
Another possibility is that AI wipes us out and is also not interested in expansion.
Since expansion is something inherent to living beings, and AI is a tool built by living beings, it wouldn't make sense for its goals not to include expansion of some kind (i.e. it would always look at the universe with sighing eyes, thinking of all the paperclips it represents). But perhaps, in an attempt to keep AI in line somehow, we would constrain it to a single stream of resources? In which case it would not be remotely interested in anything outside of Earth?...
There is a third alternative. The observed universe is limited, the probability of life arising from non-living matter is low, suitable planets are rare, and evolution doesn't directly optimize for intelligence. Civilizations advanced enough to build strong AI are probably just too rare to appear in our light cone.
We could have already passed the 'Great Filter' by actually existing in the first place.
So the Great Filter (preventing the emergence of star-spanning civilizations) must strike before AI could be developed.
Maybe AI is the Great Filter, even if it is friendly.
The friendly AI could determine that colonization of what we define as “our universe” is unnecessary or detrimental to our goals. Seems unlikely, but I wouldn’t rule it out.
A self-improving intelligence can indeed be the Great Filter, that is, if it has already reached us from elsewhere, potentially long ago.
Keep in mind that the delta between "seeing evidence of other civilizations in the sky" (= their light-speed signals reaching us) and "being enveloped by another civilization from the sky" (= being reached by their near-light-speed probes) is probably negligible (order of 10^3 years being conservative? - rough arithmetic sketched below).
Preventing us from seeing evidence, or from seeing anything at all which it doesn't want us to see, would be trivial.
Yes, I'm onto you.
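A quick sketch of where a delta of that order could come from, assuming probes at 0.99-0.999c over galactic distances (my numbers, not the commenter's):

```python
# Over a distance D (in light years), probes travelling at a fraction v of
# lightspeed arrive D*(1/v - 1) years after the civilisation's light does.

def lag_years(distance_ly: float, probe_speed_c: float) -> float:
    """Years between the arrival of a civilisation's light and of its probes."""
    return distance_ly * (1 / probe_speed_c - 1)

for d_ly in (10_000, 100_000):
    for v in (0.99, 0.999):
        print(f"{d_ly:>7,} ly at {v}c -> lag ~ {lag_years(d_ly, v):,.0f} years")
```

At 0.99c across ~100,000 light years the lag is about 1,000 years, consistent with the order-of-magnitude estimate above.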
or we could be on the cusp of building it
It's not a negligible probability that this is in fact the case. Some would know this from a closer perspective; most would not.
It seems possible to me that if many intelligences reached our current stage, at least a few would have neurological structures that were relatively easy to scan and simulate. This would amount to AI being "easy" for them (in the form of ems, which can be sped up, improved, etc.)
I think we can file this under "AI is hard", because you have to create an intelligence so vast that it has something as close to a priori knowledge of its own existence as we do. I agree that once that intelligence exists, it may want to control such vast resources and capability that it could quickly bring together the resources, experts, and others to create exploratory vehicles and begin exploring and mapping the universe. However, I think you also have to realize that life happens and our Universe is a dynamic place. Ships wo...
Or the origin of life is hard.
Or the evolution of multicellular life is hard.
Or the evolution of neural systems is hard.
Or the breakaway evolution of human-level intelligence is hard.
(These are not the same thing as an early filter.)
Or none of that is hard, and the universe is filled with intelligences ripping apart galaxies. They are just expanding their presence at near the speed of light, so we can't see them until it is too late.
(These are not the same thing as an early filter.)
Why not? I thought the great filter was anything that prevented ever-expanding intelligence visibly modifying the universe, usually with the additional premise that most or all of the filtering would happen at a single stage of the process (hence 'great').
Or none of that is hard, and the universe is filled with intelligences ripping apart galaxies. They are just expanding their presence at near the speed of light, so we can't see them until it is too late.
If they haven't gotten here yet at near-lightspeed, that means their origins don't lie in our past; the question of the great filter remains to be explained.
You do seem to be using the term in a non-standard way. Here's the first use of the term from Hanson's original essay:
[...] there is a "Great Filter" along the path between simple dead stuff and explosive life.
The origin and evolution of life and intelligence are explicitly listed as possible filters; Hanson uses "Great Filter" to mean essentially "whatever the resolution to the Fermi Paradox is". In my experience, this also seems to be the common usage of the term.
Attempt at the briefest content-full Less Wrong post:
Once AI is developed, it could "easily" colonise the universe. So the Great Filter (preventing the emergence of star-spanning civilizations) must strike before AI could be developed. If AI is easy, we could conceivably have built it already, or we could be on the cusp of building it. So the Great Filter must predate us, unless AI is hard.