What if post-singularity civilizations expand at the speed of light? Then we should not expect to see anything:
It looks like we are going to have less than 200 years between interstellar detectability and singularity. So the chance of us being around at the same time (adjusted for distance) as another civilization to a resolution of a few hundred years seems quite low.
Life will only get to the point of asking questions like these on worlds that haven't been ground up for resources, so we can only be outside the "expansion cone" of any post-singularity civilization. If the expansion cone and the light cone are close (within a few hundred years), then, given that we are outside of the expansion cone, we are probably outside the light cone as well. So the AI-as-filter hypothesis doesn't get falsified by observing no AIs.
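To put rough, purely illustrative numbers on this: if the colonization front lags the light front by only about Δ ≈ 200 years, then the window during which a surviving observer could see the wave approaching without having already been overrun has a temporal thickness of roughly Δ. Taking T ≈ 10^10 years as the span over which such colonizers could have arisen, the chance that a still-existing civilization like ours happens to sit in that window is very crudely

\[ P(\text{wave visible} \mid \text{not yet engulfed}) \sim \frac{\Delta}{T} \approx \frac{2 \times 10^{2}\ \text{yr}}{10^{10}\ \text{yr}} = 2 \times 10^{-8}. \]

Both numbers are assumptions chosen only to show the shape of the argument; any plausible values leave the probability negligible.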
It doesn't even have to be a filter, though it probably is; 100% of civilizations could successfully navigate the intelligence explosion and we would see nothing, because we can only exist in the last corner of the universe that hasn't been ground up by them.
This is all assuming lightspeed expansion. Here are a few ideas: a single nanoseed catapulted at 99.9% the speed of...
If intelligent life were common and underwent such expansion, then there would be very few newly arising lonely civilizations later in the history of the universe (the real estate for their evolution already occupied or filled with signs of intelligence). The overwhelming majority of civilizations evolving with empty skies would be much younger.
So, whether you attend to the number of observers with our observations, or the proportion of all observers with such observations, near-c expansion doesn't help resolve the Fermi paradox.
Another way of thinking about it is: we are a civilization that has developed without seeing any sign of aliens, developed on a planet that had not been colonized by aliens. Colonization would have prevented our observations just as surely as alien transmissions or a visibly approaching wave of colonization.
I still don't get it.
Assume life is rare/filtered: we straightforwardly expect to see what we see (an empty sky).
Assume life is common, the singularity comes quickly and reliably, and colonization proceeds at the speed of light; then condition on the fact that we are pre-singularity. As far as I can tell, a random young civilization still expects empty skies, with at most a slightly lower probability because of the relatively small volume of spacetime in which it would observe an approaching colonization wave.
So the observation of empty skies is only very weak evidence against life being common, given that this singularity stuff is sound.
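As a Bayesian sketch (with likelihoods invented purely for illustration), the update from the empty sky is just the likelihood ratio:

\[ \frac{P(\text{common} \mid \text{empty sky})}{P(\text{rare} \mid \text{empty sky})} = \frac{P(\text{empty sky} \mid \text{common, fast near-}c\ \text{singularities})}{P(\text{empty sky} \mid \text{rare})} \cdot \frac{P(\text{common})}{P(\text{rare})}. \]

If P(empty sky | rare) ≈ 1 and P(empty sky | common) is, say, 0.9 or higher (reduced only by the thin slice of spacetime in which an approaching wave would be visible), then the odds shift by a factor close to 1, i.e. the empty sky is only weak evidence against life being common.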
The latter hypothesis is more specific, but I already believe all those assumptions (quick, reliable, and near-c).
Given that I take those singularity assumptions seriously (not just hypothetically), and given that we are where we are in the history of the universe, the Fermi paradox seems resolved for me; I find it unlikely that a given young civilization would observe any other civilization, no matter the actual rate of life. If we did observe another isolated civilization, it would pretty much falsify my "quick, reliable, and lightspeed" singularity belief....
Hmm.
Okay, filters that would produce results consistent with observation.
1: Politics, aka "The Berserker's Garden": The first enduring civilization in our galaxy rose many millions of years before the second, and happened to be both highly preservationist and highly expansionist. They have outposts buried in the deep crust of every planet in the galaxy, including Earth. Whenever a civilization arises that is both inclined and able to turn the entire galaxy into fast food joints/smiley faces/etc., said civilization very suddenly disappears. The Berserkers cannot be fought, and cannot be fooled, because they have been watching the entirety of history, and their contingency plans for us predate the discovery of fire. If we are really lucky, they will issue a warning before annihilating us.
2: Physics is booby-trapped: One of the experiments every technological civilization inevitably conducts while exploring the laws of the universe has an unforeseeable, planet-wrecking result. We are screwed.
3: Economics: The minimal mass of a technological "ecology" capable of sustaining itself outside of a compatible biosphere is just too large to fit into a star sh...
AFAIK the idea that "UFAI exacerbates, and certainly does not explain, the question of the Great Filter" is standard belief among SIAI rationalists (and always has been, i.e., none of us ever thought otherwise that I can recall).
I was just going to quote your comment on Overcoming Bias to emphasise this.
AFAIK, all SIAI personnel think and AFAIK have always thought that UFAI cannot possibly explain the Great Filter; the possibility of an intelligence explosion, Friendly or unFriendly or global-economic-based or what-have-you, resembles the prospect of molecular nanotechnology in that it makes the Great Filter more puzzling, not less. I don't view this as a particularly strong critique of UFAI or intelligence explosion, because even without that the Great Filter is still very puzzling - it's already very mysterious.
I think some people may be misinterpreting you as believing this because many people understand your advocacy as implying "UFAI is the biggest baddest existential risk we need to deal with". Assuming a late filter not explained by UFAI suggests there is an unidentified risk in our future that is much likelier than an uncontrolled intelligence explosion.
And this disaster can’t be an unfriendly super-AI, because that should be visible
This is not necessarily true. If the goals of the AI do not involve a rapid acquisition of resources even outside its solar system, then we would not see evidence for it (e.g., wireheading that does not involve creating as many sentient organisms as possible).
However, because there would be many instances of this, AI being the filter is probably still not likely. If UAI almost always goes wrong in a self-contained way, we would not expect to see evidence of life. But if UAI has a non-negligible chance of gobbling up everything it sees for energy, then, across many instances, we would expect to see at least one doing so.
Not if the system is optimizing for the probability of success and can cheaply send out probes to eat the universe and use it to make sure the job is finished lest something go wrong (e.g. the sun-destroyer [???] failed, or aliens resuscitate the Sun under whatever criterion of stellar life is used).
"Our analysis of the alien probe shows that its intended function is to ... um ... go back to a star that blew up ten thousand years ago and make damn sure that it's blown up and not just fooling."
For galactic civilisations I'd guess that there would be a strong first-mover advantage. If one civilisation (perhaps controlled by an AI) started expanding 1000 years before another, then any conflict between them would likely be won by the civilisation that started capturing resources first.
But what if none of them know which of them expanded first? There might be several forces colonising the galaxy, all keeping extremely quiet so that they don't get noticed and destroyed by an older civilisation. Thus no need for a great filter, and even if UFAI were common we wouldn't observe it colonising the galaxy.
The relevant notion of intelligence for a singularity is optimization power, and it's not obvious that we aren't already witnessing the expansion of such an intelligence. You may have already had these thoughts, but you didn't mention them, and I think they're important to evaluating the strength of evidence we have against UFAI explosions:
What do agents with extreme optimization power look like? One way for them to look is a rapidly-expanding-space-and-resource-consuming process which at some point during our existence engulfs our region of space destro...
I take issue with the assumption that the only two options are perpetual expansion of systems derived from an origin of life out into the universe, and destruction via some filter. The universe just might not be clement to expansion of life or life's products beyond small islands of habitability, like our world.
You cannot assume that the last few hundred years of our history is typical, or that you can expect similar exponentiation into the future. I would argue that it is a fantastic anomaly and regression to the mean is far more likely.
Good post, good explanation. I agree. I saw the recent comment on OB that probably sparked you making this topic, I was thinking of posting it fleetingly before akrasia kicked in. So, thanks.
A throwaway parenthesized remark from RH that nevertheless should be of major importance, because it lowers the credence we should assign to the argument that "UFAI is a good great filter candidate, and a great filter is a good explanation for the Fermi paradox, ergo we should raise our belief in the verisimilitude of UFAI occurring."
Early or late great filter?
I'm currently leaning strongly towards a late filter, because many of the proposed early filters don't seem to be such big barriers. We've, for example, found a bunch of exoplanets in the last decade or so, and several of those seem plausibly to be in the habitable zone. Life on Earth arose very early in its history, so if life arising were the hard and rare step, I would expect there to have been many more hundreds of millions or even billions of years of conditions on Earth being seemingly ripe for life arising without it doing so.
..." Abiogenesis
Really, it seems like any kind of superintelligent AI, friendly or unfriendly, would result in expanding intelligence throughout the universe. So perhaps a good statement would be: "If you believe the Great Filter is ahead of us, that implies that most civilizations get wiped out before achieving any kind of superintelligent AI, meaning that either superintelligent AI is very hard, or wiping out generally comes relatively early." (It seems possible that we already got lucky with the Cold War... http://www.guardian.co.uk/commentisfree/2012/oct/27/vasili-arkhipov-stopped-nuclear-war)
Katja says:
The large minimum total filter strength contained in the Great Filter is evidence for larger filters in the past and in the future.
That's true - but anthropic evidence seems kind-of trumped by the direct observational evidence that we have already invented advanced technology and space travel, which took many billions of years. From here, expansion shouldn't be too difficult - unless, of course, we meet more-advanced aliens.
Other civilizations may possibly be expanding too by now - SETI is still too small and young to say much about that di...
Interesting. However, I'd like to propose an alternative: the real probability of another alien civilisation being inside our universe shard, that is, the region of the universe that we humans can possibly explore below the speed of light, is very low. So there might be a predatory superintelligence that has wiped out the civilisation that made it, but we're just not in its universe shard.
When you (and Robin) say "because [UFAI] should be visible," that seems to imply that there are a significant number of potential observer-moments in which we can see evidence for a UFAI but the UFAI is not yet able to break us down into spare parts. I've always assumed that if a UFAI were created in our light cone, we would be extinct in a very short amount of time. Thus, the assertion "UFAI is not the great filter because we don't see any" is similar to saying "giant asteroids aren't the great filter because we don't see any ...
An alien UFAI could be dangerous for us if we found its radio signal as a result of a SETI search. Its messages could contain bait that lures us into building a copy of the alien AI, based on schematics it would send us in those messages.
D. Carrigan wrote about it: http://home.fnal.gov/~carrigan/SETI/SETI_Hacker.htm
Simple natural-selection reasoning implies that UFAI radio signals should dominate among SETI signals, if any exist. And the goal of such a UFAI is to convert the Earth into another radio beacon that will send its own code further.
My article on the topic...
I have a hard time imagining a filter that could've wiped out all of a large number of civilizations that got to our current point or further. That's not to say that future x-risks aren't an issue--it just feels implausible that no civilization would've been able to successfully coordinate with regard to them or avoid developing them. (E.g. bonobos seem substantially more altruistic than humans and are one of the most intelligent non-human species.)
Also, I thought of an interesting response to the Great Filter, assuming that we're pretty sure it actually...
Of course, that might just mean that 99.9% of all civilizations destroy themselves in the roughly 100 years between the invention of the nuclear bomb and the invention of AGI.
This should be "UFAI can't be the only great filter." Nothing says that once you get past a great filter, you are home free. Maybe we already passed a filter on life originating in the first place, or a technology-using species evolving, but UFAI is another filter that still has an overwhelming probability of killing us if nothing else does first.
The fact that UFAI can't be the only great filter certainly screens off the presence of a great filter as evidence of UFAI being a great filter, but there are good arguments directly from how UFAI would work that indicate that it is a pretty big danger.
[Summary: The fact we do not observe (and have not been wiped out by) an UFAI suggests the main component of the 'great filter' cannot be civilizations like ours being wiped out by UFAI. Gentle introduction (assuming no knowledge) and links to much better discussion below.]
Introduction
The Great Filter is the idea that although there is lots of matter, we observe no "expanding, lasting life", like space-faring intelligences. So there is some filter through which almost all matter gets stuck before becoming expanding, lasting life. One question for those interested in the future of humankind is whether we have already 'passed' the bulk of the filter, or whether it still lies ahead. For example, is it very unlikely that matter will be able to form self-replicating units, but once it clears that hurdle, becoming intelligent and going across the stars is highly likely? Or is getting to a humankind level of development not that unlikely, but very few of those civilizations progress to expanding across the stars? If the latter, that motivates a concern for working out what the forthcoming filter(s) are, and trying to get past them.
One concern is that advancing technology gives civilizations the means to wipe themselves out, and that this is the main component of the Great Filter - one we are going to be approaching soon. There are several candidate technologies for such an existential threat (nanotechnology/'grey goo', nuclear holocaust, runaway climate change), but one that looms large is Artificial Intelligence (AI). Trying to understand and mitigate the existential threat from AI is the main role of the Singularity Institute, and I guess Luke, Eliezer (and lots of folks on LW) consider AI the main existential threat.
The concern with AI is something like this:
Or, as summarized by Luke:
... AI leads to intelligence explosion, and, because we don’t know how to give an AI benevolent goals, by default an intelligence explosion will optimize the world for accidentally disastrous ends. A controlled intelligence explosion, on the other hand, could optimize the world for good. (More on this option in the next post.)
So, the aim of the game needs to be trying to work out how to control the future intelligence explosion so the vastly smarter-than-human AIs are 'friendly' (FAI) and make the world better for us, rather than unfriendly AIs (UFAI) which end up optimizing the world for something that sucks.
'Where is everybody?'
So, to the topic. I read this post by Robin Hanson, which had a really good parenthetical remark (emphasis mine):
Yes, it is possible that the extreme difficulty was life’s origin, or some early step, so that, other than here on Earth, all life in the universe is stuck before this early extremely hard step. But even if you find this the most likely outcome, surely given our ignorance you must also place a non-trivial probability on other possibilities. You must see a great filter as lying between initial planets and expanding civilizations, and wonder how far along that filter we are. In particular, you must estimate a substantial chance of “disaster”, i.e., something destroying our ability or inclination to make a visible use of the vast resources we see. (And this disaster can’t be an unfriendly super-AI, because that should be visible.)
This made me realize that an UFAI should also be counted as 'expanding, lasting life', and so should be deemed unlikely by the Great Filter argument.
Another way of looking at it: if the Great Filter still lies ahead of us, and a major component of this forthcoming filter is the threat from UFAI, we should expect to see the UFAIs of other civilizations spreading across the universe (or not see anything at all, because they would wipe us out to optimize for their unfriendly ends). That we do not observe it disconfirms this conjunction.
[Edit/Elaboration: It also gives a stronger argument - as the UFAI is the 'expanding life' we do not see, the beliefs, 'the Great Filter lies ahead' and 'UFAI is a major existential risk' lie opposed to one another: the higher your credence in the filter being ahead, the lower your credence should be in UFAI being a major existential risk (as the many civilizations like ours that go on to get caught in the filter do not produce expanding UFAIs, so expanding UFAI cannot be the main x-risk); conversely, if you are confident that UFAI is the main existential risk, then you should think the bulk of the filter is behind us (as we don't see any UFAIs, there cannot be many civilizations like ours in the first place, as we are quite likely to realize an expanding UFAI).]
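To make that trade-off concrete with a toy model (all symbols and numbers hypothetical): let N be the number of civilizations in our past light cone that reach roughly our stage, and q the probability that such a civilization goes on to launch an expanding UFAI. The expected number of expanding UFAIs we could observe is then about

\[ \mathbb{E}[\text{visible expanding UFAIs}] \approx N \cdot q, \]

and we observe zero, so N·q must be small. A late filter requires N to be large (many civilizations arise and then fail), which forces q to be small - expanding UFAI cannot be what stops them. Conversely, high confidence that q is large forces N to be small, which puts the bulk of the filter behind us.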
A much more in-depth article and comments (both highly recommended) was made by Katja Grace a couple of years ago. I can't seem to find a similar discussion on here (feel free to downvote and link in the comments if I missed it), which surprises me: I'm not bright enough to figure out the anthropics, and obviously one may hold AI to be a big deal for other-than-Great-Filter reasons (maybe a given planet has a 1 in a googol chance of getting to intelligent life, but intelligent life 'merely' has a 1 in 10 chance of successfully navigating an intelligence explosion), but this would seem to be substantial evidence driving down the proportion of x-risk we should attribute to AI.
What do you guys think?