"This is very strange. Consider that if humankind makes it another thousand years, we’ll probably have started to colonize other star systems. Those star systems will colonize other star systems and so on until we start expanding at nearly the speed of light, colonizing literally everything in sight. After a hundred thousand years or so we’ll have settled a big chunk of the galaxy, assuming we haven’t killed ourselves first or encountered someone else already living there."
There are two assumptions here that everybody in the rationalist/EA/AI alignment community seems to take for granted, while they seem at least debatable (if not downright false) to me:
1) If we make it to another thousand years, we'll probably have started to colonize other star systems.
Well, it may turn out to be a little bit more complicated to go to another star system than to go to Mars - them being far away and so on. Any colony outside of the solar system would necessarily have very limited economic ties with Earth, since even information would need a few years to be transmitted (good luck managing your Alpha Centauri properties from Earth...) and material goods would be even worse. So the economic interest is not even clear in any non-paperclip-maximiser scenario, and that's assuming galaxy-level colonisation is even possible physically (remember that if you are going fast, you will need to slow down, and that the faster you go the heavier you get).
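For a rough sense of the scales involved (my own back-of-the-envelope numbers, not the commenter's): even the nearest star system imposes a multi-year round trip on information, and relativistic kinetic energy makes fast travel absurdly expensive.

    round-trip signal delay to Alpha Centauri ≈ 2 × 4.37 ly ≈ 8.7 years per exchange
    kinetic energy at cruise speed v:  KE = (γ − 1)·m·c²,  with γ = 1/√(1 − v²/c²)
    at v = 0.5c:  γ ≈ 1.15, so KE ≈ 0.15·c² ≈ 1.4 × 10^16 J per kilogram of ship
    (roughly 3 megatons of TNT per kilogram, and you pay it again to slow down at the destination)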
2) "expanding at nearly the speed of light". This seems downright implausible and ruled out by my understanding of current physic. Light goes fast, you know.
I think your analysis relies on the assumption that the Einsteinian speed limit is definitive, and by current run-of-the-mill physics that is still somewhat up in the air (geometric unity, for instance, is unsettled). So whether this is possible could be a break-point, but a technologically mature civilization could still break this barrier through something like entanglement, granted this is not constrained by the 'ultimate physics' of the universe.
I'm not sure I understand exactly what you're saying, but given my understanding of current physics the speed of light is a hard barrier in the general relativistic framework, and one that is unlikely to be overturned, because any new theoretical physics should have relativity as a special case (just like Newtonian mechanics is a special case of relativity where you can take v/c ≈ 0).
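To spell out the "special case" claim (my own sketch, standard textbook material): the Lorentz factor that governs relativistic corrections collapses to the Newtonian picture when v/c is small.

    γ = 1/√(1 − v²/c²) ≈ 1 + (1/2)(v²/c²) + …   for v ≪ c
    so the relativistic kinetic energy (γ − 1)·m·c² ≈ (1/2)·m·v², the Newtonian formula.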
So, if I'm understanding this argument thread correctly (and I am by no means totally sure that I am), then you've basically argued against the speed of light as a hard barrier using quantum entanglement, all the way back around to again accepting that, with our current understanding of physics, yes, it DOES seem to be a hard barrier to any type of successful colonization beyond any species' local solar system/star group?
If I'm correct in that generalization (base as it may be), then aren't we essentially agreeing that the "Great Filter" this article is discussing is most likely as simple as "the speed of light"? Because even if certain civilizations exist out there who have, over hundreds of thousands or even millions of years, managed to expand, or even simply not destroy themselves, then isn't it logical to assume the reason we haven't seen or heard any evidence of such is because that evidence still hasn't reached us? Maybe it never will, within the lifespan of our sun and solar system? And even if it did reach us from millions of light-years away, maybe we simply can't recognize it in its diluted and/or deteriorated state from its travel through the vastness of "space"?
I guess what I'm saying is, maybe we're just underestimating how vast our universe is, and how sparsely distributed intelligent species may be throughout? Even if there are "a lot" of them out there, that doesn't necessarily mean we would ever be aware of it. Or that they may ever be aware of us.
Feel free to point out my mistakes in logic/reasoning here. I'm simply here to learn. Thanks.
Right, you are correct, as our current theoretical understanding puts it. It could still be the case that this speed limit is obeyed, and yet the goal of reaching point B from point A is still actualized.
As far as I am aware, entanglement does not violate Einstein's rule, as no information is transmitted between entangled particles - you cannot use entanglement to transmit information faster than light. So no "quantum teleportation" is possible (and anyway entanglement requires the actual particles to travel, and they cannot do this faster than light either).
Correct. It doesn't violate the limit, yet they communicate as though they were violating it. It achieves communication by means that are scarcely understood. We can achieve accurate predictions using entanglement. Information can currently be communicated through qubits in quantum computers.
What are you talking about? What does quantum computing have to do with faster-than-light travel?
Entangled particles "communicate with each other instantly", but the no-communication theorem is a well-known result in quantum mechanics (https://en.wikipedia.org/wiki/No-communication_theorem) that shows that entanglement cannot be used to transmit information (i.e. messages or matter) faster than light.
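A standard way to see why (my sketch, not taken from the linked page, though it states the same result): your local statistics on an entangled pair don't depend on anything the other party does.

    take the entangled pair  |Φ⁺⟩ = (|00⟩ + |11⟩)/√2
    the state of your particle alone is the reduced density matrix
        ρ_A = Tr_B |Φ⁺⟩⟨Φ⁺| = (|0⟩⟨0| + |1⟩⟨1|)/2 = I/2
    i.e. a fair coin, and it stays I/2 whatever measurement (or none) the other side performs.

Since everything you can observe locally is a function of ρ_A, the far-away party's choices can't encode a message.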
Thanks for linking this. Seems like I was arguing from an incorrect premise. What sources do you know of where I can learn/understand more about quantum theory?
An entangled quantum state is one which cannot be written as the tensor product of two pure states. Imagine I prepare two objects whose states are quantum-entangled; suppose they either both say '0' or both say '1', but they can't disagree, and we don't know beforehand which it is. We each take an object and go our separate ways.
A day later, I decide to "measure" the state of my object. This basically just means I get to find out whether it says '0' or '1'. I find out it says '0'. This allows me to deduce that when you measure your object, yours will also say '0'.
But this isn't sending information 'faster than light' or 'instantly' or anything strange like that. The quantum superposition was between the '00' and '11' states, and when I measured the state of my object, I "found out" which world I was in. Then, I inferred what you would see if you peeked.
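A small sketch of the tensor-product point above (my notation; the '0'/'1' story corresponds to the standard two-qubit state):

    the two-object state is  |ψ⟩ = (|00⟩ + |11⟩)/√2
    a product state would look like  (a|0⟩ + b|1⟩) ⊗ (c|0⟩ + d|1⟩) = ac|00⟩ + ad|01⟩ + bc|10⟩ + bd|11⟩
    matching |ψ⟩ requires ac = bd = 1/√2 but ad = bc = 0, which is impossible

so the state is genuinely entangled, and measuring my object just tells me whether we're in the '00' branch or the '11' branch.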
We can construct a classical example of a similar kind of "faster-than-light" inference. Consider a closed system of two objects of equal mass, which were earlier at rest together before spontaneously pushing off of each other (maybe we intervened to make this happen, whatever). We aren't sure about their velocities, but when we learn the velocity of one object, we immediately know that the velocity of the other is equal and opposite, by conservation of momentum. Under these assumptions, the velocity of one object logically determines the velocity of the other object, but does not cause the other object to have the opposite velocity.
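Spelled out (my arithmetic): starting from rest, the total momentum stays zero,

    m·v₁ + m·v₂ = 0   ⟹   v₂ = −v₁

so learning v₁ fixes v₂ instantly, without any signal passing between the objects.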
(I'm not a physicist, but if you want to learn the math of basic quantum theory, I recommend Nielsen and Chuang)
Interesting. It seems that although entanglement breaking the speed of light is technically untrue, we could make these inferences 'as if' they were quicker than the speed of light. Instead of measuring the differences between the states of the two items, we're able to make that inference instantly. So although it's not faster than the speed of light, our measurements operate as though they were.
I think my misunderstanding comes from the inability to put this operation into practice. But as per my original argument, if electrons were simply more concentrated waves which behave like particles, determining a full theory of everything that incorporates the quantum is possible. Computing power aside, having these roadblocks in place does not seem sufficient to keep our current theoretical knowledge as is. The most famous example was the conventional wisdom of the ether, which relativity broke. Ignorance is bliss.
You describe the "x-risk" as if it were only one. As far as I understand, the general idea of Great Filter as self-destruction is "every civilization found _one way or another_ to destroy or irreparably re-barbarize itself". Not the same way. Not "∃ way ∀ civilizations" but "∀ civilizations ∃ way". And this is a much weaker claim.
Consider a planet-sized hive of perfectly subservient ant-people, ruled by a queen who is paranoid enough to colonize the galaxy using only contemporary technology; she seems like she wouldn't fall for most x-risks.
Note that the paper Dissolving the Fermi Paradox makes a strong case that the Great Filter is early.
Despite science-fiction, I see little plausibility in hive rationality. So - and I may be putting my neck under an axe by this - I claim that no hive race could rise to anything near "contemporary technology". Also, most of the contemporary technology usable for colonizing is already costly and/or faulty enough that someone who is "paranoid enough" (and some Prof. Moody tells us there is no such thing - but still) would be unlikely to ever leave their own planet.
The ways in which contemporary technology is faulty do not destroy the civilization that uses it. I mean that she does not try that exciting physics experiment that turns her first planet into a strangelet.
Could a society of hives develop contemporary technology? Imagine that each queen is human-level intelligent, in their ancestral environment there are as many hives as we had humans, and in their World War III, they finally kill off all but one queen, and now they're stuck at that technology level.
And what about this argument:
As a civilisation progresses, it becomes increasingly cheaper to destroy the world, to the point where any lunatic can do so. It might be so that physical laws make it much harder to protect against destruction than to actually destroy; this actually seems to be the case with nuclear weapons.
Certainly, there are currently at least 1 in a million people in this world who would choose to destroy it all if they could.
It might be so that we reach this level of knowledge before we manage to travel across solar systems.
I'd think that some of these alien civilisations would have figured it out in time: implanted everyone with neural chips that override any world-ending decision, kept technological discoveries over a certain level available only to a small fraction of the population or in the hands of an aligned AI, or something.
An aligned AI definitely seems able to face a problem of this magnitude, and we'd likely either get that or botch it before reaching the technological level at which any lunatic can blow up the planet.
There’s been a recent spate of popular interest in the Great Filter theory, but I think it all misses an important point brought up in Robin Hanson’s original 1998 paper on the subject.
The Great Filter, remember, is the horror-genre-adaptation of Fermi’s Paradox. All of our calculations say that, in the infinite vastness of time and space, intelligent aliens should be very common. But we don’t see any of them. We haven’t seen their colossal astro-engineering projects in the night sky. We haven’t heard their messages through SETI. And most important, we haven’t been visited or colonized by them.
This is very strange. Consider that if humankind makes it another thousand years, we’ll probably have started to colonize other star systems. Those star systems will colonize other star systems and so on until we start expanding at nearly the speed of light, colonizing literally everything in sight. After a hundred thousand years or so we’ll have settled a big chunk of the galaxy, assuming we haven’t killed ourselves first or encountered someone else already living there.
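As an order-of-magnitude check on that timescale (my arithmetic, not the essay’s): the Milky Way’s stellar disk is roughly 100,000 light-years across, so

    crossing time ≈ 100,000 ly ÷ 0.5c ≈ 200,000 years

for a colonization wave moving at half the speed of light, i.e. “a hundred thousand years or so” up to a small factor.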
But there should be alien civilizations that are a billion years old. Anything that could conceivably be colonized, they should have gotten to back when trilobites still seemed like superadvanced mutants. But here we are, perfectly nice solar system, lots of any type of resources you could desire, and they’ve never visited. Why not?
Well, the Great Filter. No one knows specifically what the Great Filter is, but generally it’s “that thing that blocks planets from growing spacefaring civilizations”. The planet goes some of the way towards a spacefaring civilization, and then stops. The most important thing to remember about the Great Filter is that it is very good at what it does. If even one planet in a billion light-year radius had passed through the Great Filter, we would expect to see its inhabitants everywhere. Since we don’t, we know that whatever it is it’s very thorough.
Various candidates have been proposed, including “it’s really hard for life to come into existence”, “it’s really hard for complex cells to form”, “it’s really hard for animals to evolve intelligence”, and “actually space is full of aliens but they are hiding their existence from us for some reason”.
The articles I linked at the top, especially the first, will go through most of the possibilities. This essay isn’t about proposing new ones. It’s about saying why the old ones won’t work.
The Great Filter is not garden-variety x-risk. A lot of people have seized upon the Great Filter to say that we’re going to destroy ourselves through global warming or nuclear war or destroying the rainforests. This seems wrong to me. Even if human civilization does destroy itself due to global warming – which is a lot further than even very pessimistic environmentalists expect the problem to go – it seems clear we had a chance not to do that. A few politicians voting the other way, we could have passed the Kyoto Protocol. A lot of politicians voting the other way, and we could have come up with a really stable and long-lasting plan to put it off indefinitely. If the gas-powered car had never won out over electric vehicles back in the early 20th century, or nuclear-phobia hadn’t sunk the plan to move away from polluting coal plants, then the problem might never have come up, or at least been much less. And we’re pretty close to being able to colonize Mars right now; if our solar system had a slightly bigger, slightly closer version of Mars, then we could restart human civilization anew there once we destroyed the Earth and maybe go a little easy on the carbon dioxide the next time around.
In other words, there’s no way global warming kills 999,999,999 in every billion civilizations. Maybe it kills 100,000,000. Maybe it kills 900,000,000. But occasionally one manages to make it to space before frying their home planet. That means it can’t be the Great Filter, or else we would have run into the aliens who passed their Kyoto Protocols.
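To make the arithmetic behind “one in a billion” explicit (an illustrative calculation, not the essay’s): if a candidate filter kills each civilization independently with probability f, then out of N ≈ 10⁹ civilizations the expected number of escapees is

    (1 − f) × N,   e.g.  (1 − 0.999999) × 10⁹ = 1,000 surviving, expanding civilizations

so even a filter that is 99.9999% lethal leaves plenty of escapees; to explain an empty sky, the survival rate has to be pushed below roughly 1/N ≈ 10⁻⁹.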
And the same is true of nuclear war or destroying the rainforests.
Unfortunately, almost all the popular articles about the Great Filter miss this point and make their lead-in “DOES THIS SCIENTIFIC PHENOMENON PROVE HUMANITY IS DOOMED?” No. No it doesn’t.
The Great Filter is not Unfriendly AI. Unlike global warming, it may be that we never really had a chance against Unfriendly AI. Even if we do everything right and give MIRI more money than they could ever want and get all of our smartest geniuses working on the problem, maybe the mathematical problems involved are insurmountable. Maybe the most pessimistic of MIRI’s models is true, and AIs are very easy to accidentally bootstrap to unstoppable superintelligence and near-impossible to give a stable value system that makes them compatible with human life. So unlike global warming and nuclear war, this theory meshes well with the low probability of filter escape.
But as this article points out, Unfriendly AI would if anything be even more visible than normal aliens. The best-studied class of Unfriendly AIs are the ones whimsically called “paperclip maximizers” which try to convert the entire universe to a certain state (in the example, paperclips). These would be easily detectable as a sphere of optimized territory expanding at some appreciable fraction of the speed of light. Given that Hubble hasn’t spotted a Paperclip Nebula (or been consumed by one) it looks like no one has created any of this sort of AI either. And while other Unfriendly AIs might be less aggressive than this, it’s hard to imagine an Unfriendly AI that destroys its parent civilization, then sits very quietly doing nothing. It’s even harder to imagine that 999,999,999 out of a billion Unfriendly AIs end up this way.
The Great Filter is not transcendence. Lots of people more enthusiastically propose that the problem isn’t alien species killing themselves, it’s alien species transcending this mortal plane. Once they become sufficiently advanced, they stop being interested in expansion for expansion’s sake. Some of them hang out on their home planet, peacefully cultivating their alien gardens. Others upload themselves to computronium internets, living in virtual reality. Still others become beings of pure energy, doing whatever it is that beings of pure energy do. In any case, they don’t conquer the galaxy or build obvious visible structures.
Which is all nice and well, except what about the Amish aliens? What about the ones who have weird religions telling them that it’s not right to upload their bodies, they have to live in the real world? What about the ones who have crusader religions telling them they have to conquer the galaxy to convert everyone else to their superior way of life? I’m not saying this has to be common. And I know there’s this argument that advanced species would be beyond this kind of thing. But man, it only takes one. I can’t believe that not even one in a billion alien civilizations would have some instinctual preference for galactic conquest for galactic conquest’s own sake. I mean, even if most humans upload themselves, there will be a couple who don’t and who want to go exploring. You’re trying to tell me this model applies to 999,999,999 out of one billion civilizations, and then the very first civilization we test it on, it fails?
The Great Filter is not alien exterminators. It sort of makes sense, from a human point of view. Maybe the first alien species to attain superintelligence was jealous, or just plain jerks, and decided to kill other species before they got the chance to catch up. Knowledgeable people such as Carl Sagan and Stephen Hawking have condemned our reverse-SETI practice of sending messages into space to see who’s out there, because everyone out there may be terrible. On this view, the dominant alien civilization is the Great Filter, killing off everyone else while not leaving a visible footprint themselves.
Although I get the precautionary principle, Sagan et al’s warnings against sending messages seem kind of silly to me. This isn’t a failure to recognize how strong the Great Filter has to be, this is a failure to recognize how powerful a civilization that gets through it can become.
It doesn’t matter one way or the other if we broadcast we’re here. If there are alien superintelligences out there, they know. “Oh, my billion-year-old universe-spanning superintelligence wants to destroy fledgling civilizations, but we just can’t find them! If only they would send very powerful radio broadcasts into space so we could figure out where they are!” No. Just no. If there are alien superintelligences out there, they tagged Earth as potential troublemakers sometime in the Cambrian Era and have been watching us very closely ever since. They know what you had for breakfast this morning and they know what Jesus had for breakfast the morning of the Crucifixion. People worried about accidentally “revealing themselves” to an intergalactic supercivilization are like Sentinel Islanders reluctant to send a message in a bottle lest modern civilization discover their existence – unaware that modern civilization has spy satellites orbiting the planet that can pick out whether or not they shaved that morning.
What about alien exterminators who are okay with weak civilizations, but kill them when they show the first sign of becoming a threat (like inventing fusion power or leaving their home solar system)? Again, you are underestimating billion-year-old universe-spanning superintelligences. Don’t flatter yourself here. You cannot threaten them.
What about alien exterminators who are okay with weak civilizations, but destroy strong civilizations not because they feel threatened, but just for aesthetic reasons? I can’t be certain that’s false, but it seems to me that if they have let us continue existing this long, even though we are made of matter that can be used for something else, that has to be a conscious decision made out of something like morality. And because they’re omnipotent, they have the ability to satisfy all of their (not logically contradictory) goals at once without worrying about tradeoffs. That makes me think that whatever moral impulse has driven them to allow us to survive will probably continue to allow us to survive even if we start annoying them for some reason. When you’re omnipotent, the option of stopping the annoyance without harming anyone is just as easy as stopping the annoyance by making everyone involved suddenly vanish.
Three of these four options – x-risk, Unfriendly AI, and alien exterminators – are very very bad for humanity. I think worry about this badness has been a lot of what’s driven interest in the Great Filter. I also think these are some of the least likely possible explanations, which means we should be less afraid of the Great Filter than is generally believed.