To quickly escape the great filter, should we flood our galaxy with radio signals?  In communicating with fellow humans we already send out massive amounts of information that an alien civilization could eventually pick up, but should we engage in positive SETI?  Or, if you fear the attention of dangerous aliens, should we set up powerful long-lived solar or nuclear powered automated radio transmitters in the desert and in space that stay silent so long as they receive a yearly signal from us, but then if they fail to get the no-go signal because our civilization has fallen, continuously transmit our dead voice to the stars?  If we do destroy ourselves, it would be an act of astronomical altruism to warn other civilizations of our fate, especially if we broadcast news stories from just before our demise, e.g. physicists excited about a new high-energy experiment.  
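To make the conditional-transmitter idea concrete, here is a minimal sketch of the control loop such a dead-man's-switch station might run; the helper functions are hypothetical placeholders, not a worked design:

    import time

    CHECK_INTERVAL_SECONDS = 365 * 24 * 3600       # listen for the keep-quiet signal once a year

    def received_no_go_signal() -> bool:
        """Hypothetical receiver check: True if Earth's yearly keep-quiet signal arrived."""
        raise NotImplementedError

    def broadcast_archive() -> None:
        """Hypothetical transmitter routine: send the stored warning/archive toward the stars."""
        raise NotImplementedError

    def run_dead_mans_transmitter() -> None:
        # Stay silent while civilization keeps checking in.
        while received_no_go_signal():
            time.sleep(CHECK_INTERVAL_SECONDS)
        # No check-in received: assume civilization has fallen and transmit indefinitely.
        while True:
            broadcast_archive()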

Something prevents solar systems from giving birth to spacefaring civilizations.  Robin Hanson has called this the great filter.  Stuart Armstrong and Anders Sandberg show that it would take an advanced civilization a trivial amount of effort to seed nearby galaxies with self-replicating intelligences.  Since we seem fairly close to being able to expand throughout the stars ourselves, especially if the singularity is near, then if much of the great filter lies in front of us we are probably doomed.  For reasons I won't go into here (but see this), there is good reason to believe that much of the great filter does lie before us (although Scott Alexander has a different view).  Since I don't want this post to be about the causes of the Fermi paradox, let's make the following doomed assumption:

 

With high probability, a large number of civilizations have existed in our galaxy that equaled or exceeded our current level of technological development and would have gone on to make their presence felt throughout the galaxy, but they all suffered some disaster preventing this expansion.  Assume also, with high probability, that the number of civilizations that reached our level of development but soon collapsed greatly exceeds the number that reached our level of development and survived at least another thousand years, because if this were not true we would almost certainly have seen evidence of extraterrestrial intelligences by now.

 

Accepting the doomed assumption gives us an outside view of the probability of our species' survival.  An inside view would basically sum up all of the possible causes of our civilization's collapse, whereas the outside view says that since, with high probability, many civilizations reached our level of development and then fell, we too will probably fail, even if we can't think of enough inside-view reasons why we are likely doomed.  

 

To help think through what we should do if we believe the doomed assumption, consider the following analogy:  Imagine you're a gladiator who must defeat one more opponent to achieve your seventh victory.  If a gladiator in your city beats seven opponents, he gets his name forever engraved on the coliseum walls and is granted freedom.  All matches in your coliseum are to the death.  Sizing up your next opponent, you at first give yourself an excellent chance of victory, as it seems that several of your past opponents were stronger than this next guy, and you are in top condition.  But then you realize that no gladiator has ever had his name on the walls; all died before winning their seventh victory, and you take this as a horrible sign.  The coliseum has been around for a long, long time, and since the beginning there has been the rule that if you win seven victories you get your name immortalized.  You become convinced that you will die in the next match and decide that you might as well have fun while you can, so you abandon your diet and training for wine and prostitutes.  

 

Your master becomes concerned at your behavior, and when you explain to him how you think it nearly impossible that you alone, of all the gladiators who have ever fought in your coliseum, will survive long enough to get your name inscribed on the wall, he offers you a deal.  If you give your master a few gold pieces, he will bribe the stadium owner to permanently write your name on the coliseum wall before the next fight, and he credibly promises that even if you lose your name will remain.  Should you pay?  Inscribing your name would do nothing to make your next opponent weaker, but once your name is engraved you no longer need fear the outside view assessment that you won’t be able to win because you are not special enough to alone have your name inscribed.  If you are extremely perplexed that in the history of the coliseum no other gladiator managed to win enough fights to get his name listed, you might decide that there is some unknown X factor working against any gladiator in your position.  Even if you can't identify X, and so can't imagine from an inside view how merely getting your name inscribed will help you overcome X, you might decide that this paradox simply means you can't trust your inside view, and so you should do whatever you can, no matter how seemingly silly, to make the outside view apply with less force to your predicament.

 

I wonder if we are in a similar situation with regard to positive SETI.  For me at least, the Fermi paradox and the great filter create a conflict between my inside and outside assessments of the chances of our high-technology civilization surviving long enough to make us known to species at our level of development.  Flooding the galaxy with signals, even if only conditional on our civilization's collapse, would significantly reduce the odds that we fail to survive long enough to reveal our existence to other civilizations at our level of development, if such civilizations are commonplace.  Flooding, consequently, would from an outside view make me more optimistic about the chances of our survival.  Of course, if we model other civilizations as somewhat rational actors, then the fact that they have seemingly chosen not to flood the galaxy with radio signals should make us more reluctant to do so.

 

You might argue that I’m confusing map and territory, but consider an extreme example.  First, pretend scientists make two horrifying discoveries: 

 

1)  They find multicellular life on one of Saturn's moons that genetic analysis proves arose independently of life on Earth.

 

2)  They uncover ruins of an ancient dinosaur civilization proving that some species of dinosaur achieved roughly human-level intelligence, despite the common ancestor of this species and mankind being unintelligent.

 

These two findings would provide massive Bayesian evidence that intelligent life in our galaxy is almost certainly commonplace, and would make the only real candidate explanations for the Fermi paradox the zoo hypothesis (which I think is unlikely) or a late great filter.  But now imagine that Elon Musk's fear of the great filter motivates him to develop a SpaceX-hyperloop transmitter that simultaneously sends a powerful signal to every single star system in the galaxy, a signal that any civilization at our level of development would detect and recognize as being sent by an extraterrestrial intelligence.  Plus, once activated, the transmitter must by the laws of physics keep operating for the next billion years.  After the transmitter had been turned on, wouldn't you become more optimistic about mankind's survival, even if the transmitter had no other practical purpose?  And if Musk were going to turn on his transmitter tomorrow, would you fear that the great filter is on the verge of annihilating us?

 

[This post greatly benefited from a discussion I had with Stuart Armstrong, although he doesn't necessarily agree with the post’s contents.] 


51 comments

This is a very good metaphor and I approve of you making it.

But it only works when there are a reasonably small number of predecessors.

If there have been a thousand gladiators before you, fine.

If there have been a trillion gladiators before you, someone else has had this idea, and that makes the absence of names on the coliseum ten times more scary.

If there have been a trillion gladiators before you, the conditions have been in place for gladiators to bribe others to put their names on the wall since the beginning of gladiating, and there are still no names on the wall, then you are fundamentally misunderstanding some aspect of the situation. Either people are lying when they say there have been a trillion gladiators before you, or people are lying on a much more fundamental level - for example, the walls of the Coliseum are wiped clear once a year, or this entire scenario is a dream.

If we assume a late filter and set up this scenario, the obvious question becomes "What happened to the last civilization who tried this?"

And the obvious answer is "nothing good".

Since it seems unlikely that every past civilization collapsed by coincidence just before it could implement this idea, then under certain strong assumptions like "very many civilizations" and "ease of galaxy-wide transmission", we are left with only the possibilities of an early filter or careful enforcement of a late filter by alien intelligence.

The reason this scenario requires such nonsensical decision theory is that it's based on flawed assumptions - that this state of affairs plus a late filter could ever come about naturally.

I hope this seems like a logical development of what I said in the post you linked.

Let G = the number of civilizations in our galaxy that have reached our level of development.

I agree with you that for sufficiently large values of G we are left with either "careful enforcement of late filter by alien intelligence" or "flawed assumptions". For sufficiently low G we don't have to fear the great filter.

But for a medium-range G (1000 gladiators) we should be very afraid of it, and I think this is the most likely situation, since the higher G is (so long as G isn't so large as to create absurdities, assuming away the zoo hypothesis and alien exterminators), the more common observers like us are. What's needed is some kind of mathematical model that captures the tradeoff, as G increases, between the Fermi paradox getting worse and the anthropics making observers such as us more common.
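One crude way to set up such a model (a toy sketch of mine, with a flat prior over the per-civilization survival chance and an SSA-style anthropic weight proportional to G; none of these choices come from the thread):

    # As G grows, anthropics make observers like us more common (weight ~ G),
    # but the silent sky forces the per-civilization survival chance s downward.
    import numpy as np

    G_values = [1, 10, 100, 1000, 10000]      # candidate numbers of civilizations at our level
    s_grid = np.linspace(0.0005, 0.5, 1000)   # per-civilization chance of surviving and becoming visible

    for G in G_values:
        # Likelihood of a silent sky: all G civilizations failed to become visible.
        silent_sky = (1 - s_grid) ** G
        # Crude anthropic weight: observers like us are proportional to G.
        weight = G * silent_sky                # flat prior over s within the grid
        posterior = weight / weight.sum()
        print(G, (posterior * s_grid).sum())   # posterior expectation of s given G

In this toy version the posterior expectation of s falls roughly like 1/G, which is one way of quantifying how a larger G makes the doomed assumption bite harder.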

Question, perhaps better for the open thread.

Imagine a universe without a great filter, where the natural progression of life is to (1) live quietly for a few billion years (2) invent technology for a few hundred years, and (3) expand outwards at close to the speed of light for a few billion years. What sort of aliens would a technologically nascent species expect to see?

I think they would see no aliens at all. The quiet aliens are quiet. The aliens inventing technology are uncommon, just as people aged 37 years 4 days plus or minus 2 seconds are uncommon. The aliens expanding at light speed are invisible until they envelop your planet. If your species evolved naturally on a planet that was not conquered by aliens hundreds of millions of years ago, you're not going to see any aliens until after you too start expanding out at the speed of light.

I haven't seen much discussion of the resolution I've just outlined. Does anyone know a good counterargument?

ETA: Aliens expanding at near light speed cannot be seen coming for the same reason that supersonic jets cannot be heard coming.

If this very plausible scenario is true, then most civilizations at our stage of development would exist while their galaxy was young, because by the time a galaxy got to be the age of, say, the Milky Way, some civilization would almost certainly have taken it over. So under your scenario the Fermi paradox becomes: why is our galaxy so old?

Or why stage 1 is so long.

This sort of scenario might work if Stage 1 takes a minimum of 12 billion years, so that life has to first evolve slowly in an early solar system, then hop to another solar system by panspermia, then continue to evolve for billions of years more until it reaches multicellularity and intelligence. In that case, almost all civilisations will be emerging about now (give or take a few hundred million years), and we are either the very first to emerge, or others have emerged too far away to have reached us yet. This seems contrived, but gets round the need for a late filter.

I don't get the reason panspermia needs to be involved. Simply having a minimum metallicity threshold for getting started would do the job.

It might do, except that the recent astronomical evidence is against that: solar systems with sufficient metallicity to form rocky planets were appearing within a couple of billion years after the Big Bang. See here for a review.

Hmmmm. (ETA: following claim is incorrect) They're judging that the planets are rocky by measuring their mass, not by noticing that they're actually rocky.

If you don't have a Jupiter-sized core out there sucking up all the gas, why would gas planets need to end up as giants? They naturally could do that - that happened with the star, after all, but it doesn't seem inevitable to me, and it might not even be common.

In that case, the earth-mass planets would be gas planets after all. If you think this is a stretch, keep in mind that these are specifically in systems noted to be low metallicity. Suggesting that they might not be high in metals after all is not much of a stretch.

Actually, Kepler is able to determine both size and mass of planet candidates, using the method of transit photometry.

For further info, I found a non-paywalled copy of Buchhave et al.'s Nature paper. Figure 3 plots planet radius against star metallicity, and some of the planets are clearly of Earth radius or smaller. I very much doubt that it is possible to form gas "giants" of Earth size, and in any case they would have a mass much lower than Earth's, so would stand out immediately.

I forgot about photometry.


Not if the evolution of multicellular organisms or complex nervous systems is a random (Poisson) process. That is to say, if the development of the first generation of multicellular life or intelligent life is a random fluke and not a gradual hill that can be optimized toward, then one should not expect behavior analogous to a progress bar. If it takes 12 billion years on average, and 12 billion years go by without such life developing, then such a result is still 12 billion years away.
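A quick numeric illustration of that memorylessness, assuming an exponential (Poisson-process) waiting time with a 12-billion-year mean (my own toy check, not from the comment):

    import random
    random.seed(0)

    MEAN_WAIT = 12e9                                  # average wait of 12 billion years
    waits = [random.expovariate(1 / MEAN_WAIT) for _ in range(100_000)]

    # Condition on 12 billion years already having passed with no success:
    remaining = [t - MEAN_WAIT for t in waits if t > MEAN_WAIT]
    print(sum(remaining) / len(remaining))            # still ~12e9 years to go, on average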


I like it! I also like your gladiatorial solution, but I'm not convinced it would work.

You're walking through an alleyway and see a $20 bill on the ground. Your forensic skills tell you that it has lain there for 13.2 years, a suspiciously long time. There are several resolutions to this apparent paradox. Perhaps not many people have passed by. Perhaps the bill isn't really that old. Or perhaps some sort of filter prevents people from picking up the bill (e.g. an economist-eating monster consumes anyone who gets too close).

You take out spray paint and graffiti "I will pick up this $20 bill -->", thereby extracting yourself from the reference class of people who tried to pick up the $20 and quietly failed. It's a clever plan, but even if you are the first passerby to think of the plan, I don't believe it advances your interests.

[This comment is no longer endorsed by its author]

This is isomorphic to the smoking lesion problem. EDT says it's a good idea to build the transmitter, but EDT is bonkers.

Excellent example, but I don't think it's 100% isomorphic, because we don't understand the cause of the great filter and because thinking we are probably doomed might cause us to act in ways that increase the chances of our destroying ourselves.

Or everyone might think that way, causing them to blow themselves up with this "transmitter".


You don't get to update on evidence that you planted.

I read the logic as something like Timeless Decision Theory. If you reached this conclusion and planted this evidence, there is a reasonable chance that previous actors in similar situations did so as well, and given enough previous actors, you should see the results. Since you don't see those results, either there haven't actually been very many previous actors, or something else about your assumptions is invalid. In either case, the logical basis of the Great Filter conclusion has been drastically weakened.

You certainly do when placebo effects are involved. Also, even without placebo effects, is what you said true when taking into account anthropic reference classes? Finally, if we don't know the cause of the Fermi paradox, we cannot be sure that flooding the galaxy with radio signals won't causally affect the chances of our escaping the paradox, so the analogy here might be to updating on the chance of your getting sick because you got yourself vaccinated.

Still, I like how you put your objection and it's one I should more thoroughly address if I work further on the idea.

If the great filter comes after the arising of cooperative civilisation, then whatever it is, it is not something that can be reliably circumvented by following a strategy that more than 1% of all civilisations would come up with.

I agree with you because of your use of the word "reliably". But could we non-trivially increase our chances of circumventing it by following a strategy that lots of civilizations could have implemented, but almost none chose to actually implement?

The Fermi paradox and the gladiator ideas do not seem equivalent - unless someone erases the coliseum walls, or lies about previous gladiators.

For the Fermi paradox, either there's a late great filter, or not. In your setup, the evidence is strong for there being one. Then, assuming EDT or similar, we plan to spam info to the galaxy. If our plan succeeds, then, to the extent that we believe that other civilizations would follow the same plan, this pushes us away from the late great filter (actually there is a second hope - we could assume that we are different from all other civilizations - we spammed info to the galaxy, after all. Then if we act strongly conditional on this fact, we'll be exploring new approaches, untried by previous civs, putting us out of their reference class).

But for the gladiator, we can see that there are no names on the walls, and we know there were previous gladiators. Their failure is a fact for us, and sneakily getting our own name inscribed changes this not at all (it just tells us we were atypical in this one regard, and unless we think we're atypical in relevant 7th-victory regards, this doesn't help us). But if there was a possibility that names got erased, or that there were no or few previous gladiators - then our decision pushes probability in those directions, upping our chance of survival.

Anyway, cheers for the idea "should we set up powerful long-lived solar or nuclear powered automated radio transmitters in the desert and in space that stay silent so long as they receive a yearly signal from us, but then if they fail to get the no-go signal because our civilization has fallen, continuously transmit our dead voice to the stars".

This seems like a variation of the smoking lesion problem. The lesion is associated with dying, and with smoking, but smoking itself may not necessarily cause death. You are deciding whether to smoke. You can reason that even if smoking doesn't directly cause either death or the lesion, refusing to smoke is Bayesian evidence that you don't have the lesion and are less likely to die.

One problem with this reasoning is that "smoking is correlated with the lesion" can't sensibly mean "smoking for any reason whatsoever is correlated with the lesion". It probably means "there are several factors which lead to you smoking and some are correlated with the lesion and others aren't". For instance, the lesion might make you find smoking more fun. So refusing to smoke because it's not a lot of fun may be correlated with not dying, but refusing to smoke because you deduced that it reduced your chances of dying might not be.

Likewise, having your name up because you won may be correlated (at 100% probability, in fact) with surviving the fight, but having your name up because of some other reason (like bribing the owner) might not be. The same, of course, applies to transmitting a signal--transmitting a signal isn't really associated with survival; transmitting a signal for the normal reasons is associated with survival. You can't increase your chance of survival by choosing to transmit a signal anyway because "choosing to transmit anyway" is not one of the normal reasons.
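A minimal simulation of the causal structure described above, with all numbers invented for illustration: the lesion raises both the appeal of smoking and the death rate, while smoking itself is harmless in this toy world.

    import random
    random.seed(0)

    N = 100_000
    records = []
    for _ in range(N):
        lesion = random.random() < 0.3                       # hidden common cause
        smokes = random.random() < (0.8 if lesion else 0.2)  # lesion makes smoking more appealing
        dies = random.random() < (0.5 if lesion else 0.05)   # only the lesion kills
        records.append((smokes, dies))

    smoker_deaths = [d for s, d in records if s]
    nonsmoker_deaths = [d for s, d in records if not s]
    print(sum(smoker_deaths) / len(smoker_deaths),
          sum(nonsmoker_deaths) / len(nonsmoker_deaths))
    # Smokers die more often, yet flipping "smokes" for reasons unrelated to the
    # lesion (like our choosing to transmit) would leave the death rate unchanged.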

I agree, but what if there were a smoking paradox that involved smokers dying at much higher rates than seemed to be justified by the genetic lesion?

What's the analog of that? Civilizations being missing at higher rates than can be justified by the lack of signal transmission? The lack of signal transmission already justifies 100%; you can't go any further.

Sorry, but I don't understand what you mean. To clarify what I meant, the smoking lesion problem assumes (I think) that we fully understand all of the causal relations, which we don't with the Fermi paradox. So the analogy would be that the lesion explains 20% of the cancer rates, we don't know the cause of the other 80%, and from an inside view it doesn't seem that smoking could cause cancer, but something strange is going on, so who knows.

My response to the smoking lesion paradox is that longer life is not associated with not smoking, but with not smoking that is done for reasons other than to avoid death. Likewise, for the extraterrestrial signal paradox, survival of civilizations is not associated with sending radio signals, but rather with sending radio signals for reasons other than to ensure survival.

If you're going to go with a partial correlation, then 20% of the correlation between smoking and death is caused by the lesion and 80% is caused by something else (such as the fact that smoke isn't good for you).

In the analogy, 20% of the correlation between no radio signals and death of a civilization is caused by something destroying the civilization before it gets to produce signals, and 80% is caused by something else. That doesn't make any sense.

Inscribing your name would do nothing to make your next opponent weaker, but once your name is engraved you no longer need fear the outside view assessment that you won’t be able to win because you are not special enough to alone have your name inscribed.

... what?

A quicker way to find out if you're going to be wiped out in a vast game is to break the rules, which summons the referee to destroy you.

This is similar to people suggesting we do arbitrarily complex calculations to test the simulation hypothesis.

I get a sense of doom whenever I think about this. But that's not evidence.

Let's make a list of different possible ways to break reality.

[This comment is no longer endorsed by its author]

You are trying to do X. From an inside viewpoint it seems as if you could do X because you seem to understand what it would take to do X. But then you learn that everyone who has ever tried to do X has failed, and some of these people were probably much more skilled than you. Therefore from an outside view you think you will not be able to do X. For the gladiator X is getting his name engraved. For mankind X is making ourselves known to other civilizations at our level of development. For both, knowing that you will not be able to do X means you will likely soon die.

Now, however, pretend there was a way to cheat so that you will have technically done X but in a way that is much, much easier than you previously thought. From an outside viewpoint once you have done X you are less likely to die even if doing X doesn't change your inside view of anything. So should you cheat to accomplish X?

Now, however, pretend there was a way to cheat so that you will have technically done X but in a way that is much, much easier than you previously thought. From an outside viewpoint once you have done X you are less likely to die even if doing X doesn't change your inside view of anything. So should you cheat to accomplish X?

I still don't see how it makes you more likely to be able to do X. The gladiator isn't trying to have his name engraved, he's trying to survive. Likewise, we aren't trying to get our messages heard by foreign civilizations, we're trying to survive whatever the Great Filter is (assuming it lies before us).

This looks to me like a case of Goodhart's Law: when a measure becomes a target, it ceases to be a valid measure.

(Oh also, I just thought: by the same reasoning, the gladiator should expect something to stop him from being able to get his name engraved via bribery, since he would reason that previous gladiators in his position would do the same thing, and he still doesn't see any names.)

But then you learn that everyone who has ever tried to do X has failed, and some of these people were probably much more skilled than you. ... For mankind X is making ourselves known to other civilizations at our level of development.

Does not fit. We have no idea how many of them there were - and if there were, how skilled they were.

My analogy is based on my doomed assumption being true. I did not intend the post to be a justification of the assumption, but rather a discussion of something we might do if the assumption is true.

This doesn't seem to address the correct point. Our goal is to surpass late filters, effectively turning universes with a late filter into universes with no filter at all. Surely the value of such an action is comparable to the value of a universe with an early filter (provided that it has been passed)?

See my reply to James.

If you have the time I would be grateful for an intuitive explanation of why this is so. I don't think the linked comment explains this because if we go on to colonize the universe our influence will be the same regardless of whether we are the first civilization to have reached our current (2014) level of development, or whether 1000s have done so but all fell.

"Do we live in a late filter universe?" is not a meaningful question. The meaningful question is "should we choose strategy A suitable for early filter universes or strategy B suitable for late filter universes?" According to UDT, we should choose the strategy leading to maximum expected utility given all similar players choose it, where the expectation value averages both kind of universes. Naive anthropic reasoning suggests we should assume we are in a late filter universe, since there are much more players there. This, however, is precisely offset by the fact these players have a poor chance of success even when playing B so their contribution to the difference in expected utility between A and B is smaller. Therefore, we should ignore anthropic reasoning and focus on the a priori probability of having an early filter versus a late filter.


The anthropic reasoning in there isn't valid, though. Anthropic reasoning can only be used to rule out impossibilities. If a universe were impossible, we wouldn't be in it. However, any inference beyond that makes assumptions about prior distributions and selection which have no justification. There are many papers (e.g. http://arxiv.org/abs/astro-ph/0610330) showing how anthropic reasoning is really anthropic rationalization when it comes to selecting one model over another.

Actually, it's possible to always take anthropic considerations into account by using UDT + the Solomonoff prior. I think cosmologists would benefit from learning about it.


That's an empty statement. It is always possible to take anthropic considerations into account by using [insert decision theory] + [insert prior]. Why did you choose that decision theory and, more importantly, that prior?

We have knowledge about only one universe. A single data point is insufficient to infer any information about universe selection priors.

Thanks for the explanation!


I have a question. Does the probability that the colonization of the universe with light speed probes has occurred, but only in areas where we would not have had enough time to notice it yet, affect the Great Filter argument?

For instance, assume the closest colonization wave with near-light-speed probes started 100 light years away from us in distance, 50 years ago in time. When we look at the star where colonization started, we wouldn't see evidence of near-light-speed colonization yet, because we're seeing light from 100 years ago, before they started.

I think a simpler way of putting this might be "What is the probability our tests for colonial explosion are giving a false negative? If that probability was high, would it affect the Great Filter Argument?"

The great filter argument and Fermi's paradox take into account the speed of light and the size and age of the galaxy. Both figure that there has been plenty of time for aliens to colonize the galaxy even if they traveled at, say, 1% of the speed of light. If our galaxy were much younger, or the space between star systems much bigger, there would not be a Fermi paradox and we wouldn't need to fear the great filter.

To directly answer the question of your second sentence, yes but only by a very small amount.
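For a rough sense of the "plenty of time" claim, a back-of-the-envelope sketch using standard round figures (not numbers from the thread):

    galaxy_diameter_ly = 100_000          # rough diameter of the Milky Way in light years
    speed_fraction_of_c = 0.01            # colonization wave at 1% of light speed
    crossing_time_years = galaxy_diameter_ly / speed_fraction_of_c
    print(crossing_time_years)            # ~1e7 years, tiny next to a ~1e10-year-old galaxy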


I think that reading this and thinking it over helped me figure out a confusing math error I was making. Thank you!

Normally, to calculate the odds of a false negative, I would need the test accuracy, but I would also need the base rate.

I.e., if a test for the presence or absence of colonization is 99% accurate, and evidence of colonization is present around 1% of stars (the base rate), and my test is negative, then I can compute the odds of a false negative.

However, in this case, I was attempting to determine "Given that our tests aren't perfectly accurate, what if the base rate of colonization isn't 0%?" and while that may be a valid question, I was using the wrong math to work on it, and it was leading me to conclusions that didn't make a shred of sense.
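For reference, a sketch of the standard calculation for the 99%-accurate-test, 1%-base-rate example above (assuming "accuracy" means both sensitivity and specificity are 0.99):

    base_rate = 0.01                      # prior fraction of stars showing colonization
    sensitivity = specificity = 0.99      # assumed test accuracy in both directions

    p_neg_given_col = 1 - sensitivity     # chance of a false negative per colonized star
    p_neg_given_not = specificity         # chance of a true negative
    p_col_given_neg = (p_neg_given_col * base_rate) / (
        p_neg_given_col * base_rate + p_neg_given_not * (1 - base_rate))
    print(p_col_given_neg)                # ~1e-4: a negative result leaves colonization very unlikely

Without some independent handle on the base rate, though, this calculation can't get started, which is the difficulty the comment points to.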

I think "filter" itself is a bad metaphor. It implies that 1) some barrier really exists and 2) it can be passed. One of these assumptions is likely wrong.

(2) isn't wrong if the speed of light is really the maximum, because then once we have started to colonize the universe our civilization will be beyond the scope of any one disaster and will likely survive until the free energy of the universe runs out.

If the "late filter" variant is true, it means interstellar colonization is just impossible, for reasons that may be outside the limits of modern scientific knowledge.

And a reason could be that advanced civilizations destroy themselves with very high probability before they can colonize space.

"Very high probability" of this kind in practice means "always". So either technological progress brings some dangers that just cannot be avoided before building starships is possible, or starships thenselves are not viable.