Astronomical research may have an under-appreciated role in helping us understand and possibly avoid the Great Filter. This post examines how astronomy may help identify potential future filters. The primary upshot is that our somewhat late arrival may give us an advantage: if we can observe what other civilizations did wrong, we can get a leg up.

This post is not arguing that colonization is a route to removing some existential risks. There is no question that colonization would reduce the risk of many forms of Filter, but the vast majority of astronomical work has no substantial connection to colonization. Moreover, the case for colonization has already been made strongly by many others, such as Robert Zubrin's book "The Case for Mars" or this essay by Nick Bostrom.

Note: those already familiar with the Great Filter and proposed explanations may wish to skip to the section "How can we substantially improve astronomy in the short to medium term?"


What is the Great Filter?

There is a worrying lack of signs of intelligent life in the universe. The only intelligent life we have detected is that on Earth. While planets are apparently numerous, there have been no signs of other life. There are three lines of evidence we would expect to see if civilizations were common in the universe: radio signals, direct contact, and large-scale constructions. The first two are well-known, but the most serious problem arises from the lack of large-scale constructions: as far as we can tell, the universe looks natural. The vast majority of matter and energy in the universe appears to be unused. The Great Filter is one possible explanation for this lack of life, namely that some phenomenon prevents intelligent life from passing into the interstellar, large-scale phase. Variants of the idea have been floating around for a long time; the term was coined by Robin Hanson in this essay. There are two fundamental versions of the Filter: filtration which has occurred in our past, and filtration which will occur in our future. For obvious reasons the second is more of a concern. Moreover, as our technological level increases, the chance that we are approaching the last point of serious filtration rises, since once a civilization has spread out to multiple stars, filtration becomes much more difficult.

Evidence for the Great Filter and alternative explanations:

The only major updates to the situation since Hanson's essay have come in the last few years, and they are twofold:

First, we have confirmed that planets are very common, so a lack of Earth-sized planets, or of planets in the habitable zone, is not likely to be a major filter.

Second, we have found that planet formation occurred early in the universe. (For example, see this article about this paper.) Early planet formation weakens a common explanation of the Fermi paradox, namely the argument that some species had to be the first intelligent species and we are simply the lucky ones. Early planet formation, along with the apparent speed at which life arose on Earth after the heavy bombardment ended and the apparent speed with which complex life developed from simple life, strongly argues against this explanation. The response has been made that early filtration may be so common that if life does not arise early in a planet's star's lifespan, it will have no chance to reach civilization. However, if this were the case, we would expect to have found ourselves orbiting a longer-lived star such as a red dwarf. Red dwarfs are more common than Sun-like stars and have lifespans longer by multiple orders of magnitude. While attempts to understand the habitable zone of red dwarfs are still ongoing, the current consensus is that many red dwarfs have habitable planets.

These two observations, together with further evidence that the universe looks natural, make future filtration seem likely. If advanced civilizations existed, we would expect them to make use of the large amounts of matter and energy available, and we see no signs of such use. We have seen no indication of ring-worlds, Dyson spheres, or other megascale engineering projects. While such searches have so far been confined to within around 300 parsecs and some candidates were hard to rule out, if a substantial fraction of stars in a galaxy had Dyson spheres or swarms we would notice the unusually strong infrared excess. Note that this sort of evidence is distinct from arguments about contact or about detecting radio signals. There is a very recent proposal for mini-Dyson spheres around white dwarfs, which would be much easier to engineer and harder to detect, but they would not reduce the desirability of other large-scale structures, and they would likely be detectable if a large number of them were present in a small region. One recent study looked for signs of large-scale modification to the radiation profile of galaxies of the sort that would indicate the presence of galaxy-spanning civilizations. It examined 100,000 galaxies and found no major sign of technologically advanced civilizations (for more detail see here).

We will not discuss all possible rebuttals to the case for a Great Filter, but will note some of the more interesting ones:

There have been attempts to argue that the universe only became habitable relatively recently. There are two primary avenues for this argument. First, there is the point that early stars had very low metallicity (that is, they had low concentrations of elements other than hydrogen and helium), and thus the early universe would have had too little metal for complex life. The presence of old rocky planets makes this argument less viable, and in any case it only covers the first few billion years of history. Second, there is an argument that until recently galaxies were more likely to have frequent gamma-ray bursts, in which case life would have been wiped out too frequently to evolve in a complex fashion. However, even the strongest version of this argument still leaves billions of years of time unexplained.

There have been attempts to argue that space travel may be very difficult. For example, Geoffrey Landis proposed that a percolation model, together with the idea that interstellar travel is very difficult, may explain the apparent rarity of large-scale civilizations. However, at this point there is no strong reason to think that interstellar travel is so difficult as to limit colonization to that extent. Moreover, the discoveries of the last 20 years that brown dwarfs are very common and that most stars have planets are evidence in the opposite direction: brown dwarfs and common planets would make travel easier, because there are more potential refueling and resupply locations, even if they are not used for full colonization. Others have argued that even without such considerations, colonization should not be that difficult. Moreover, if colonization is difficult and civilizations end up restricted to small numbers of nearby stars, then it becomes more, not less, likely that civilizations will attempt the large-scale engineering projects that we would notice.

Another possibility is that we are overestimating the long-run growth rate of the resources used by civilizations: extrapolating from current growth makes it seem plausible that large-scale projects will occur, but if growth stalls it becomes substantially more difficult to engage in very energy-intensive projects like colonization. Rather than continual exponential or near-exponential growth, we may expect long periods of slow growth or stagnation. This cannot be ruled out, but even if growth continues at only a slightly higher than linear rate, the energy expenditures available in a few thousand years will still be very large.
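To make that last claim concrete, here is a minimal back-of-the-envelope sketch in Python. The starting power figure, the growth timescale, and the exponent are illustrative assumptions rather than data:

```python
# Illustrative sketch: energy budget under "slightly faster than linear" growth.
P0 = 2e13          # rough current human power use, ~20 TW (approximate figure)
T = 100.0          # assumed characteristic growth timescale, years
K = 1.5            # growth exponent: 1.0 would be linear, 1.5 is "slightly higher"

def power(t_years, p0=P0, timescale=T, exponent=K):
    """Polynomial growth model: P(t) = P0 * (1 + t/T)**K."""
    return p0 * (1.0 + t_years / timescale) ** exponent

for t in (1_000, 5_000, 10_000):
    print(f"after {t:>6} years: ~{power(t):.1e} W ({power(t) / P0:.0f}x today)")

# For comparison (rough figures): sunlight intercepted by Earth ~1.7e17 W,
# total solar output ~3.8e26 W.  Even this mild growth law yields hundreds
# to a thousand times today's energy budget within a few millennia.
```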

Other proposed explanations are variants of the simulation hypothesis, the idea that we exist in a simulated reality. The most common variant in a Great Filter context suggests that we are in an ancestor simulation, that is, a simulation by the future descendants of humanity of what early humans would have been like.

The simulation hypothesis runs into serious problems, both in general and as an explanation of the Great Filter in particular. First, if our understanding of the laws of physics is approximately correct, then there are strong restrictions on what computations can be done with a given amount of resources. For example, BQP, the set of problems which can be solved efficiently by quantum computers, is contained in PSPACE, the set of problems which can be solved with a polynomial amount of space available and no time limit. Thus, in order to do a detailed simulation, the level of resources needed would likely be large: even a close-to-classical simulation would still need about as many resources as the system being simulated. There are other results, such as Holevo's theorem, which place similar restrictions. The upshot of these results is that one cannot make a detailed simulation of an object without using at least as many resources as the object itself. There may be potential ways of getting around this: for example, consider a simulator interested primarily in what life on Earth is doing. The simulation would not need to do a detailed simulation of the inside of planet Earth and other large bodies in the solar system. However, even then, the resources involved would be very large.
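For readers who want the containments spelled out, here is a compact informal statement of the standard results being invoked (the containments are established; whether any of them are strict is open):

```latex
% A quantum computer can be simulated classically with polynomial memory
% (possibly at exponential cost in time), so it does not escape resource limits:
\[
  \mathrm{P} \;\subseteq\; \mathrm{BPP} \;\subseteq\; \mathrm{BQP}
  \;\subseteq\; \mathrm{PP} \;\subseteq\; \mathrm{PSPACE}
\]
% Holevo's theorem (informally): $n$ qubits cannot be used to convey more
% than $n$ classical bits of accessible information.
```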

The primary problem with the simulation hypothesis as an explanation is that it requires the future of humanity to have actually passed through the Great Filter already and to have found its own success sufficiently unlikely that it devotes large amounts of resources to finding out how it managed to survive. Moreover, there are strong limits on how accurately one can reconstruct any given quantum state, which means an ancestor simulation would be at best a rough approximation. In this context, while there are interesting anthropic considerations here, it is more likely that the simulation hypothesis is wishful thinking.

Variants of the "Prime Directive" have also been proposed. The essential idea is that advanced civilizations would deliberately avoid interacting with less advanced civilizations. This hypothesis runs into two serious problems: first, it does not explain the apparent naturalness, only the lack of direct contact by alien life. Second, it assumes a solution to a massive coordination problem between multiple species with potentially radically different ethical systems. In a similar vein, Hanson in his original essay on the Great Filter raised the possibility of a single very early species with some form of faster than light travel and a commitment to keeping the universe close to natural looking. Since all proposed forms of faster than light travel are highly speculative and would involve causality violations this hypothesis cannot be assigned a substantial probability. 

People have also suggested that civilizations move outside galaxies to the cold of intergalactic space, where they can do efficient reversible computing using cold dark matter. Jacob Cannell has been one of the most vocal proponents of this idea. This hypothesis suffers from at least three problems. First, it fails to explain why those entities have not also used conventional matter to any substantial extent in addition to the cold dark matter. Second, it would require either dark matter composed of cold conventional matter (which at this point seems to be only a small fraction of all dark matter), or dark matter which interacts with itself through some force other than gravity; while there is some evidence for such interaction, it is at this point slim. Third, even if some species had taken over a large fraction of the dark matter for its own computations, one would then expect later species to use the conventional matter, since they would not have the option of using the now monopolized dark matter.

Other exotic non-Filter explanations have been proposed but they suffer from similar or even more severe flaws.

It is possible that future information will change this situation. One of the more plausible alternative explanations is that there is no single Great Filter in the past but rather a large number of small filters which together drastically filter out civilizations. The evidence for this viewpoint is at present slim, but there is some possibility that astronomy can help answer the question.

For example, one commonly cited aspect of past filtration is the origin of life. There are at least three locations other than Earth where life could have formed: Europa, Titan, and Mars. Finding life on one, or all, of them would be a strong indication that the origin of life is not the filter. Similarly, while it is highly unlikely that Mars has multicellular life, finding such life would indicate that the development of multicellular life is not the filter. However, none of these bodies is nearly as hospitable as Earth, so determining whether they host life will require substantial use of probes. We might also look for signs of life in the atmospheres of extrasolar planets, which would require substantially more advanced telescopes.

Another possible early filter is that planets like Earth frequently get locked into a "snowball" state which they have difficulty exiting. This is an unlikely filter, since Earth has likely been in near-snowball conditions multiple times: once very early on, during the Huronian glaciation, and again about 650 million years ago. This is an example of an early partial filter where astronomical observation may help find evidence. The snowball Earth filter does have one strong virtue: if many planets never escape a snowball state, this would partly explain why we are not around a red dwarf. Planets may not escape their snowball state unless their home star is somewhat variable, and red dwarfs are too stable.

It should be clear that none of these explanations are satisfactory and thus we must take seriously the possibility of future Filtration. 

How can we substantially improve astronomy in the short to medium term?

Before we examine the potential for further astronomical research to help us understand a future filter, we should note that there are many avenues by which we can improve our astronomical instruments. The most basic is simply to build better conventional optical, near-optical, and radio telescopes. That work is ongoing; examples include the European Extremely Large Telescope and the Thirty Meter Telescope. Unfortunately, increasing the size of ground-based telescopes, especially the size of the aperture, is running into substantial engineering challenges. However, in the last 30 years the advent of adaptive optics, speckle imaging, and other techniques has substantially increased the resolution of ground-based optical and near-optical telescopes. At the same time, improved data processing and related methods have improved radio telescopes. Optical and near-optical telescopes have already advanced to the point where we can gain information about the atmospheres of extrasolar planets, although we cannot yet do so for rocky planets.
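To illustrate why aperture matters so much, here is a minimal sketch of the standard diffraction limit (Rayleigh criterion, theta ≈ 1.22 λ/D). The mirror diameters are the published design values; real performance also depends on the atmosphere and the quality of the optics:

```python
import math

def diffraction_limit_arcsec(wavelength_m, aperture_m):
    """Rayleigh criterion: theta ~ 1.22 * lambda / D (radians), in arcseconds."""
    theta_rad = 1.22 * wavelength_m / aperture_m
    return math.degrees(theta_rad) * 3600.0

VISIBLE = 550e-9  # green light, metres
for name, diameter in [("Hubble (2.4 m)", 2.4),
                       ("Thirty Meter Telescope (30 m)", 30.0),
                       ("European ELT (39 m)", 39.0)]:
    print(f"{name}: ~{diffraction_limit_arcsec(VISIBLE, diameter):.4f} arcsec at 550 nm")

# Ground-based telescopes only approach this limit with adaptive optics;
# without correction, atmospheric seeing (~0.5-1 arcsec) dominates.
```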

Increasingly, the highest resolution comes from space-based telescopes. Space-based telescopes also allow one to gather information from types of radiation which are blocked by the Earth's atmosphere or magnetosphere; two important examples are X-ray and gamma-ray telescopes. They also avoid many of the problems the atmosphere creates for optical telescopes. Hubble is the most striking example, but from the standpoint of the Great Filter, the most relevant space telescope (and the most relevant instrument in general for Great Filter related astronomy) is the planet-hunting Kepler spacecraft, which is responsible for most of the planets identified so far.

Another type of instrument is the neutrino detector. Neutrino detectors are generally very large bodies of a transparent material (usually water) kept deep underground so that minimal light and few cosmic rays reach the device; neutrinos are then detected when one hits a particle and produces a flash of light. In the last few years, improvements in optics, increases in detector scale, and the development of detectors like IceCube, which use naturally occurring bodies of water or ice, have drastically increased the sensitivity of neutrino detectors.

There are proposals for larger-scale, more innovative telescope designs, but they are all highly speculative. On the ground-based optical front, for example, there has been a suggestion to build liquid-mirror telescopes with ferrofluid mirrors, which would give the advantages of liquid mirrors while allowing adaptive optics, which can normally only be applied to solid mirrors. An example of a potential space-based telescope is the Aragoscope, which would take advantage of diffraction to achieve a resolution at least an order of magnitude greater than Hubble's. Other examples include placing telescopes very far apart in the solar system to create, in effect, instruments with extremely long baselines. The most ambitious and speculative of such proposals involve projects so advanced and large-scale that one might as well presume they will only happen if we have already passed through the Great Filter.

 

What are the major identified potential future contributions to the Filter, and what can astronomy tell us about them?

Natural threats: 

One threat type where more astronomical observation can clearly help is natural threats, such as asteroid and comet impacts, supernovas, gamma-ray bursts, rogue high-gravity bodies, and as-yet-unidentified astronomical threats. Careful mapping of asteroids and comets is ongoing and requires continued funding more than any intrinsic improvement in technology. Right now, most of our mapping looks at objects at or near the plane of the ecliptic, so some focus off the plane may be helpful. Unfortunately, there is very little money set aside to actually deal with such problems if they arise. It might be possible to have a few wealthy individuals agree to set up funds in escrow which would be used if an asteroid or similar threat arose.

Supernovas are unlikely to be a serious threat at this time. There are some stars close to our solar system which are large enough that they will go supernova; Betelgeuse is the most famous of these, with a supernova projected to occur within the next 100,000 years or so. However, at its current distance, Betelgeuse is unlikely to pose much of a problem unless our models of supernovas are very far off. Further conventional observations of supernovas are needed to understand this better, and improved neutrino observations will also help, but right now supernovas do not seem to be a large risk. Gamma-ray bursts are in a similar situation. Note also that if an imminent gamma-ray burst or supernova were likely, there is at present very little we could do about it. In general, back-of-the-envelope calculations establish that supernovas are highly unlikely to be a substantial part of the Great Filter.

Rogue planets, brown dwarfs, and other small high-gravity bodies such as wandering black holes can be detected, and further improvements will allow faster detection. However, the scale of havoc created by such encounters is such that it is not at all clear that detection would help: the entire planetary nuclear arsenal would not even begin to move their orbits to a substantial extent.

Note also that it is unlikely that natural events are a large fraction of the Great Filter. Unlike most of the other threat types, however, this is one where radio astronomy and neutrino observations are comparatively likely to identify problems.

Biological threats: 

Biological threats take two primary forms: natural pandemics and deliberately engineered diseases. The first is a more serious potential contribution to the filter than one might naively expect, since modern transport allows infected individuals to move quickly and come into contact with large numbers of people. For example, trucking has been a major cause of the spread of HIV in Africa, and it is likely that the recent Ebola epidemic had similar contributing factors. Moreover, keeping chickens and other animals in very large quantities in dense areas near human populations makes it easier for novel variants of viruses to jump species. Astronomy does not seem to provide any relevant assistance here; the only plausible way of getting such information would be to see other species that were destroyed by disease, and even with improvements in telescope resolution of many orders of magnitude this is not doable.

Nuclear exchange:

For reasons similar to those in the biological threats category, astronomy is unlikely to help us determine whether nuclear war is a substantial part of the Filter. It is possible that more advanced telescopes could detect an extremely large nuclear detonation if it occurred in a very nearby star system. Next-generation telescopes may be able to detect a nearby planet's advanced civilization purely from the light it gives off, and a sufficiently large detonation would be of a comparable light level; however, such a device would have to be multiple orders of magnitude larger than the largest current nuclear devices. Moreover, if a telescope were not looking at exactly the right moment, it would see nothing at all, and the probability that another civilization wipes itself out at just the instant we are looking is vanishingly small.

Unexpected physics: 

This category is one of the most difficult to discuss because it is so open-ended. The most common examples people point to involve high-energy physics. Aside from theoretical considerations, cosmic rays of very high energies continually hit the upper atmosphere, and these particles frequently carry energies multiple orders of magnitude higher than those in our accelerators. Thus high-energy events seem unlikely to be a cause of serious filtration unless and until humans develop particle accelerators whose energies are orders of magnitude higher than those of most cosmic rays. Cosmic rays with energies beyond what is known as the GZK limit are rare. We have observed occasional particles above the GZK limit, but they are rare enough that we cannot rule out a risk from many collisions involving such high-energy particles in a small region. Since our best accelerators are nowhere near the GZK limit, this is not an immediate problem.
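For a sense of the gap, here is a rough comparison using commonly quoted order-of-magnitude figures:

```python
# Rough particle-energy comparison (order-of-magnitude figures only).
LHC_ENERGY_EV       = 1.3e13   # ~13 TeV centre-of-mass collision energy
GZK_CUTOFF_EV       = 5e19     # approximate GZK suppression energy
HIGHEST_OBSERVED_EV = 3e20     # "Oh-My-God"-class cosmic-ray events

print(f"GZK cutoff / LHC energy:       ~{GZK_CUTOFF_EV / LHC_ENERGY_EV:.0e}")
print(f"highest observed / LHC energy: ~{HIGHEST_OBSERVED_EV / LHC_ENERGY_EV:.0e}")

# Our accelerators sit some six to seven orders of magnitude below the most
# energetic cosmic rays in lab-frame particle energy.  Strictly, cosmic-ray
# hits on the atmosphere are fixed-target collisions, so their centre-of-mass
# energies are lower than these figures, but still well above the LHC's.
```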

If there is an argument that we should worry about unexpected physics, it is on the very low energy end. In particular, humans have managed to make objects substantially colder than the cosmic background temperature of about 2.7 K, with temperatures on the order of 10^-9 K. There is an argument that, because nature provides no prior examples of this, the chance that something can go badly wrong should be estimated as higher than one might otherwise think (see here). While this particular class of scenario seems unlikely, it does illustrate that it may not be obvious which situations could bring unexpected, novel physics into play. Moreover, while the flashy, expensive particle accelerators get the attention, they may not be a serious source of danger compared to other physics experiments.

Three of the more plausible catastrophic unexpected-physics scenarios involving high-energy events are false vacuum collapse, black hole formation, and the formation of strange matter which is more stable than regular matter.

False vacuum collapse would occur if our universe is not in its true lowest energy state and some event causes it to transition to the true lowest state (or just a lower one). Such an event would almost certainly be fatal for all life. False vacuum collapse cannot be guarded against by astronomical observation, since once initiated it would expand at the speed of light. Note that the indiscriminately destructive nature of a false vacuum collapse makes it an unlikely filter: if collapses were easy to trigger, we would not expect to see much life this late in the universe's lifespan, since there would have been a large number of prior opportunities for a collapse. Essentially, we would not expect to find ourselves this late in a universe's history if that universe could easily undergo a false vacuum collapse. While false vacuum collapse and similar problems raise issues of observer selection effects, careful work has been done to estimate their probability.

People have mentioned the idea of an event similar to a false vacuum collapse but which propagates slower than the speed of light; Greg Egan used one as a major premise in his novel "Schild's Ladder." I am not aware of any reason to believe such events are at all plausible; the primary motivation seems to be the interesting literary scenarios which arise rather than any scientific consideration. If such an event can occur, it is possible we could detect it using astronomical methods: in particular, if the wave-front is fast enough to reach the nearest star or stars, we might notice odd behavior by that star or group of stars. We can be confident that no such event has a speed much beyond a few hundredths of the speed of light, or we would already have noticed galaxies behaving abnormally. There is only a very narrow range where such an expansion could be quick enough to devastate the planet it arises on but too slow to reach the parent star in a reasonable amount of time. For example, the distance from the Earth to the Sun is on the order of 10,000 times the diameter of the Earth, so any event which expanded to destroy the Earth would reach the Sun in only about 10,000 times as long as it took to engulf the Earth. Thus, to destroy the home planet but not reach the parent star for a long time, the expansion would need to be extremely slow.
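A quick check of the geometry invoked above, using standard round figures:

```python
EARTH_DIAMETER_KM = 1.27e4   # ~12,700 km
EARTH_SUN_DIST_KM = 1.50e8   # ~1 astronomical unit

ratio = EARTH_SUN_DIST_KM / EARTH_DIAMETER_KM
print(f"Earth-Sun distance / Earth diameter ~ {ratio:,.0f}")   # roughly 12,000

# If a sub-light "collapse front" takes time t to engulf the Earth, it reaches
# the Sun after only ~10,000 * t.  A front slow enough to leave the parent star
# untouched for millennia would have to creep across its home planet over
# months or years, i.e. at a tiny fraction of the speed of light.
```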

The creation of artificial black holes is unlikely to be a substantial part of the filter: we expect that small black holes will quickly pop out of existence due to Hawking radiation. Even if such a black hole persisted, it would likely fall to the center of the planet and eat matter very slowly, on a timescale that does not make it a serious threat. It is, however, possible that small black holes do not evaporate; the fact that we have not detected the evaporation of any primordial black holes is weak evidence that the behavior of small black holes is not well understood. It is also possible that such a hole would eat much faster than we expect, though this does not seem likely. If this were a major part of the filter, better telescopes should be able to detect it by finding very dark objects with the approximate mass and orbit of habitable planets. We might also be able to detect such black holes through other observations, such as their gamma-ray or radio signatures.
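As a rough illustration, here is the textbook Hawking evaporation estimate, t ≈ 5120·π·G²·M³/(ħc⁴), applied to a few masses. The formula neglects particle-species corrections and assumes Hawking radiation behaves as predicted, and the 10 TeV example mass is purely hypothetical:

```python
import math

G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.055e-34   # reduced Planck constant, J s
C    = 2.998e8     # speed of light, m/s
YEAR = 3.156e7     # seconds per year

def evaporation_time_s(mass_kg):
    """Textbook Hawking evaporation estimate: t = 5120*pi*G^2*M^3 / (hbar*c^4)."""
    return 5120.0 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4)

# A hypothetical accelerator-made hole of ~10 TeV (~1.8e-23 kg) is gone instantly:
print(f"10 TeV hole:  ~{evaporation_time_s(1.8e-23):.1e} s")
# A primordial hole of ~2e11 kg evaporates over roughly the age of the universe:
print(f"2e11 kg hole: ~{evaporation_time_s(2e11) / YEAR:.1e} years")
# A mountain-mass hole (~1e12 kg) lasts far longer still:
print(f"1e12 kg hole: ~{evaporation_time_s(1e12) / YEAR:.1e} years")
```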

The conversion of regular matter into strange matter, unlike a false vacuum collapse or similar event, might be naturally limited to the planet where the conversion started. In that case, the only hope for observation would be to notice planets composed of strange matter through changes in the behavior of their light. Without actual samples of strange matter this may be very difficult, beyond simply flagging planets that look abnormal. Without substantially better telescopes and a good idea of the normal range for rocky planets, this would be tough. On the other hand, neutron stars which have been converted into strange matter may be more easily detectable.

Global warming and related damage to biosphere: 

Astronomy is unlikely to help here. It is possible that climates are more sensitive than we realize and that comparatively small changes can result in Venus-like runaway situations. This seems unlikely, given the variation over human history and the fact that current geological models strongly suggest that any substantial perturbation would eventually correct itself. But if we saw many Venus-like planets in the middle of their habitable zones, this would be a reason to worry. Note that this would require an ability to analyze the atmospheres of planets well beyond current capability. Even if it is possible to Venus-ify a planet, it is not clear that the Venusification would last long, so there may be very few planets in this state at any given time. Since stars become brighter as they age, high greenhouse gas levels have more of an impact on climate when the parent star is old. If civilizations are more likely to arise late in their home star's lifespan, global warming becomes a more plausible filter, but even given such considerations it does not seem sufficient as a filter. It is also possible that the filter is not global warming by itself but general disruption of the biosphere, including (for some species) global warming, reduction in species diversity, and other problems. There is some evidence that human behavior is collectively causing enough damage to leave an unstable biosphere.
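To make the brightening effect concrete, here is a sketch using one standard approximation for the Sun's main-sequence luminosity (Gough 1981). The formula is itself an approximation, and extrapolating it past the present day is rough:

```python
def solar_luminosity(t_gyr, t_now_gyr=4.57):
    """Approximate solar luminosity relative to today (Gough 1981):
    L(t)/L_now = 1 / (1 + 0.4 * (1 - t/t_now))."""
    return 1.0 / (1.0 + 0.4 * (1.0 - t_gyr / t_now_gyr))

for t in (0.6, 2.3, 4.57, 5.57, 6.57):
    print(f"solar age {t:4.2f} Gyr: L ~ {solar_luminosity(t):.2f} of today's value")

# The young Sun was roughly 25-30% dimmer than today and the Sun keeps
# brightening, so a fixed greenhouse-gas load warms a planet more as its
# star ages -- the effect described in the paragraph above.
```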

A change in overall planetary temperature of 10°C would likely be enough to collapse civilization without leaving any signal observable to a telescope. Similarly, substantial disruption of a biosphere may be very unlikely to be detected.

Artificial intelligence:

AI is a complicated existential risk from the standpoint of the Great Filter. Considering just the Fermi paradox, AI is not likely to be the Great Filter. The essential problem has been raised independently by a few people (see for example Katja Grace's remark here and my blog here). The central issue is that if an AI takes over, it is likely to attempt to control all resources in its future light-cone; but if the AI spreads out at a substantial fraction of the speed of light, we would notice the result. The argument has been made that we would not see such an AI if it expanded its radius of control at very close to the speed of light, but this requires expansion at roughly 99% of the speed of light or more, and it is highly questionable that such velocities are practically achievable, given collisions with the interstellar medium and the need to slow down to use the resources in a given star system. Another objection is that an AI might expand at a large fraction of light speed but do so stealthily. It is not likely that all AIs would favor stealth over speed. Moreover, this raises the question of what happens when multiple slowly expanding, stealthy AIs run into each other; it is likely that such events would have results catastrophic enough to be visible even with comparatively primitive telescopes.
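Here is a minimal sketch of the geometry behind that "99% of the speed of light" figure, under simplifying assumptions: a single expansion front in flat, static space (cosmological expansion ignored) and a purely illustrative starting distance:

```python
C = 1.0  # speed of light; distances in millions of light years, times in Myr

def warning_window_myr(distance_mly, v_frac_of_c):
    """Time between when light from the start of an expansion first reaches us
    and when the expansion front itself arrives, for a front launched at
    distance `distance_mly` and expanding at speed v."""
    return distance_mly / v_frac_of_c - distance_mly / C

D = 100.0  # illustrative: an expansion that began 100 million light years away
for v in (0.5, 0.9, 0.99, 0.999):
    print(f"v = {v:5.3f} c: visible for ~{warning_window_myr(D, v):7.2f} Myr before it arrives")

# At 0.5c the expanding sphere would be observable in our sky for ~100 Myr
# before reaching us; at 0.999c the window shrinks to ~0.1 Myr.  That is why
# only expansion at ~99% of c or more makes such an AI easy to miss.
```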

While these astronomical considerations make AI unlikely to be the Great Filter, it is important to note that if the Great Filter is largely in our past, these considerations do not apply. Thus any discovery which pushes more of the filter into the past makes AI a larger fraction of total expected existential risk, since the absence of observable AI becomes much weaker evidence against strong AI if there are no major civilizations out there to produce such intelligence explosions.

Note also that AI as a risk cannot be discounted if one assigns a high probability to existential risk based on non-Fermi concerns, such as the Doomsday Argument.

Resource depletion:

Astronomy is unlikely to provide direct help here, for reasons similar to the problems with nuclear exchange, biological threats, and global warming. This connects to the problem of civilization bootstrapping: to reach our current technological level, we used a large number of non-renewable resources, especially energy sources. On the other hand, large amounts of difficult-to-mine and difficult-to-refine resources (especially aluminum and titanium) will be much more accessible to a future civilization. While there remains a large amount of accessible fossil fuel, the technology required to obtain the deeper sources is substantially more advanced than that needed for the easy-to-access oil and coal, and the energy return on investment (the ratio of energy obtained to energy expended in getting it) is lower. Nick Bostrom has raised the possibility that the depletion of easy-to-access resources may contribute to civilization-collapsing problems that, while not full-scale existential risks by themselves, prevent civilizations from recovering. Others have begun to investigate the problem of rebuilding without fossil fuels, such as here.

Resource depletion is unlikely to be the Great Filter, because small changes to human behavior in the 1970s would have drastically reduced the current resource problems. Resource depletion may nevertheless contribute to existential threat if it leads to societal collapse or global nuclear exchange, or motivates riskier experimentation. It may also combine with other risks, such as global warming, where the combined problems may be much greater than either individually. There is, however, a risk that large-scale use of resources for astronomical research would itself contribute to the resource depletion problem.

Nanotechnology: 

Nanotechnology disasters are one of the situations where astronomical observation could plausibly be useful. In particular, planets which are in the habitable zone but have highly artificial and inhospitable atmospheres and surfaces could plausibly be visible; for example, if a planet's surface were transformed into diamond, telescopes not much more advanced than our current ones could detect that surface. It should also be noted that many nanotechnologists now consider the classic "grey goo" scenario to be highly unlikely (see, for example, Chris Phoenix's comment here). However, catastrophic replicator events that cause serious damage to the biosphere without grey-gooing everything are a possibility, and it is unclear whether we would detect such events.

Aliens:

Hostile aliens are a common explanation of the Great Filter when people first learn about it. However, this idea comes more from science fiction than from any plausible argument. In particular, if a single hostile alien civilization were wiping out or drastically curtailing other civilizations, one would still expect that civilization to make use of the available resources after a long enough time. One could posit aliens who also have a religious or ideological commitment to leaving the universe looking natural, but this is an unlikely, speculative hypothesis which also requires them to dominate a massive region: not just a handful of galaxies but very many.

Note also that astronomical observations might be able to detect the results of extremely powerful weapons, but any conclusions would be highly speculative. Moreover, it is not clear that knowing about such a threat would allow us to substantially mitigate it.

Other/Unknown:

Unknown risks are by their nature very difficult to estimate. However, there is an argument that we should expect the Great Filter to be an unknown risk: something so unexpected that no civilization gets sufficient warning. This is one of the easiest ways for the filter to be truly difficult to prevent. In that context, any information we can possibly get about other civilizations and what happened to them would be a major leg up.
 

Conclusions 


Astronomical observation has the potential to give us data about the Great Filter, but many potential filters will leave no observable astronomical evidence unless one's astronomical capability is so advanced that one has likely already passed all major filters. Therefore, one potential strategy for passing the Great Filter is to drastically increase our astronomical capabilities, to the point where it would be highly unlikely that a pre-Filter civilization would have access to such observations. Together with our comparatively late arrival, this might allow us to detect civilizations that did not survive the Great Filter and see what they did wrong.

Unfortunately, it is not clear how cost-effective this sort of investment in astronomy would be compared to other uses for existential risk mitigation. It may be more useful to focus on moving resources within astronomy into those areas most relevant to understanding the Great Filter.
Comments

Simulation and computer graphics expert here. I have some serious issues with your characterization of the computational complexity of advanced simulations.

The simulation hypothesis runs into serious problems, both in general and as an explanation of the Great Filter in particular. First, ...

First, everything in any practical simulation is always and everywhere an approximation. An exact method is an enormously stupid idea - a huge waste of resources. Simulations have many uses, but in general simulations are a special case of general inference, where we have a simple model of the system dynamics (the physics) combined with some sparse, approximate, noisy, and partial knowledge of the system's past trajectory, and we are interested in modeling some incredibly tiny restricted subset of the future trajectory for some sparse subset of system variables.

For example in computer graphics, we only simulate light paths that will actually reach the camera, rather than all photon paths, and we can use advanced hierarchical approximations of cones/packets of related photons rather than individual photons. Using these techniques, we are already getting close - with just 2015 GPU technology - to real time photorealistic simulation of light transport for fully voxelized scenes using sparse multiscale approximations with octrees.

The optimal approximation techniques use hierarchical multiscale expansion of space-time combined with bidirectional inference (which bidirectional path tracing is a special case of). The optimal techniques only simulate down to the quantum level when a simulated scientist/observer actually does a quantum experiment. In an optimal simulated world, stuff literally only exists to the extent observers are observing or thinking about it.

The limits of optimal approximation appear to be linear in observer complexity - using output sensitive algorithms. Thus to first approximation, the resources required to simulate a world with a single human-intelligence observer is just close to the complexity of simulating the observer's brain.

Furthermore, we have strong reasons to suspect that there are numerous ways to compress brain circuitry, reuse subcomputations, and otherwise optimize simulated neural circuits such that the optimal simulation of something like a human brain is far more efficient than simulating every synapse - but that doesn't even really matter because the amount of computation required to simulate 10 billion human brains at the synapse level is tiny compared to realistic projections of the computational capabilities of future superintelligent civilization.

The upshot of these results is that one cannot make a detailed simulation of an object without using at least as many resources as the object itself.

Ultra-detailed accurate simulations are only high value for quantum level phenomena. Once you have a good model of the quantum scale, you can percolate those results up to improve your nano-scale models, and then your micro-scale models, and then your milli-meter scale models, and so on.

There may be potential ways of getting around this: for example, consider a simulator interested primarily in what life on Earth is doing. The simulation would not need to do a detailed simulation of the inside of planet Earth and other large bodies in the solar system. However, even then, the resources involved would be very large.

We already can simulate entire planets using the tiny resources of today's machines. I myself have created several SOTA real-time planetary renderers back in the day. Using multiscale approximation the size of the simulated universe is completely irrelevant. This is so hard for some people to understand because they tend to think of simulations on regular linear grids rather than simulations on irregular domains such as octrees or on regular but nonlinear adapted grids or on irregular sparse sets or combinations thereof. If you haven't really studied the simulation related branches of comp sci, it is incredibly difficult to even remotely estimate the limits of what is possible.

First, everything in any practical simulation is always and everywhere an approximation. An exact method is an enormously stupid idea - a huge waste of resources.

We haven't seen anything like evidence that our laws of physics are only approximations at all. If we're in a simulation, this implies that with high probability either a) the laws of physics in the parent universe are not our own laws of physics (in which case the entire idea of ancestor simulations fails) or b) they are engaging in an extremely detailed simulation.

The optimal techniques only simulate down to the quantum level when a simulated scientist/observer actually does a quantum experiment. In an optimal simulated world, stuff literally only exists to the extent observers are observing or thinking about it.

And our simulating entities would be able to tell that someone was doing a deliberate experiment how?

The limits of optimal approximation appear to be linear in observer complexity - using output sensitive algorithms.

I'm not sure what you mean by this. Can you expand?

The upshot of these results is that one cannot make a detailed simulation of an object without using at least as many resources as the object itself.

Ultra-detailed accurate simulations are only high value for quantum level phenomena. Once you have a good model of the quantum scale, you can percolate those results up to improve your nano-scale models, and then your micro-scale models, and then your milli-meter scale models, and so on.

Only up to a point. It is going to be very difficult, for example, to percolate simulations up from the micro to the millimeter scale for many issues, and the less detail in a simulation, the more likely it is that someone notices a statistical artifact in weakly simulated data.

We already can simulate entire planets using the tiny resources of today's machines. I myself have created several SOTA real-time planetary renderers back in the day.

Again, the statistical artifact problem comes up, especially when there are extremely subtle issues going on, such as the different (potential) behavior of neutrinos.

Your basic point that I may be overestimating the difficulty of simulations may be valid; since simulations don't explain the Great Filter for other reasons I discussed, this causes an update in the direction of us being in a simulation but doesn't really help explain the Great Filter much at all.

We haven't seen anything like evidence that our laws of physics are only approximations at all.

And we shouldn't expect to, as that is an inherent contradiction. Any approximation crappy enough that we can detect it doesn't work as a simulation - it diverges vastly from reality.

Maybe we live in a simulation, maybe not, but this is not something that we can detect. We can never prove we are in a simulation or not.

However, we can design a clever experiment that would at least prove that it is rather likely that we live in a simulation: we can create our own simulations populated with conscious observers.

On that note - go back and look at the earliest video games, like Pong from the early 1970s, and compare to the state of the art four decades later. Now project that into the future. I'm guessing that we are a little more than half way towards Matrix style simulations which essentially prove the simulation argument (to the limited extent possible).

If we're in a simulation, this implies that with high probability either a) the laws of physics in the parent universe are not our own laws of physics (in which case the entire idea of ancestor simulations fails) or

Depends what you mean by 'laws of physics'. If we are in a simulation, then the code that creates our observable universe is a clever efficient approximation of some simpler (but vastly less efficient) code - the traditional 'laws of physics'.

Of course many simulations could be of very different physics, but those are less likely to contain us. Most of the instrumental reasons to create simulations require close approximations. If you imagine the space of all physics for the universe above, it has a sharp peak around physics close to our own.

b) they are engaging in an extremely detailed simulation.

Detail is always observer relevant. We only observe a measly few tens of millions of bits per second, which is nothing for a future superintelligence.

The limits of optimal approximation appear to be linear in observer complexity - using output sensitive algorithms.

I'm not sure what you mean by this. Can you expand?

Consider simulating a universe of size N (in mass, bits, whatever) which contains M observers of complexity C each, for T simulated time units.

Using a naive regular grid algorithm (of the type most people think of), simulation requires O(N) space and O(NT) time.

Using the hypothetical optimal output sensitive approximation algorithm, simulation requires ~O(MC) space and ~O(MCT) time. In other words the size of the universe is irrelevant and the simulation complexity is only output dependent - focused on computing only the observers and their observations.

We already can simulate entire planets using the tiny resources of today's machines. I myself have created several SOTA real-time planetary renderers back in the day.

Again, the statistical artifact problem comes up, especially when there are extremely subtle issues going on, such as the different (potential) behavior of neutrinos.

What is a neutrino such that you would presume to notice it? The simulation required to contain you - and indeed has contained you your entire life - has probably never had to instantiate a single neutrino (at least not for you in particular, although it perhaps has instantiated some now and then inside accelerators and other such equipment).

Your basic point that I may be overestimating the difficulty of simulations may be valid; since simulations don't explain the Great Filter for other reasons I discussed, this causes an update in the direction of us being in a simulation but doesn't really help explain the Great Filter much at all.

I agree that the sim arg doesn't explain the Great Filter, but then again I'm not convinced there even is a filter. Regardless, the sim arg - if true - does significantly affect ET considerations, but not in a simple way.

Lots of aliens with lots of reasons to produce sims certainly gains strength, but models in which we are alone can also still produce lots of sims, and so on.

Using the hypothetical optimal output sensitive approximation algorithm, simulation requires ~O(MC) space and ~O(MCT) time.

For any NP problem of size n, imagine a universe of size N = O(2^n), in which computers try to verify all possible solutions in parallel (using time T/2 = O(n^p)) and then pass the first verified solution along to a single (M=1) observer (of complexity C = O(n^p)) who then repeats that verification (using time T/2 = O(n^p)).

Then simulate the observations, using your optimal (O(MCT) = O(n^{2p})) algorithm. Voila! You have the answer to your NP problem, and you obtained it with costs that were polynomial in time and space, so the problem was in P. Therefore NP is in P, so P=NP.

Dibs on the Millennium Prize?

For any NP problem of size n, imagine a universe of size N = O(2^n), in which computers try to verify all possible solutions in parallel (using time T/2 = O(n^p)) and then pass the first verified solution along to a single (M=1) observer (of complexity C = O(n^p)) who then repeats that verification (using time T/2 = O(n^p)).

I never claimed "hypothetical optimal output sensitive approximation algorithms" are capable of universal emulation of any environment/turing machine using constant resources. The use of the term approximation should have informed you of that.

Computers are like brains and unlike simpler natural phenomena in the sense that they do not necessarily have very fast approximations at all scales (due to complexity of irreversibility), and the most efficient inference of one agent's observations could require forward simulation of the recent history of other agents/computers in the system.

Today the total computational complexity of all computers in existence is not vastly larger than the total brain complexity, so it is still ~O(MCT).

Also, we should keep in mind that the simulator has direct access to our mental states.

Imagine the year is 2100 and you have access to a supercomputer that has ridiculous amount of computation, say 10^30 flops, or whatever. In theory you could use that machine to solve some NP problem - verifying the solution yourself, and thus proving to yourself that you don't live in a simulation which uses less than 10^30 flops.

Of course, as the specific computation you performed presumably had no value to the simulator, the simulation could simply slightly override neural states in your mind, such that the specific input parameters you chose were instead changed to match a previous cached input/output pair.

We haven't seen anything like evidence that our laws of physics are only approximations at all. If we're in a simulation, this implies that with high probability either a) the laws of physics in the parent universe are not our own laws of physics (in which case the entire idea of ancestor simulations fails) or b) they are engaging in an extremely detailed simulation.

It depends on what you consider a simulation. Game of Life-like cell automaton simulations are interesting in terms of having a small number of initial rules and being mathematically consistent. However, using them for large-scale project (for example, a whole planet populated with intelligent beings) would be really expensive in terms of computer power required. If the hypothetical simulators' resources are in any way limited then for purely economic reasons the majority of emulations would be of the other kind - the ones where stuff is approximated and all kinds of shortcuts are taken.

And our simulating entities would be able to tell that someone was doing a deliberate experiment how?

Very easily - because a scientist doing an experiment talks about doing it. If the simulated beings are trying to run LHC, one can emulate the beams, the detectors, the whole accelerator down to atoms - or one can generate a collision event profile for a given detector, stick a tracing program on the scientist that waits for the moment when the scientist says "Ah... here is our data coming up" and then display the distribution on the screen in front of the scientist. The second method is quite a few orders of magnitude cheaper in terms of computer power required, and the scientist in question sees the same picture in both cases.

If we're in a simulation, this implies that with high probability either a) the laws of physics in the parent universe are not our own laws of physics (in which case the entire idea of ancestor simulations fails)

It doesn't have to be a simulation of ancestors; we may be an example of any civilization, life, etc. While our laws of physics seem complex and weird (given the macroscopic effects they generate), they may actually be very primitive in comparison to the parent universe's physics. We cannot possibly estimate the computational power of parent-universe computers.

Yes, but at that point this becomes a completely unfalsifiable claim that we cannot even evaluate, and it is even less relevant to Filtration concerns.

Also, a quick note: Cell Bio Guy provided the estimate that supernovas are not a substantial part of the Filter by noting the volume of the galactic disc (~2.6*10^13 cubic light years), a sterilization radius of around 10 light years (about 3 parsecs), and a rate of one supernova every 50 years. This gives a sterilized volume of around 100 cubic light years per year and odds of sterilization of a given star system per year of about one in three or four trillion.

To expand on their reasoning: one can argue that the supernova sterilization distance is too small; increasing it by a factor of five as an upper bound multiplies the sterilized volume by a factor of about 125 (five cubed), which still leaves sterilization of any given star system a very rare event in any given year. One can also argue that much of the galactic center is too violent to host habitable life, which might cut the available volume by about 20%; even assuming (contrary to fact) that none of the supernovas occur in that region, sterilization of a given system remains a once-in-billions-of-years event, so this seems strongly not to be the Filter.
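For transparency, here is the base arithmetic reproduced directly from the figures quoted above; the result is sensitive to the assumed disc volume and kill radius:

```python
from math import pi

GALACTIC_DISC_VOLUME_LY3 = 2.6e13   # cubic light years (figure quoted above)
STERILIZATION_RADIUS_LY  = 10.0     # ~3 parsecs
SUPERNOVA_INTERVAL_YR    = 50.0     # roughly one galactic supernova per 50 years

sterilized_per_event = (4.0 / 3.0) * pi * STERILIZATION_RADIUS_LY ** 3
sterilized_per_year  = sterilized_per_event / SUPERNOVA_INTERVAL_YR
odds_per_system_year = sterilized_per_year / GALACTIC_DISC_VOLUME_LY3

print(f"volume sterilized per event: ~{sterilized_per_event:.0f} ly^3")
print(f"volume sterilized per year:  ~{sterilized_per_year:.0f} ly^3")
print(f"chance a given system is sterilized in a year: ~1 in {1 / odds_per_system_year:.1e}")

# With these exact inputs the odds come out to roughly 1 in 3e11 per system per
# year; the "three or four trillion" figure above presumably uses somewhat
# different inputs.  Either way the per-year probability is tiny.
```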


There's some reason to think that the core is unsuitable for other reasons. There are a few lines of evidence (see link below for one I could find fast) that the core of our galaxy undergoes periodic starbursts every few hundred megayears in which it has a brief episode of very concentrated rapid star formation, followed by a period of a few tens of megayears in which the supernova rate in the core temporarily exceeds the steady state galactic supernova rate by a factor of 50+. The high rate and small volume (less than 1/500 the galactic volume) would lead to quite the occasional local sterilization event if this does indeed happen on a regular basis.

http://www.nasa.gov/mission_pages/GLAST/news/new-structure.html

For me, one of the strongest arguments against the simulation hypothesis is one I haven't seen others make yet. I'm curious what people here think of it.

My problem with the idea of us living in a simulation is that it would be breathtakingly cruel. If we live in a simulation, that means that all the suffering in the world is there on purpose. Our descendants in the far future are purposefully subjecting conscious entities to the worst forms of torture, for their own entertainment. I can't imagine an advanced humanity that would allow something so blatantly immoral.

Of course, this problem largely goes away if you posit that the simulation contains only a small number of conscious entities (possibly one), and that all other humans just exist as background NPCs whose consciousness is faked. Presumably all the really bad stuff would only happen to NPCs. That would also significantly reduce the computational power required for a simulation. If I'm the only real person in the world, only things I'm looking at directly would have to be simulated in any sort of detail. Entire continents could be largely fictional.

That explanation is a bit too solipsistic for my taste though. It also raises the question of why I'm not a billionaire playboy. If the entire world is just an advanced computer game in which I'm the player, why is my life so ordinary?


We could be living in an ancestor-simulation. Maybe our descendants are really, really devoted to realistic simulations. (I'm not sure how much I'd like a future in which such descendants existed, but it's a definite possibility.)

My problem with the idea of us living in a simulation is that it would be breathtakingly cruel. If we live in a simulation, that means that all the suffering in the world is there on purpose. Our descendants in the far future are purposefully subjecting conscious entities to the worst forms of torture, for their own entertainment.

There is a rather obvious solution/answer: the purpose of the simulation is to resurrect the dead. Any recreation of historical suffering is thus presumably more than compensated for by the immense reward of an actual afterlife.

We could even have an opt out clause in the form of suicide - if you take your own life that presumably is some indicator that you prefer non-existence to existence. On the other hand, this argument really only works if the person committing suicide was fully aware of the facts (ie that the afterlife is certain) and of sound mind.

If we live in a simulation, that means that all the suffering in the world is there on purpose.

This is commonly known as theodicy.

Well yes. I wasn't claiming that "why is there suffering" is a new question. Just that I haven't seen it applied to the simulation hypothesis before (if it has been discussed before, I'd be interested in links).

And religion can't really answer this question. All they can do is dodge it with non-answers like "God's ways are unknowable". Non-answers like that become even more unsatisfactory when you replace 'God' with 'future humans'.

Just that I haven't seen it applied to the simulation hypothesis before

Well, the simulation hypothesis is essentially equivalent to saying our world was made by God the Creator so a lot of standard theology is applicable X-)

And religion can't really answer this question.

What, do you think, can really answer this question?

There is a thing which really bothers me about the "Great Filter" idea/terminology: it implies that it's a single event (which is either in the past or the future).

My view on the "Fermi paradox" is not that there is a single filter cutting ~10 orders of magnitude (ie, from ~10 billion planets in our galaxy which could have had life down to just one), but rather a combination of many small filters, each taking its cut.

To have intelligent space-faring life, we need a lot of things to happen without any disaster (nearby supernova, too big an asteroid, ...) disrupting them too much. It's more like a "game of the goose", where the optimization process steadily advances but events will either accelerate it or make it go backwards, and you need to reach the "win" cell before time runs out (ie, your star becoming too hot, as the Sun will in less than a billion years) or you land on a "you lose" cell (a nearby supernova blasting away your atmosphere, or thermonuclear warfare).

I don't see any reason to believe there is a single "Great Filter", instead of a much more continuous process with many intermediate filters you have to pass through.

My view on the "Fermi paradox" is not that there is a single filter cutting ~10 orders of magnitude (ie, from ~10 billion planets in our galaxy which could have had life down to just one), but rather a combination of many small filters, each taking its cut.

I don't think that the Great Filter implies only one filter, but I think that if you're multiplying several numbers together and they come out to at least 10^10, it's likely that at least one of the numbers is big. (And if one of the numbers is big, that makes it less necessary for the other numbers to be big.)

Put another way, it seems more likely to me that there is one component filter of size 10^6 than two component filters each of size 10^3, both of which seem much more likely than that there are 20 component filters of size 2.

I don't see why it's likely one of the numbers has to be big. There really are lots of complicated steps you need to cross to go from inert matter to space-faring civilizations; it's very easy to point to a dozen such steps that could fail in various ways or just take too long, and there are many disasters that can happen to blow everything down.

If you have a long ridge to climb in a limited time and most people fail to do it, it's not very likely that there is one specific part of it which is very hard; unless you have actual data showing that most people fail at the same place, it's more likely that there are lots of moderately difficult parts and few people succeed at all of them in time.

Or if you have a complicated project that takes 4x longer than expected, it's much less likely that there was a single big difficulty you didn't foresee than that many small-to-moderate difficulties you didn't foresee stacked on top of each other. The planning fallacy isn't usually due to black swans, but to accumulating smaller factors. It's the same here.

I don't see why it's likely one of the numbers has to be big.

This is a statement about my priors on the number of filters and the size of a filter, and I'm not sure I can shortly communicate why I have that prior. Maybe it's a statement on conceptual clumpiness.

If you have a long ridge to climb in a limited time and most people fail to do it, it's not very likely that there is one specific part of it which is very hard; unless you have actual data showing that most people fail at the same place, it's more likely that there are lots of moderately difficult parts and few people succeed at all of them in time.

To me, your claim is a statement that the number of planets at each step follows a fairly smooth exponential, and a specific hard part means you would have a smooth exponential before a huge decrease, then another smooth exponential. But we don't know what the distribution of life on planets looks like, so we can't settle that argument.

Similarly, we know about the planning fallacy because we make many plans and complete many projects--if there was only one project ever that completed, we probably could not tell in retrospect which parts were easy and which were hard, because we must have gotten lucky even on the "hard" components. Hanson wrote a paper on this in 1996 that doesn't appear to be on his website anymore, but it's a straightforward integration given exponential distributions over time to completion, with 'hardness' determining the rate parameter, and conditioning on early success.
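A rough Monte Carlo version of that calculation, as a sketch (the step "hardness" values and the deadline are made up):

```python
# If several steps each take an exponentially distributed time and we condition
# on all of them finishing within a short deadline, the conditional durations of
# the genuinely hard steps come out nearly identical despite very different mean
# times, so a single successful history says little about which steps were hard.
import random

mean_times = [0.2, 5.0, 50.0]   # one "easy" step and two "hard" steps (arbitrary units)
deadline = 1.0                  # total time available
trials, kept = 500_000, []

for _ in range(trials):
    durations = [random.expovariate(1.0 / m) for m in mean_times]
    if sum(durations) <= deadline:
        kept.append(durations)

print(f"success rate: {len(kept) / trials:.2e}")
for i, m in enumerate(mean_times):
    avg = sum(d[i] for d in kept) / len(kept)
    print(f"step {i} (mean time {m:>5}): conditional average duration {avg:.3f}")
```

The two hard steps (mean times 5 and 50) end up with nearly identical conditional durations, which is the point about not being able to read off hardness from one lucky history.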

I would instead look at the various steps in the filter, and generalize the parameters of those steps, which then generate universes with various levels of noise / age at first space-colonizing civilization. If you have fat-tailed priors on those parameters, I think you'll get that it's more likely for there to be one dominant factor in the filter. Maybe I should make the effort to formalize that argument.
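As a rough sketch of that formalization (the heavy-tailed prior on sub-filter sizes and the number of sub-filters are illustrative assumptions, not anything stated in the thread):

```python
# Draw the size of each sub-filter, in orders of magnitude, from a fat-tailed
# (lognormal) prior and see how much of the total filtering the single largest
# sub-filter accounts for.
import random

n_subfilters = 10
trials = 20_000
shares = []

for _ in range(trials):
    oom = [random.lognormvariate(0.0, 3.0) for _ in range(n_subfilters)]
    shares.append(max(oom) / sum(oom))  # largest sub-filter's share of the total

shares.sort()
print(f"median share of the total filter (in orders of magnitude) "
      f"contributed by the largest sub-filter: {shares[trials // 2]:.2f}")
```

The heavier the tail, the more the single largest term tends to dominate; with a thin-tailed prior the shares come out much more even.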

Yep; for some reason the links I found all point at a .ps file that no longer exists.

I would instead look at the various steps in the filter, and generalize the parameters of those steps, which then generate universes with various levels of noise / age at first space-colonizing civilization. If you have fat-tailed priors on those parameters, I think you'll get that it's more likely for there to be one dominant factor in the filter. Maybe I should take the effort to formalize that argument.

Another way of thinking about the filter/steps is as a continuous developmental trajectory. We have a reasonably good idea of one sample trajectory - the history of our solar system - and we want to determine whether this particular civilization-bearing subspace we are in is more like the main sequence or more like a tightrope.

If the development stages have lots of conjunctive/multiplicative dependencies (for example: early life requires a terrestrial planet in the habitable zone with the right settings for various parameters), then a lognormal distribution might be a good fit. This seems reasonable, and the lognormal of course is extremely heavy-tailed.
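A tiny illustration of the multiplicative-to-lognormal point (the per-stage factor distribution is arbitrary):

```python
# The log of a product of independent positive factors is a sum of logs, so the
# product tends toward a lognormal, which is skewed with a heavy right tail
# (mean well above median).
import random

trials, n_factors = 100_000, 12
products = []
for _ in range(trials):
    p = 1.0
    for _ in range(n_factors):
        p *= random.uniform(0.2, 2.0)  # one multiplicative "requirement" per stage
    products.append(p)

products.sort()
print(f"mean product:   {sum(products) / trials:.3f}")
print(f"median product: {products[trials // 2]:.3f}  (mean >> median: heavy right tail)")
```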

On the other hand, one problem with this is that seeing a single trajectory example doesn't give one much evidence for any disjunctive/additive components in the distribution. These would be any independent alternate developmental pathways which could bypass the specific developmental chokepoints we see in our single example history.

[-][anonymous]50

Unfortunately, there is very little money to actually deal with such problems if they arise. It might be possible to have a few wealthy individuals agree to set up accounts in escrow which would be used if an asteroid or similar threat arose.

I think you underestimate our species. If a devastating asteroid strike were imminent, I believe we would quickly raise an enormous amount of money to deal with the problem. The recent Ebola scare is a relevant (albeit smaller) example.

[-]dxu90

These things take time to implement, though--raising the money a week before the strike won't help if you don't have any measures already in place, where by "in place" I mean physically extant.

[-][anonymous]00

I guess you are right, but I honestly have no idea how long it would take the world to create and launch an effective countermeasure.

However, I am fairly confident that the available money would not be insufficient in the way the OP suggests, hence my comment.

Hanson's original great filter article is a great piece - especially for its time (given that the physical limits of computation were less understood), but the entire terminology and framework leads to privileging the hypothesis that we are alone.

Part of the reason for that - I suspect - is that ever since the split between science and religion, science has strongly favored natural explanations for any phenomenon. If advanced aliens exist, that leads to some potentially unsettling implications.

People have also suggested that civilizations move outside galaxies to the cold of space where they can do efficient reversible computing using cold dark matter. Jacob Cannell has been one of the most vocal proponents of this idea.

Do you have other examples of this idea? I'm just curious where you may have encountered it.

Also - "outside the galaxies" is perhaps going to far. In more general form, the cold dark matter model just proposes that postbiological life will migrate away from stars, in contrast to the dyson-sphere stellavore model which proposes that advanced life stays near stars.

This hypothesis suffers from at least three problems. First, it fails to explain why those entities have not used the conventional matter to any substantial extent in addition to the cold dark matter.

Actually, the cold dark model does explain how advanced aliens use conventional matter - they turn it into reversible computronium arcilects, install shielding to cool the arcilect down, and then possibly eject the arcilect out into space. The ejection part is less certain and wouldn't necessarily make sense for every planemo.

Unlike the stellavore model, the cold dark model actually matches our observations.

In general "Cold dark matter" includes any matter that is cold and does not emit significant radiation (dark), but my analysis was focused on conventional (baryonic) matter.

My key point is that reversible computing appears to be possible. If reversible computing is possible, no alien civilizations will build anything like Dyson spheres, because those designs are incredibly inefficient compared to reversible-computing-based arcilects. Cold dark reversible arcilects are a vastly more computationally powerful use of the same mass resources. There is no need to disassemble the system and collect all the solar energy because you barely need solar energy at all.
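For concreteness on the thermodynamic point, here is Landauer's bound (k_B·T·ln 2 per irreversibly erased bit) at two reference temperatures; this is just the scaling, not an energy budget for any actual design:

```python
# Landauer's bound: erasing one bit costs at least k_B * T * ln(2), so the
# irreversible-computation floor scales linearly with temperature, and
# reversible operations can in principle avoid it altogether.
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

for label, temp in [("room temperature", 300.0), ("near the CMB", 2.725)]:
    print(f"{label:>16} ({temp:6.1f} K): >= {K_B * temp * math.log(2):.2e} J per erased bit")

print(f"ratio: about {300.0 / 2.725:.0f}x less energy per erased bit near the CMB")
```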

Second, this hypothesis would either require dark matter composed of cold conventional matter (which at this point seems to be only a small fraction of all dark matter), or would require dark matter which interacts with itself using some force other than gravity.

No - the hypothesis is not somehow required to 'explain dark matter', even though that would be nice (does the dyson sphere hypothesis explain dark matter?), and the hypothesis works either way. If it is possible to build advanced computing arcilects out of unconventional matter, then we should indeed expect aliens to colonize/use the unconventional matter, but that would presumably come only after first colonizing the conventional matter. If computers can only be built of conventional baryonic matter, then they only use that. Either way the baryonic matter gets used first and it doesn't change much.

Third, even if some species had taken over a large fraction of dark matter to use for their own computations, one would then expect later species to use the conventional matter since they would not have the option of using the now monopolized dark matter.

Again my model primarily focuses on conventional (baryonic) matter. You seem to be confusing the baryonic/non-baryonic issue with the temperature/emission issue.

In my cold dark matter model, aliens start on warm planets, they transition to postbiological AI, then they colonize their system, turning many of the solid bodies (and perhaps to some limited extent the gas giants) into advanced reversible-computing arcilects, which they cool as much as they can in place using reflective shielding/high albedo, etc. Then they possibly begin to eject some of the arcilects from the system, using complex gravitational assists.

From a distance, these changes are not hugely obvious, but there are some ways to potentially observe changes to albedo. And we could model ejections - if artificial ejections do occur then we may already be seeing the results. As another example, perhaps jovian planets migrating in close to their stars is not a natural phenomenon. And so on.

We now know that there is more stuff (planemos) outside of stellar systems - nomads - than attached to stars, which is supportive. Nomads which formed or were ejected naturally would also be colonized over time.

People have also suggested that civilizations move outside galaxies to the cold of space where they can do efficient reversible computing using cold dark matter. Jacob Cannell has been one of the most vocal proponents of this idea.

Do you have other examples of this idea? I'm just curious where you may have encountered it.

Hanson mentions it in the original Great Filter piece, and I've seen it discussed elsewhere in internet fora (for example, r/futurology on Reddit).

You're correct that I should do a better job distinguishing the various versions of this hypothesis. I do think they run into essentially the same problems; reversible computing doesn't mean one has unlimited computational power. If the material in question is conventional baryonic matter then it cannot be a large fraction of actual dark matter, so it loses the appeal of being an explanation for that (and yes, the other explanations for the Filter don't explain dark matter, but versions of this at one point had this as their main selling point). Moreover, it isn't at all clear how you would have multiple such objects communicate with each other.

A few months ago, I asked you here what you thought the form of these dark matter entities were and you didn't reply. It seems that since then you've thought a lot more about this. I'm going to have to think carefully about what you have said above and get back to you later.

Hanson mentions it in the original Great Filter piece, and I've seen it discussed elsewhere in internet fora (for example, r/futurology on Reddit).

It's true he discusses dark matter. He doesn't mention reversible computing or the Landauer limit though, although to be fair when he wrote that article reversible computing was relatively unknown.

I'm now realizing that my use of the term 'dark matter' is unfortunate because it typically means an exotic form of matter, whereas I am talking about dark regular baryonic matter.

If the material in question is conventional baryonic matter then it cannot be a large fraction of actual dark matter, so it loses the appeal of being an explanation for that (and yes, the other explanations for the Filter don't explain dark matter, but versions of this at one point had this as their main selling point).

The dark energy/matter problem in cosmology is still very mysterious. The current main solutions call for new forms of exotic matter/energy, and many of the detection experiments have generated null/weird results. There's a lot going on there in cosmology and it will take a while to sort out.

Also, there was some new research just recently showing that type Ia supernovae are more diverse than originally thought, which is causing a rethink of the rate of expansion, and thus the whole dark energy issue - as the supernova measurements were used as distance beacons.

Now, regardless of what is going on with dark energy and non-baryonic dark matter, the issue of dark vs light baryonic matter is separate, and the recent evidence indicates a favorably high ratio of extrasolar dark baryonic planemos in the form of nomads.

The actual mass ratios for dark vs light baryonic matter are also unimportant for this model, in the sense that what really matters is the fraction of metallic mass - most of the matter is hydrogen/helium and the like, which is probably not as useful for computation.

Moreover, it isn't at all clear how you would have multiple such objects communicate with each other.

? The Planck satellite maintained a 0.1K temperature for some key components and we had no issues communicating with it. External communication with a reversible computer doesn't even require energy expenditure in theory; it probably does in practice, but the expenditure for communication can be tiny - especially given enormous computational resources for compression.

A few months ago, I asked you here what you thought the form of these dark matter entities were and you didn't reply. It seems that since then you've thought a lot more about this.

I have written about this before (on my blog) - I didn't reply back then because I was busy and didn't have much time for LW or thinking about aliens ;)

I'm not sure if this piece should go here or in Main (opinions welcome).

Thanks to Mass_Driver, CellBioGuy, and Sniffnoy for looking at drafts, as well as Josh Mascoop and J. Vinson. Any mistakes or errors are my own fault.

Definitely Main, I found your post (including the many references) and the discussion very interesting.

My personal suspicion is that intelligent life requires a wide variety of complex cellular machinery, which requires multiple global extinction-level events to weed out more highly specialized species utilizing more efficient but less adaptable survival mechanisms; each extinction-level event, in this suspicion, would raise the potential complexity of the environment, until intelligence becomes more advantageous than expensive. However, there'd have to be spacing between global extinction events, in order to permit a recovery period in which that complexity can actually arise. Any planet which experiences multiple global extinction events is likely to experience more, however, so the conditions which give rise to intelligent life would usually result in its ultimate destruction.

No hard evidence, granted. Just suspicion.

This hypothesis is interesting and not one I've seen at all before. It seems to run partially afoul of the same problem that many small early filters would run into: one would be more likely to find civilizations around red dwarfs. Is there a way around that?

The low luminosity of red dwarf stars makes them unsuitable for an earth-like environment, I believe. I don't have enough information to comment on a non-earthlike environment supporting life.

The stability of red dwarfs, however, could work as a filter in itself, limiting the number of global extinction events.

The low luminosity of red dwarf stars makes them unsuitable for an earth-like environment, I believe. I don't have enough information to comment on a non-earthlike environment supporting life.

Red dwarfs have a smaller habitable zone than our sun, but if you have a planet close enough to a red dwarf this isn't an issue. This is exactly the problem: if there is some series of not-so-likely events that must occur, then one expects to find civilizations around red dwarfs. If that's not the case, then the bigger habitable zones of somewhat larger stars make one more likely to expect a civilization around those stars. We see the second.

The stability of red dwarfs, however, could work as a filter in itself, limiting the number of global extinction events.

Possibly, but I don't think that any of the major extinction events in Earth history are generally attributed to large solar flares or coronal mass ejections or the like. So it seems like asteroids and geological considerations are more than enough to provide extinction events.

Red dwarfs have a smaller habitable zone than our sun, but if you have a planet close enough to a red dwarf this isn't an issue. This is exactly the problem: if there is some series of not-so-likely events that must occur, then one expects to find civilizations around red dwarfs. If that's not the case, then the bigger habitable zones of somewhat larger stars make one more likely to expect a civilization around those stars. We see the second.

  • AFAIK planets close enough to a red dwarf to get enough luminosity stop being earth-like due to other effects (likely rotational periods, tidal forces).

The situation is a bit more complicated. Wikipedia has a good summary. There's also been more recent work which suggests that the outer edge of the habitable zone around red dwarfs may be larger than earlier estimates. See my earlier comments here on this subject.

There is no question that colonization will reduce the risk of many forms of Filters

Actually there is. It just hasn't been thought about AFAIK. The naive belief is that there's safety in numbers, or that catastrophes have local impact. But filters, after all, are disruptions that don't stop locally. World wars. The Black Death. AI. The Earth is already big enough to stop most things that can be stopped locally, except astronomical ones like giant asteroids.

There is a probability distribution of catastrophes of different sizes. If that's a power-law distribution, as it very likely is, and if the big catastrophes are mostly human-caused, as they probably are, it means that the more we spread out, the more likely it is that someone somewhere in our colonized space will trigger a catastrophe that will wipe us all out.

I don't think that follows. Whether people are spread out or clustered won't affect whether they engage in catastrophic activities, as long as there is the same number of people.

For many of the ones you've mentioned, they do stop fairly locally as long as there's a big separation. Diseases jumping from continent to continent is much easier than jumping from planet to planet. Similarly, a grey goo scenario or a nanotech diamonding scenario isn't going to impact Mars.

these brown dwarfs as well as common planets would travel easier

Do you mean they would make travel easier, or that they could be moved?

Make travel easier. Thanks for catching that!

The response has been made that early filtration may be so common that if life does not arise early on a planet's star's lifespan, then it will have no chance to reach civilization. However, if this were the case, we'd expect to have found ourselves orbiting a more long-lived star like a red dwarf.

I don't understand how you jump from "so common" to "early on a planet's star's lifespan." What processes actually take a lot of time? A low-probability event, like perhaps biogenesis or some steps in evolution, would require many chances, but those chances wouldn't have to be in serial, but could be in parallel. The anthropic principle applied to such a scenario predicts that we would be near a star representative of such chances, not a star that by itself has many chances.

I may not have explained this very well. The essential idea being examined is the proposal that one doesn't have a single low-probability event but a series of low-probability events that must happen in serial after life arises, even as there are chances for each to happen in parallel (say, multicellular life, development of neurons or an equivalent, development of writing, etc.). In that case, most civilizations should show up around the long-lived stars as long as the other issues are only marginally unlikely.

Thus, the tentative conclusion is that life itself isn't that unlikely and we find ourselves around a star like the sun because such stars have much bigger habitable zones than red dwarfs (or for some similar reason), so the anthropics go through as your last bit expects.

I understood perfectly, I just think you're making a math error.

In that case, I'm confused as to what the error is. Can you expand?

No, you're right. Environments are favored by their lifespan raised to the power of the number of Poisson filters.
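Spelling that out (a sketch, treating each of the $n$ hard steps as a rare Poisson process with rate $\lambda_i$, with $\lambda_i T \ll 1$ over a usable lifespan $T$):

$$P(\text{all } n \text{ steps complete, in sequence, within } T) \;\approx\; \frac{\left(\prod_{i=1}^{n} \lambda_i\right) T^{\,n}}{n!}$$

so the relative weight of two otherwise-similar environments with usable lifespans $T_1$ and $T_2$ goes as $(T_1 / T_2)^n$.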

[-]Shmi10

My guess is that the "filter" is at the point of life formation, but not in the way you describe. I am not sure we could detect an alien intelligence unless it left human-like artifacts. It would look largely "natural" to us. I've asked the question earlier on this forum, "how would we detect a generalized optimizer?" without relying on it leaving structures that look "artificial" to us. I have repeatedly pointed out that we don't have a good definition of "life", and that stars and galaxies tend to qualify under any definition that is not specifically tailored to carbon-based life forms.

So, my guess is, there is plenty of intelligent life everywhere, we just don't recognize it as such, because it is unimaginably different from our own, and we treat it as a natural process.

Why would you expect to not see infrared emissions from them?

A maximally efficient reversible computing arcilect (if possible) would operate close to the CMB or even below it and emit next to nothing.

[-]Shmi00

You see IR and all kinds of other emissions from all kinds of sources; what would distinguish artificial from natural?

IR from waste heat should come from cold-to-warm black bodies radiating a lot of heat over a large area. It should have relatively few spectral lines. It might look a bit like a brown dwarf, but the output from a normal star is huge compared to a brown dwarf, so it should look like a really huge brown dwarf, which our normal models don't offer a natural explanation for.
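To make the "huge brown dwarf" picture a bit more concrete, Wien's displacement law gives where waste heat at a given temperature peaks (the temperatures are just reference points):

```python
# Wien's displacement law: peak emission wavelength of a black body.
WIEN_B = 2.898e-3  # Wien's displacement constant, m*K

for label, temp_k in [("sun-like star", 5800),
                      ("brown dwarf", 1000),
                      ("room-temperature radiator", 300),
                      ("object near the CMB", 3)]:
    peak_um = WIEN_B / temp_k * 1e6  # micrometres
    print(f"{label:>26} ({temp_k:>5} K): emission peaks near {peak_um:8.1f} um")
```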

[-]Shmi60

Consider that maybe existing brown dwarfs and the laws apparently governing them are the artifacts of an alien intelligence.

Or maybe dark matter is the waste left after all useful energy has been extracted from normal matter.

Or maybe our current models of supernova explosions fail because they don't account for the alien intelligences using them for their purposes.

How do you tell natural from artificial? What would be a generic artifact of any powerful optimizer?

I expect intelligent life to replace itself with AIs or emulations that use resources for computation as efficiently as possible. I expect it to converge regardless of how it started. In particular, stars look very wasteful of free energy. I expect intelligent life to disassemble them for later, or at least build Dyson spheres. Even if there is more computation that can be extracted from dark matter, I expect a small amount of effort devoted to ordinary matter, which would change the face of the universe over galactic time scales.

In particular, stars look very wasteful of free energy. I expect intelligent life to disassemble them for later, or at least build Dyson spheres

Stars are disassembling all over - mostly by exploding, but some are getting slowly sucked dry by a black hole or other compact object.

What kind of practical disassembly process do you expect future technology to use, such that it is more efficient than what we already see?

Dyson spheres suck:

  • they require tons of energy to build,
  • they are wasteful from an architecture standpoint by dispersing matter and thus increasing communication overhead (compact is better)
  • they are inefficient from a cooling perspective, which is key to maximizing computation (Landauer's principle)

Are you saying Dyson spheres are inefficient as computational substrate, as power collection, or both?

Because to me it looks like what you actually want is a Dyson sphere / swarm of solar collectors, powering a computer further out.

A huge swarm/sphere of solar collectors uses up precious materials (silicon, etc) that are far more valuable to use in ultimate compact reversible computers - which don't need much energy to sustain anyway.

You seem to be bottomlining. Earlier you gave cold reversible-computing civs reasonable probability (and doubt); now you seem to treat it as an almost sure scenario for civ development.

No, I don't see it as a sure scenario, just one that has much higher probability mass than Dyson spheres. Compact, cold structures are far more likely than large, hot constructions - due to speed-of-light and thermodynamic considerations.

[-]Shmi00

I don't expect any of that from an intelligence sufficiently dissimilar from our own.

Has there been any discussion of space junk?

Has there been any discussion of space junk?

In what context? I'm not sure how space junk would be a Great Filter. There is a risk that enough space junk in orbit could create a Kessler syndrome situation which would render much of low Earth orbit unusable, and possibly in the worst cases impassable. But in the very worst cases the event lasts for a few decades, so you can ride it out. What are you thinking of?

Hm, Wikipedia seems a bit more pessimistic:

One implication is that the distribution of debris in orbit could render space exploration, and even the use of satellites, unfeasible for many generations.

I wonder if there is some tradeoff where larger planets have a bigger gravity well that’s much more difficult to get out of, whereas smaller planets don’t have as much of an atmosphere, which means that space junk sticks around much longer, and also there is less surface area for it to cruise around in. Either way going to space is an expensive proposition with a dubious economic payoff, and society ends up retreating into VR/drugs/etc. “Why hasn’t your society built self-replicating spacecraft?” could be a question similar to “Why do you keep playing video games instead of doing your homework?”
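To put a rough number on the gravity-well half of that tradeoff, here is a toy escape-velocity comparison (the super-Earth figures are made-up illustrative values):

```python
# Surface escape velocity: v_esc = sqrt(2 * G * M / R).
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # kg
R_EARTH = 6.371e6    # m

def escape_velocity(mass_kg, radius_m):
    return math.sqrt(2 * G * mass_kg / radius_m)

for label, mass, radius in [
    ("Earth", M_EARTH, R_EARTH),
    ("hypothetical 2x-mass super-Earth", 2 * M_EARTH, 1.25 * R_EARTH),
    ("Mars-sized planet", 0.107 * M_EARTH, 0.532 * R_EARTH),
]:
    print(f"{label:>33}: escape velocity ~{escape_velocity(mass, radius) / 1000:.1f} km/s")
```

Since propellant requirements grow roughly exponentially with the required delta-v (the rocket equation), even a few extra km/s is a large practical penalty.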

Hm, Wikipedia seems a bit more pessimistic:

One implication is that the distribution of debris in orbit could render space exploration, and even the use of satellites, unfeasible for many generations.

Hmm, interesting. I have to confess I'm not at all an expert on the matter, but the general impression I get is that most serious discussions have looked at LEO becoming unusable for a few years. I'm surprised that one would think it could last for generations, because the time an object in the lower parts of LEO can stay up before air resistance drags it down is generally on the order of years to decades at most.

I wonder if there is some tradeoff where larger planets have a bigger gravity well that’s much more difficult to get out of, whereas smaller planets don’t have as much of an atmosphere, which means that space junk sticks around much longer, and also there is less surface area for it to cruise around in.

That is interesting, but I don't think it works as a strong filter. It would mean that every single species is being incredibly reckless with their use of low Earth orbit, and even humans are already taking serious steps to minimize space debris production. The idea that planets slightly larger than Earth would face serious inconvenience in getting out of the gravity well, especially if they have a thick atmosphere, is a plausible issue; the more likely problem with smaller planets is that they may end up more like Mars.

Either way going to space is an expensive proposition with a dubious economic payoff, and society ends up retreating into VR/drugs/etc. “Why hasn’t your society built self-replicating spacecraft?” could be a question similar to “Why do you keep playing video games instead of doing your homework?”

That might explain some species, but it is very hard to see it as filtering out everyone. It means that no alien equivalent of Richard Branson, Elon Musk or Peter Thiel decides to break through that and go spread out, and that this happens for every intelligent species. Heck, spreading out at least somewhat makes sense purely for defensive purposes, in terms of things like asteroid shields, which one wants to take care of even if one is in a VR system. To continue the analogy, this would be akin to every class in every school having no student complete their homework.

Agree that nothing I mentioned would be a strong filter.

[-][anonymous]00

“Why hasn’t your society built self-replicating spacecraft?” could be a question similar to “Why do you keep playing video games instead of doing your homework?”

This is excellent. Also: we have the problem of science fiction fans predicting the future based on what looks cool to science fiction fans, and then finding clever justifications after the bottom line was decided anyway - such as space colonies being a hedge against extinction events.

It would be really useful to try to empathically visualize the preferences of people who DON'T read SF, who think fiction about spaceships, blasters and suchlike is downright silly. What kind of future do they want? I can easily imagine one answer: send out robotic ships to haul back from asteroids or other planets everything we would want, while we ourselves sit comfortable and safe at home.

Using only robots for unsafe missions (i.e., everything outside Earth) sounds like a fairly obvious thing a non-SF fan would want.

Hi Joshua, nice post!

Moreover, this leads to the question of what one would expect when multiple slowly expanding, stealth AIs run into each other. It is likely that such events would have results catastrophic enough that they would be visible even with comparatively primitive telescopes.

In general I consider the "stealth AI" scenario highly unlikely (I think an early filter is the best explanation). However, there is a loophole in that particular objection. I think it is plausible that a superintelligence that expects to encounter other superintelligences with significant probability will design some sort of a physical cryptography system that will allow it to provide strong evidence to the other superintelligence regarding its own "source code" or at least some of its decision-theoretic properties. By this means, the superintelligences will cooperate in the resulting prisoner's dilemma e.g. by a non-violent division of territory (the specific mode of cooperation will depend on the respective utility functions).

The emergence of our species acted as the Great Filter to wipe out all other possible candidates for intelligence in the universe, mechanism unknown.

This explanation does not work. Say exactly one species gets to emerge and do this. Then why would that one species show up so late?