That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox

Anders Sandberg, Stuart Armstrong, Milan M. Cirkovic

If a civilization wants to maximize computation it appears rational to aestivate until the far future in order to exploit the low temperature environment: this can produce a 10^30 multiplier of achievable computation. We hence suggest the "aestivation hypothesis": the reason we are not observing manifestations of alien civilizations is that they are currently (mostly) inactive, patiently waiting for future cosmic eras. This paper analyzes the assumptions going into the hypothesis and how physical law and observational evidence constrain the motivations of aliens compatible with the hypothesis.

As far as I can tell, the paper's physics is correct (most of the energy comes not from burning stars but from the universe's mass).
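For readers wondering where a factor like 10^30 can come from, here is a rough back-of-the-envelope sketch of my own (not taken from the paper): by Landauer's principle, erasing one bit costs at least k_B·T·ln 2, so the computation a fixed energy budget buys scales as 1/T, and the assumed final temperature is roughly the de Sitter horizon temperature.

```python
import math

k_B  = 1.380649e-23    # Boltzmann constant, J/K
hbar = 1.054572e-34    # reduced Planck constant, J*s
H0   = 2.2e-18         # Hubble parameter, 1/s (~68 km/s/Mpc); assumed value

T_now = 2.7                                  # present CMB temperature, K
T_dS  = hbar * H0 / (2 * math.pi * k_B)      # de Sitter horizon temperature, K

# Landauer limit: one bit erasure costs k_B * T * ln 2, so achievable
# computation per joule scales as 1/T.
multiplier = T_now / T_dS
print(f"T_dS ~ {T_dS:.1e} K, computation multiplier ~ {multiplier:.0e}")
```

With these inputs the final temperature comes out near 3e-30 K and the multiplier near 10^30, matching the figure quoted in the abstract.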

However, the conclusions are likely wrong, because it's rational for "sleeping" civilizations to still want to round up stars that might be ejected from galaxies, collect cosmic dust, and so on.

The paper is still worth publishing, though, because there may be other, more plausible ideas in the vicinity of this one. And it describes how future civilizations may choose to use their energy.

15 comments

I'm surprised at the lack of any 'when the stars are right' quips. But anyway, this has the same problems as jcannell's 'cold slow computers' and most Fermi solutions:

  1. To satisfy the non-modification criterion, it handwaves away all the losses in the present day. Yes, maybe current losses from the stars burning out are relatively small compared to the 10^30 benefit of waiting for the universe to cool a la Dyson's eternal intelligence, but the losses are still astronomical (see the rough estimate sketched after this list). This should still produce waves of colonization and stellar engineering well beyond some modest anti-black-hole and collision engineering.
  2. This doesn't provide any special reason to expect universal non-defection, coordination, insensitivity to existential risk or model uncertainty, or universally shared near-zero interest rates. All of these would drive expansionism and stellar engineering. Appeal to coordination spurred by the goal of preventing long-term loss of resources provides no additional incentive above and beyond existing 'burning the cosmic commons' incentives, and actually contradicts the argument for non-modification: if it's fine to let the stars burn out and everything proceed normally because the ultimate loss is trivial, then why would they be concerned about some more impatient civilization re-arranging them into Dyson spheres to do some computations earlier? After all, it's so trivial compared to 10^30 - right?
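To put a rough number on point 1, here is a crude estimate; all inputs are my own assumptions for illustration, not figures from the paper or this comment. A Sun-like star radiates away only a small fraction of its rest mass over ~10 Gyr, and only a minority of the cosmic mass-energy budget ever passes through stars, so the fractional loss is tiny even though the absolute loss is astronomical.

```python
# Crude, illustrative numbers only; nothing here is taken from the paper.
L_sun  = 3.8e26    # solar luminosity, W
M_sun  = 2.0e30    # solar mass, kg
c      = 3.0e8     # speed of light, m/s
t_burn = 3.2e17    # ~10 Gyr of shining at roughly solar output, s

# Fraction of a Sun-like star's rest mass radiated away over its lifetime.
frac_star_mass_radiated = L_sun * t_burn / (M_sun * c**2)      # ~7e-4

frac_baryons_in_stars = 0.1    # assumed fraction of baryons ever incorporated into stars
frac_baryons_of_total = 0.05   # assumed baryon share of the total mass-energy budget

frac_total_lost = frac_star_mass_radiated * frac_baryons_in_stars * frac_baryons_of_total
print(f"fraction of a star's rest mass radiated: {frac_star_mass_radiated:.1e}")
print(f"fraction of total mass-energy lost to starlight: {frac_total_lost:.1e}")
```

On these made-up inputs the relative loss is of order 1e-6 of the total mass-energy budget, negligible next to a 10^30 gain, yet applied to a cosmological endowment it is an enormous absolute quantity, which is the tension point 1 is pointing at.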
[anonymous]

More to the point, anything that DOES use matter and energy would rapidly dominate over things that do not, and would be selected for. Replicators spread until they can't, and evolve towards rapid rates of growth and use of resources (compromising between the two), not towards things orthogonal to doubling time like computational efficiency.

Yes. I think this paper addresses it with the 'defense of territory' assumption ('4. A civilization can retain control over its volume against other civilizations'). I think the idea is that the species quickly establishes a sleeping presence in as many solar systems as possible, then uses its invincible defensive abilities to maintain them.

But in real life, you could well be right. Plausibly there are scenarios in which a superintelligence can't defend a solar system against an arbitrarily large quantity of hostile biomass.

It takes only one defector to produce visible astroengineering. And this is the problem with many suggested solutions to the Fermi paradox: they explain why some civilizations are not visible, but not why universal coordination is reached between civilizations that probably cannot communicate with each other.

Cirkovic wrote another article with the same problem: "Geo-engineering Gone Awry: A New Partial Solution of Fermi's Paradox", https://arxiv.org/abs/physics/0308058. But partial solutions are not solutions in the case of the Fermi paradox.

[anonymous]

If anyone was confused by the 1030, note that it's actually 10^30, which is significantly more impressive.

[This comment is no longer endorsed by its author]

Ah yes, thanks. Corrected that now.

Fascinating paper!

I found Sandberg's 'popular summary' of this paper useful too: http://aleph.se/andart2/space/the-aestivation-hypothesis-popular-outline-and-faq/

One question after reading the article. The main problem of the Fermi paradox is not why aliens are hiding, but why they don't interact with human civilization.

Imagine that the article's assumptions are true and a colony of alien nanobots is hiding on a remote asteroid in the Solar System, waiting for better times. How would it interact with human civilization?

They might ignore our transformation into a supercivilization, but there is a risk that Earthlings will start to burn up precious resources.

So after we reach a certain threshold, they will either have to terminate human civilization or enter into negotiations with us. We have not yet reached the threshold, but the creation of our own superintelligence probably lies beyond it, that is, beyond the point at which we could still be easily stopped.

So the unknown threshold lies in the near future, and it is clearly an x-risk.

But why bother waiting for that event, when a diverted meteor could have solved the "problem" centuries ago? Human-created AI was plausible from the industrial revolution onward - not a certainty, but with probability not less than 10^-6, say. Well worth a small meteor diversion.
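A toy expected-value version of this argument; every number below except the 10^-6 is a hypothetical placeholder of mine, not something from the thread.

```python
# Divert the meteor iff p * (expected loss if a rival AI emerges) > diversion cost,
# with everything expressed as a fraction of the cosmic endowment.
p_rival_ai     = 1e-6    # probability an industrial civilization builds a rival AI (from the comment)
loss_if_rival  = 1e-3    # assumed fraction of the endowment a rival could cost
cost_diversion = 1e-20   # assumed cost of nudging one meteor, as a fraction of the endowment

print("worth diverting?", p_rival_ai * loss_if_rival > cost_diversion)   # True: 1e-9 >> 1e-20
```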

There is probably an observation selection effect in play here: if the meteor had been diverted, we would not exist by now.

So only late-threshold "berserkers" may still exist. Maybe they are rather far away, and the meteor is already on its way.

But I think that if they exist, they must already be on Earth in the form of alien nanobots, which could be present everywhere in small quantities, maybe even in my brain. In that case, the alien civilization would be able to understand what is going on on Earth and silently prevent bad outcomes.

Another problem is that no civilization can be sure that it knows how and when the universe will end. For example, there is the Big Rip theory, and there are risks of false vacuum decay.

So its main goal would be finding ways to prevent the death of the universe or to survive it, and some of those ways may require earlier action. I explored ways to survive the end of the universe here: http://lesswrong.com/lw/mfa/a_roadmap_how_to_survive_the_end_of_the_universe/

However, the conclusions are likely wrong, because it's rational for "sleeping" civilizations to still want to round up stars that might be ejected from galaxies, collect cosmic dust, and so on.

Stray thought this inspired: what if "rounding up the stars" is where galaxies come from in the first place?

However, the conclusions are likely wrong, because it's rational for "sleeping" civilizations to still want to round up stars that might be ejected from galaxies, collect cosmic dust, and so on.

What proportion of idiotic papers are written not by idiots, but by liars?

In Section 3, you write:

State value models require resources to produce high-value states. If happiness is the goal, using the resources to produce the maximum number of maximally happy minds (with a tradeoff between number and state depending on how utilities aggregate) would maximize value. If the goal is knowledge, the resources would be spent on processing generating knowledge and storage, and so on. For these cases the total amount of produced value increases monotonically with the amount of resources, possibly superlinearly.

I would think that superlinear scaling of utility with resources is incompatible with the proposed resolution of the Fermi paradox. Why?

Superlinear scaling of utility means (ignoring detailed numbers) that, e.g., a gamble offering a 1% chance of 1e63 bit-erasures plus a 99% chance of fast extinction is preferable to an almost certain 1e60 bit-erasures. This seems (1) dubious from an admittedly human-centric common-sense perspective, and, more rigorously, (2) incompatible with the observation that opportunities for immediate resource extraction which don't affect later computations are not being taken. In other words: you do not propose a mechanism by which a Dyson swarm collecting the energy/entropy currently emitted by stars would decrease the total amount of computation that can be done over the lifetime of the universe. In particular, the energy/negative entropy contained in the unused emissions of current stars appears to dissipate into useless background glow.
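A quick numerical illustration with toy utility functions of my own (note that with these particular figures even a linear utility already prefers the gamble; the point is that a strongly sublinear utility, e.g. logarithmic, reverses the preference):

```python
import math

p_win, R_win = 0.01, 1e63   # 1% chance of the whole endowment's worth of bit erasures
R_safe       = 1e60         # near-certain smaller number of bit erasures

def expected_utility(u):
    gamble = p_win * u(R_win) + (1 - p_win) * u(0.0)   # extinction pays u(0)
    sure   = u(R_safe)
    return gamble, sure

utilities = {
    "linear": lambda r: r,
    "log":    lambda r: math.log(r) if r > 0 else 0.0,  # strongly sublinear
}

for name, u in utilities.items():
    g, s = expected_utility(u)
    print(f"{name:>6}: E[gamble]={g:.3g}  sure thing={s:.3g}  prefer gamble? {g > s}")
```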

I would view the following, mostly (completely?) contained in your paper as a much more coherent proposed explanation:

(1) Sending self-replicating probes to most stars in the visible universe appears to be relatively cheap [your earlier paywalled paper]

(2) This gives rise to a much stronger winner-takes-all dynamics than just colonization of a single galaxy

(3) Most pay-off, in terms of computation, is in the far future after cooling

(4) A strongly sublinear utility of computation makes a lot of sense. I would think more in the direction of poly-log, in the relevant asymptotics, than linear.

(5) This implies a focus on certainty of survival

(6) This implies a lot of possible gain from (possibly acausal) value trade / coexistence.

(7) After certainty of survival, this implies diversification of value. If, for example, the welfare and possible existence of alien civilizations is valued at all, then the small marginal returns on extra computation towards the main goals lead to gifting them a sizable chunk of cosmic real estate (sizable in absolute, not relative, terms: a billion star systems for a billion years are peanuts compared to the size of the cosmic endowment in the cold far future).

This boils down to an aestivating-zoo scenario: someone with a strongly sublinear utility function and slow discounting was first to colonize the universe, and decided to be merciful to late-comers, either for acausal trade reasons or for terminal values. Your calculations boil down to showing the way towards a lower bound on the amount of mercy necessary for late-comers: for example, if the first mover decided to sacrifice 1e-12 of its cosmic endowment to charity, this might be enough to explain the current silence (?).

The first mover would send probes to virtually all star systems, where they run nice semi-stealthy observatories, e.g. on an energy budget of a couple of gigawatts from solar panels on asteroids. If a local civilization emerges, the observatory could go "undercover". It appears unlikely that a locally emergent superintelligence could threaten the first colonizer: the upstart might be able to take its own home system, but invading a system that already contains a couple of thousand tons of technologically mature equipment appears physically infeasible, even for technologically mature invaders. If the late-comer starts to colonize too many systems... well, stop their 30g probes once they arrive, containment done. If the late-comer starts to talk too loudly on the radio... well, ask them to stop.

In this very optimistic world, we would be quite far from "x-risk by crossing the berserker-threshold": We would be given the time and space to autonomously decide what to do with the cosmos, and afterwards be told "sorry, too late, never was an option for you; wanna join the party? Most of it is ours, but you can have a peanut!"

Question: What are the lower bounds on the charity-fraction necessary to explain the current silence? This is a more numerical question, but quite important for this hypothesis.
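One way to start on such a bound, using Landauer-style accounting and entirely hypothetical inputs of my own (gift size, gift duration, number of reachable stars, final temperature): value the per-civilization gift from (7) by the bit erasures it permits in today's ~3 K universe, and compare that with the first mover's cold-era endowment.

```python
import math

k_B, ln2 = 1.380649e-23, math.log(2)        # Boltzmann constant (J/K), ln 2
L_sun, M_sun, c = 3.8e26, 2.0e30, 3.0e8     # W, kg, m/s

# Hypothetical gift: 1e9 star systems for 1e9 years, used at today's ~3 K.
gift_bits = 1e9 * L_sun * (1e9 * 3.15e7) / (k_B * 3.0 * ln2)

# Hypothetical first-mover endowment: rest mass of 1e22 stars, spent near a
# de Sitter temperature of ~3e-30 K in the cold far future.
endowment_bits = 1e22 * M_sun * c**2 / (k_B * 3e-30 * ln2)

print(f"per-civilization charity fraction ~ {gift_bits / endowment_bits:.0e}")
```

On these made-up inputs a single gift costs of order 1e-47 of the endowment, so the binding constraint on the total charity fraction would be how many late-comer civilizations ever emerge, not the cost of any one gift.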

Note that this does not require any coordination beyond the internal coordination of the first mover: all later civs are allowed to flourish in their allotted part of the universe; it is just their expansion that is contained. This strongly reduces the effective amount of remaining filter to explain: we just need technological civilizations to emerge rarely enough relative to the expansion bound set by the first colonizer (instead of relative to the size of the universe). For further reductions, the first colonizer might set upper time-of-existence bounds, e.g. offer civilizations that hit their upper bound the following deal: "Hey, guys, would you mind uploading and clearing your part of space for possible future civilizations? We will pay you in more computation in the far future than you have any way of accessing otherwise. Also, this would be good manners, since your predecessors' agreement to this arrangement is the reason for your existence."

PS, on (4), the "strongly sublinear utility function": If high-risk, high-payoff behaviour is possible at all, then we would expect the median universe to be taken by risk-averse (sublinear utility scaling) civs, and would expect almost all risk-hungry (superlinear utility scaling) civs to self-destruct. Note that this is rational behaviour of the risk-hungry civs, and I am not criticizing them for it. However, I view this as a quite weak argument, since the only plausible risk/reward trade-off on a cosmic scale appears to be uncertainty about terminal values (and time discounting). Or do you see plausible risk/reward trade-offs?

Also, the entire edifice collapses if the first colonizer is a negative utilitarian.

Model-uncertainty-based discounting also.