I'm surprised at the lack of any 'when the stars are right' quips. But anyway, this has the same problems as jcannell's 'cold slow computers' and as most Fermi solutions:
More to the point, anything that DOES use matter and energy would rapidly dominate over things that do not, and would be selected for. Replicators spread until they can't, and they evolve towards rapid growth and resource use (compromising between the two), not towards things orthogonal to doubling time, like computational efficiency.
Yes. I think this paper addresses it with the 'defense of territory' assumption ('4. A civilization can retain control over its volume against other civilizations'). I think the idea is that the species quickly establishes a sleeping presence in as many solar systems as possible, then uses its invincible defensive abilities to maintain them.
But in real life, you could well be right. Plausibly there are scenarios in which a superintelligence can't defend a solar system against an arbitrarily large quantity of hostile biomass.
At least one defector is enough to produce visible astroengineering. And this is the problem with many suggested solutions of the Fermi paradox: they explain why some civilizations are not visible, but not how universal coordination is reached between civilizations that probably do not communicate.
Cirkovic wrote another article with the same problem: "Geo-engineering Gone Awry: A New Partial Solution of Fermi's Paradox". https://arxiv.org/abs/physics/0308058 But partial solutions are not solutions in the case of the Fermi paradox.
Fascinating paper!
I found Sandberg's 'popular summary' of this paper useful too: http://aleph.se/andart2/space/the-aestivation-hypothesis-popular-outline-and-faq/
One question after reading the article: the main problem of the Fermi paradox is not why aliens are hiding, but why they don't interact with human civilization.
Imagine that the article's assumptions are true and that a colony of alien nanobots is hiding on a remote asteroid in the Solar System, waiting for better times. How will it interact with human civilization?
They could ignore our transformation into a supercivilization, but then there is a risk that Earthlings will start to burn the precious resources.
So after we reach a certain threshold, they will have to either terminate human civilization or enter into negotiations with us. We have not yet reached that threshold, but the creation of our own superintelligence probably lies beyond it, that is, beyond the point where we could still be easily stopped.
So the unknown threshold lies in the near future, and it is clearly an x-risk.
But why bother waiting for that event, when a diverted meteor could have solved the "problem" centuries ago? Human-created AI was plausible from the Industrial Revolution onward - not a certainty, but not less than 10^-6, say. Well worth a small meteor diversion.
There is probably an observation selection effect in play here - if the meteor had been diverted, we would not exist by now.
So only late-threshold "berserkers" may still exist. Maybe they are rather far away - and the meteor is already on its way.
But I think that if they exist, they must already be on Earth in the form of alien nanobots, which could exist everywhere in small quantities, maybe even in my brain. In that case, the alien civilization would be able to understand what is going on on Earth and silently prevent bad outcomes.
Another problem is that no civilization can be sure that it knows how and when the universe will end. For example, there is the Big Rip theory, or the risk of false vacuum decay.
So its main goal would be finding ways to prevent the death of the universe or to survive it, and some of those ways may require earlier action. I explored ways to survive the end of the universe here: http://lesswrong.com/lw/mfa/a_roadmap_how_to_survive_the_end_of_the_universe/
However, the conclusions are likely wrong, because it's rational for "sleeping" civilizations to still want to round up stars that might be ejected from galaxies, collect cosmic dust, and so on.
Stray thought this inspired: what if "rounding up the stars" is where galaxies come from in the first place?
What proportion of idiotic papers are written not by idiots, but by liars?
In Section 3, you write:
State value models require resources to produce high-value states. If happiness is the goal, using the resources to produce the maximum number of maximally happy minds (with a tradeoff between number and state depending on how utilities aggregate) would maximize value. If the goal is knowledge, the resources would be spent on processing generating knowledge and storage, and so on. For these cases the total amount of produced value increases monotonically with the amount of resources, possibly superlinearly.
I would think that superlinear scaling of utility with resources is incompatible with the proposed resolution of the Fermi paradox. Here is why:
Superlinear scaling of utility means (ignoring detailed numbers) that, e.g., a 1% chance of 1e63 bit erasures plus a 99% chance of fast extinction is preferable to an almost certain 1e60 bit erasures. This seems (1) dubious from an admittedly human-centric common-sense perspective, and, more rigorously, (2) incompatible with the observation that opportunities for immediate resource extraction which do not affect later computations are not being taken. In other words: you do not propose a mechanism by which a Dyson swarm collecting the energy/entropy currently emitted by stars would decrease the total amount of computation that can be done over the lifetime of the universe. In particular, the energy/negative entropy contained in the unused emissions of current stars appears to dissipate into a useless background glow.
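To make that lottery comparison concrete, here is a minimal numerical sketch, using the illustrative numbers from this comment rather than anything from the paper: a sublinear (log) utility prefers the near-certain 1e60, while linear and superlinear utilities prefer the 1% gamble on 1e63.

```python
# Expected-utility comparison of the two lotteries above; all numbers are
# the illustrative ones from this comment, not from the paper.
import math

CERTAIN_BITS = 1e60   # near-certain amount of bit erasures
GAMBLE_BITS = 1e63    # payoff of the risky strategy
P_WIN = 0.01          # probability the gamble pays off (otherwise: extinction, 0 bits)

utilities = {
    "sublinear (log)":     lambda x: math.log(x + 1.0),
    "linear":              lambda x: x,
    "superlinear (x^1.1)": lambda x: x ** 1.1,
}

for name, u in utilities.items():
    eu_certain = u(CERTAIN_BITS)
    eu_gamble = P_WIN * u(GAMBLE_BITS) + (1 - P_WIN) * u(0.0)
    preferred = "certain 1e60" if eu_certain > eu_gamble else "1% gamble on 1e63"
    print(f"{name:>20}: prefers the {preferred}")
```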
I would view the following, mostly (completely?) contained in your paper, as a much more coherent proposed explanation:
(1) Sending self-replicating probes to most stars in the visible universe appears to be relatively cheap [your earlier paywalled paper]
(2) This gives rise to a much stronger winner-takes-all dynamic than the colonization of a single galaxy alone would
(3) Most of the pay-off, in terms of computation, lies in the far future, after cooling
(4) A strongly sublinear utility of computation makes a lot of sense. I would think more in the direction of poly-log, in the relevant asymptotics, than linear.
(5) This implies a focus on certainty of survival
(6) This implies a lot of possible gain from (possibly acausal) value-trade / coexistence.
(7) After certainty of survival, this implies diversification of value. If, for example, the welfare and possible existence of alien civilizations is valued at all, then the small marginal returns on extra computation toward the main goals lead to gifting them a sizable chunk of cosmic real estate (sizable in absolute, not relative, terms: a billion star systems for a billion years are peanuts compared to the size of the cosmic endowment in the cold far future; see the rough numbers sketched right after this list)
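To put a rough number on the "peanut" in (7), the sketch below compares the gift to the raw stock of star-years; the star count and usable lifetime are round values I am assuming, not figures from the paper.

```python
# Rough size of the gift in (7) relative to the raw endowment; the star count
# and usable lifetime below are my own assumed round numbers.
N_STARS_REACHABLE = 4e22     # rough number of stars in the reachable universe
STAR_LIFETIME_YEARS = 1e12   # crude average usable lifetime per star system

GIFT_STARS = 1e9             # "a billion star systems"
GIFT_YEARS = 1e9             # "... for a billion years"

fraction = (GIFT_STARS * GIFT_YEARS) / (N_STARS_REACHABLE * STAR_LIFETIME_YEARS)
print(f"fraction of star-years gifted: {fraction:.1e}")  # about 2.5e-17
```

And if most of the usable endowment is non-stellar mass-energy to be spent in the cold far future, the gifted fraction of the total computational endowment would be smaller still.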
This boils down to an aestivating-zoo scenario: someone with a strongly sublinear utility function and slow discounting was first to colonize the universe, and decided to be merciful to late-comers, either for acausal-trade reasons or as a terminal value. Your calculations boil down to showing the way towards a lower bound on the amount of mercy necessary for late-comers: for example, if the first mover decided to sacrifice 1e-12 of its cosmic endowment to charity, this might be enough to explain the current silence (?).
The first mover would send probes to virtually all star systems, which would run nice semi-stealthy observatories, e.g. on an energy budget of a couple of gigawatts from solar panels on asteroids. If a local civilization emerges, the probe could go "undercover". It appears unlikely that a locally emergent superintelligence could threaten the first colonizer: the upstart might be able to take its own home system, but invading a system that already holds a couple thousand tons of technologically mature equipment appears physically infeasible, even for technologically mature invaders. If the late-comer starts to colonize too many systems... well, stop their 30g probes once they arrive, and containment is done. If the late-comer starts to talk too loudly on the radio... well, ask them to stop.
In this very optimistic world, we would be quite far from "x-risk by crossing the berserker threshold": we would be given the time and space to autonomously decide what to do with the cosmos, and would afterwards be told "sorry, too late, it never was an option for you; wanna join the party? Most of it is ours, but you can have a peanut!"
Question: what are the lower bounds on the charity fraction necessary to explain the current silence? This is a more numerical question, but quite important for this hypothesis.
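One hypothetical way to parameterize that bound (every number below is an assumption of mine, chosen only to show the shape of the estimate, not an answer): the charity fraction is roughly the expected number of late-coming civilizations times the volume of the "quiet zone" granted to each, divided by the total colonized volume. The real lower bound then depends on how large that quiet zone must be to keep the first colonizer invisible to the late-comer.

```python
# Shape of the charity-fraction estimate; all inputs are hypothetical.
import math

N_LATE_CIVS_PER_GALAXY = 1e-3   # expected late-coming civilizations per galaxy
QUIET_ZONE_RADIUS_LY = 1e3      # radius of the reserved, apparently untouched zone
GALAXY_RADIUS_LY = 5e4          # crude effective radius of a galactic disk
GALAXY_THICKNESS_LY = 1e3       # crude disk thickness

quiet_zone_volume = (4.0 / 3.0) * math.pi * QUIET_ZONE_RADIUS_LY ** 3
galaxy_volume = math.pi * GALAXY_RADIUS_LY ** 2 * GALAXY_THICKNESS_LY

charity_fraction = N_LATE_CIVS_PER_GALAXY * quiet_zone_volume / galaxy_volume
print(f"charity fraction ~ {charity_fraction:.1e}")  # ~5e-7 with these inputs
```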
Note that this does not require any coordination beyond the internal coordination of the first mover: all later civs are allowed to flourish in their allotted part of the universe; it is just their expansion that is contained. This strongly reduces the effective amount of remaining filter to explain: we just need technological civilizations to emerge rarely enough compared to the upper expansion bound set by the first colonizer (instead of compared to the size of the universe). For further reductions, the first colonizer might set upper time-of-existence bounds, e.g. offer civilizations that hit their upper bound the following deal: "Hey, guys, would you mind uploading and clearing your part of space for possible future civilizations? We will pay you with more computation in the far future than you have any other way of accessing. Also, this would be good manners, since your predecessors' agreement to this arrangement is the reason for your existence."
PS, on (4) "strongly sublinear utility function". If high-risk high-payoff behaviour is possible at all, then we would expect the median universe to be taken by risk-adverse (sublinear utility scaling) civs, and would expect almost all risk-hungry (superlinear utility scaling) civs to self-destruct. Note that this is rational behaviour of the risk-hungry civs, and I am not criticizing them for it. However, I view this as a quite weak argument, since the only plausible risk/reward trade-off on a cosmic scale appears to be in uncertainty about terminal values (and time discounting). Or do you see plausible risk/reward trade-offs?
Also, the entire edifice collapses if the first colonizer is a negative utilitarian.
That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox
Anders Sandberg, Stuart Armstrong, Milan M. Cirkovic
If a civilization wants to maximize computation it appears rational to aestivate until the far future in order to exploit the low temperature environment: this can produce a 10^30 multiplier of achievable computation. We hence suggest the "aestivation hypothesis": the reason we are not observing manifestations of alien civilizations is that they are currently (mostly) inactive, patiently waiting for future cosmic eras. This paper analyzes the assumptions going into the hypothesis and how physical law and observational evidence constrain the motivations of aliens compatible with the hypothesis.
As far as I can tell, the paper's physics is correct (most of the energy comes not from burning stars but from the universe's mass).
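Two quick order-of-magnitude checks on these claims (the temperatures and efficiencies below are round values I am assuming, not figures quoted from the paper):

```python
# Order-of-magnitude checks; the temperatures and efficiencies are assumptions.
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K

# Landauer's bound: erasing one bit costs at least k_B * T * ln 2, so bit
# erasures per joule scale as 1/T. Cooling from today's ~3 K background to a
# far-future temperature of order 1e-30 K (roughly the de Sitter horizon
# temperature) reproduces the ~10^30 multiplier mentioned in the abstract.
def bits_per_joule(temperature_kelvin):
    return 1.0 / (K_B * temperature_kelvin * math.log(2))

T_NOW = 3.0            # K (assumed)
T_FAR_FUTURE = 3e-30   # K (assumed)
multiplier = bits_per_joule(T_FAR_FUTURE) / bits_per_joule(T_NOW)
print(f"computation multiplier ~ 10^{math.log10(multiplier):.0f}")

# Stellar fusion converts well under 1% of the mass it processes into energy,
# while in principle a large fraction of the rest-mass energy of all matter
# could be extracted (e.g. via black holes); hence "most of the energy comes
# from the universe's mass, not from burning stars".
FUSION_EFFICIENCY = 0.007        # mass-to-energy fraction of hydrogen fusion
MASS_FRACTION_EVER_FUSED = 0.1   # assumed fraction of baryons ever processed by stars
EXTRACTABLE_REST_MASS = 0.5      # assumed extractable fraction of rest-mass energy
ratio = EXTRACTABLE_REST_MASS / (FUSION_EFFICIENCY * MASS_FRACTION_EVER_FUSED)
print(f"mass-energy budget vs. total starlight: ~{ratio:.0f}x")
```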
However, the conclusions are likely wrong, because it's rational for "sleeping" civilizations to still want to round up stars that might be ejected from galaxies, collect cosmic dust, and so on.
The paper is still worth publishing, though, because there may be other, more plausible ideas in the vicinity of this one. And it describes how future civilizations may choose to use their energy.