I read this in 2019; it helped me understand that the long-term future is astronomically more important than whatever happens on Earth this millennium. See also Astronomical Waste.
Edit: but as various commenters observe, the actual amount you should care about the long-term future and space stuff isn't super related to the
Have you seen my "Is the potential astronomical waste in our universe too small to care about?"
(These days I'll often read some piece of bad news and think to myself, "at least we're not in a much bigger/richer universe!" followed by "but what does it imply about what is happening in those universes?")
Of course!
I agree things aren't as simple as "
Annoyingly galaxy-brained consideration, but quite plausibly, considerations about how big the cosmic endowment is are dwarfed by considerations about how big our logical endowment is. E.g., I would probably prefer to be in a much smaller but much simpler universe than in a bigger but more complicated one, for acausal trade reasons.
Between the cosmic endowment, the doomsday argument, and simulation theory, my sense of cosmic importance has been yo-yoing across ~60 orders of magnitude.
If there is weirder physics, such that FTL or relaxations of the laws of thermodynamics are possible, I assume the estimate increases. Then again, under those conditions there may be no finite upper bound at all.
You already get arbitrarily high upper bounds with reversible computation, and waiting until the universe gets cooler can yield ~10^30x additional computation; no need for weirder physics. Bostrom mentioned both above, and he was explicit about the 10^85-ops cosmic endowment being a conservative lower bound. Krauss & Starkman's 1.35e120-ops bound, derived from the observed acceleration of the universe, is the non-weird-physics upper bound AFAIK (h/t Wei Dai).
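The cooling gain can be sketched from the Landauer limit (minimum energy per irreversible bit erasure is kT ln 2, so the op budget scales as 1/T). A minimal back-of-envelope, assuming the universe eventually cools toward a de Sitter horizon temperature of roughly 1e-29 K (an illustrative assumption, not a figure from the thread):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_ops_per_joule(T):
    """Maximum irreversible bit erasures per joule at temperature T (Landauer limit)."""
    return 1.0 / (k_B * T * math.log(2))

power = 1e26  # watts captured by a Dyson sphere around a Sun-like star

# Landauer-limited rate at roughly room temperature: ~3.5e46 ops/s.
ops_room = power * landauer_ops_per_joule(300)

# Waiting until the universe cools multiplies the budget linearly in 1/T;
# 300 K down to ~1e-29 K gives a factor of ~3e31, i.e. the ~10^30x ballpark.
gain = landauer_ops_per_joule(1e-29) / landauer_ops_per_joule(300)

print(f"{ops_room:.1e} ops/s at 300 K; cooling gain ~{gain:.0e}x")
```

The gain factor is just the temperature ratio, so the exact answer depends entirely on how cold you think the universe can usefully get.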
Yes. FTL would be surprising given that we find ourselves in a 14 billion year old universe — you'd expect there to be aliens by now. But:
Although Robin Hanson's "grabby aliens" theory does cut in the other direction, suggesting it's much more likely than it naively appears that the universe is full of fast-expanding alien civilizations, and therefore that humanity's share of the cosmos might be much smaller than Bostrom guesses here. (I.e., instead of being bounded by light-speed and cosmological-expansion constraints, we much sooner butt up against the expanding borders of our neighboring alien civilizations on all sides.)
A different analogy I came up with: take an Earth where every grain of sand is another Earth. Then spend a simulated human year examining each grain of sand on every one of those Earths, and that comes to about a billionth or so of the endowment. The exact ratio shifts by a few orders of magnitude depending on your estimates, but it lands in the right ballpark.
I have slight discomfort with Bostrom's reasoning: I agree there is an enormous amount of resources potentially at stake in the future. But I struggle with putting a number on it, or even with how to think about doing so. The reason is that his analysis is almost entirely anchored on value as arising from human or biological-like things, i.e., relatively small, short-lived creatures that have a particular form of agency/identity, do things in communities, and have the types of valenced states we have.

He of course explicitly allows for digital beings, but at least until he explores that topic in more detail with Carl Shulman (2022), it's not clear whether these digital beings are just amped-up human-like experiences. In fact, when he writes about it with Shulman, it's clear that they could be very different (~immortal, copyable, much larger hedonic range, mind-transparent); applying human-like standards of value to them, especially for drawing large quantitative conclusions, seems pretty risky/premature.

Now it might be that biological life like us is the only way advanced societies can come to be (i.e., attractors in evolution, habitability of planets, etc.). But it might alternatively be possible to have swarm-like societies where the individual isn't the primary bearer of value (by our lights anyway: it might not be autonomous, have a clear identity, have the capacity to suffer, etc.). If that is a realistic possibility, then how do we quantitatively evaluate a future filled with swarm beings relative to a future filled with human-like beings?
This isn't to disagree with his qualitative point that the future could be huge and we should be careful what we do with it. But I think putting numbers on it gives a quantitative vibe to something we know very little about.
For those of us who internalized these ideas years ago, there's not much new here. You mostly find yourself nodding along. But that's not a criticism. It's actually refreshing to see this kind of essay on LessWrong again. This is what made the site magnetic in the first place: staring at the actual scale of what's at stake.
@Nick Bostrom's line about our great common endowment of negentropy being irreversibly degraded into entropy on a cosmic scale still hits like nothing else. Once you see it, you can't unsee it. Every second of delay has a cost measured in entire galaxies of potential flourishing slipping beyond our light cone forever. @Wei Dai pushed that picture even further.
The hardest part is always explaining this to people outside this corner of the world. Not because the argument is complex (Bostrom lays it out with brutal clarity) but because the conclusion feels too large to take seriously. People pattern-match it to sci-fi and move on. But 10^58 lives is not a rhetorical flourish. It's a conservative lower bound.
More essays like this, please. It's easy to get lost in object-level debates and forget the sheer enormity of what we're actually trying to protect.
Superintelligence (2014), pp. 122–123:
Consider a technologically mature civilization capable of building sophisticated von Neumann probes of the kind discussed in the text. If these can travel at 50% of the speed of light, they can reach some 6×10^18 stars before the cosmic expansion puts further acquisitions forever out of reach. At 99% of c, they could reach some 2×10^20 stars. These travel speeds are energetically attainable using a small fraction of the resources available in the solar system. The impossibility of faster-than-light travel, combined with the positive cosmological constant (which causes the rate of cosmic expansion to accelerate), implies that these are close to upper bounds on how much stuff our descendants could acquire.
If we assume that 10% of stars have a planet that is—or could by means of terraforming be rendered—suitable for habitation by human-like creatures, and that it could then be home to a population of a billion individuals for a billion years (with a human life lasting a century), this suggests that around 10^35 human lives could be created in the future by an Earth-originating intelligent civilization.
There are, however, reasons to think this greatly underestimates the true number. By disassembling non-habitable planets and collecting matter from the interstellar medium, and using this material to construct Earth-like planets, or by increasing population densities, the number could be increased by at least a couple of orders of magnitude. And if instead of using the surfaces of solid planets, the future civilization built O'Neill cylinders, then many further orders of magnitude could be added, yielding a total of perhaps 10^43 human lives. (“O'Neill cylinders” refers to a space settlement design proposed in the mid-1970s by the American physicist Gerard K. O'Neill, in which inhabitants dwell on the inside of hollow cylinders whose rotation produces a gravity-substituting centrifugal force.)
Many more orders of magnitude of human-like beings could exist if we countenance digital implementations of minds—as we should. To calculate how many such digital minds could be created, we must estimate the computational power attainable by a technologically mature civilization. This is hard to do with any precision, but we can get a lower bound from technological designs that have been outlined in the literature. One such design builds on the idea of a Dyson sphere, a hypothetical system (described by the physicist Freeman Dyson in 1960) that would capture most of the energy output of a star by surrounding it with a system of solar-collecting structures. For a star like our Sun, this would generate 10^26 watts. How much computational power this would translate into depends on the efficiency of the computational circuitry and the nature of the computations to be performed. If we require irreversible computations, and assume a nanomechanical implementation of the “computronium” (which would allow us to push close to the Landauer limit of energy efficiency), a computer system driven by a Dyson sphere could generate some 10^47 operations per second.
Combining these estimates with our earlier estimate of the number of stars that could be colonized, we get a number of about 10^67 ops/s once the accessible parts of the universe have been colonized (assuming nanomechanical computronium). A typical star maintains its luminosity for some 10^18 s. Consequently, the number of computational operations that could be performed using our cosmic endowment is at least 10^85. The true number is probably much larger. We might get additional orders of magnitude, for example, if we make extensive use of reversible computation, if we perform the computations at colder temperatures (by waiting until the universe has cooled further), or if we make use of additional sources of energy (such as dark matter).
It might not be immediately obvious to some readers why the ability to perform 10^85 computational operations is a big deal. So it is useful to put it in context. We may, for example, compare this number with our earlier estimate (Box 3, in Chapter 2) that it may take about 10^31–10^44 ops to simulate all neuronal operations that have occurred in the history of life on Earth. Alternatively, let us suppose that the computers are used to run human whole brain emulations that live rich and happy lives while interacting with one another in virtual environments. A typical estimate of the computational requirements for running one emulation is 10^18 ops/s. To run an emulation for 100 subjective years would then require some 10^27 ops. This would mean that at least 10^58 human lives could be created in emulation even with quite conservative assumptions about the efficiency of computronium.
In other words, assuming that the observable universe is void of extraterrestrial civilizations, then what hangs in the balance is at least 10,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 human lives (though the true number is probably larger). If we represent all the happiness experienced during one entire such life with a single teardrop of joy, then the happiness of these souls could fill and refill the Earth’s oceans every second, and keep doing so for a hundred billion billion millennia. It is really important that we make sure these truly are tears of joy.
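The chain of estimates in the excerpt can be sanity-checked with a few lines of arithmetic. The figures below are the round numbers from Superintelligence (reachable stars, Dyson-sphere compute, stellar lifetime, ops per emulated life); the script only multiplies them together:

```python
import math

stars           = 2e20   # stars reachable at ~99% of c
dyson_ops_per_s = 1e47   # ops/s from one Dyson-sphere-powered computer
star_lifetime_s = 1e18   # seconds a typical star stays luminous
ops_per_life    = 1e27   # ops for 100 subjective years at 1e18 ops/s

total_ops = stars * dyson_ops_per_s * star_lifetime_s
lives     = total_ops / ops_per_life

print(f"total ops ~ 1e{math.floor(math.log10(total_ops))}")  # ~1e85
print(f"emulated lives ~ 1e{math.floor(math.log10(lives))}")  # ~1e58
```

The round numbers compound to the advertised 10^85 ops and 10^58 lives, which is why nudging any single input by a few orders of magnitude barely changes the qualitative conclusion.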