While we dither on the planet, are we losing resources in space? Nick Bostrom has an article on astronomical waste, talking about the vast amounts of potentially useful energy that we're simply not using for anything:
As I write these words, suns are illuminating and heating empty rooms, unused energy is being flushed down black holes, and our great common endowment of negentropy is being irreversibly degraded into entropy on a cosmic scale. These are resources that an advanced civilization could have used to create value-structures, such as sentient beings living worthwhile lives.
The rate of this loss boggles the mind. One recent paper speculates, using loose theoretical considerations based on the rate of increase of entropy, that the loss of potential human lives in our own galactic supercluster is at least ~10^46 per century of delayed colonization.
On top of that, galaxies are slipping away from us because of the exponentially accelerating expansion of the universe (x axis in years since the Big Bang, cosmic scale factor set to 1 at the current day):
At the rate things are going, we seem to be losing slightly more than one galaxy a year. One entire galaxy, with its hundreds of billions of stars, is slipping away from us each year, never to be interacted with again. This is many solar systems a second; poof! Before you've even had time to grasp that concept, we've lost millions of times more resources than humanity has ever used.
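To get a feel for that "one galaxy a year" figure, here is a back-of-the-envelope sketch (mine, not the post's): galaxies are lost for good when they cross the cosmic event horizon, so the loss rate is roughly the rate at which comoving volume slips past that horizon, times the comoving number density of galaxies. The horizon distance and galaxy density below are ballpark assumptions.

```python
import math

# Rough check of the "about one galaxy a year" figure.  All the numbers
# below are ballpark assumptions of mine, not figures from the post.

C_PC_PER_YR = 0.3066        # speed of light in parsecs per year
R_EH_PC = 5.0e9             # comoving distance to the cosmic event horizon, ~5 Gpc (assumed)
N_GAL_PER_MPC3 = 0.015      # comoving galaxy number density, ~0.015 per Mpc^3 (assumed)

# The comoving event horizon shrinks at roughly c (scale factor ~1 today),
# so comoving volume slips past it at ~ 4*pi*R^2*c per year.
vol_lost_pc3_per_yr = 4 * math.pi * R_EH_PC**2 * C_PC_PER_YR
vol_lost_mpc3_per_yr = vol_lost_pc3_per_yr / 1e18      # pc^3 -> Mpc^3

galaxies_lost_per_yr = vol_lost_mpc3_per_yr * N_GAL_PER_MPC3
print(f"Comoving volume lost: ~{vol_lost_mpc3_per_yr:.0f} Mpc^3 per year")
print(f"Galaxies lost:        ~{galaxies_lost_per_yr:.1f} per year")
```

With those assumptions the answer comes out at order one galaxy per year, consistent with the claim above.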
So it would seem that the answer to this desperate state of affairs is to rush things: start expanding as soon as possible, greedily grabbing every bit of energy and negentropy before they vanish forever.
Not so fast! Nick Bostrom's point was not that we should rush things, but that we should be very very careful:
However, the lesson for utilitarians is not that we ought to maximize the pace of technological development, but rather that we ought to maximize its safety, i.e. the probability that colonization will eventually occur.
If we rush things and lose the whole universe, then we certainly don't come out ahead in this game.
But let's ignore that; let's pretend that we've solved all risks, that we can expand safely, without fear of messing things up. Right. Full steam ahead to the starships, no?
No. Firstly, though the losses are large in absolute terms, they are small in relative terms. Most of the energy of a star is contained in its mass. The light streaming through windows in empty rooms? A few specks of negentropy that barely diminish the huge hoard locked up in the stars themselves (which can be harvested by, e.g., dropping them slowly into small black holes and feeding off the Hawking radiation). And we lose a galaxy a year - but there are still billions out there. So waiting a while isn't a major problem, if we can gain something by doing so. Gain what? Well, maybe just going a tiny bit faster.
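As a rough illustration of why the leaking starlight barely dents that hoard, here is a quick comparison for the Sun, using standard textbook values (the comparison is mine, not the post's):

```python
# How much of the Sun's mass-energy leaks away as light over its whole
# main-sequence lifetime?  Standard textbook values; rough comparison only.

C = 3.0e8                        # speed of light, m/s
M_SUN = 2.0e30                   # solar mass, kg
L_SUN = 3.8e26                   # solar luminosity, W
LIFETIME_S = 1.0e10 * 3.15e7     # ~10 billion years, in seconds

mass_energy = M_SUN * C**2            # total E = mc^2 locked in the Sun
radiated = L_SUN * LIFETIME_S         # energy shining into "empty rooms"

print(f"Mass-energy of the Sun:      ~{mass_energy:.1e} J")
print(f"Light radiated over 10 Gyr:  ~{radiated:.1e} J")
print(f"Fraction lost as starlight:  ~{radiated / mass_energy:.1%}")
```

The light amounts to roughly a tenth of a percent of the Sun's mass-energy, even over its entire lifetime.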
In a paper published with Anders Sandberg, we looked at the ease and difficulty of intergalactic or universal expansion. It seems to be surprisingly easy (which has a lot of implications for the Fermi Paradox), given sufficient automation or AI. About six hours of the sun's energy would be enough to launch self-replicating probes to every reachable galaxy in the entire universe. We could get this energy by constructing a Dyson swarm around the sun, by, for instance, disassembling Mercury. This is the kind of task that would be well within the capacities of a decently automated manufacturing process. A video overview of the process can be found in this talk (and a longer exposition, with slightly older figures, can be found here).
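For a rough sense of that energy budget, here is a hedged sketch comparing six hours of the Sun's output with the kinetic energy needed to launch one probe towards each of the few billion reachable galaxies (see the table below). The one-tonne probe mass is an illustrative assumption of mine, not a figure from the paper, and the sketch ignores launcher inefficiency and the deceleration reaction mass discussed next.

```python
import math

# Energy-budget sketch: six hours of the Sun's output vs. launching one probe
# per reachable galaxy at 99% of c.  The probe mass is assumed for illustration.

L_SUN = 3.8e26          # solar luminosity, W
C = 3.0e8               # speed of light, m/s
N_GALAXIES = 4e9        # reachable galaxies at 99%c (see the table below)
PROBE_MASS = 1000.0     # kg per probe (assumed for illustration)
BETA = 0.99             # launch speed as a fraction of c

captured = L_SUN * 6 * 3600                        # J from six hours of a full Dyson swarm
gamma = 1 / math.sqrt(1 - BETA**2)
ke_per_probe = (gamma - 1) * PROBE_MASS * C**2     # relativistic kinetic energy per probe
total_needed = ke_per_probe * N_GALAXIES           # launch energy only, no deceleration mass

print(f"Six hours of solar output:              ~{captured:.1e} J")
print(f"Launching {N_GALAXIES:.0e} one-tonne probes at 0.99c: ~{total_needed:.1e} J")
print(f"Available / needed:                     ~{captured / total_needed:.1f}")
```

Under these assumptions the six hours of output covers the launch kinetic energy a few times over; the real constraint, as the next paragraph explains, is slowing down at the other end.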
How fast will those probes travel? This depends not on the acceleration phase (which can be done fine with quench guns or rail guns, or lasers into solar sails), but on the deceleration. The relativistic rocket equation is vicious: it takes a lot of reaction mass to decelerate even a small payload. If fission power is used, decelerating from 50%c is about all that's reasonable. With fusion, we can push this to 80%c, while with matter-antimatter reactions, we can get to 99%c. The top speed of 99%c is also obtainable if we have more exotic ways of decelerating. This could mean somehow using resources from the target galaxy (cunning gravitational braking or Bussard ramjets or something), or using the continuing expansion of the universe to bleed speed away (this is most practical for the most distant galaxies).
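To see just how vicious that equation is, here is a small sketch of the relativistic rocket equation, mass ratio = exp(artanh(β) · c / v_ex). The exhaust velocities below are rough assumptions of mine, chosen only to illustrate the exponential blow-up, not figures from the paper.

```python
import math

# Relativistic rocket equation: to shed a speed of beta (as a fraction of c),
# the mass ratio required is  m_initial / m_final = exp(artanh(beta) * c / v_ex).
# The exhaust velocities below are rough illustrative assumptions.

def mass_ratio(beta, v_ex_over_c):
    """Reaction-mass multiplier needed to decelerate from beta*c to rest."""
    return math.exp(math.atanh(beta) / v_ex_over_c)

scenarios = [
    # (drive, cruise speed as fraction of c, exhaust velocity as fraction of c)
    ("fission (v_ex ~ 0.05c, assumed)",    0.50, 0.05),
    ("fusion (v_ex ~ 0.10c, assumed)",     0.80, 0.10),
    ("antimatter photon drive (v_ex = c)", 0.99, 1.00),
]

for drive, beta, v_ex in scenarios:
    print(f"decelerating from {beta:.0%} of c with {drive}: "
          f"mass ratio ~ {mass_ratio(beta, v_ex):,.0f}")
```

The exponent scales as 1/v_ex, so halving the exhaust velocity squares the required mass ratio.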
At these three speeds (and at 100% c), we can reach a certain distance into the universe, in current comoving coordinates, as shown by this graph (the x axis is in years since the Big Bang, with the origin set at the current day):
The maximum reached at 99%c is about 4 GigaParsecs - not a unit often used in casual conversation! If we can reach these distances, we can claim this many galaxies, approximately:
Speed | Distance (parsecs) | # of galaxies |
---|---|---|
50%c | 1.24x10^9 | 1.16x10^8 |
80%c | 2.33x10^9 | 7.62x10^8 |
99%c | 4.09x10^9 | 4.13x10^9 |
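As a sanity check (mine, not the paper's), these galaxy counts are just the comoving volume of a sphere of the given radius multiplied by a uniform galaxy density; solving for the density from each row gives a consistent ~0.014 galaxies per cubic megaparsec:

```python
import math

# Sanity check on the table: number of galaxies ~ (4/3) * pi * d^3 * n,
# for a uniform comoving galaxy density n.  Solving for n from each row
# shows the table is consistent with n ~ 0.014 galaxies per Mpc^3.

rows = [            # (speed, distance in parsecs, galaxies) from the table
    ("50%c", 1.24e9, 1.16e8),
    ("80%c", 2.33e9, 7.62e8),
    ("99%c", 4.09e9, 4.13e9),
]

for speed, d_pc, n_galaxies in rows:
    d_mpc = d_pc / 1e6
    volume_mpc3 = (4 / 3) * math.pi * d_mpc**3
    density = n_galaxies / volume_mpc3
    print(f"{speed}: implied galaxy density ~ {density:.4f} per Mpc^3")
```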
These numbers don't change much if we delay. Even wasting a million years won't show up in these figures: it's a rounding error. Why is this?
Well, a typical probe will be flying through space, at quasi-constant velocity, for several billion years. Gains in speed make an immense difference, as they compound over the whole duration of the trip; gains from an early launch, not so much. So if we have to wait a million years to squeeze out an extra 0.1% of speed, we still come out ahead, and waiting for extra research is almost always sensible (except for the closest galaxies). If we can get more efficient engines, more exotic ways of shielding the probe, or new methods of deceleration, the benefits will be immense.
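Here is a simplified numerical sketch of that trade-off (mine; it ignores the acceleration and deceleration phases and uses round cosmological parameters, so it will not reproduce the paper's exact figures): the comoving distance a probe coasting at speed βc can ever cover is β·∫c dt/a(t) from launch onward. A million-year delay barely moves the integral, while a change in β rescales the whole result.

```python
import math

# Reachable comoving distance for a probe coasting at a constant speed beta*c
# in a flat LambdaCDM universe (Omega_m = 0.3, Omega_L = 0.7, H0 = 70 km/s/Mpc).
# Simplified: no acceleration or deceleration phases, round parameters, so it
# gives the right order of magnitude rather than the paper's exact figures.

OMEGA_M, OMEGA_L, H0 = 0.3, 0.7, 70.0
HUBBLE_TIME_GYR = 978.0 / H0                                   # 1/H0 ~ 13.97 Gyr
T_LAMBDA = 2 / (3 * math.sqrt(OMEGA_L)) * HUBBLE_TIME_GYR      # ~11.1 Gyr
GPC_PER_GLY = 1 / 3.262                                        # 1 Gly ~ 0.307 Gpc

def scale_factor(t_gyr):
    """Analytic a(t) for a flat matter + Lambda universe (a = 1 today)."""
    return (OMEGA_M / OMEGA_L) ** (1 / 3) * math.sinh(t_gyr / T_LAMBDA) ** (2 / 3)

# Age of the universe today, from a(t0) = 1: ~13.5 Gyr.
T0 = T_LAMBDA * math.asinh(math.sqrt(OMEGA_L / OMEGA_M))

def reachable_gpc(beta, delay_gyr=0.0, t_end_gyr=500.0, steps=200_000):
    """Comoving distance (Gpc) a probe launched `delay_gyr` from now can ever cover."""
    start = T0 + delay_gyr
    dt = (t_end_gyr - start) / steps
    # midpoint-rule integral of beta * c * dt / a(t), with c = 1 Gly/Gyr
    total_gly = sum(beta * dt / scale_factor(start + (i + 0.5) * dt)
                    for i in range(steps))
    return total_gly * GPC_PER_GLY

for beta in (0.5, 0.8, 0.99):
    now = reachable_gpc(beta)
    later = reachable_gpc(beta, delay_gyr=1e-3)    # launched a million years late
    print(f"beta = {beta:.2f}: reach ~{now:.2f} Gpc now, ~{later:.2f} Gpc after a 1 Myr delay")
```

Even to two decimal places in gigaparsecs, the million-year delay does not show up, while each step up in β adds whole gigaparsecs of reach.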
So, in conclusion: To efficiently colonise the universe, take your time. Do research. Think things over. Go to the pub. Saunter like an Egyptian. Write long letters to mum. Complain about the immorality of the youth of today. Watch dry paint stay dry.
But when you do go... go very, very fast.
Similar considerations apply to possible singleton-ish AGIs that might be architecturally constrained to varying levels of efficiency in optimization; e.g., some decision theories might coordinate poorly and so waste the cosmic commons. Thus optimizing for an AGI's mere friendliness to existing humans could easily be setting the bar much too low, at least for perspectives and policies with a total utilitarian bent; something much closer to "ideal" would instead have to be the target.
Life got so much simpler when I went anti-total utilitarianism :-)
But actually your point stands, in a somewhat weaker form, for any system that likes resources and dislikes waste.