tl;dr: If we built a working self-replicating spacecraft, that would prove we're past the Great Filter. Therefore, certainty that we can do so would eliminate much existential risk. It is also a potentially highly visible project that would give publicity to reasons not to include AGI in its design. Therefore, serious design work on a self-replicating spacecraft should be a high priority.

I'm assuming you've read Stuart_Armstrong's excellent recent article on the Great Filter. In the discussion thread for that, RussellThor observed:

if we make a simple replicator and have it successfully reach another solar system (with possibly habitable planets) then that would seem to demonstrate that the filter is behind us.

If that is obvious to you, skip to the next subheading.

The evolution from intelligent spacefaring species to producer of self-replicating spacecraft (henceforth SRS, used in the plural) is inevitable, if SRS are possible. This is simply because the matter and negentropy available in the wider universe are a staggeringly vast and valuable resource. Even species that are unlikely ever to visit and colonize other stars in the form evolution gave them (this includes us) can make use of these resources. For example, if we could build, on (or out of) empty planets, supercomputers that receive computation tasks by laser beam and return results the same way, we would be economically compelled to do so, simply because those supercomputers could handle computational tasks that no computer on Earth could complete in less than the time it takes that laser beam to travel there and back. Such a supercomputer would not need to run even a weak AI to be worth more than the cost of sending the probe that builds it.
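
To make that latency argument concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it (the distance, the FLOPS figures, the job size) is an invented assumption for illustration, not a figure from this post:

```python
# Back-of-the-envelope: when is an off-world supercomputer worth the light lag?
# Every number below is an invented assumption, not a figure from the post.

YEAR_S = 365.25 * 24 * 3600        # seconds per year (also light-seconds per light-year)
distance_ly = 4.37                  # assumed target: roughly the Alpha Centauri distance
round_trip_s = 2 * distance_ly * YEAR_S

local_flops = 1e18                  # assumed best Earth-bound computer (1 exaFLOPS)
remote_flops = 1e24                 # assumed planet-scale remote computer
job_flop = 1e27                     # assumed very large computational task

local_time_y = job_flop / local_flops / YEAR_S
remote_time_y = (job_flop / remote_flops + round_trip_s) / YEAR_S

print(f"round-trip light lag: {round_trip_s / YEAR_S:.1f} years")   # ~8.7 years
print(f"compute locally:      {local_time_y:.1f} years")            # ~31.7 years
print(f"compute remotely:     {remote_time_y:.1f} years")           # ~8.7 years
# The remote option wins whenever the local compute time exceeds the round trip
# plus the (much smaller) remote compute time.
```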

Without a doubt there are countless more possible uses for these, shall we say, exoresources. If Dyson bubbles or mind uploads or multistellar hypertelescopes or terraforming are possible, each of these alone creates another huge incentive to build SRS. Even mere self-replicating refineries that break up planets into more readily accessible resources for future generations to draw from would be an excellent investment. But the obvious existence of this supercomputer incentive is already reason enough to do it.

All the Great Filter debate boils down to the question of how improbable our existence really is. If we're probable, many intelligent species capable of very basic space travel should exist. If we're not, they shouldn't. There appears to be no species inside a large fraction of our light cone capable enough to have sent out SRS. So the only way we could be probable is if there's a Great Filter ahead of us, stopping us (and everyone else capable of basic space travel) from becoming the kind of species that sends out SRS. If we became such a species, we'd know we're past the Filter. We still wouldn't know which of the conditions that allowed for our existence was the improbable one, but we'd know that, taken together, they multiply into some very small probability of our existence, and a very small probability of any comparable species existing in a large section of our light cone.
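
A toy Bayesian sketch of that update, with priors and likelihoods invented purely to show the structure of the argument rather than to claim any particular numbers:

```python
# Toy Bayesian update on "where is the Great Filter?"
# Priors and likelihoods are invented for illustration; only the structure matters.

p_filter_ahead = 0.5          # assumed prior: the Filter is between us and sending SRS
p_filter_behind = 0.5

# Likelihood of "we demonstrably can send SRS" under each hypothesis.
p_evidence_if_ahead = 0.1     # a late filter should usually stop this step
p_evidence_if_behind = 0.9    # an early filter leaves this step unobstructed

numerator = p_evidence_if_ahead * p_filter_ahead
posterior_ahead = numerator / (numerator + p_evidence_if_behind * p_filter_behind)

print(f"P(filter ahead | SRS demonstrated) = {posterior_ahead:.2f}")  # 0.10 with these numbers
```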

LW users generally seem to think SRS are doable, and that this means we're quite improbable, i.e. that the Filter is behind us. But lots of people are less sure, and even more people haven't thought about it. The original formulation of the Drake equation included a lifespan of civilizations partly to account for the intuition that a Great Filter-type event could be coming in the future. We could be more sure than we are now, and make a lot of people much more sure than they are now, about our position relative to that Filter. And that'd have some interesting consequences.

How knowing we're past the Great Filter reduces X-risk

The single largest X-risk we've successfully eliminated is the impact of an asteroid large enough to destroy us entirely. And we didn't do that by moving any asteroids; we simply mapped all of the big ones. We now know there's no asteroid that is both large enough to kill us off and coming soon enough that we can't do anything about it. Hindsight bias tells us this was never a big threat - but look ten years back and you'll find The Big Asteroid on every list of global catastrophic risks, usually near the top. We eliminated that risk simply by observation and deduction, by finding out it did not exist rather than removing it.

Obviously a working SRS that gives humanity outposts in other solar systems would reduce most types of X-risk. But even just knowing we could build one should decrease our confidence in the ability of X-risks to take us out entirely. After all, if, as Bostrom argues, the possibility that the Filter is ahead of us increases the probability of every X-risk, then the knowledge that it is not ahead of us has to be evidence against all of them, except those that could kill a Type 3 civilization. And if, as Bostrom says in that same paper, finding life elsewhere that is closer to our stage of development is worse news than finding life further from it, then increasing the distance between us and any life we might discover decreases the badness of that life's existence.

Of course we'd only be certain if we had actually built and sent such a spacecraft. But in order to gain confidence that we're past the filter, and to gain a greater lead over any life discovered elsewhere, a design that is agreed to be workable would go most of the way. If it is clear enough that someone with enough capital could reap incredible gains by building it, we can be sure enough that someone eventually will (e.g. Elon Musk after SpaceX's IPO around 2035), giving us high confidence we've passed the filter.

I'm not sure what would happen if we could say (with more confidence than currently) that we're probably the species that's furthest ahead at least in this galaxy. But if that's true, I don't just want to believe it, I want everyone else to believe it too, because it seems like a fairly important fact. And an SRS design would help do that.

We'd be more sure we're becoming a Type 3 civilization, so we should then begin to think about what types of risk could kill such a civilization, and UFAI would probably be more prominent on that list than it is on the current geocentric ones.

What if we find out SRS are impossible at our pre-AGI level of technology? We still wouldn't know if an AI could do it. But even knowing our own inability would be very useful information, especially about the dangerousness of various types of X-risk.

How easily this X-risk reducing knowledge can be attained

Armstrong and Sandberg claim the feasibility of self-replicating spacecraft has been a settled matter since the Freitas design of 1980. But that paper, while impressively detailed and a great read, glosses over the exact computing abilities such a system would need, does not mention hardening against interstellar radiation, assumes fusion drives, and probably has a bunch of other problems that I'm not qualified to discover. I haven't looked at all the papers that cite it (yet), but the ones I've seen seem to agree self-replicating spacecraft are plausible. Sandberg has some good research questions that I agree need to be answered, but he never seems to waver from his assumption that SRS are basically possible, although he's aware of the gaps in knowledge that preclude such an assumption from being safe.

There are certainly some questions that I'm not sure we can answer. For example:

  1. Can we build fission-powered spacecraft (let alone more speculative designs) that will survive the interstellar environment for decades or centuries?
  2. How can we be certain to avoid mutations that grow outside of our control, and eventually devour Earth?
  3. Can communication between SRS and colonies, especially software updates, be made secure enough? (A minimal sketch of what this could involve follows this list.)
  4. Can a finite number of probe designs (to be included on any of them) provide a vehicle for every type of journey we'd want the SRS network to make?
  5. Can a finite number of colony designs provide a blueprint for every source of matter and negentropy we'd want to develop?
  6. What is the ethical way to treat any life the SRS network might encounter?
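
To give a flavour of question 3, here is a minimal sketch of what "only follow authenticated instructions" could look like. It assumes a pre-shared key and uses Python's standard hmac module; a real interstellar probe would of course need something far stronger (public-key signatures, key rotation, replay protection), so treat this purely as an illustration of the shape of the problem:

```python
# Minimal sketch: a probe applies a software update only if the message
# authenticates against a key installed before launch. Purely illustrative.
import hmac
import hashlib

PRE_SHARED_KEY = b"installed-before-launch"   # hypothetical key baked in at construction time

def authenticate_update(payload: bytes, tag: bytes) -> bool:
    """Accept the update only if its HMAC tag matches the pre-shared key."""
    expected = hmac.new(PRE_SHARED_KEY, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

def apply_update(payload: bytes, tag: bytes) -> None:
    if not authenticate_update(payload, tag):
        # An unauthenticated update is dropped; the probe keeps its current program.
        return
    # install(payload) would go here in a real system
    print("update accepted")

# Example: a correctly signed update is accepted, a tampered one is rejected.
good = b"new navigation table"
good_tag = hmac.new(PRE_SHARED_KEY, good, hashlib.sha256).digest()
apply_update(good, good_tag)                 # prints "update accepted"
apply_update(b"tampered payload", good_tag)  # silently rejected
```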

But all of these except for the last one, and Sandberg's questions, are engineering questions and those tend to be answerable. If not, remember, we don't need to have a functioning SRS to manage X-risk, any reduction of uncertainty around their feasibility already helps. And again, the only design I could find that gives any detail at all is from a single guy writing in 1980. If we merely do better than he did (find or rule out a few of the remaining obstacles), we already help ascertain our level of X-risk. Compare the asteroid detection analogy: We couldn't be certain that we wouldn't be hit by an asteroid until we looked at all of them, but getting started with part of the search space was a very valuable thing to do anyway.

Freitas and others used to assume SRS should be run by some type of AGI. Sandberg says SRS without AGI, with what he calls "lower order intelligence", "might be adequate". I disagree with both assessments, and with Sandberg's giving this question less priority than, say, the study of mass drivers. Given the issues of AGI safety, a probe that works without AGI should be distinctly preferable. And (unlike an intelligent one) its computational components can be designed right now, down to the decision tree it should follow. While at it, and in order to use the publicity such a project might generate, give an argument for this design choice that highlights the AGI safety issues. A scenario where a self-replicating computer planet out there decides for itself should serve to highlight the dangers of AGI far more viscerally than conventional "self-aware desktop box" scenarios.
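
To illustrate what "down to the decision tree" might mean, here is a toy sketch of a fixed, non-learning control loop. The states, sensor names and transitions are invented for illustration and are nowhere near a proposal for a real design:

```python
# Toy fixed decision tree for a non-AGI probe: no learning, no goal inference,
# just a hand-written state machine. States and sensors are invented examples.
from enum import Enum, auto

class State(Enum):
    CRUISE = auto()
    SURVEY = auto()
    REPLICATE = auto()
    AWAIT_INSTRUCTIONS = auto()

def next_state(state: State, sensors: dict) -> State:
    if state is State.CRUISE:
        return State.SURVEY if sensors["star_system_reached"] else State.CRUISE
    if state is State.SURVEY:
        if sensors["copies_already_present"]:
            return State.AWAIT_INSTRUCTIONS   # don't re-colonize; wait for the laser link
        return State.REPLICATE if sensors["resources_found"] else State.CRUISE
    if state is State.REPLICATE:
        return State.AWAIT_INSTRUCTIONS if sensors["copies_launched"] else State.REPLICATE
    return State.AWAIT_INSTRUCTIONS          # terminal unless Earth sends new orders

# Example step:
print(next_state(State.SURVEY, {"copies_already_present": False, "resources_found": True}))
# -> State.REPLICATE
```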

If we're not looking for an optimal design, but for the bare minimum necessary to know we're past the filter, that gives us somewhat relaxed design constraints. This probe wouldn't necessarily need to travel at a significant fraction of light speed, and its first generation wouldn't need to be capable of journeys beyond, say, five parsecs. It does have to be capable of interstellar travel, and of progressing to intergalactic travel at some point, say when it finds that all nearby star systems contain copies of itself. A non-interstellar probe fit to begin the self-replication process on a planet like Jupiter, refining resources and building launch facilities there, would be a necessary first step.

Comments

Therefore, certainty we can do that would eliminate much existential risk.

It seems to me that you are making a map-territory confusion here. Existential risks are in the territory. Our estimates of existential risks are our map. If we were to build a self-replicating spacecraft, our estimates of existential risks would go down some. But the risks themselves would be unaffected.

Additionally, if we were to build a self-replicating spacecraft and become less worried about existential risks, that decreased worry might mean the risks would become greater because people would become less cautious. If the filter is early, that means we have no anthropic evidence regarding future existential risks... given an early filter, the sample size of civilizations that reach our ability level is small, and you can't make strong inferences from a small sample. So it's possible that people would become less cautious incorrectly.

If we were to build a self-replicating spacecraft, our estimates of existential risks would go down some. But the risks themselves would be unaffected.

To take an extreme example, building a self-replicating spacecraft, copying it off a few million times, and sending people to other galaxies would, if successful, reduce existential risks. I agree that merely making theoretical arguments constitutes making maps, not changing the territory. I also tentatively agree that just building a prototype spacecraft and not actually using it probably won't reduce existential risk.

It seems to me that you are making a map-territory confusion here. Existential risks are in the territory.

If I understand the reasoning correctly, it is that we only know the map. We do not know the territory. The territory could be of many different kinds, as long as they are consistent with the map. Adding SRS to the map rules out some of the unsafer territories, i.e. reduces our existential risk. It is a Bayesian-type argument.

[anonymous]

You do not update anthropic reasoning based on self-generated evidence. That's bad logic. Making a space-faring self-replicating machine gives you no new information.

It is also incredibly dangerous. An actually robust self-replicating machine is basically an AGI-complete problem. You can't solve one without the other. What you are making is a paperclip maximizer, just with blueprints of itself instead of paperclips.

Self-replication need not be autonomous, or use AGI. Factories run by humans self-replicate but are not threatening. Plants self-replicate but are not threatening. An AGI might increase performance but is not required or desirable. Add in error-checking to prevent evolution if that's a concern.

[anonymous]

Building a self-replicating lunar mining & factory complex is one thing. Building a self-replicating machine that is able to operate effectively in any situation it encounters while expanding into the cosmos is another story entirely. Without knowing the environment in which it will operate, it'll have to be able to adapt to circumstances to achieve its replication goal in whatever situation it finds itself in. That's the definition of an AGI.

Bacteria perform quite well at expanding into an environment, and they are not intelligent.

[anonymous]

I would argue they are, for some level of micro-intelligence, but that's entirely beside the point. A bacterium doesn't know how to create tools or self-modify or purposefully engineer its environment in such a way as to make things more survivable.

You do not update anthropic reasoning based on self-generated evidence. That's bad logic.

I disagree. You don't disregard evidence because it is "self-generated". Can you explain your reasoning?

[anonymous]

In this case: can we build self-replicating machines? Yes. Is there any specific reason to think that the great filter might lie between now and deployment of the machines? No, because we've already had the capability for 35+ years, just not the political will or economic need. We could have made it already in an alternate history. So since we know the outcome (the universe permits self-replicating space-faring machines, and we have had the capability to build them for sufficient time), we can update based on that evidence now. Actually building the machines therefore provides zero new evidence.

In general: anthropic reasoning involves assuming that we are randomly selected from the space of all possible universes, according to some typically unspecified prior probability. If you change the state of the universe, that changed state is not a random selection against a universal prior. It's no longer anthropic reasoning.

A paperclip maximizer decides for itself how to maximize paperclips; it can ignore human instructions. This SRS network can't: It receives instructions and updates and deterministically follows them. Hence the question around secure communication between SRS and colonies: a paperclip maximizer doesn't need that.

What is your distinction between "self-generated" evidence and evidence I can update anthropic reasoning on?

Would using the spacefaring machine give new evidence? Presumably X-risk becomes lower as humanity disperses.

Therefore, serious design work on a self-replicating spacecraft should have a high priority.

What you're suggesting will reduce uncertainty, but won't change the mean probability. Suppose we assume the remaining filter risk is p. Then you're proposing a course of action which, if successful, would reduce it to q < p. So:

Either we reduce p to q immediately without doing any work (because we "could have worked on a self-replicating spacecraft", which gives us as much probability benefit as actually doing it), or it means that there is a residual great filter risk (p-q) between now and the completion of the project. This great filter risk would likely come from the project itself.

Essentially your model is playing Russian roulette, and you're encouraging us to shoot ourselves rapidly rather than slowly. This would make it clearer faster what the risk actually is, but wouldn't reduce the risk.
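
A worked numeric version of that argument, with p and q invented purely for illustration:

```python
# Worked numbers for the parent comment's argument (values invented for illustration).
p = 0.20   # assumed remaining filter risk before the project
q = 0.05   # assumed remaining filter risk if the SRS project succeeds

residual = p - q   # 0.15
# If merely being *able* to build SRS already puts us past the filter, then p was
# really q all along and the project teaches us nothing we couldn't infer now.
# Otherwise the 0.15 of filter risk has to sit between now and project completion,
# i.e. largely in the project itself; finishing faster reveals the outcome sooner
# but does not shrink p.
print(f"risk attributed to the project window: {residual:.2f}")
```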

As I pointed out when James Miller made a related suggestion, this logic relies on Evidential Decision Theory, and EDT is a pretty bad decision theory.


How would one even start "serious design work on a self-replicating spacecraft"? It seems like the technologies that would be required to even begin serious design do not exist yet.

I'd like to see a full conceptual design, especially for the software side. Each part seems doable with modern tools, but the cost, development time and growth rate of this lunar factory need a more detailed design study by people with industry experience. Fun to read but a bit hand-wave-y on the technical details.

[anonymous]

There's a much longer report here:

http://www.islandone.org/MMSG/aasm/

It's one part of the results of a NASA workshop on the concept. Another related part not included in this web reference is section 4, which involved melting down and re-using space hardware -- especially shuttle external tanks -- in an orbital manufacturing complex.

These are high-level engineering studies, basically back-of-the-envelope calculations that prove the concept feasible. The growth rate calculations are probably fairly accurate, but cost & development times require a project plan, which this is not. If you have a few million dollars you could get it costed out, though.

Hmm, it is interesting that that exists, but it seems like it cannot have been very serious, because it dates from over 30 years ago and no follow-up activity happened.

[anonymous]

Serious? It's a paper constructed as part of an official NASA workshop, the participants of which are all respected people in their fields and still working.

Why hasn't more work happened in the time since? It has at places like Zyvex and the Institute for Molecular Manufacturing. But at NASA there were political issues that weren't addressed at all by people advocating for a self-replication programme then or since.

Freitas has more recently done a book-length survey of work on self-replicating machines before and after the NASA workshop. It's available online:

http://www.molecularassembler.com/KSRM.htm

(BTW the same fallacy could be committed against AGI or molecular nanotechnology, both of which date to the '50s but have had little follow-up activity since, except spurts of interest here and there.)

I think this is because Freitas and Drexler and others who might have pursued clanking replicators became concerned with nanotechnology instead. It seems to me that clanking replicators are much easier, because we already have all the tools and components to build them (screwdrivers, electric motors, microchips, etc.). Nanotechnology, while incorporating the same ideas, is far less feasible and may be seen as a red herring that has cost us 30 years of progress in self-replicating machines. Clanking replicators are also much less dangerous, because it is much easier to pull the plug or throw in a wrench when something goes wrong.

The difficulty is in managing the complexity of an entire factory system. The technology for automated mining and manufacturing exists but it's very expensive and risky to develop, and a bit creepy, so politicians won't fund the research. On Earth, human labor is cheap so there's no incentive for commercial development either.

The key obstacle here is startup money. If self-replication is economically sound, then a convincing business case could be made, leading to startup funding and a trillion-dollar business.

Again, the problem is a realistic economic assessment of development cost and predicted profit. In the current funding market, there is a lot of capital chasing very few good ideas. If there is money to be made here (a big if, given low cost of manual labor) then this could be a good opportunity.

"Mining the moon" is probably too expensive to start up. Better to start with a self-replicating robot factory on Earth.

I might look at this financial analysis as a side project. Contact me if you want to get involved and have sufficient industry experience to know what you don't know.

One prototypical payoff of self-replication that I have seen mentioned is solar farms in the desert that live off sand or rocks and produce arbitrarily large acreage of photovoltaics, which could then be used as a replacement for oil. This requires full self-replication, including chemical raw material processing, which is not easy to demonstrate.

I am not sure a good business case could be made for the more limited form of self-replication where the "raw material" is machine parts that only need to be assembled. That would be much easier to demonstrate, so I think a business case for it would be extremely valuable.

Alternatively, programs that try to produce self-replicating spacecraft produce grey goo and should be avoided.

Self-replication does not imply nanotech. Grey goo is fictional evidence.

Grey goo is uncontrolled. The SRS network I'm talking about follows instructions sent by laser beam.

Grey goo is uncontrolled. The SRS network I'm talking about follows instructions sent by laser beam.

That's not something you can know. Given that a great filter exists, you might simply be putting energy into moving towards that filter.

In my LW post Quickly passing through the great filter I advocated flooding the galaxy with radio signals to prove we have escaped the great filter.

[anonymous]

Which doesn't prove anything. Does making a radio transmitter make the chance of nuclear war less likely?

Yes, if you accepted that the Fermi paradox and great filter argument means we are probably doomed.

[anonymous]

Please explain the causal connection which permits me to update that making radio transmitters decreases the chance of nuclear war.

It changes the map, not the territory. It may or may not update your own assigned probabilities, based on your own priors and accepted belief structures. But the actual chance of nuclear war is the same before and after.

It might not be causal, but you can still update as you can in the smoking lesion problem.

[anonymous]

They are not analogous. In the smoking lesion problem the lesion causes a desire to smoke, and therefore wanting to smoke is evidence for updating your probability of getting cancer, which the smoking lesion also causes.

In this case there is no causal connection between building radio transmitters or self-replicating machines and mitigation of the risks that are likely to underlie the great filter, so no, you don't get to update.

Seems like James is using "probability is in the mind" and you are using "probability is in the universe." Please correct me if I'm wrong.

[anonymous]

Probability is in both the mind and the universe, and neither approach is very useful in isolation. What is the objective of the OP?

To reduce perceived existential risk, build radio transmitters. This has negligible effect on the actual risk of a great filter event.

To reduce actual existential risk, work on nuclear disarmament, asteroid detection, permanent space settlement, friendly AI, etc.

Let me put it this way: of the possible great filter scenarios, how many causally depend to any significant degree on the construction of radio transmitters? I can think of a few weird, outlandish, Hollywood-inspired possibilities (e.g. aliens that purposefully hide all their activity from our sight, yet are nearby and wait for the signal to come destroy us; positive outcomes are also imaginable, though equally unlikely). Since the probability of such cases is approximately zero, when we successfully build a radio transmitter, it tells us approximately nothing. We already know the outcome, so we might as well update on it now.

Then again, once we've built the radio transmitter, all those same threats of nuclear war, asteroid impacts, and UFAI still exist. We haven't lessened the actual chance of humanity wiping itself out. You may build your radio transmitter, only to have it destroyed or powered off five years later in the next nuclear war, cometary impact, or by a UFAI that doesn't see its merit.