[memetic status: stating this directly, despite it being a clear consequence of core AI risk knowledge, because many people have "but nature will survive us" antibodies built up against other classes of doom and misapply them here.]

Unfortunately, no.[1]

Technically, “Nature”, meaning the fundamental physical laws, will continue. However, people usually mean forests, oceans, fungi, bacteria, and biological life in general when they say “nature”, and those would not have much chance competing against a misaligned superintelligence for resources like sunlight and atoms, which are useful to both biological and artificial systems.

There’s a thought that comforts many people when they imagine humanity going extinct due to a nuclear catastrophe or runaway global warming: Once the mushroom clouds or CO2 levels have settled, nature will reclaim the cities. Maybe mankind in our hubris will have wounded Mother Earth and paid the price ourselves, but she’ll recover in time, and she has all the time in the world.

AI is different. It would not simply destroy human civilization with brute force and then leave the flows of energy and other life-sustaining resources open for nature to make a resurgence. Instead, the AI would still exist after wiping humans out, and would consume the same resources nature needs, far more capably than nature can.

You can draw strong parallels to the way humanity has captured huge parts of the biosphere for itself. Except that, in the case of AI, we’re the slow-moving process unable to keep up.

A misaligned superintelligence would have many cognitive superpowers, including the ability to develop advanced technology. For almost any objective it might have, it would require basic physical resources: atoms to construct things which further its goals, and energy (such as sunlight) to power those things. These resources are also essential to existing life forms, and just as humans drove so many species extinct by hunting or outcompeting them, AI could do the same to all life, and to the planet itself.

Planets are not a particularly efficient use of atoms for most goals, and many goals which an AI may arrive at can demand an unbounded amount of resources. Beneath each square meter of usable surface, there are millions of tons of magma and other materials locked up. Rearranging these into a more efficient configuration could look like strip-mining the entire planet and firing the extracted materials into space using self-replicating factories, then using those materials to build megastructures in space that harness a large fraction of the sun’s output. Looking further out, the sun and other stars are themselves huge piles of resources spilling unused energy into space, and no law of physics renders them invulnerable to sufficiently advanced technology.
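As a rough sanity check on that figure (a back-of-envelope calculation of mine, not part of the original article), dividing Earth’s approximate mass by its surface area gives on the order of ten million tonnes of material under each square meter:

```python
import math

# Rough published values; the exact numbers don't matter for an
# order-of-magnitude check.
EARTH_MASS_KG = 5.97e24     # approximate mass of Earth
EARTH_RADIUS_M = 6.371e6    # mean radius of Earth

surface_area_m2 = 4 * math.pi * EARTH_RADIUS_M ** 2   # ~5.1e14 m^2
mass_per_m2_kg = EARTH_MASS_KG / surface_area_m2       # ~1.2e10 kg
mass_per_m2_tonnes = mass_per_m2_kg / 1000             # ~1.2e7 tonnes

print(f"Surface area: {surface_area_m2:.2e} m^2")
print(f"Mass under each square meter: {mass_per_m2_tonnes:.1e} tonnes")
```

That comes out to roughly ten million tonnes of rock, magma, and metal beneath every square meter of surface, consistent with the “millions of tons” figure above.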

Some time after a misaligned, optimizing AI wipes out humanity, it is likely that there will be no Earth and no biological life, but only a rapidly expanding sphere of darkness eating through the Milky Way as the AI reaches and extinguishes or envelops nearby stars.

This is generally considered a less comforting thought.

This is an experiment in sharing highlighted content from aisafety.info. Browse around to view some of the other 300 articles which are live, or explore related questions!

  1. ^

     There are some scenarios where nature does survive, especially in extreme cases of misuse rather than agentic misaligned systems, or in edge cases where a system is misaligned with respect to humanity but terminally values keeping nature around, but this is not the mainline way things go.

23 comments
[-]gilch

Yep. It would take a peculiar near-miss for an unfriendly AI to preserve Nature, but not humanity. Seemed obvious enough to me. Plants and animals are made of atoms it can use for something else.

By the way, I expect the rapidly expanding sphere of Darkness engulfing the Galaxy to happen even if things go well. The stars are enormous repositories of natural resources that happen to be on fire. We should put them out so they don't go to waste.

I think literal extinction is unlikely even conditional on misaligned AI takeover due to:

  • The potential for the AI to be at least a tiny bit "kind" (just as humans probably wouldn't kill all aliens). [1]
  • Decision theory/trade reasons

This is discussed in more detail here and here.

Insofar as humans and/or aliens care about nature, similar arguments apply there too, though this is mostly beside the point: if humans survive and have (even a tiny bit of) resources, they can preserve some nature easily.

I find it annoying how confident this article is without really bothering to engage with the relevant arguments here.

(Same goes for many other posts asserting that AIs will disassemble humans for their atoms.)

Edit: note that I think AI takeover is probably quite bad and has a high chance of being violent.


  1. This includes the potential for the AI to generally have preferences that are morally valuable from a typical human perspective.

I'm taking this article as being predicated on the assumption that AI drives humans to extinction. I.e. given that an AI has destroyed all human life, it will most likely also destroy almost all nature.

Which seems reasonable for most models of the sort of AI that kills all humans.

An exception could be an AI that kills all humans in self defense, because they might turn it off first, but sees no such threat in plants/animals.

[-]plex

This is correct. I'm not arguing about p(total human extinction|superintelligence), but p(nature survives|total human extinction from superintelligence), as this is a conditional probability I see people getting very wrong sometimes.

It's not implausible to me that we survive due to decision-theoretic reasons; this seems possible, though it's not my default expectation (I mostly expect Decision theory does not imply we get nice things, unless we manually win a decent chunk more timelines than I expect).

My confidence is in the claim "if AI wipes out humans, it will wipe out nature". I don't engage with counterarguments to a separate claim, as that is beyond the scope of this post and I don't have much to add over existing literature like the other posts you linked.

Edit: Partly retracted, I see how the second to last paragraph made a more overreaching claim, edited to clarify my position.

[-]jaan

three most convincing arguments i know for OP’s thesis are:

  1. atoms on earth are “close by” and thus much more valuable to fast running ASI than the atoms elsewhere.

  2. (somewhat contrary to the previous argument), an ASI will be interested in quickly reaching the edge of the hubble volume, as that’s slipping behind the cosmic horizon — so it will starlift the sun for its initial energy budget.

  3. robin hanson’s “grabby aliens” argument: witnessing a super-young universe (as we do) is strong evidence against it remaining compatible with biological life for long.

that said, i’m also very interested in the counter arguments (so thanks for linking to paul’s comments!) — especially if they’d suggest actions we could take in preparation.

I think point 2 is plausible but doesn't super support the idea that it would eliminate the biosphere; if it cared a little, it could be fairly cheap to take some actions to preserve at least a version of it (including humans), even if starlifting the sun.

Point 1 is the argument which I most see as supporting the thesis that misaligned AI would eliminate humanity and the biosphere. And then I'm not sure how robust it is (it seems premised partly on translating our evolved intuitions about discount rates over to imagining the scenario from the perspective of the AI system).

I've thought a bit about actions to reduce the probability that AI takeover involves violent conflict.

I don't think there are any amazing-looking options. If governments were generally more competent, that would help.

Having some sort of apparatus for negotiating with rogue AIs could also help, but I expect this is politically infeasible and not that leveraged to advocate for on the margin.

"actions we could take in preparation"

In preparation for what?

[-]jaan

AI takeover.

Wait, how does the grabby aliens argument support this? I understand that it points to "the universe will be carved up between expansive spacefaring civilizations" (without reference to whether those are biological or not), and also to "the universe will cease to be a place where new biological civilizations can emerge" (without reference to what will happen to existing civilizations). But am I missing an inferential step?

[-]jaan

i might be confused about this but “witnessing a super-early universe” seems to support “a typical universe moment is not generating observer moments for your reference class”. but, yeah, anthropics is very confusing, so i’m not confident in this.

[-]owencb

OK hmm I think I understand what you mean.

I would have thought about it like this:

  • "our reference class" includes roughly the observations we make before observing that we're very early in the universe
    • This includes stuff like being a pre-singularity civilization
  • The anthropics here suggest there won't be lots of civs later arising and being in our reference class and then finding that they're much later in universe histories
  • It doesn't speak to the existence or otherwise of future human-observer moments in a post-singularity civilization

... but as you say anthropics is confusing, so I might be getting this wrong.

[-]plex

By my models of anthropics, I think this goes through.

[-]O O

Additionally, the AI might think it's in an alignment simulation and just leave the humans as is or even nominally address their needs. This might be mentioned in the linked post, but I want to highlight it. Since we already do very low fidelity alignment simulations by training deceptive models, there is reason to think this.

I think an AI is slightly more likely to wipe out or capture humanity than it is to wipe out all life on the planet.

While any true Scotsman ASI is so far above us humans as we are above ants and does not need to worry about any meatbags plotting its downfall, as we don't generally worry about ants, it is entirely possible that the first AI which has a serious shot at taking over the world is not quite at that level yet. Perhaps it is only as smart as von Neumann and a thousand times faster.

To such an AI, the continued thriving of humans poses all sorts of x-risks. They might find out you are misaligned and coordinate to shut you down. More worryingly, they might summon another unaligned AI which you would have to battle or concede utility to later on, depending on your decision theory.

Even if you still need some humans to dust your fans and manufacture your chips, suffering billions of humans to live in high tech societies you do not fully control seems like the kind of rookie mistake I would not expect a reasonably smart unaligned AI to make. 

By contrast, most of life on Earth might get snuffed out when the ASI gets around to building a Dyson sphere around the sun. A few simple life forms might even be spread throughout the light cone by an ASI who does not give a damn about biological contamination. 

The other reason I think the fate in store for humans might be worse than that for rodents is that alignment efforts might not only fail, but fail catastrophically. So instead of an AI which cares about paperclips, we get an AI which cares about humans, but in ways we really do not appreciate.

But yeah, most forms of ASI which turn out bad for Homo sapiens also turn out bad for most other species.

It is not at all clear to me that most of the atoms in a planet could be harnessed for technological structures, or that doing so would be energy efficient. Most of the mass of an earthlike planet is iron, oxygen, silicon and magnesium, and while useful things can be made out of these elements, I would strongly worry that other elements that are needed also in those useful things will run out long before the planet has been disassembled. By historical precedent, I would think that an AI civilization on Earth will ultimately be able to use only a tiny fraction of the material in the planet, similarly to how only a very small fraction of a percent of the carbon in the planet is being used by the biosphere, in spite of biological evolution having optimized organisms for billions of years towards using all resources available for life.

The scenario of a swarm of intelligent drones eating up a galaxy and blotting out its stars I think can empirically be dismissed as very unlikely, because it would be visible over intergalactic distances. Unless we are the only civilization in the observable universe in the present epoch, we would see galaxies with dark spots or very strangely altered spectra somewhere. So this isn't happening anywhere.

There are probably some historical analogs for the scenario of a complete takeover, but they are very far in the past, and have had more complex outcomes than intelligent grey goo scenarios normally portray. One instance I can think of is the Great Oxygenation Event. I imagine an observer back then might have envisioned that the end result of the evolution of cyanobacteria doing oxygenic photosynthesis would be the oceans and lakes and rivers all being filled with green slime, with a toxic oxygen atmosphere killing off all other life. While indeed this prognosis would have been true to a first order approximation - green plants do dominate life on Earth today - the reality of what happened is infinitely more complex than this crude picture suggests. And even anaerobic organisms survive to this day in some niches.

The other historical precedent that comes to mind would be the evolution of organisms that use DNA to encode genetic information using the specific genetic code that is now universal to all life, in whatever pre-DNA world existed at the beginning of life. These seem to have indeed completely erased all other kinds of life (claims of a shadow biosphere of more primitive organisms are all dubious to my knowledge), but also have not resulted in a less complex world.

It's always seemed strange to me what preferences people have for things well outside their own individual experiences, or at least outside their sympathized experiences of beings they consider similar to themselves.

Why would one particularly prefer unthinking terrestrial biology (moss, bugs, etc.) over actual thinking being(s) like a super-AI?  It's not like bacteria are any more aligned than this hypothetical destroyer.

[-]plex

The space of values is large, and many people have crystallized into liking nature for fairly clear reasons (positive experiences in natural environments, memetics in many subcultures idealizing nature, etc). Also, misaligned, optimizing AI easily maps to the destructive side of humanity, which many memeplexes demonize.

[-]gilch

Humans instinctively like things like flowers and birdsong, because it meant a fertile area with food to our ancestors. We literally depended on Nature for our survival, and despite intensive agriculture, we aren't independent from it yet.

[-]Nnotm

Bugs could potentially result in a new sentient species many millions of years down the line. With super-AI that happens to be non-sentient, there is no such hope.

If it's possible for super-intelligent AI to be non-sentient, wouldn't it be possible for insects to evolve non-sentient intelligence as well?  I guess I didn't assume "non-sentient" in the definition of "unaligned".

[-]Nnotm

In principle, I prefer sentient AI over non-sentient bugs. But the concern is that if non-sentient superintelligent AI is developed, it's an attractor state that is hard or impossible to get out of. Bugs certainly aren't bound to evolve into sentient species, but at least there's a chance.

Note to the LW team: it might be worth considering making links to AI Safety Info live-previewable (like links to other LW posts/sequences/comments and Arbital pages), depending on how much effort it would take and how much linking to AISI on LW we expect in the future.