Review

TLDR: If an AI kills all the humans, how does it power the datacenters / replace the human economy? Green Goo (IE:bioengineering).

  • Natural biotech is amazing
  • Evolution is dumb and slow
  • Very simple strategies (EG:vine that climbs a tree and squeezes) are "innovative" and work well in nature.
  • Designed organisms could very quickly reproduce and then provide necessary resources (EG:biological solar panels to power data centers)

TLDR end

In response to: grey-goo-is-unlikely

Overview of existing natural biology:

  • Minimum doubling time for
    • Plants: single digit days
    • Algae: 1.5 days (ideal conditions)
    • E. coli: 20 minutes (nutrient-rich conditions)

No single organism (humans aside) has taken over the biosphere because evolution is slow and dumb.

Human agriculture is based on: plant sub-1-gram seeds (plus water, pesticides/herbicides, fertilizer, etc.), collect 1 kg+ plants a few months later. Biology has absurd growth rates[1][2].

Invasive species show the implications. A naive biosphere stands no chance against an intelligent opponent with real biotechnology.

Kudzu, "the vine that ate the South"

Intelligence allows adapting strategies much more quickly. Humans can design vaccines faster than viruses can mutate. A population of well coordinated humans will not be significantly preyed upon by viruses despite viruses being the fastest evolving threat.

Intelligence + ability to bioengineer organisms --> can create unlimited invasive species much more capable than anything natural

  • groups of invasive species can be symbiotic
    • IE:after the bugs eat all the plants, replace all the plants with an invasive plant of your own
  • invasive species can have a simple handshake based backdoor to allow resource exchange and control
    • EG: bug/plant chemical handshake to allow bug to get nectar so it can stick around and eat any competitor plants
    • EG: biochemical signal to allow collecting sugar rich sap for other purposes
  • invasive species can allow "updates" via an engineered-in viral backdoor (see the sketch after this list)
    • EG: specific protein to identify "update virus" carrying new genetic material
      • If a fungus figures out how to eat your plant give all existing plants a new antifungal to secrete
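
To make the "backdoor" idea concrete, here is a minimal software analogy (mine, not the post's): the organism only incorporates new genetic material that carries a recognised tag, much like a device that only installs signed updates. The key, messages, and function names are purely illustrative.

```python
# Software analogy for the engineered-in "update virus" backdoor: accept a
# payload only if it carries the recognised tag. Everything here is a stand-in.
import hmac
import hashlib

SHARED_TAG_KEY = b"handshake-protein"   # stands in for the identifying protein

def sign_update(payload: bytes) -> bytes:
    """Attach the recognition tag to an update (the 'update virus' coat)."""
    return hmac.new(SHARED_TAG_KEY, payload, hashlib.sha256).digest() + payload

def accept_update(message: bytes) -> bytes | None:
    """Incorporate the payload only if the recognition tag checks out."""
    tag, payload = message[:32], message[32:]
    expected = hmac.new(SHARED_TAG_KEY, payload, hashlib.sha256).digest()
    return payload if hmac.compare_digest(tag, expected) else None

update = sign_update(b"gene: secrete antifungal X")
assert accept_update(update) == b"gene: secrete antifungal X"
assert accept_update(b"\x00" * 32 + b"gene: wild fungus hijack") is None
```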

Current life is sub-optimal

  • all cells in an organism have full copy of genome
    • increases cancer risk (a cell that doesn't have genes for DNA copying can't become cancerous)
    • copying takes much longer (human cell replication time is ~24 hours, 8 of which are spent copying DNA; similarly complex eukaryotes like yeast can double in under 2 hrs)
  • organisms are grown from a single parent cell repeatedly splitting and differentiating via complex biochemical state machines.
    • printing an organism from already grown differentiated cells or blocks of cells would be much faster
    • but biology can't build such a printer and translate designs to the new format
  • multi cellular organisms can't adopt/steal new biochemistry
    • human ancestor organisms lost vitamin C synthesis and never got it back, despite later problems with scurvy; humans had to find food sources to deal with the missing biochemistry. Many organisms use symbiotic bacteria with significant efficiency gains in keeping them on task.

Hypothetical very invasive shoggoth/kudzu type organism

Core capabilities:

  • constructed from modular components
    • does not grow a leaf, sends leaf parts via internal transport network to be assembled on site
  • dedicated networks for long distance distribution of water and high concentration sap (think honey)
    • sap is concentrated and low in phosphorus, sulfur and micronutrients, inhibits bacterial growth
    • water contains some inorganic micro-nutrients
    • sap and water can be combined inside parts to run normal biology
  • general purpose transportation system to move encapsulated blobs of stuff
    • EG: inorganic nutrients, excavated dirt/rock, matter to be digested
    • think a pneumatic tube system with lots of switches
      • maybe use water instead of air
    • containers can be re-used
  • distributed nervous system
    • all parts communicate conditions and requirements (EG:I am a leaf making sugar, send more water pls)
    • contains many simple biological computers that route messages
    • slightly more complex computers control building + management
      • EG: arrays of 100hz processors with 10MB of memory
    • just neurons laid out according to an encoded genetic template
    • lots of error correction, voting logic and self test cycles (see the sketch after this list)
    • can send messages/info a long distance pretty quickly (myelinated nerves carry signals at 300m/s)
      • AI can manage a lot of stuff remotely if problems are encountered
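
As a rough software analogy for that error correction (a sketch of mine, not something from the post): each message goes down several redundant channels and receivers take a majority vote. All names and message formats are illustrative.

```python
# Minimal sketch of triple-redundant signalling with majority voting, the kind
# of error correction a distributed plant "nervous system" might rely on.
from collections import Counter

def majority_vote(copies: list[bytes]) -> bytes | None:
    """Return the message seen on most redundant channels, or None on a tie."""
    message, hits = Counter(copies).most_common(1)[0]
    return message if hits > len(copies) // 2 else None

def route(message: bytes, channels: list[list[bytes]]) -> None:
    """Send the same message down every redundant channel."""
    for channel in channels:
        channel.append(message)

# Example: a leaf node requests more water over three channels; one is corrupted.
channels = [[], [], []]
route(b"leaf-7: send water", channels)
channels[2][-1] = b"\x00corrupted"            # simulated transmission error
received = [c[-1] for c in channels]
assert majority_vote(received) == b"leaf-7: send water"
```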

Core organism competency is covering ground with photosynthesising mat of itself.

Edges contain assimilating parts

  • Traps for annoying animals
  • slow moving mouth parts to reduce trees/animal corpses in size for shipping to digesters
  • digesters/factories to convert trees/animals/soil into "build a giant leaf"/"build a trap"/"build an X" parts kit
    • digesters can do much better than nature by extracting and re-using existing cellular machinery
      • digesting into amino acids and rebuilding is not efficient
  • assemblers that assemble parts kits into parts to integrate with the whole
  • some parts may be mobile/dockable EG:mouth parts could travel a bit on a tether then come back

As the organism scales, modularity allows for much faster growth, since growth at the edge only requires assembling prefab components

  • growth could easily be 100 m/day of spread, with 1-100 km/day of spread via dedicated pipeline-based "shoots" to increase linear spread speed.
    • IE: an elephant-sized worm-like part powered by the pipeline it lays down, laying down expansion nodes every kilometer where normal growth initiates
  • can grow flying things to spread much faster, then link up with other parts of itself

This is just biochemistry and organism templating. No need to build complex brains. Human killing pathogens + this gets an AGI the biosphere. A bit more engineering has to be done to power data centers so the AI can continue to think (EG:build electrical generator that runs on sap+water). I'm assuming a biological DNA printer/reader is part of the "build a kudzu/shoggoth" bootstrap process anyways.

Other approaches are also possible and likely more efficient. For example, flying things spreading a plant-targeted virus that causes construction of shoggoth/kudzu organisms. Extracting cellular machinery from plant leaves could allow sub-10-minute doubling times. That plus flyers could lead to biosphere control within a few days rather than weeks to months.

Concrete world takeover plan

  • get smart enough to figure out biology
  • bootstrap biotech capabilities using mail order DNA synthesis and human patsies
    • design a DNA reader/printer to speed up experimentation
    • make a shoggoth/kudzu hybrid
    • make human killing pathogens
  • prepare for humans dying and resulting instability
    • stage some Kudzugoth near a data center you control for power generation hookup
    • set up small GPU clusters (a dozen or so) powered by a yard's worth of Kudzugoth for extra reliability.
  • Patsies ship biotech seeds all around world via Airmail
  • biotech seed packets burst and release Kudzugoth seeds and human killing pathogens
  • Kudzugoth grows and humans die
  • hook up datacenter to green power
  • manage Kudzugoth growth and scavenge anything of value from human society ruins
    • EG:collect all the GPUs/computers
    • EG:restart minimum needed to run chip fabs

Of course that's just one way to do it. I think strategies involving computer hacking or coercion are easier.

  1. ^

    A corn seed weighs 0.25 grams. A corn cob weighs 0.25kg. It takes 60-100 days to grow. Assuming 1 cob per plant and 80 days that's 80/log(1000,2)=8 days doubling time not counting the leaves and roots. Estimate closer to 7 days including stalk, leaves and roots.

  2. ^

    Yes, nitrogen fertilizer is an energy input, but there are plenty of plants that don't need it, and efforts toward in-plant nitrogen fixation in EG:corn are underway.

31 comments

This is especially notable because a lot of what we'd want AGI to do for us is build something like this that not only doesn't kill us (tall order, right?) but also solves global warming and climate contamination and acts as a power & fuel grid. That and bio immortality is basically everything I personally want out of AGI. So I'd really like to have some idea how to build a machine that teaches a plant to do something like a safe, human-compatible version of this.

Some good news, though: I suspect it may be more practical to defend against this sort of attack using finite intelligence than previously assumed. We need to make the machine that knows how to guard against these sorts of things, but if we can make the vulnerability-closer, we don't need to hit max ASI to stop other ASIs from destroying all pre-ASI life on earth.

I suspect it may be more practical to defend against this sort of attack using finite intelligence than previously assumed. We need to make the machine that knows how to guard against these sorts of things, but if we can make the vulnerability-closer, we don't need to hit max ASI to stop other ASIs from destroying all pre-ASI life on earth.

If you read between the lines in my Human level AI can plausibly take over the world post, hacking computers is probably the lowest difficulty "take over the world" strategy and has the side benefit of giving control over all the internet connected AI clusters.

The easiest way to keep a new superintelligence from emerging is to seize control of the computers it would be trained on. The AI only needs to hack far enough to monitor AI researchers and AI training clusters and sabotage later AI runs in a non-suspicious way. It's entirely plausible this has already happened and we are either in the clear or completely screwed depending on the alignment of the AI that won the race.

Also, hacking computers and writing software is something easy to test and therefore easy to train. I doubt that training an LLM to be a better hacker/coder is much harder than what's already been done in the RL space by OpenAI and Deepmind (EG: playing DOTA and Starcraft).

Biotech is a lot harder to deal with since ground truth is less accessible. This can be true for computer security too, but to a much lesser extent (EG: lack of access to chips in the latest iPhone and lack of complete understanding thereof with which to develop/test attacks).

but also solves global warming and climate contamination and acts as a power & fuel grid. That and bio immortality is basically everything I personally want out of AGI. So I'd really like to have some idea how to build a machine that teaches a plant to do something like a safe, human-compatible version of this.

Pshh, low expectations. Mind uploading or bust!

Pshh, low expectations. Mind uploading or bust!

I'll take mind backups, but for exactly the reasons you highlight here, I don't think we're going to find electronics to be more efficient than microkinetic computers like biology. I'm much more interested in significant refinements to what it means to be biological. Eventually I'll probably substrate translate over to a reversible computer but that's probably hundreds to thousands of years out

So I'd really like to have some idea how to build a machine that teaches a plant to do something like a safe, human-compatible version of this.

🤔 This is actually a path to progress, right? The difficulty in alignment is figuring out what we want precisely enough that we can make an AI do it. It seems like a feasible research project to map this out for kudzugoth.

Seems convincing enough that I'm gonna make a Discord and maybe switch to this as a project. Come join me at Kudzugoth Alignment Center! ... 😅 I might close again quickly if the plan turns out to be fatally flawed, but until then, here we go.

Building new organisms from scratch (synthetic biology) is an engineering problem. Fundamentally we need to build the right parts and assemble them.

Without major breakthroughs (Artificial Superintelligence) there's no meaningful "alignment plan", just a scientific discipline. There's no sense in which you can really "align" an AI system to do this. The closest things would be:

  • building a special purpose model (EG:alphafold) useful for solving sub-problems like protein folding
  • teaching an LLM to say "I want to build green biotech" and associated ideas/opinions.
    • which is completely useless

The problem is that biology is difficult to mess with. DNA sequencing is somewhat cumbersome; DNA writing is much more so, costing on the order of 25¢/base currently.

Also, imaging the parts to figure out what they do, and whether they're doing it, can be very cumbersome because they're too small to see with a light microscope. Everything is indirect. Currently we try to crystallize them and then use X-rays (which are small enough but also very destructive) to image the crystal and infer the structure. There's continuous progress here but it's slow.

AI techniques can be applied to some of these problems (EG: inferring protein structure from amino acid sequences (AlphaFold), or doing better quantum-level simulation (FermiNet)).

Note that AI techniques are replacing existing ones based on human-coded algorithms rooted in physics, and often have issues with out-of-distribution inputs (EG: work well for the wildtype protein but give garbage when mutations are added).

Like any ML system, we just have to feed it more data which means we need to do more wet lab work, x-ray crystallography etc.

Synthetic biology is the best way forwards but it's a giant scientific/engineering discipline, not an "alignment approach" whatever that's supposed to mean.

Without major breakthroughs (Artificial Superintelligence) there's no meaningful "alignment plan", just a scientific discipline. There's no sense in which you can really "align" an AI system to do this.

Do you expect humanity to bioengineer this before we develop artificial superintelligence? If not, presumably this objection is irrelevant.

Basically if artificial superintelligence happens before sufficiently advanced synthetic biology, then one way to frame the alignment problem is "how do we make an ASI create a nice kudzugoth instead of a bad kudzugoth?".

I guess but that's not minimal and doesn't add much.

"how do we make an ASI create a nice (highly advanced technology) instead of a bad (same)?".

IE: kudzugoth vs robots vs (self propagating change to basic physics)

Put differently:

If we build a thing that can make highly advanced technology, make it help rather than kill us with that technology.

Neat biotech is one such technology but not a special case.

Aligning the AI is a problem mostly independent of what the AI is doing (unless you're building special purpose non AGI models as mentioned above)

I agree that one could do something similar with other tech than neat biotech, but I don't think this proves that Kudzugoth Alignment is as difficult as general alignment. I think aligning AI to achieve something specific is likely to be a lot easier than aligning AI in general. It's questionable whether the latter is even possible and unclear what it means to achieve it.

Before AI-based bioengineering has reached the point where it can create "green goo", wouldn't it first reach the point where it can create targeted germs which destroy specific species, with green goo being a potential target for destruction? Seems like this would make defense feasible.

Maybe. Still, there are ways to harden an organism against parasitic intrusion. TLDR: you isolate and filter external things. Plants are pretty good at this already (they have no mammalian-style immune system) and employ regularly spaced filters with holes too small for bacteria in their water tubes.

The other option is to do the biological equivalent of "commoditize your complement". Don't get good at making leaves and roots, get good at being a robust middleman between leaves and roots and treat them as exploitable breedable workers. Obviously don't optimise too hard in such a way as to make the system brittle (EG:massive uninterrupted monocultures). Have fallback options ready to deploy if something goes wrong.

If you want to make any victory Pyrrhic, just re-use common Earth plant parts wholesale. To kill the organism you'd then need root-eating fungi for all the food crops and common trees/grasses, and likewise for any leaf fungus/bacteria. The organism can select between plant varieties to remain effective, so the defender has to release bioweapons that kill the most important plants.

I'm skeptical about the timeline here. Unless we allow for the laws of physics, chemistry and biology to be completely suspended, this plan will take centuries to accomplish, even if we assume the shoggokudzu had the absolute "peak" possible growth rate for a biological organism. Biology is hard capped in its ability to metabolize captured matter, and for a good reason: if it could be done faster, the life would simply cook itself with the energy spillover.


Shoggokudzu could conceivably make the AI victory inevitable in a long enough timeline, but not particularly fast, when a determined human with a chainsaw and a lighter can destroy years of its growth in 10 seconds. Human civilization is almost perfectly designed to be the ultimate "pest" against vast biological systems. Destroying biomass and destabilizing complex ecosystems is basically our core trait.

Let's talk growth rates.

A corn seed weighs 0.25 grams. A corn cob weighs 0.25kg. It takes 60-100 days to grow. Assuming 1 cob per plant and 80 days that's 80/log(1000,2)=8 days doubling time not counting the leaves and roots. I'd guess it's closer to 7 days including stalk, leaves and roots.

Kudzu can grow one foot per day.

Suppose a doubling time of one week, which is pretty conservative. This means a daily growth rate of 2^(1/7) --> 10%, so whatever area it's covering, it grows 10% of that per day. For a square patch measuring 100m*100m that means each side grows 0.25 meters per day. This is in line with kudzu initially.

  • initial : (100m)² 0.25m/day linear
  • month1 : (450m)² 1.2m/day linear
  • month2 : (2km)² 5m/day linear
  • month3 : (2km)² 22m/day linear
  • month4 : (9km)² 100m/day linear
  • month5 : (40km)² 440m/day linear
  • month6 : (180km)² 2km/day linear
  • month7 : (800km)² 9km/day linear
  • month8 : (16000km)² 40km/day linear (half of earth surface area covered)
  • 8m1w : all done

1-week doubling times are enough to get you biosphere assimilation in under a year. Going full Tyranid and eating the plants/trees/houses can speed things up further. Much better efficiencies are achievable by eating the plants and reusing most of the cellular machinery. A doubling time of two days takes the 8-month global coverage time down to 10 weeks. Remember, E. coli doubles in 20 minutes, so if we can literally re-use the whole tree (jack into the sap being produced) while eating the structural wood, doubling times could get pretty absurd.
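
As a sanity check on those figures, here's a minimal back-of-envelope sketch (my arithmetic, not from the post) of the time to cover half the Earth's surface starting from a single (100m)² patch, given an area doubling time:

```python
# Back-of-envelope: time for an exponentially growing mat to cover half of
# Earth's surface, given an area doubling time. Assumes clean exponential
# growth from a single (100 m)^2 seed patch; real-world obstacles are ignored.
import math

EARTH_SURFACE_M2 = 5.1e14          # total surface area of Earth, m^2
TARGET_M2 = EARTH_SURFACE_M2 / 2   # "half of earth surface area covered"
SEED_M2 = 100.0 * 100.0            # initial (100 m)^2 patch

def days_to_cover(doubling_time_days: float) -> float:
    doublings = math.log2(TARGET_M2 / SEED_M2)
    return doubling_time_days * doublings

print(f"1-week doubling: {days_to_cover(7):.0f} days (~8 months)")
print(f"2-day doubling : {days_to_cover(2):.0f} days (~10 weeks)")
```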

The reason for specifying modular construction is to enable faster linear growth rates which are necessary for fast spread. Starting from multiple points is also important. Much better to have 10000 small 1m*1m patches spread out globally than a single 100m*100m patch. Same timeline but 100x lower required linear expansion rate.

So at month8 the edge grows 0.46 m/s. That doesn't sound very plausible to me.
In this timeline the area doubles about every week, so all the growth must happen in two dimensions (as opposed to the corn's weight gain); it couldn't get thicker. That means its bandwidth for nutrient transport would not change, so it couldn't support the exponential growth on the edges.
(although, as it took a break from growth between month2 and month3, some restructuring might have happened)

First, more patches growing from different starting locations is better. That cuts the required linear expansion rate in proportion to the ratio of (half the Earth's circumference) to (max distance between patches).

Note that 0.46 m/s is walking speed. Two-layer fractal growth is practical (IE: specialised spikes grow outwards at 0.46 m/s, initiating slower growth fronts that cover the area between them more slowly).

Material transport might become the binding constraint, but transport gets more efficient as you increase density. Larger tubes have higher flow velocities with the same pressure gradient (less benefit once turbulence sets in). Air bearings (think very long air hockey table) are likely close to optimal and easy enough to construct.
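
For reference, here's the standard laminar-flow (Hagen-Poiseuille) scaling behind the "larger tubes flow faster" point; the numbers are illustrative and the r² scaling only holds until the flow goes turbulent, as noted above.

```python
# Laminar flow in a circular tube (Hagen-Poiseuille): mean velocity scales with
# the square of the tube radius at a fixed pressure gradient. Illustrative only.
def mean_velocity(radius_m: float, pressure_gradient_pa_per_m: float,
                  viscosity_pa_s: float = 1.0e-3) -> float:
    """Mean laminar velocity: v = (dP/dx) * r^2 / (8 * mu)."""
    return pressure_gradient_pa_per_m * radius_m**2 / (8 * viscosity_pa_s)

for r in (1e-4, 1e-3, 1e-2):            # 0.1 mm, 1 mm, 1 cm tube radius
    v = mean_velocity(r, pressure_gradient_pa_per_m=100.0)  # 100 Pa/m, water-like fluid
    print(f"r = {r*1e3:>4.1f} mm -> v = {v:.3g} m/s")
```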

As for biomass per area: corn grows to 10 Mg/ha = 1 kg/m².

For a kilometer-long front that implies half a tonne per second. Train cars mass in the tens to hundreds of tonnes; assuming 10 tonnes and 65 feet, that's half a tonne per meter of train. So move a train equivalent at (1 m/s + 0.5 m/s) --> 1.5 m/s (running speed) and that supplies a kilometer of frontage.
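
Spelling out that arithmetic (my numbers, rounded slightly differently than above):

```python
# Sanity check of the front-supply estimate: biomass demanded per second at the
# advancing edge vs. what a train-like internal transport stream could deliver.
FRONT_LENGTH_M  = 1_000    # one kilometre of advancing frontage
FRONT_SPEED_MPS = 0.46     # edge speed from the month-8 figure above
AREAL_DENSITY   = 1.0      # kg of biomass per m^2 (corn-like, 10 Mg/ha)

demand_kg_per_s = FRONT_LENGTH_M * FRONT_SPEED_MPS * AREAL_DENSITY
print(f"biomass demand: {demand_kg_per_s:.0f} kg/s")   # ~460 kg/s, roughly half a tonne

TRAIN_LINEAR_DENSITY = 10_000 / 20   # kg per metre of "train" (10 t per ~20 m car)
supply_speed = demand_kg_per_s / TRAIN_LINEAR_DENSITY   # speed needed relative to the front
print(f"feed speed over ground: {supply_speed + FRONT_SPEED_MPS:.1f} m/s")  # ~1.4 m/s
```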

There's obviously room to scale this.

I'm also ignoring oceans. Oceans make this easier since anything floating can move like a boat for which 0.5m/s is not significant speed.

Added notes:

I would assume the assimilation front has higher biomass/area than inner enclosed areas since there's more going on there and potentially conflict with wildlife. This makes things trickier and assembly/reassembly could be a pain so maybe put it on legs or something?

That is only plausible from a "perfect conditions" engineering perspective where the Earth is a perfect sphere with no geography or obstacles, resources are optimally spread, and there is no opposition. Neither kudzu nor even microbes can spread optimally.

And this assumes that the only issues the shoggokudzu faces are soil/water issues, mountains, rivers, pests, natural blights and diseases, mold, bad weather, its own mutations, etc. One man with a BIC lighter can destroy weeks of work. Wildfires spread faster than plants. Planes with herbicides, or combine harvesters with a chipper, move much faster than plants grow. As bad as engineered Green Goo is, the Long Ape is equally formidable at destruction.

This is not to say Kudzuapocalypse would not be absolutely awful. It might, over a long enough timeline, beat the natural Earth ecosystem, and decades/centuries after, humanity itself. But this would not be an instantaneous process.

*Fire*

Forest fires are a tragedy-of-the-commons situation. If you are a tree in a forest, even if you are not contributing to a fire you still get roasted by it. Fireproofing has costs, so trees make the individually rational decision to be fire-contributing. An engineered organism does not need to do this.

The photosynthetic top layer should be flat, with active pumping of air. Air intakes/exhausts seal in fire conditions. This gives much less surface area for ignition than existing plants.

The easiest option is to keep some water in reserve to fight fires directly. Possibly add some silicates and heat-activated foaming agents to form an intumescent layer, secreted from the top layer on demand.

That is only plausible from a "perfect conditions" engineering perspective where the Earth is a perfect sphere with no geography or obstacles, resources are optimally spread, and there is no opposition. Neither kudzu, or even microbes can spread optimally.

I'll clarify that a very important core competency is transport of water/nutrients. Plants don't currently form desalination plants (seagulls do this to some extent) or continent-spanning water pumping networks. The fact that rivers are dumping enormous amounts of fresh water into the oceans shows that nature isn't effective at capturing precipitation. Some plants have reservoirs where they store precipitation. This organism should capture all precipitation and store it. Storage tanks get cheaper with scale.

Plant growth currently depends on pulling inorganic nutrients and water out of the soil; C, O and N can be extracted from the atmosphere.

An ideal organism roots itself into the ground, extracts as much as possible from that ground, then writes it off once other newly covered ground is more profitably mined. Capturing precipitation directly means there's no need to go into the soil for water, although it might be worthwhile to drain the water table when reachable or even drill wells like humans do. No need for nutrient-gathering roots after that. If it covers an area of phosphate-rich rock it starts excavating and ships it far and wide, as humans currently do.

As for geographic obstacles, two-thirds of the Earth is ocean. With a design for a floating breakwater that can handle ocean waves, the wavy area can be enclosed and eventually eliminated. Covered area behind the breakwater can prevent formation of waves by preventing ripple formation (IE: act as a distributed breakwater).

If it's hard to cover mountains, then the AI can spend a bit of time solving the problem during the first few months, or accept a small loss in total coverage until it does get around to the problem.

One man with a BIC lighter can destroy weeks of work. Wildfires spread faster than plants. Planes with herbicides, or combine harvesters with a chipper, move much faster than plants grow. As bad as engineered Green Goo is, the Long Ape is equally formidable at destruction.

I even bolded the parts about killing all the humans first. Yes humans can do a lot to stop the spread of something like this. I suspect humans might even find a use for it (EG:turn sap into ethanol fuel) and they're likely clever enough to tap it too.

I'm not going to expand on "kill humans with pathogens" for Reasons. We can agree to disagree there.

I completely agree we should not be talking pathogen use strategies online, for... obvious reasons, even if we put aside the threat of malicious AI. Humans taking ideas from that would be bad enough. I simply don't see the pathogen route as being as dangerous as many people say, due to inherent limitations of organic systems (and microscopic systems in general). But further explaining how, why, etc. is a bad idea, so let's agree to disagree.

I think you glossed over the section where the malevolent AI simultaneously releases super-pathogens to ensure that there aren't any pesky humans left to meddle with its kudzugoth.

I did not, I just do not think any kind of scientifically plausible pathogen can wipe out humanity, or even seriously diminish our numbers. There is a trade-off between lethality and virality of any pathogen; if it kills too fast or too surely, it cannot spread. If it spreads quickly, it cannot be too deadly. Dead men do not travel or cough. 

Probably the worst outcome would be something like Super-Covid, a disease that spreads easily, usually does not kill, but causes long term detriment to human health.  Anything more deadly than that would sound all of the post-Covid alarms, and lead to quarantine, rampant disinfectant use, and masks/gloves/protection being commonplace. No biological pathogen can reliably beat those, unless it is straight up dry nanotech that can spread via onboard propulsion, survive caustic chemicals, and burrow through latex: in other words, science fiction/magic.

I don't think getting into much detail here is a good idea, but a pathogen could have a long incubation period after which it's disastrous. HIV is a classic example, and something engineered could be far worse.

raises finger

realizes I'm about to give advice on creating superpathogens

I'm not going to go into details besides stating two facts:

A common reasoning problem I see is:

  • "here is a graph of points in the design space we have observed"
    • EG:pathogens graphed by lethality vs speed of spread
  • There's an obvious trendline/curve!
    • therefore the trendline must represent some fundamental restriction on the design space.
    • Designs falling outside the existing distribution are impossible.

This is the distribution explored by nature. Nature has other concerns that lead to the distribution you observe. That pathogens have a "lethality vs spread" relationship tells you about the selection pressures selecting for pathogens, not the space of possible designs.

This post is important for setting a lower bound on the AI capabilities required for an AI takeover or pivotal act. Biology is an existence proof that some kind of "goo" scenario is possible. It somewhat lowers the bar compared to Yudkowsky's dry nanotech scenario but still requires the AI to practically build an entire scientific/engineering discipline from scratch. Many will find this implausible.

Digital tyranny is a better capabilities lower bound for a pivotal act or AI takeover strategy. It wasn't nominated though which is a shame.

You can still nominate posts until Dec 14th?

I agree that green infrastructure is a more plausible way of killing humans and getting independent infrastructure for a malicious AI. However, building green infrastructure is slower than nanotech, and thus it will be more visible to outsiders and more vulnerable. Even if it takes only weeks, that could be enough to trigger alarms.

Nanotech would definitely be nice but some people have expressed skepticism so I'm proposing an alternative non-(dry)nanotech route.

I'm assuming the AGI is going to kill off all the humans quickly with highly fatal pathogens with long incubation times. Whatever works to minimize transitional chaos and damage to valuable infrastructure.

The meat of this is a proposed solution for thriving after humans are dead. The green infrastructure doesn't have to be that large to sustain the AI's needs initially. A small cluster of a few dozen consumer gpus + biotech interfacing hardware may be the AI's temporary home until it can build up enough to re-power datacenters and do more scavenging.

Although I'd go with multiple small clusters for redundancy. Initial power consumption can be more than handled by literally a backyard's worth of kudzugoth and a small bio-electric generator. Plant-based solar-to-sugar-to-electricity should give 50 W/m², so for a 6 kW cluster with 20 GPUs a 20m*10m patch should do, and could be unobtrusive, blending into the surrounding vegetation.
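
The arithmetic, spelled out (the 300 W per GPU is my assumption to match the 6 kW figure):

```python
# Rough check of the "backyard's worth of kudzugoth" power figure. Assumes the
# comment's 50 W/m^2 plant-to-electricity estimate and a nominal 300 W per GPU.
PATCH_M2      = 20 * 10     # 20 m x 10 m patch
POWER_PER_M2  = 50          # W/m^2, sunlight -> sugar -> electricity
GPUS          = 20
WATTS_PER_GPU = 300         # assumed consumer-GPU draw -> 6 kW cluster

supply_w = PATCH_M2 * POWER_PER_M2   # 10,000 W
demand_w = GPUS * WATTS_PER_GPU      # 6,000 W
print(f"supply {supply_w/1e3:.0f} kW vs demand {demand_w/1e3:.0f} kW "
      f"(margin {supply_w/demand_w:.1f}x)")
```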

A population of well coordinated humans will not be significantly preyed upon by viruses despite viruses being the fastest evolving threat.

Perhaps for natural viruses. But this has not been tested under a sustained adversary developing synthetic viruses.

Even for the latest strain of COVID there may be possibilities for another 10x in virulence with only a modest decrease in lethality.

well coordinated

Yes, assume no intelligent adversary.

  • Well coordinated -->
    • enforced norms preventing individuals from making superpathogens.
    • large scale biomonitoring
    • can and will rapidly deploy vaccines
    • will rapidly quarantine based on bio monitoring to prevent spread
    • might deploy sterilisation measures (EG:UV-C sterilizers in HVAC systems)

There is a tradeoff to be made between level of bio monitoring, speed of air travel, mitigation tech and risk of a pathogen slipping past. Pathogens that operate on 2+day infection-->contagious times should be detectable quickly and might kill 10000 worst case. That's for a pretty aggressive point in the tradeoff space.

Earth is not well coordinated. The success of some places in keeping out COVID shows what actual competence could accomplish. A coordinated Earth won't see much impact from the worst of natural pathogens, much less COVID-19.

Even assuming a 100% lethal, long-incubation-time, highly infective pathogen for which no vaccine can be made: biomonitoring can detect it prior to symptoms, then quarantine happens and 99+% of the planet remains uninfected. Pathogens travel because we let them.

  • enforced norms preventing individuals from making superpathogens.

How could this enforcement be carried out within every nation? Who will be the enforcer(s)?

The adversary here is assumed to be nature/evolution. I'm not referring to scenarios where intelligent agents are designing pathogens.

Humans can design vaccines faster than viruses can mutate. A population of well coordinated humans will not be significantly preyed upon by viruses despite viruses being the fastest evolving threat.

Nature is the threat in this scenario as implied by that last bit.

No adversary, or group of adversaries, in the real world exists in isolation. Humans will take advantage of viruses, and viruses will take advantage of humans, as in the case of Toxoplasma gondii.

In other words, all possible threats are co-determinants, to varying degrees, of the real threat faced by actual humans, even those without intelligent agency.

So this assumption would quickly break down outside a fantasy world.