Biological global catastrophic risks were neglected for years, while AGI risks were at the top of the agenda. The main reason for this is that AGI was presented as a powerful superintelligent optimizer, while germs were seen as simple, mindless replicators. However, germs are capable of evolving, and they search the space of possible optimisations very extensively through their enormous population sizes and quick replication rate. In other words, they perform an enormous amount of computation, far beyond what our computers can do.

This optimisation power creates several dangerous effects: antibiotic resistance (for bacteria), obsolescence of vaccines (for flu), and zoonosis: the transfer of viruses from animals to humans. Sometimes it can also be beneficial, as when a virus evolves toward a lower case fatality rate (CFR).

In other words, we should think about the coronavirus not as an instance of a virus on a doorknob, but as a large optimisation process unfolding in time and space.

Thus, the main medium-term (3–6 months) question is how it will evolve and how we could steer its evolution in better directions. In other words, what will the next wave look like?

There is a claim that the second wave of the Spanish flu was more dangerous because of large hospitals: the virus was “interested” in replicating in hospitals, so it produced more severe illness. Infected people had to go to hospitals, which were overcrowded after the war, and there they infected other people, including the medical personnel who carried the virus to the next hospital.

Another point is that the virus's optimisation power depends on the number of infected people, the number of viral generations, and the selective pressure. From this perspective, the idea of “flattening the curve” is the worst, as it implies a large number of infections AND a large number of viral generations AND high selective pressure. A cruel but short global quarantine may be better.
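A minimal toy sketch of this argument (the numbers are illustrative assumptions, not estimates for any real epidemic): if every new infection is one more chance for the virus to sample a mutation, then a “flattened” epidemic that runs until a large share of the population is infected gives the virus far more search steps than a short, strict quarantine that cuts transmission chains early.

```python
# Toy comparison of the virus's "search budget" under two strategies.
# Every new infection is counted as one opportunity for the virus to try a
# mutation. All numbers are illustrative assumptions.

POPULATION = 8_000_000_000

# "Flatten the curve": transmission is slowed, but the epidemic still runs
# until a large share of the population has been infected (assume ~60%).
flattened_infections = int(POPULATION * 0.6)

# Short, strict global quarantine: chains are cut early, so only a small
# fraction of the population is ever infected (assume ~0.1%).
quarantined_infections = int(POPULATION * 0.001)

MUTATION_CHANCES_PER_INFECTION = 1  # assumed; the exact value only rescales both results

print(f"flattened:  {flattened_infections * MUTATION_CHANCES_PER_INFECTION:,} mutation opportunities")
print(f"quarantine: {quarantined_infections * MUTATION_CHANCES_PER_INFECTION:,} mutation opportunities")
```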

6 comments

> Biological global catastrophic risks were neglected for years, while AGI risks were at the top of the agenda.

This is a true statement about the attention allocation on LessWrong, but definitely not a true statement about the world's overall resource allocation. Total spending on pandemic preparedness is and was orders of magnitude greater than spending on AGI risk. It's just a hard problem, which requires a lot of expensive physical infrastructure to prepare for.

But pandemics were not explored as possible x-risks.

They did come up in our censuses, when we asked people about x-risks.

> Biological global catastrophic risks were neglected for years, while AGI risks were at the top of the agenda. The main reason for this is that AGI was presented as a powerful superintelligent optimizer, while germs were seen as simple, mindless replicators.

I think that is an inaccurate description of why people on LessWrong have focused on AI risk over pandemic risk.

A pandemic certainly could be an existential risk, but the chance of that seems to be low. COVID-19 is a once-in-a-century level event, and its worst case scenario is killing ~2% of the human population. Completely horrible, yes, but not at all an existential threat to humanity. Given that there hasn't been an existential threat from a pandemic before in history, it seems unlikely that one would happen in the next few hundred years. On the other hand, AI risk is relevant in the coming century, or perhaps sooner (decades?). It at least seems plausible to me that the danger from the two is on the same order of magnitude, and that humans should pay roughly equal attention to the x-risk from both.

However, while there are many people out there who have been working very hard on pandemic control, there aren't many who focus on AI risk. The WHO has many researchers specializing in pandemics, along with scientists across nations, while the closest thing for AI safety might be MIRI or FHI, meaning that an individual on LW might have an impact on AI risk in a way that an individual wouldn't on pandemic risk. On top of that, the crowd on LW tends to be geared towards working on AI (knowledge of software, philosophy) and not so much towards pandemic risk (knowledge of biology, epidemiology).

Finally, while they weren't the top priority, LW has definitely talked about pandemic risk over the years. See the results on https://duckduckgo.com/?q=pandemic+risk+site%3Alesswrong.com+-coronavirus+-covid&t=ffab&ia=web

> From this perspective, the idea of “flattening the curve” is the worst, as it implies a large number of infections AND a large number of viral generations AND high selective pressure

Flattening _per se_ doesn't affect the evolution of the virus much. It doesn't evolve on a time grid, but rather on an event grid, where an event is the virus spreading from one person to another. As long as it spreads the same number of times, it will have the same number of opportunities to evolve.
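A small sketch of this point (the per-transmission mutation probability below is an assumed, purely illustrative value): if two epidemics involve the same number of person-to-person transmissions, the expected number of mutation events is the same whether they take three months or a year.

```python
# Mutation opportunities accrue per transmission event, not per unit of
# calendar time. The probability is an assumed illustrative value.

MUTATION_PROB_PER_TRANSMISSION = 1e-3

def expected_mutation_events(n_transmissions: int, duration_months: float) -> float:
    # Duration is deliberately ignored: only the number of transmission
    # events determines how many chances the virus gets to change.
    return n_transmissions * MUTATION_PROB_PER_TRANSMISSION

fast = expected_mutation_events(n_transmissions=1_000_000, duration_months=3)
flat = expected_mutation_events(n_transmissions=1_000_000, duration_months=12)

print(fast == flat)  # True: same number of spreads, same evolutionary opportunity
```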

Hospitals are already becoming the main centers of virus infection, and this is very dangerous from an evolutionary perspective:

"Major hospitals such as Bergamo’s “are themselves becoming sources of [coronavirus] infection,” Cereda said, with Covid-19 patients indirectly transmitting infections to non-Covid-19 patients. Ambulances and infected personnel, especially those without symptoms, carry the contagion both to other patients and back into the community."