While I wasn't at 80% on the lab-leak hypothesis when Eliezer asserted it a month ago, I'm now at 90%. It will take a while until this filters through society, but I feel we can already look at what we ourselves got wrong.
In the 2014 LessWrong survey, more people considered bioengineered pandemics a global catastrophic risk than AI. At the time there was a public debate about gain-of-function research. One editorial described the risks of gain-of-function research as:
Insurers and risk analysts define risk as the product of probability times consequence. Data on the probability of a laboratory-associated infection in U.S. BSL3 labs using select agents show that 4 infections have been observed over <2,044 laboratory-years of observation, indicating at least a 0.2% chance of a laboratory-acquired infection (5) per BSL3 laboratory-year. An alternative data source is from the intramural BSL3 labs at the National Institutes of Allergy and Infectious Diseases (NIAID), which report in a slightly different way: 3 accidental infections in 634,500 person-hours of work between 1982 and 2003, or about 1 accidental infection for every 100 full-time person-years (2,000 h) of work (6).
A simulation model of an accidental infection of a laboratory worker with a transmissible influenza virus strain estimated about a 10 to 20% risk that such an infection would escape control and spread widely (7). Alternative estimates from simple models range from about 5% to 60%. Multiplying the probability of an accidental laboratory-acquired infection per lab-year (0.2%) or full-time worker-year (1%) by the probability that the infection leads to global spread (5% to 60%) provides an estimate that work with a novel, transmissible form of influenza virus carries a risk of between 0.01% and 0.1% per laboratory-year of creating a pandemic, using the select agent data, or between 0.05% and 0.6% per full-time worker-year using the NIAID data.
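The arithmetic in the quoted passage can be reproduced directly. A minimal sketch; all numbers come from the editorial, and the variable names are mine:

```python
# Reproduce the editorial's arithmetic: risk of causing a pandemic
# = P(lab-acquired infection per year of work) * P(infection spreads globally).

p_infection_lab_year = 0.002     # select agent data: 0.2% per BSL3 lab-year
p_infection_worker_year = 0.01   # NIAID data: 1% per full-time worker-year

p_spread_low, p_spread_high = 0.05, 0.60  # P(escape leads to global spread)

lab_year_range = (p_infection_lab_year * p_spread_low,
                  p_infection_lab_year * p_spread_high)
worker_year_range = (p_infection_worker_year * p_spread_low,
                     p_infection_worker_year * p_spread_high)

# Roughly 0.01%-0.1% per lab-year and 0.05%-0.6% per worker-year,
# matching the ranges the editorial reports.
print(lab_year_range, worker_year_range)
```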
Even at the lower bound of 0.05% per full-time worker-year, it seems crazy that society kept playing Russian roulette. We could have seen the issue and protested. EAs could have created organizations to fight against gain-of-function research. Why didn't we speak every Petrov Day about the necessity of stopping gain-of-function research? Organizations like OpenPhil should go through the Five Whys and model why they missed this and didn't fund the cause. What needs to change so that we as rationalists and EAs can organize to fight tractable risks that our society takes without good reason?
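To make the Russian-roulette framing concrete: the per-year risk compounds across all the worker-years a field accumulates. A minimal sketch, under the simplifying assumption that worker-years are independent (the worker-year counts are illustrative, not data):

```python
def cumulative_pandemic_risk(p_per_year: float, worker_years: int) -> float:
    """P(at least one pandemic) over n independent worker-years of work,
    via the complement: 1 - P(no pandemic in any year)."""
    return 1 - (1 - p_per_year) ** worker_years

# The editorial's lower bound: 0.05% per full-time worker-year.
p = 0.0005
for n in (100, 1000, 5000):
    print(n, round(cumulative_pandemic_risk(p, n), 3))
```

Even at the lower bound, a thousand worker-years of such research already carries a double-digit percent chance of sparking a pandemic.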
The first step would be to treat it the way we treat other X-risks. In the case of OpenPhil, the topic should have been important enough to task a researcher with summarizing the state of the field and what should be done. That's the OpenPhil procedure for dealing with topics that matter.
That analysis might have resulted in the observation that Marc Lipsitch has a good grasp of the subject, followed by funding him with a million dollars per year to do something about it.
It's not clear that funding Lipsitch would have been enough, but it would at least count as "we tried to do something with our toolkit."
With research, it's hard to know in advance what you'll find if you fund a bunch of smart people to think about a topic and how to deal with it.
In retrospect, finding out that the NIH illegally funneled money to Baric and Shi in circumvention of the moratorium imposed by the Office of Science and Technology Policy, and then challenging that publicly, might have prevented this pandemic. Being part of a scandal about an illegal transfer of funds would likely have seriously damaged Shi's career, given the importance of being seen as respectable in China.
Finding that out at the time would have required reading a lot of papers to understand what was going on, but I think it's quite plausible that a researcher who read through the top 200 gain-of-function papers attentively and tried to build a good model of what was happening would have caught it.