While I wasn't at 80% on the lab-leak hypothesis when Eliezer asserted it a month ago, I'm now at 90%. It will take a while for this to filter through society, but I think we can already look at what we ourselves got wrong.
In the 2014 LessWrong survey, more people considered bioengineered pandemics a global catastrophic risk than AI. At the time there was a public debate about gain-of-function research. One editorial described the risks of gain-of-function research as:
Insurers and risk analysts define risk as the product of probability times consequence. Data on the probability of a laboratory-associated infection in U.S. BSL3 labs using select agents show that 4 infections have been observed over <2,044 laboratory-years of observation, indicating at least a 0.2% chance of a laboratory-acquired infection (5) per BSL3 laboratory-year. An alternative data source is from the intramural BSL3 labs at the National Institutes of Allergy and Infectious Diseases (NIAID), which report in a slightly different way: 3 accidental infections in 634,500 person-hours of work between 1982 and 2003, or about 1 accidental infection for every 100 full-time person-years (2,000 h) of work (6).
A simulation model of an accidental infection of a laboratory worker with a transmissible influenza virus strain estimated about a 10 to 20% risk that such an infection would escape control and spread widely (7). Alternative estimates from simple models range from about 5% to 60%. Multiplying the probability of an accidental laboratory-acquired infection per lab-year (0.2%) or full-time worker-year (1%) by the probability that the infection leads to global spread (5% to 60%) provides an estimate that work with a novel, transmissible form of influenza virus carries a risk of between 0.01% and 0.1% per laboratory-year of creating a pandemic, using the select agent data, or between 0.05% and 0.6% per full-time worker-year using the NIAID data.
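The arithmetic in the quoted passage is easy to check. This is a minimal sketch using only the numbers from the quote itself; the model is the editorial's own: probability of a lab-acquired infection per year multiplied by the probability that such an infection escapes control and spreads globally.

```python
# Numbers taken directly from the quoted editorial; the model is a
# simple product: P(pandemic/year) = P(lab infection/year) * P(global spread).

p_infection_per_lab_year = 4 / 2044       # select-agent BSL3 data: ~0.2%
p_infection_per_worker_year = 1 / 100     # NIAID data: ~1% per full-time worker-year
p_spread_low, p_spread_high = 0.05, 0.60  # range of model estimates for escape to global spread

print(f"per lab-year:    {p_infection_per_lab_year * p_spread_low:.3%} "
      f"to {p_infection_per_lab_year * p_spread_high:.3%}")
print(f"per worker-year: {p_infection_per_worker_year * p_spread_low:.3%} "
      f"to {p_infection_per_worker_year * p_spread_high:.3%}")
```

This reproduces the editorial's ranges: roughly 0.01% to 0.1% per laboratory-year, and 0.05% to 0.6% per full-time worker-year.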
Even at the lower bound of 0.05% per full-time worker-year, it seems crazy that society continued playing Russian roulette. We could have seen the issue and protested. EAs could have created organizations to fight against gain-of-function research. Why didn't we speak every Petrov Day about the necessity of stopping gain-of-function research? Organizations like OpenPhil should go through the Five Whys and model why they messed this up and didn't fund the cause. What needs to change so that we as rationalists and EAs are able to organize against tractable risks that our society takes without good reason?
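To make the Russian roulette metaphor concrete: if each full-time worker-year is treated as an independent draw at the quoted lower bound of 0.05%, the cumulative probability of at least one pandemic compounds quickly. The worker-year counts below are hypothetical illustrations, not figures from the post.

```python
# Hedged sketch: cumulative risk from repeated independent "rounds".
# p is the per-worker-year lower-bound probability from the quote;
# the worker-year totals are illustrative assumptions, not sourced figures.
p = 0.0005  # 0.05% per full-time worker-year

for n in (100, 500, 1000):
    cumulative = 1 - (1 - p) ** n  # P(at least one pandemic in n worker-years)
    print(f"{n:>5} worker-years -> {cumulative:.1%} chance of at least one pandemic")
```

At a hypothetical 1,000 cumulative worker-years, even the lower bound implies roughly a two-in-five chance of a pandemic, which is the sense in which "continued playing Russian roulette" is not hyperbole.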
I think Eliezer ignores how important prestige is for the Chinese. We got them to outlaw human cloning by telling them that doing it would put the Chinese academic community in a bad light.
We likely could have done the same with gain-of-function research. For the Chinese, getting their first biosafety level 4 lab was likely mostly about prestige. Having no BSL-4 labs while many other countries had them wasn't acceptable to the Chinese, because it suggested they weren't advanced enough.
I do think it would be possible to make a deal that gives China the prestige it wants for its scientists without it having to endanger everyone.
The Chinese took down their database of all the viruses in their possession on September 26, 2019. In their own words, they took it down because of a hacking attack during the pandemic (which suggests the pandemic started for them somewhere in September). If we had the database, we would likely find a more closely related virus in it. Given that the point of creating the database in the first place was to help us in a coronavirus pandemic, taking it down and not giving it to anyone is a clear sign that it contains something that would implicate them.
Basically, people outside the virology community told them they had to stop after the CDC exposed 75 of its scientists to anthrax and, a few weeks later, other scientists found forgotten vials of smallpox in a freezer.
The virology community's reaction was to redefine what counts as gain-of-function research and to continue endangering everyone.
It's like Wall Street people, when asked whether they engage in insider trading, saying: "According to our definition of what insider trading means, we didn't."
I have written all my sources up at https://www.lesswrong.com/posts/wQLXNjMKXdXXdK8kL/fauci-s-emails-and-the-lab-leak-hypothesis