I feel like this just happened? There were a good number of articles written about this; see, for example, this article by the Global Priorities Project on GoF research:
http://globalprioritiesproject.org/wp-content/uploads/2016/03/GoFv9-3.pdf
I also remember a number of other articles by people working on biorisk, but would have to dig them up. But overall I had a sense there was both a bunch of funding and a bunch of advocacy and a bunch of research on this topic.
I searched LessWrong itself for "gain of function" and it didn't bring up much. Searching for it on OpenPhil finds it mentioned a few times, so it seems that while OpenPhil encountered the topic, they failed to identify it as a cause area that needed funding.
All the hits on OpenPhil are from 2017 and earlier, and in 2018 the Trump administration ended the ban on gain-of-function research. That should have been a moment of public protest by our community.
I have now read the paper, and given what we saw last year, the market mechanism they proposed seems flawed. If an insurance company were responsible for paying out the damage created by the pandemic, that company would be insolvent and unable to pay for the damage. At the same time, the suppression of the lab leak hypothesis (and all the counterparty risk that comes with a major insurance company going bankrupt) would have been even stronger when the existence of a billion-dollar company depends on people not believing in the lab leak.
Here is a data point not directly relevant to Less Wrong, but perhaps to the broader rationality community:
Around this time, Marc Lipsitch organized a website and an open letter warning publicly about the dangers of gain-of-function research. I was a doctoral student at HSPH at the time, and shared this information with a few rationalist-aligned organizations. I remember making an offer to introduce them to Prof. Lipsitch, so that maybe he could give a talk. I got the impression that the Future of Life Institute had some communication with him, and I see from their 2015 newsletter that there is some discussion of his work, but I am not sure if anything more concrete came out of this.
My impression was that while they considered this important, this was more of a catastrophic risk than an existential risk, and therefore outside their core mission.
While this crisis was a catastrophe and not an existential challenge, it's unclear why that has to be the case in general.
The claim that global catastrophic risk isn't part of the FLI mission seems strange to me. It's the thing CEA's Global Priorities Project focuses on (global catastrophic risk is mentioned more prominently in the Global Priorities Project's materials than X-risk).
FLI does say on its website that one of its five areas is:
...Biotechnology and genetics often inspire as much fear as excitement, as people worry about the possibly neg
Here is a video of Prof. Lipsitch at EA Global Boston in 2017. I haven't watched it yet, but I would expect him to discuss gain-of-function research: https://forum.effectivealtruism.org/posts/oKwg3Zs5DPDFXvSKC/marc-lipsitch-preventing-catastrophic-risks-by-mitigating
I can't speak for LessWrong as a whole, but I looked into this a little bit around that time, and concluded that things actually looked like they were heading in a sensible direction. In particular, towards the end of 2014, the US government stopped funding gain-of-function research: https://www.nature.com/articles/514411a, and there seemed to be a growing consensus/understanding that it was dangerous. I think anyone doing (at least surface-level) research in 2014/early 2015 could have reasonably concluded that this wasn't a neglected area. That does leave open the question of what I did wrong in not noticing that the moratorium was lifted 3 years later...
It seems that when a dangerous practice is stopped pending a safety review, it makes sense to schedule a moment in the future to check how the safety review turned out.
Maybe a way forward would be:
Whenever something done by a lot of scientists is categorically stopped pending a safety review, make a Metaculus question about how the safety review is likely to turn out.
That way when the safety review turns out negatively, it triggers an event that's seen by a bunch of people who can then write a LessWrong post about it?
That leaves the question whether there are any comparable moratoriums out there that we should look at more.
Eliezer seemed to think that the ban on funding for gain-of-function research in the US simply led to research grants going to labs outside the US (the Wuhan Institute of Virology in particular). He doesn't really cite any sources here, so I can't do much to fact-check his hypothesis.
Upon further googling, this gets murkier. Here's a very good article that goes into depth about what the NIH did and didn't fund at WIV and whether such research counts as "gain of function research".
Some quotes from the article:
...The NIH awarded a $3.4 million grant to the non-pro
I found the original website for Prof. Lipsitch's "Cambridge Working Group" from 2014 at http://www.cambridgeworkinggroup.org/ . While the website does not focus exclusively on gain-of-function, this was certainly a recurring theme in his public talks about this.
The list of signatories (which I believe has not been updated since 2016) includes several members of our community (apologies to anyone who I have missed):
Interestingly, there was an opposing group arguing in favor of this kind of research, at http://www.scientistsforscience.org/. I do not recognize a single name on their list of signatories.
That's interesting. That leaves the question of why the FHI mostly stopped caring about it after 2016.
Past that point, https://www.fhi.ox.ac.uk/wp-content/uploads/Lewis_et_al-2019-Risk_Analysis.pdf and https://www.fhi.ox.ac.uk/wp-content/uploads/C-Nelson-Engineered-Pathogens.pdf seem to be about gain-of-function research while completely ignoring the issue of potential lab leaks, treating it only as an interesting biohazard topic.
My best guess is that it's like in math, where applied researchers are lower status than theoretical researchers...
In the 2014 LessWrong survey, more people considered bioengineered pandemics a global catastrophic risk than AI.
It goes back further than that. Pandemic (especially the bioengineered type) was also rated as the greater risk in the 2012 survey, and also the most feared in the 2011 survey, which was the earliest one I could find that asked the question.
It seems like it has been one of the global catastrophic risks we've taken most seriously here at LessWrong from the beginning. It's one of our cached memes. It's a large part of the reason that we rationalists, as a subculture, were able to react to the coronavirus threat so much more quickly than the mainstream. It was a possibility we had considered seriously a decade before it happened.
Eliezer's X-risk emphasis has always been about extinction-level events, and a pandemic ain't one, so it didn't get a lot of attention from... the top.
Events that kill 90% of the human population can easily be extinction-level events, and in 2014 more LessWrongers believed that pandemics could do that than believed AI could.
I don't disagree that it was discussed on LW... I'm just pointing out that there was little interest from the founder himself.
Killing 90% of the human population would not be enough to cause extinction. That would put us at a population of 800 million, higher than the population in 1700.
Shimux claims that Eliezer's emphasis was always about X-risk and not global catastrophic risks. If that's true why was the LW survey tracking global catastrophic risks and not X-risk?
I actually agree with you there, there was always discussion of GCR along with extinction risks (though I think Eliezer in particular was more focused on extinction risks). However, they're still distinct categories: even the deadliest of pandemics is unlikely to cause extinction.
Modern civilisation depends a lot on collaboration. I think it's plausible that extinction happens downstream of the destabilization caused by a deadly pandemic, especially as the tech level grows.
That doesn't ring true to me. I'm curious why you think that, even though I'm irrationally short-termist: "100% is actually much worse than 90%" says my brain dryly, but I feel like a 90% deadly event is totally worth worrying about a lot!
A related question is why the topic of GoF research still didn't get much LW discussion in 2020.
For my part I would say that in 2020 thinking about how to deal with the pandemic was a topic that reduced the available attention for other topics.
I also mistakenly thought that Fauci & Co. were just incompetent and not actively hostile.
After spending two days reading more, 90% now feels way too low and 99% more reasonable, because so many different strands of evidence point at the same conclusion that it was a lab leak.
I already think a lab leak is more likely than not, and did months ago when I first heard the circumstantial case for the hypothesis, but I'm nowhere near 99%. I'd say ~65%, but that's just my current prior, and I might not be calibrated that well. The fact that other rationalists are more confident about this than I am makes me want to update in that direction, but I also don't want to double-count my evidence. I'm also worried about confirmation bias creeping in. Can you summarize the strongest points and their associated Bayes factors? Or factors with error bars, if you prefer?
(Sources and so on are in https://www.lesswrong.com/posts/wQLXNjMKXdXXdK8kL/fauci-s-emails-and-the-lab-leak-hypothesis)
I don't factor Bayes factors together when I'm on Metaculus either, so that's not really how I reason. If you want them I would be interested in yours for the following pieces of evidence.
How do you get a probability as high as 35% for a natural origin? Can you provide Bayes factors for that?
There's a database of coronaviruses. We (the international community) funded it to help us when we have to deal with a coronavirus pandemic. The Wuhan Institute of Virology (WIV) took it down in late 2019 and won't give it to anyone. Nobody complains about that in a way that brings this into the mainstream narrative, even though all those virologists who believe in their field should think the database is valuable for fighting the pandemic. If it isn't, why are we funding that research in the first place?
According to US government information, there was unusual activity at the WIV in October 2019, including at least significantly reduced cell-phone traffic and likely road blocks as well.
Three people from the WIV seem to have gone to hospital in November 2019 with flu- or COVID-19-like symptoms in the same week.
Huang Yanling, who at the beginning of the pandemic was called patient zero, was a WIV employee, and US government requests to account for what's up with her currently go unanswered.
Security at the WIV was so bad that they asked the US for help in 2018 because they didn't have enough skilled people to operate their biosafety level 4 lab safely. The Chinese care about saving face; things must have been bad for them to tell the US that they didn't have enough people to operate their lab safely.
There are six separate biological reasons why the virus looks like it came from a lab.
The bats are more than 1,000 km away from Wuhan. It's quite unclear how they would have naturally infected people in Wuhan.
An amazing amount of effort went into suppressing the story, likely with a lot of collateral damage that made us react less well to the pandemic. Google, Facebook and Twitter started censoring in early February, and that might have been part of the reason why it took us so long to respond.
As Bret Weinstein said, given that the virus looks so much like it comes from Wuhan, the next most likely alternative explanation would be that someone went through a lot of effort to release it in Wuhan to make the WIV look bad. If that were the case, however, it's unclear why the WIV doesn't allow outside inspections to clear their name.
If all the information we had was that Huang Yanling, who at the beginning was called 'patient zero', was a WIV employee, that alone might warrant more than 65%. I mean, what are the odds that 'patient zero' for a pandemic caused by a coronavirus is randomly an employee of a lab studying coronaviruses?
And then a lab studying coronaviruses with known safety problems?
How do you get a probability as high as 35% for a natural origin? Can you provide Bayes factors for that?
I guess that's fair. I don't really think that way either, but I want to learn how. I think numbers become especially important when coordinating evidence with others like this. My older prior favored the natural origin hypothesis, because that's what was reported in the news. I heard the case for the lab leak and updated from there.
There's a database for Coronaviruses.
Authoritarians in general and the Chinese in particular would reflexively cover up anything that's even potentially embarrassing as a matter of course. I can't call a coverup more likely in a natural origin scenario, but it's still pretty likely, so this is weak evidence.
unusual activity in the WIV in October 2019
Didn't know this one, but that's pretty vague. Source?
Three people from the WIV seem to have gone to hospital in November 2019 with flu- or COVID-19-like symptoms in the same week.
The first confirmed case wasn't until December 8th, last I heard. Still, Wuhan is Wuhan. Even assuming a natural origin, we'd expect people from WIV to be more vigilant than the general public. Three at once is hardly more evidence than one, because they could have given it to each other. I do think this favors the leak hypothesis, because the timing is suggestive, but it seems weak. Could this have been some other disease? How early in November?
Huang Yanling, who at the beginning of the pandemic was called patient zero, was a WIV employee, and US government requests to account for what's up with her currently go unanswered.
Again, coverup is a matter of course for these guys.
The security at the WIV was so bad that they asked the US for help
Not very strong by itself.
There are six separate biological reasons why the virus looks like it came from a lab.
Need more details here.
The bats are more than 1,000 km away from Wuhan.
I knew about this one. This combined with the fact that the biosafety 4 WIV is in Wuhan is most of what got me to thinking the leak was more likely than not.
An amazing amount of effort went into suppressing the story [...] Google, Facebook and Twitter
Why? And does this have anything to do with whether it was a leak or not? These are primarily American companies that are already censored in China. This was during the Trump era, when the Left was trying to fight him any way they could. "Racist" has been their favorite ad hominem lately. Unless you can establish that China was behind this, and put in more effort than would be expected as a matter of course, I don't think this is evidence of anything other than normal American political bickering. But we've already counted the coverup as weak evidence. We can't count it again.
As Bret Weinstein said
This doesn't seem to be saying anything new. Weinstein does at least have gears in his models, but seems dangerously close to crackpot territory. I don't think he's a conspiracy theorist yet, but he also seems subject to the normal human biases, and doesn't seem to be trying to correct for them the way a rationalist would. It's not obvious to me that his next most likely explanation is the next most likely.
why the WIV doesn't allow outside inspections
Again, coverup as a matter of course. Nothing new here.
I mean what are the odds that 'patient zero' for a pandemic caused by Coronavirus is randomly an employee of a lab studying Coronaviruses?
"Patient zero" is the earliest that could be identified, not necessarily the first to get it. That an employee of a lab studying coronaviruses would notice first doesn't seem that strange, even if it had been circulating in Wuhan for a bit before. This does seem to favor a leak. How strong this evidence is depends a lot on more details. I could see this being very strong or fairly weak depending on the exact circumstances.
Authoritarians in general and the Chinese in particular would reflexively cover up anything that's even potentially embarrassing as a matter of course. I can't call a coverup more likely in a natural origin scenario, but it's still pretty likely, so this is weak evidence.
I think it's embarrassing to withhold a database that was created to help us fight a pandemic in times of a pandemic. It's bad for any future Chinese researcher who wants to collaborate with the West if it's clear that we can't count on resources we create together with China actually being available in a crisis. Additionally, why did the coverup start in September 2019?
"Patient zero" is the earliest that could be identified, not necessarily the first to get it.
Yes, but if you take 1 billion Chinese and maybe 200 employees of the WIV what are the odds that "patient zero" is from the WIV?
5,000,000 to 1.
Unless you can establish that China was behind this, and put in more effort than would be expected as a matter of course, I don't think this is evidence of anything other than normal American political bickering.
No, it was American/international suppression, because the NIH funded gain-of-function research involving the WIV in 2015 in violation of the ban, and didn't put it through the safety review process that was instituted in 2017.
Why? And does this have anything to do with whether it was a leak or not? These are primarily American companies that are already censored in China.
It's about how important it was for Farrar, on the 2nd of February, to get through to Tedros and have Tedros decide while they were talking about ZeroHedge, with Tedros announcing the next day that he was cooperating with Google/Twitter to fight "misinformation", and ZeroHedge being banned from Twitter that very day.
It's complex, but if you want to understand the point I have it written down in https://www.lesswrong.com/posts/wQLXNjMKXdXXdK8kL/fauci-s-emails-and-the-lab-leak-hypothesis
The first confirmed case wasn't until December 8th, last I heard.
Confirmed cases are different from "cases the US intelligence services know about because they launched a cyber attack on the WIV and have all the private and professional emails of its employees".
Didn't know this one, but that's pretty vague. Source?
It's the letter that the NIH sent the EcoHealth Alliance, with questions that have to be answered before they will give funding to the EcoHealth Alliance again. Generally, if you want sources, read https://www.lesswrong.com/posts/wQLXNjMKXdXXdK8kL/fauci-s-emails-and-the-lab-leak-hypothesis
Yes, but if you take 1 billion Chinese and maybe 200 employees of the WIV what are the odds that "patient zero" is from the WIV?
5,000,000 to 1.
This is obviously not the right calculation, and I expected better from a rationalist. I've already counted the fact that it started in Wuhan where they happen to have a biosafety 4 lab studying coronaviruses as the strongest evidence in favor of the leak. You may feel I didn't count it strongly enough, but that's a different argument. What does the entire population of China have to do with it after that point? Nothing. You're being completely arbitrary by drawing the boundary there. Why not the entire world?
The population of Wuhan, maybe, but we can probably narrow it down more than that, and then we also have to account for the fact that the WIV employees would be much more likely to report anything out of the ordinary when it comes to illness. For the rest of Wuhan at the time, the most common symptoms would have been reported as "the flu" or "a cold". Mild cases are common, and at least a third of people have no noticeable symptoms at all, especially early on with the less virulent original variant.
The population of Wuhan is about 8.5 million, and the number of staff at WIV was, I think, more like 600. So that's more like 14,000 : 1. I think WIV staff could easily be 20x more likely to notice that the disease was novel, so that's more like 700 : 1. That's still pretty strong evidence, but nowhere near what you're proposing.
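The back-of-envelope calculation above can be sketched in a few lines (the population figure, staff count, and 20x notice factor are the numbers assumed in this comment, not established facts):

```python
# Rough odds that a uniformly random early case is a WIV employee,
# using the figures assumed above.
wuhan_population = 8_500_000   # approximate population of Wuhan
wiv_staff = 600                # assumed number of WIV staff
notice_factor = 20             # assumed factor by which WIV staff are
                               # likelier to notice a novel disease

# Base rate: chance a random resident is a WIV employee, as odds against
base_odds = wuhan_population / wiv_staff       # ~14,167 : 1
# Discount for WIV staff being likelier to notice and report
adjusted_odds = base_odds / notice_factor      # ~708 : 1
print(round(base_odds), round(adjusted_odds))  # 14167 708
```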
This is obviously not the right calculation, and I expected better from a rationalist. I've already counted the fact that it started in Wuhan where they happen to have a biosafety 4 lab studying coronaviruses as the strongest evidence in favor of the leak.
I have 99% as my likelihood for the lab leak, not 99.9999%; I don't suggest that 5,000,000 to 1 should be the end number. It's just a rough calculation.
I am on Metaculus often enough, and have played the credence game enough, not to go for the 99.9% that Dr. Roland Wiesendanger proposes.
I think WIV staff could be easily 20x more likely to notice that the disease was novel, so that's more like 700 : 1.
If that's your calculation how can you justify only 65%, especially when that's only one of the pieces of evidence?
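In odds form, the update being debated here can be sketched like this (the 10% prior is purely illustrative; the 700:1 factor is the one proposed above, and neither is either commenter's final number):

```python
# Applying a single Bayes factor to a prior, working in odds form.
def update(prior_prob: float, bayes_factor: float) -> float:
    """Return the posterior probability after one Bayes-factor update."""
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * bayes_factor
    return posterior_odds / (1 + posterior_odds)

# Even a skeptical 10% prior combined with a 700:1 likelihood ratio
# lands far above 65%.
print(update(0.10, 700))  # ~0.987
```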
(This isn't an attempt to answer the question, but…) My best guess is that info hazard concerns reduced the amount of discourse on GoF research to some extent.
Can you be more specific? My vague impression is that if GoF research is already happening, talking about GoF research isn't likely to be an info hazard because the info is already in the heads of the people in whose heads it's hazardous.
The debate about gain of function research started as a debate about infohazards when Fouchier and Kawaoka modified H5N1 in 2011 and published the modified sequence.
It's possible that gain of function research is therefore mentally associated as being an infohazard. The more recent FHI papers for example mention gain of function research only in relation to infohazards and not the problem of lab leaks in labs doing gain of function research.
The OpenPhil analysis that speaks of gain of function research by calling it dual use research also has a frame that suggests that possible military use or someone stealing engineered viruses and intentionally spreading them is what the problem is about.
This seems to reflect the general human bias that we have an easier time imagining other humans intentionally creating harm than accidentally creating harm. It's quite similar to naive people thinking that the problem with AGI is humans using AGIs for nefarious ends.
(I'm not sure to what extent you're trying to "give background info" versus "be more specific about how people thought of GoF research as an infohazard" versus "be more specific about how GoF research actually was an infohazard" versus other things, so I might be talking past you a bit here.)
The debate about gain of function research started as a debate about infohazards when Fouchier and Kawaoka modified H5N1 in 2011 and published the modified sequence.
So this seems to me likely to be an infohazard that was found through GoF research, but not obviously GoF-research-as-infohazard. That is, even if we grant that the modified sequence was an infohazard and a mistake to publish, it doesn't then follow that it's a mistake to talk about GoF research in general. Because when GoF research is already happening, it's already known within certain circles, and those circles disproportionately contain the people we'd want to keep the knowledge from. It might be the case that talking about GoF research is a mistake, but it's not obviously so.
What I'm trying to get at is that "info hazard concerns" is pretty vague and not very helpful. What were people concerned about, specifically, and was it a reasonable thing to be concerned about? (It's entirely possible that people made the mental leap from "this thing found through GoF is an infohazard" to "GoF is an infohazard", but if so it seems important to realize that that's a leap.)
a frame that suggests that possible military use or someone stealing engineered viruses and intentionally spreading them is what the problem is about.
Here, too: if this is what we're worried about, it's not clear that "not talking about GoF research" helps the problem at all.
Now (after all the COVID-19 related discourse in the media), it indeed seems a lot less risky to mention GoF research. (You could have made the point that "GoF research is already happening" prior to COVID-19; but perhaps a very small fraction of people then were aware that GoF research was a thing, making it riskier to mention).
I agree probably only a small fraction of people were aware that GoF research was a thing until recently. I would assume that fraction included most of the people who were capable of acting on the knowledge. (That is, the question isn't "what fraction of people know about GoF research" but "what fraction of people who are plausibly capable of causing GoF research to happen know about it".) But maybe that depends on the specific way you think it's hazardous.
While I wasn't at 80% on a lab leak when Eliezer asserted it a month ago, I'm now at 90%. It will take a while till it filters through society, but I feel like we can already look at what we ourselves got wrong.
In the 2014 LessWrong survey, more people considered bioengineered pandemics a global catastrophic risk than AI. At the time there was a public debate about gain-of-function research. One editorial described the risks of gain-of-function research as:
Even at the lower bar of 0.05% per full-time worker-year, it seems crazy that society continued playing Russian roulette. We could have seen the issue and protested. EAs could have created organizations to fight against gain-of-function research. Why didn't we speak every Petrov Day about the necessity of stopping gain-of-function research? Organizations like OpenPhil should go through the 5 Whys and model why they messed this up and didn't fund the cause. What needs to change so that we as rationalists and EAs are able to organize to fight against tractable risks that our society takes without good reason?