The Global Catastrophic Risks Institute conducted an anonymous survey of relevant experts on whether they thought COVID was more likely caused by a lab accident (aka lab leak) or zoonotic spillover. Their summary (bolding is mine):
The study’s experts overall stated that the COVID-19 pandemic most likely originated via a natural zoonotic event, defined as an event in which a non-human animal infected a human, and in which the infection did not occur in the course of any form of virological or biomedical research. The experts generally gave a lower probability for origin via a research-related accident, but most experts indicated some chance of origin via accident and about one fifth of the experts stated that an accident was the more likely origin. These beliefs were similar across experts from different geographic and academic backgrounds.
The experts mostly expressed the view that more research on COVID-19’s origin could be of value. About half of the experts stated that major gaps still remain in the understanding of COVID-19’s origin, and most of the other experts also stated that some research is still needed. About 40% of experts stated that clarity on COVID-19 origins would provide a better understanding of the potential origins of future pandemics. Given clarity on COVID-19’s origin, experts also proposed a variety of governance changes for addressing future pandemics, including measures to prevent initial human infection, measures to prevent initial infection from becoming pandemic, and measures to mitigate the harm once the pandemic occurs.
The vast majority of the experts express the belief that a natural zoonotic event will likely be the origin of the next pandemic.
The experts also provided a set of clear recommendations for preventing, preparing for and responding to future pandemics, which generally align with many previous studies.
Link to the main report is here, and link to their (much longer) methodological and analytical annex is here.
[EDIT: I currently think there are enough problems with the survey that they should be mentioned alongside the results.
Firstly, the sample seems to have been based on personalized outreach rather than mass emails, which runs the risk of selection bias. [EDIT 2: I'm told that 'personalized outreach' looks more like "sending individual emails to everyone on a big list" than "the authors emailing their friends".] Also, some participants were recruited by recommendations from other participants, which may reduce the effective sample size by over-drawing from pools of people who agree with each other (a toy illustration of this follows after these notes). [EDIT 2: But this was mitigated to some degree by disciplinary and geographic diversity.] [EDIT 3: See this comment and replies by one of the authors of the report on the survey.]
Secondly, the survey asked people whether they were familiar with a few different papers that analyse the evidence for or against a lab leak, and it also included a fake paper to see how often people lied about their familiarity. A third of the sample claimed to be familiar with the fake study: more than were familiar with one relevant piece of evidence (the DEFUSE proposal), though far fewer than were familiar with most other pieces of relevant evidence.]
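Here is the promised toy illustration of the effective-sample-size worry, using the standard Kish design-effect formula from survey statistics; the numbers are entirely my own invention, not anything from the report. The idea: if respondents arrive in like-minded clusters, n responses carry roughly as much information as n / (1 + (m - 1) * rho) independent ones, where m is the average cluster size and rho is the within-cluster correlation of opinions.

```python
# Kish design-effect illustration; the numbers are hypothetical, not
# drawn from the GCRI report.

def effective_sample_size(n: int, cluster_size: float, rho: float) -> float:
    """Approximate effective sample size under the Kish design effect.

    n            -- nominal number of respondents
    cluster_size -- average respondents per recommendation chain
    rho          -- within-cluster opinion correlation (0 = independent)
    """
    design_effect = 1 + (cluster_size - 1) * rho
    return n / design_effect

# Hypothetical: 150 respondents recruited in chains of 3, with opinions
# correlated at rho = 0.5 within each chain.
print(effective_sample_size(150, 3, 0.5))  # -> 75.0, half the nominal sample
```

In this framing, the disciplinary and geographic diversity the authors point to would push rho down, which is presumably the sense in which it mitigates the problem.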
This reminds me of a passage in Richard Feynman's memoir "What Do You Care What Other People Think?". Four pages into the chapter "Gumshoes" (page 163 in the Unwin paperback edition):
Then this business of Thiokol changing its position came up. Mr. Rogers and Dr. Ride were asking two Thiokol managers, Mr. Mason and Mr. Lund, how many people were against the launch, even at the last moment.
"We didn't poll everyone," says Mr. Mason.
"Was there a substantial number against the launch, or just one or two?"
"There were, I would say, probably five or six in engineering who at that point would have said it is not as conservative to go with that temperature, and we don't know. The issue was we didn't know for sure that it would work."
"So it was evenly divided?"
"That's a very estimated number."
It struck me that the Thiokol managers were waffling. But I only knew how to ask simpleminded questions. So I said, "Could you tell me, sirs, the names of your four best seals experts, in order of ability?"
"Roger Boisjoly and Arnie Thompson are one and two. Then there's Jack Kapp, and, uh ... Jerry Burns."
I turned to Mr. Boisjoly, who was right there, at the meeting. "Mr. Boisjoly, were you in agreement that it was okay to fly?"
He says, "No, I was not."
I ask Mr. Thompson, who was also there.
"No. I was not."
I say "Mr. Kapp?"
Mr. Lund says, "He is not here. I talked to him after the meeting, and he said, 'I would have made that decision, given the information we had.'"
"And the fourth man?"
"Jerry Burns. I don't know what his position was."
"So," I said, "of the four, we have one 'don't know,' one 'very likely yes,' and the two who were mentioned right away as being the best seal experts, both said no." So this "evenly split" stuff was a lot of crap. The guys who knew the most about the seals --- what were they saying?
That is the end of that section of the chapter, and Feynman turns to the infra-red thermometer and the temperatures on the launch pad.
That was my introduction to this aspect of bureaucratic infighting. The bureaucrat asks his technical experts, the ones closest to the issue. If he gets the answer he wants, it is accepted; if not, he widens the pool of experts. Those too close to the issue are at risk of ignoring the social cues pointing to the desired answer, but a wider pool of experts can be more responsive to the broader social context. Then the bureaucrat gets to take an unweighted average (that is, one that does not weight the original experts more highly), which boosts the probability of getting the desired answer and reduces the probability of getting the correct one.
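To see the arithmetic, here is a toy simulation; it is entirely my own construction with made-up probabilities, not a model of the Thiokol meeting or the GCRI survey. A few core experts say "yes" (the desired answer) only rarely; peripheral experts, being more responsive to social cues, say it more often; and the unweighted average of the pool drifts toward the desired answer as the pool widens, even though no core expert has changed their mind.

```python
import random

random.seed(0)

# Toy model with made-up probabilities: each expert independently says
# "yes" (the desired answer) with a probability that depends on how
# close they are to the technical issue.
P_YES_CORE = 0.2        # core experts mostly say no
P_YES_PERIPHERAL = 0.7  # peripheral experts follow the social cues more

def poll(n_core: int, n_peripheral: int) -> float:
    """Fraction of a mixed pool answering 'yes', counted without weights."""
    votes = [random.random() < P_YES_CORE for _ in range(n_core)]
    votes += [random.random() < P_YES_PERIPHERAL for _ in range(n_peripheral)]
    return sum(votes) / len(votes)

# Widen the pool: 4 core experts, then ever more peripheral ones.
for n_peripheral in (0, 4, 16, 64):
    avg = sum(poll(4, n_peripheral) for _ in range(10_000)) / 10_000
    print(f"{n_peripheral:3d} peripheral experts: "
          f"average 'yes' share ~ {avg:.2f}")
```

The printed shares climb from about 0.20 with no peripheral experts toward 0.70 as they swamp the pool. Weighting by expertise, which is what Feynman did by asking for the four best seals experts by name, keeps the aggregate pinned near the core experts' answer no matter how many peripheral voices are added.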
Back in 1988 this was perhaps a busted technique. But that was many years ago. The notion of broadening your survey of experts seems to be back in fashion.