I am very glad you did this because in worlds where survey results look like this, I think it's good and important to make that easily legible to AI safety community outsiders. [Edit: and good and important to set a good example for other labs.]
I think the survey results probably look a lot like this almost regardless of which world we are in?
Connor is at something like 90% doom, iirc, and explicitly founded Conjecture to do alignment work in a world with very short timelines. If we grant that organizations (probably) attract people with doom-levels and timelines similar to those of the organization's leader, maybe with some regression to the mean, then this is kinda what we expect, regardless of what the world is like. I'd advise against updating on it, on the general grounds that updating on filtered evidence is generally a bad idea.
(On the other hand, if someone showed a survey from like, Facebook AI employees, and it had something like these numbers, that seems like much much stronger evidence.)
Thanks for doing this, it's pretty helpful to know where Conjecture employees are in thinking about this. I'd encourage other orgs to do the same.
Charts also look good.
I wonder if the mode of the distribution on Figure 4 (which is at about 2027 on this April 2023 figure and is continuing to shift left on the Metaculus question page) has a straightforward statistical interpretation. This mode is considerably to the left of the median and tends to be near the "lower 25%" mark.
Is it really the case that 2026-2028 are effectively the most popular predictions in some sense, or is it an artefact of how this Metaculus page processes the data?
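One plausible (but unverified) explanation, sketched below with made-up numbers rather than Metaculus's actual aggregation method: if the community prediction is a mixture of right-skewed individual distributions over years-until-AGI, the mixture's mode naturally lands well to the left of its median, near the lower quartile.

```python
import numpy as np

# Minimal sketch with made-up parameters (not Metaculus's real method):
# mix a few right-skewed lognormal "years until AGI" forecasts and compare
# the mode and median of the resulting aggregate distribution.
rng = np.random.default_rng(0)
forecaster_params = [(np.log(4), 0.6), (np.log(8), 0.8), (np.log(12), 1.0)]
samples = np.concatenate([
    rng.lognormal(mean=mu, sigma=sigma, size=100_000)
    for mu, sigma in forecaster_params
])
years = 2023 + samples

counts, edges = np.histogram(years, bins=np.arange(2023, 2101))
mode_year = edges[np.argmax(counts)]   # peak of the aggregate density
median_year = np.median(years)

print(f"mode ~ {mode_year:.0f}, median ~ {median_year:.1f}")
# The mode sits several years to the left of the median, which is the
# qualitative pattern visible on the Metaculus question page.
```

If something like that is what's happening, then 2026-2028 being the peak is mostly a statement about where the skewed individual distributions overlap, not evidence that those years are the most common point forecasts.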
We put together a survey to study Conjecture employees’ opinions on AGI timelines and the probability of human extinction from AI. The questions were based on previous public surveys and prediction markets, to ensure that the results are comparable with opinions outside of Conjecture.
The survey was conducted in April 2023. There were 23 unique responses from people across teams.
Section 1. Probability of human extinction from AI
Setup and limitations
The specific questions the survey asked were:
The difference between the two questions is that the first focuses on risk from misalignment, whereas the second captures risk from misalignment and misuse.
The main caveats of these questions are the following:
Responses
Out of the 23 respondents, one rejected the premise, and two did not respond to one of the two questions but answered the other. The main issue respondents raised was having to answer without a time constraint.
Generally, people at Conjecture estimate the extinction risk from autonomous AI / AI getting out of control to be quite high. The median estimate is 70% and the average is 59%. The plurality estimates the risk to be between 60% and 80%. A few people believe extinction risk from AGI is higher than 80%.
The second question surveyed extinction risk from AI in general, which includes both misalignment and misuse. The median estimate is 80% and the average is 71%. The plurality estimates the risk to be over 80%.
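For transparency about how these summary statistics are derived, here is a minimal sketch with hypothetical stand-in values (the raw per-person responses are not reproduced here):

```python
import numpy as np

# Hypothetical stand-in responses (in %); the real per-person answers are
# not published here. This only illustrates how a reported median, average,
# and plurality bin are derived from a list of probability estimates.
responses = np.array([5, 20, 35, 50, 60, 65, 70, 70, 75, 80, 85, 90, 95])

median = np.median(responses)
mean = responses.mean()

# Plurality bin: the 20-point bucket containing the most responses.
counts, edges = np.histogram(responses, bins=np.arange(0, 101, 20))
plurality_bin = (edges[counts.argmax()], edges[counts.argmax() + 1])

print(median, round(float(mean), 1), plurality_bin)
```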
Section 2. When will we have AGI?
Setup and limitations
For this question, we asked respondents to predict when AGI will be built using this specification from Metaculus, enabling us to compare against the community baseline (Figure 3).
Respondents were instructed to adjust the probability density directly, as seen in Figure 4. This was a deliberate choice so that they could express how their confidence shifted towards the lower or higher end of their uncertainty range.
The main caveats of this question were:
Responses
Out of the 23 respondents, five did not answer this question, one of whom rejected the premise. This resulted in 18 responses counted in the analysis.
Conjecture employees’ timelines are somewhat bimodal (Figure 5). Most people answered either 2030 (7 years until AGI) or 2035 (12 years until AGI). The Metaculus community prediction at the time of the survey was 2031; respondents were likely anchored by this.
Table 1. Overview of additional statistics for when we will have AGI.
Table 1 shows additional markers for what the summary statistics look like across all respondents for the lower bound, median, and upper bound predictions. Notably, the latest lower-bound (25%) prediction for when we will have AGI is the year 2030, which still implies shorter timelines than what Metaculus users report as their overall median. The median prediction varies from 2027 to 2035. For the upper 75% bound, the median prediction is the year 2039, but it varies all the way from 2029 to 2300, showing that uncertainty is higher towards the far end of the distribution than for years closer to 2023.
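As a rough illustration of how Table 1 is put together (the per-respondent values below are hypothetical stand-ins, chosen only to reproduce the summary numbers quoted above):

```python
import pandas as pd

# Hypothetical per-respondent forecasts (lower 25%, median, upper 75% years);
# the real responses are not reproduced here. This only sketches how the
# Table 1 summary (statistics of each marker across respondents) is formed.
df = pd.DataFrame(
    {
        "lower_25": [2025, 2026, 2027, 2028, 2030],
        "median":   [2027, 2030, 2031, 2034, 2035],
        "upper_75": [2029, 2035, 2039, 2050, 2300],
    }
)

# For each marker, look at the spread across respondents.
summary = df.agg(["min", "median", "max"])
print(summary)
# roughly:
#          lower_25  median  upper_75
# min        2025.0  2027.0    2029.0
# median     2027.0  2031.0    2039.0
# max        2030.0  2035.0    2300.0
```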