1. How likely do you think it is that the overall value of the future will be drastically less than it could have been, as a result of humanity not doing enough technical AI safety research?
The clarification note was helpful, because this is an odd question to me. There are lots of things that could prevent x-risk from AI, including, e.g., better world governance. A bad outcome wouldn't be "a result of" not doing technical research, even if technical research is a great way to prevent it.
I agree. For me, the clarification note completely changed my interpretation of the question (and, with it, the answer I would give). I decided to record my answer as 50% for this reason.
Since this is literally a question soliciting predictions, it should have one of those embedded-interactive-predictions-with-histograms gadgets* to make predicting easier. It might also be worth having two prediction gadgets, since this is basically a prediction: one to predict what Recognized AI Safety Experts (tm) predict about how much damage unsafe AIs will do, and one to predict how much damage unsafe AIs will actually do (to mitigate weird second-order effects of predicting a prediction).
*I'm not sure what they're supposed to be called.
I think it might be more interesting to sketch what you expect the distribution of views to look like, as opposed to just giving a summary statistic. I can add probability Qs, but I avoided it initially so as not to funnel people into doing the less informative version of this exercise.
I've added six prediction interfaces: two for your own answers to the two Qs, two for your guess at the mean survey respondent answers, and two for your guess at the median respondent answers.
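To make the distribution-vs-summary-statistic point concrete, here's a minimal Python sketch (with made-up numbers, not survey data): two sets of 44 probability answers with nearly identical means and medians but very different shapes, which a mean/median prediction alone couldn't distinguish.

```python
# Minimal sketch (hypothetical numbers, not survey data): two sets of
# probability answers with similar means and medians but very different
# shapes, illustrating why a distribution sketch is more informative
# than a single summary statistic.
import numpy as np

rng = np.random.default_rng(0)

# Unimodal: most respondents cluster around ~30%.
unimodal = np.clip(rng.normal(0.30, 0.08, 44), 0, 1)

# Bimodal: half cluster near ~10%, half near ~50%.
bimodal = np.clip(
    np.concatenate([rng.normal(0.10, 0.05, 22), rng.normal(0.50, 0.05, 22)]),
    0, 1,
)

for name, xs in [("unimodal", unimodal), ("bimodal", bimodal)]:
    print(f"{name}: mean={xs.mean():.2f}, median={np.median(xs):.2f}")
    # A crude text histogram makes the shape difference obvious.
    counts, _ = np.histogram(xs, bins=np.linspace(0, 1, 11))
    print("  " + " ".join(f"{c:2d}" for c in counts))
```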
Complete aside here and not a dig on this post at all (which I think is proposing a cool and interesting idea):
I feel like AI researchers must spend 10% of their time answering surveys about the future of AI!
I sent a short survey to ~117 people working on long-term AI issues, asking about the level of existential risk from AI; 44 responded (a ~38% response rate).
In ~6 days, I'm going to post the anonymized results. For now, I'm posting the methods section of my post so anyone interested can predict what the results will be.
[Added June 1: Results are now up, though you can still make predictions below before reading the results.]
Methods
You can find a copy of the survey here. The main questions (including clarifying notes) were:

1. How likely do you think it is that the overall value of the future will be drastically less than it could have been, as a result of humanity not doing enough technical AI safety research?

2. How likely do you think it is that the overall value of the future will be drastically less than it could have been, as a result of AI systems not doing/optimizing what the people deploying them wanted/intended?
I also included optional fields for "Comments / questions / objections to the framing / etc." and "Your affiliation", and asked respondents to optionally indicate whether they do technical AI safety research, strategy research, both, or neither.
I sent the survey out to two groups directly: MIRI's research team, and people who recently left OpenAI (mostly people suggested by Beth Barnes of OpenAI). I sent it to five other groups through org representatives (whom I asked to send it to everyone at the org "who researches long-term AI topics, or who has done a lot of past work on such topics"): OpenAI, the Future of Humanity Institute (FHI), DeepMind, the Center for Human-Compatible AI (CHAI), and Open Philanthropy.
The survey ran for 23 days (May 3–26), though it took time to circulate and some people didn't receive it until May 17.
Results
[Image redacted]
Each point is one response: Q1 on the horizontal axis, Q2 on the vertical axis. Circles denote technical safety researchers and squares strategy researchers; triangles are respondents who said they were neither, and diamonds respondents who said they were both.
Purple represents OpenAI, red FHI, brown DeepMind, green CHAI or UC Berkeley, orange MIRI, blue Open Philanthropy, and black "no affiliation specified". (This includes unaffiliated people, as well as people who decided to leave their affiliation out.)
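(For anyone who wants to sketch their own predicted version of this plot, here's a minimal matplotlib sketch of the same encoding; the four data points below are invented placeholders, not survey responses.)

```python
# Minimal sketch of the plot's encoding, with invented placeholder data
# (not actual survey responses).
import matplotlib.pyplot as plt

# (q1, q2, researcher_type, org) -- all values below are hypothetical.
responses = [
    (0.10, 0.15, "technical", "OpenAI"),
    (0.30, 0.25, "strategy", "FHI"),
    (0.60, 0.70, "both", "MIRI"),
    (0.20, 0.10, "neither", "Open Philanthropy"),
]

# Marker shape encodes researcher type; color encodes affiliation.
markers = {"technical": "o", "strategy": "s", "neither": "^", "both": "D"}
colors = {
    "OpenAI": "purple", "FHI": "red", "DeepMind": "brown",
    "CHAI/UC Berkeley": "green", "MIRI": "orange",
    "Open Philanthropy": "blue", "unspecified": "black",
}

fig, ax = plt.subplots()
for q1, q2, rtype, org in responses:
    ax.scatter(q1, q2, marker=markers[rtype],
               color=colors.get(org, "black"), label=f"{rtype}, {org}")
ax.set_xlabel("Q1: risk from insufficient technical safety research")
ax.set_ylabel("Q2: risk from AI not doing what deployers intended")
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
ax.legend(fontsize=8)
plt.show()
```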
[Rest of post redacted]
Added: I've included some binary predictions below on request, though I don't necessarily think these are the ideal questions to focus on. E.g., I'd expect it to be more useful to draw a rough picture of what you expect the distribution to look like (or, say, of what you expect the range of MIRI views to be, or the range of strategy researchers' views).
Q1:
Q2:
(Cross-posted to the Effective Altruism Forum)