Crossposted to the EA Forum. In October 2022, 91 EA Forum/LessWrong users answered the AI timelines deference survey. This post summarises the results. Context: The survey was advertised in this forum post, and anyone could respond. Respondents were asked to whom they defer most, second-most, and third-most on AI timelines....
It's fashionable these days to ask people about their AI timelines. And it's fashionable to have things to say in response. But relative to the number of people who report their timelines, I suspect that only a small fraction have put in the effort to form independent impressions about them....
Various arguments have been made for why advanced AI systems will plausibly not have the goals their operators intended them to have (due to either outer or inner alignment failure). I would really like a distilled collection of the strongest arguments. Does anyone know if this has been done? If...
Epistemic status: lots of this involves interpreting/categorising other people’s scenarios, and could be wrong. We’d really appreciate being corrected if so. [ETA: so far, no corrections.] TLDR: see the summary table. In the last few years, people have proposed various AI takeover scenarios. We think this type of scenario building...
Cross-posted to the EA forum. Summary * In August 2020, we conducted an online survey of prominent AI safety and governance researchers. You can see a copy of the survey at this link.[1] * We sent the survey to 135 researchers at leading AI safety/governance research organisations (including AI Impacts,...
I'd like to have a clearer picture of the domains in which AI systems have already been deployed, particularly those in which they are having the largest impacts on the world, and what those impacts are. Some reasons why this might be useful: * AI systems don't (currently) get...