I agree that AI safety can be successfully pitched to a wider range of audiences even without mentioning superintelligence, though I'm not sure this will get people to "holy shit, x-risk." However, I do think that the more near-term concerns people already have could be alarming enough to policymakers and other important stakeholders to speed up their willingness to implement useful policy.
Of course, this assumes that useful policy for near-term concerns will also be useful policy for AI x-risk. It seems plausible to me that th...
This was interesting, and I would like to see more AI research organizations conduct and publish similar surveys.
Thanks! For those interested in conducting similar surveys, here is a version of the spreadsheet you can copy, as requested elsewhere in the comments.