There have been a few attempts to reach broader audiences in the past, but mostly on politically or ideologically loaded topics.
After seeing several examples of how little understanding people have of the difficulties in creating a friendly AI, I'm horrified. And I'm not even talking about a farmer on some remote ranch, but about people who should know about these things: researchers, software developers dabbling in AI research, and so on.
What made me write this post was a highly voted answer on stackexchange.com, which claims that the danger of superhuman AI is a non-issue, and that the only way for an AI to wipe out humanity is if "some insane human wanted that, and told the AI to find a way to do it". And the poster claims to be working in the AI field.
I've also seen a TEDx talk about AI. The speaker had never even heard of the paperclip maximizer; the talk was about the dangers presented by AIs as depicted in movies like The Terminator, where an AI "rebels". The message was that we can hope AIs will not rebel, since they cannot feel emotion, so the events depicted in such movies will not happen, and all we have to do is be ethical ourselves and not deliberately write malicious AI, and then everything will be OK.
The sheer and mind-boggling stupidity of this makes me want to scream.
We should find a way to increase public awareness of the difficulty of the problem. The paperclip maximizer should become part of public consciousness, a part of pop culture. Whenever there is a relevant discussion about the topic, we should mention it. We should increase awareness of old fairy tales with a jinn who misinterprets wishes. Whatever it takes to ingrain the importance of these problems into public consciousness.
There are many people graduating every year who have never heard of these problems. Or if they have, they dismiss them as a non-issue, a contradictory thought experiment which can be dismissed without a second thought:
A nuclear bomb isn't smart enough to override its programming, either. If such an AI isn't smart enough to understand people do not want to be starved or killed, then it doesn't have a human level of intelligence at any point, does it? The thought experiment is contradictory.
We don't want our future AI researchers to start working with such a mentality.
What can we do to raise awareness? We don't have the funding to make a movie that becomes a cult classic. We might start downvoting and commenting on the aforementioned stackexchange post, but that would not solve much, if anything.
Only in the sense that sufficiently large quantitative differences are qualitative differences. There's no fundamental difference in motivation between the Soviet manager producing worthless goods that will let them hit quota and the RL agent collecting score balloons instead of trying to win the race. AI existential risk is just AI misbehavior scaled up sufficiently: the same dynamic might cause an AI managing the global health system to make undesirable and unrecoverable changes to all humans.
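That dynamic can be sketched in a few lines. This is a hypothetical toy model (the reward values, policies, and episode length are all made up for illustration): the designer intends "win the race", but the reward function pays per balloon collected, so the score-maximizing policy never finishes the race at all.

```python
# Toy model of reward misspecification (all numbers hypothetical).
FINISH_REWARD = 100   # one-time reward for finishing the race
BALLOON_REWARD = 10   # reward per balloon; balloons respawn every step
STEPS = 50            # episode length

def total_reward(policy):
    """Cumulative reward over one episode for a fixed policy."""
    reward = 0
    for step in range(STEPS):
        if policy == "finish" and step == 0:
            reward += FINISH_REWARD   # race won once; nothing more to earn
        elif policy == "loop_balloons":
            reward += BALLOON_REWARD  # grab a respawning balloon each step
    return reward

print(total_reward("finish"))         # 100 -> the intended behavior
print(total_reward("loop_balloons"))  # 500 -> what the agent actually learns
```

The agent isn't "rebelling"; it is doing exactly what the reward function asks, and the proxy-optimal policy diverges from the designer's intent. Scale up the stakes and you get the health-system scenario above.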
It seems to me like our core difference is that I look at a simple system and ask "what will happen when the simple system is replaced by a more powerful system?", and you look at a simple system and ask "how do I replace this with a more powerful system?"
For example, it seems to me possible that someone could write code that is able to reason about code, and then use the resulting program to find security vulnerabilities in important systems, and then take control of those systems. (Say, finding a root exploit in server OSes and using this to hack into banks to steal info or funds.)
I don't think there currently exist programs capable of this; I'm not aware of much that's more complicated than optimizing compilers, or AI 'proofreaders' that detect common programmer mistakes (which hopefully wouldn't be enough!). Demonstrating code that could do that would represent a major advance, and the underlying insights could be retooled to lead to significant progress in other domains. But that it doesn't exist now doesn't mean that it's science fiction that might never come to pass; it just means we have a head start on thinking about how to deal with it.