Here is another example of an outsider perspective on risks from AI. I think such examples can serve as a way to gauge the inferential distance between the SIAI and its target audience, so that it can fine-tune its material and general approach accordingly.
This shows again that people are generally aware of potential risks, but either do not take them seriously or do not see why risks from AI are the rule rather than the exception. So rather than merely making people aware that there are risks, you have to tell them what the risks are.
The comic is demonstrating a risk: the risk of assuming that AIs will be hostile. That assumption guarantees that AIs will be hostile.
"Guarantees" how? I mean, if you're writing your own AI, and you're assuming that it'll be hostile no matter what you do, then that is indeed evidence that you'll end up making a (non-anthropomorphically) hostile AI or fail to make an AI at all; and if you're writing an AI that you expect to be hostile, maybe you should just not write it. But a Friendly AI should still be Friendly even if the majority of people in the world expect it to be hostile.