Here is another example of an outsider's perspective on risks from AI. I think such examples can help gauge the inferential distance between the SIAI and its target audience, and consequently help fine-tune its material and general approach.
This shows again that people are generally aware of potential risks but either don't take them seriously or don't see why risks from AI are the rule rather than the exception. So rather than making people aware that risks exist, you have to tell them what the risks are.
A (highly intelligent) friend of mine posted a link to this on Facebook, tagged with "Reason #217 why the singularity isn't that big of a thing." I'm wondering if there's a concise way to correct him without linking to Less Wrong.
"Webcomics aren't real"?
(Normally I'd link to "Generalization From Fictional Evidence", but that reply seems to sum up the basic point if you don't want to link to LW...)