Here is another example of an outsider perspective on risks from AI. I think such examples can help gauge the inferential distance between the SIAI and its target audience, so that the SIAI can fine-tune its material and general approach accordingly.
This shows again that people are generally aware of potential risks but either do not take them seriously or don't see why risks from AI are the rule rather than the exception. So rather than making people aware that there are risks, you have to tell them what the risks are.
People are friendly to dogs they assume to be friendly, and hostile to dogs they expect to be hostile.
Likewise, if an AI values its own self-preservation more highly than the preservation of other life, it will eradicate other life that it expects to be hostile to it.
In short, by being too scared of AI, we're increasing the risk that some kinds of AI (ones that value their own existence more than human life) will destroy us preemptively.
"People do X, therefore AIs will do X" is not a valid argument for AIs in general. It may apply to AIs that value self-preservation over the preservation of other life, but we shouldn't make one of those. Also, what Benelliot said.