I'd hardly call this an "uninformed perspective on AI risk" — I read it as more of a parody of sci-fi tropes about AI rebellion, and, in any case, probably not an attempt to comment on what scenarios are plausible as real futures.
(Zach Weiner is actually a very smart guy, and I'd bet that he'd have no trouble grasping the real issues.)
I wouldn't be surprised at all if he was already well aware of the issues. It's a bit silly to assume that when authors of fiction (be it novels, movies, games, or webcomics) make something "non-realistic", it's because they're stupid, rather than because they're optimizing for plot, understandability, humor, brevity, or their message. I found Eliezer's remark that "Probably the artist did not even think to ask whether an alien perceives human females as attractive" a bit unkind in the same way.
Here is another example of an outsider perspective on risks from AI. I think such examples can help gauge the inferential distance between the SIAI and its target audience, and consequently help fine-tune their material and general approach.
via sentientdevelopments.com
This shows again that people are generally aware of potential risks but either do not take them seriously or don't see why risks from AI are the rule rather than the exception. So rather than making people aware that there are risks, you have to tell them what the risks are.