Here is another example of an outsider perspective on risks from AI. I think such examples can serve as a way to gauge the inferential distance between the SIAI and its target audience, and consequently to fine-tune its material and general approach.
This shows again that people are generally aware of potential risks but either do not take them seriously or don't see why risks from AI are the rule rather than the exception. So rather than making people aware that there are risks, you have to tell them what the risks are.
Safety features represent the human value of not getting hurt. Car air bags represent the desire not to die in motor vehicle accidents. Planing down wood represents the desire not to get splinters. Fixing the floorboards represents the desire not to fall through them. It seems as though artefacts that encode human values are commonplace and fairly easy to create.
Isolated instrumental values, certainly... agreed. (I could quibble about your examples, but that's beside the point.)
I had understood SarahC to mean "human values" in a more comprehensive/coherent sense, but perhaps I misunderstood.