Here is another example of an outsider's perspective on risks from AI. I think such examples can serve as a way to gauge the inferential distance between the SIAI and its target audience, and consequently to fine-tune their material and general approach.
This shows again that people are generally aware of potential risks but either do not take them seriously or don't see why risks from AI are the rule rather than the exception. So rather than making people aware that there are risks, you have to tell them what the risks are.
That is a bit of an old chestnut around here. It is like saying that "the rule" for computer software is to crash or go into an infinite loop. If you actually look at the computer software available, it behaves quite differently. Expecting the real world to present you with a random sample from the theoretically possible options often doesn't make any sense at all.
That link would appear to need its own warning. It too talks about "blindly pulling an arbitrary mind from a mind design space". No sensible model of the run-up to superintelligence looks very much like that.
Yes, let's be careful here.
The AIs that might actually exist in the future are those that have their origins in human-designed computer programs. Right? If an AI exists in 2050, then it was designed by something designed by ... something designed by a human.
Is this really a random sample of all possible minds? I find it conceivable that human-designed AIs are a narrower subset of all the things that could be defined as minds. Maybe, to some extent, any "thinking machine" a human designs will have some features in common with the human mind.