Here is another example of an outsider perspective on risks from AI. I think such examples can serve as a way to gauge the inferential distance between the SIAI and its target audience, and consequently to fine-tune its material and general approach.
This shows again that people are generally aware of potential risks but either do not take them seriously or don't see why risks from AI are the rule rather than the exception. So rather than making people aware that there are risks, you have to tell them what the risks are.
The way software development usually works is with lots of testing. You use a test harness to restrain the program - and then put it through its paces.
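In conventional software terms, a harness of the kind described above can be as simple as running the program under test as a subprocess, capturing its output, and enforcing a time budget. A minimal Python sketch (the function name `run_in_harness` is illustrative, not a real library API):

```python
import subprocess
import sys

def run_in_harness(cmd, timeout_s=5):
    """Run a program inside a simple harness: capture its output and
    enforce a wall-clock budget, killing it if it runs too long."""
    try:
        result = subprocess.run(
            cmd, capture_output=True, text=True, timeout=timeout_s
        )
        return result.returncode, result.stdout
    except subprocess.TimeoutExpired:
        # the harness halts any run that exceeds its budget
        return None, ""

# put a harmless program through its paces
code, out = run_in_harness([sys.executable, "-c", "print('hello')"])
print(code, out.strip())  # 0 hello
```

Real test harnesses add much more (restricted filesystem and network access, resource limits, instrumentation), but the basic pattern is the same: the environment, not the program, decides what the program may touch and how long it may run.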
The idea that we won't be able to do that with machine intelligence seems like one of the more screwed-up ideas to come out of the SIAI to me.
The most often-cited justification is the AI box experiments, which are presented as evidence that you can't safely restrain a machine intelligence, since it will find a way to escape.
This does not seem like a credible position to me. You don't build your test harness out of humans. The AI box experiments seem to have little relevance to this problem.
The forces on the outside will include many humans and machines. Together they will be able to construct pretty formidable prisons with configurable safety levels.
Obviously, we would need to avoid permanent setbacks, but apart from those we don't really have to "get it right the first time". Many possible problems can be recovered from. Nor does it mean that we won't be able to test and rehearse. We will be able to do those things.
Test harnesses might turn out to be very useful, but this isn't a trivial task, and I don't think the development and use of such harnesses can be taken for granted. It's not just that the AI must be safely contained; it also has to be able to interact with the outside world in a manner that can't be dangerous, yet is still informative enough to decide whether it's friendly. This seems hard.
The original subject of disagreement was "is AI failure the rule or the exception?". This isn't a precisely specified question, but it just seemed like you ...