As the SIAI gains publicity, more people are reviewing its work. I am not sure how popular this blog is, but judging by its about page, the author writes for some high-profile blogs. His latest post takes on Omohundro's "Basic AI Drives":
When we last looked at a paper from the Singularity Institute, it was an interesting work by Dr. Shane Legg asking whether we actually know what we're measuring when we try to evaluate intelligence. While I found a few points that seemed a little odd to me, the broader point Dr. Legg was pursuing was very much valid, and there were some equations to consider. However, that paper isn't exactly representative of most of the things you'll find coming from the Institute's fellows. Generally, what you'll see are sprawling philosophical treatises filled with metaphors, trying to make sense of a technology that either doesn't really exist and is treated as a black box with inputs and outputs, or is imagined by the author as a combination of whatever a popular science site reported about new research ideas in computer science. The end result of this process tends to be a lot like this warning about the need to develop a friendly or benevolent artificial intelligence, based on a rather fast and loose set of concepts about what an AI might decide to do and what will drive its decisions.
Link: worldofweirdthings.com/2011/01/12/why-training-a-i-isnt-like-training-your-pets/
I posted a few comments but do not think I am the right person to continue that discussion. So if you believe that what other people think about the SIAI matters and you want to improve its public relations, there is your chance. I myself am interested in the answers to his objections.
Absolutely not. If you take another look, I argue that it's unnecessary. You don't want the machine to do something? Put in a boundary. You don't have the option of turning off a lab rat's desire to search a particular corner of its cage with the press of a button, so all you can do is put in some deterrent. But with a machine, you can just tell it not to do that. For example, this Java method refuses to add two even numbers if it receives them:
public int add(int a, int b) {
    // Perform the addition unless both inputs are even.
    if ((a % 2) != 0 || (b % 2) != 0) {
        return a + b;
    }
    return -1; // sentinel value for the forbidden case
}
So why do I need to build an elaborate circuit to "reward" the computer for not adding two even numbers? And why would it suddenly decide to override the condition? Just to see why? If I wanted it to experiment, I'd give it fewer bounds.
Part of the disagreement here seems to arise from disjoint models of what a powerful AI would consist of.
You seem to imagine something like an ordinary computer, which receives its instructions in some high-level imperative language, and then carries them out, making use of a huge library of provably correct algorithms.
Other people imagine something like a neural net containing more 'neurons' than the human brain - a device which is born with little more hardwired programming than the general guidance that 'learning is good' and 'hurting people is bad' tog...
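To make the contrast concrete, here is a minimal sketch of that second kind of system. Everything in it - the class name RewardTrainedAdder, the parity features, the reward values, the learning rate - is an illustrative assumption of mine, not anything from the post or from Omohundro's paper. The point is that the rule "don't add two even numbers" is never written into the program anywhere; the agent only ever sees a scalar reward and has to infer the boundary from experience:

import java.util.Random;

// A toy value-learning agent, for contrast with the hard-coded guard above.
// No line of this program states the prohibition; the agent only receives
// a reward signal and learns a linear estimate of that reward.
public class RewardTrainedAdder {
    // Linear value estimate over three features: bias, isEven(a), isEven(b).
    private final double[] weights = new double[3];
    private static final double LEARNING_RATE = 0.05;

    private static double[] features(int a, int b) {
        return new double[] { 1.0, a % 2 == 0 ? 1.0 : 0.0, b % 2 == 0 ? 1.0 : 0.0 };
    }

    private double predictedReward(int a, int b) {
        double[] f = features(a, b);
        double sum = 0.0;
        for (int i = 0; i < f.length; i++) {
            sum += weights[i] * f[i];
        }
        return sum;
    }

    // The agent performs the addition only if it expects a positive reward.
    public boolean decidesToAdd(int a, int b) {
        return predictedReward(a, b) > 0.0;
    }

    // Widrow-Hoff (LMS) update: nudge the estimate toward the observed reward.
    public void observe(int a, int b, double reward) {
        double error = reward - predictedReward(a, b);
        double[] f = features(a, b);
        for (int i = 0; i < f.length; i++) {
            weights[i] += LEARNING_RATE * error * f[i];
        }
    }

    public static void main(String[] args) {
        RewardTrainedAdder agent = new RewardTrainedAdder();
        Random rng = new Random(42);
        // Training: the environment punishes adding two even numbers (-1)
        // and rewards every other addition (+1).
        for (int step = 0; step < 5000; step++) {
            int a = rng.nextInt(100);
            int b = rng.nextInt(100);
            double reward = (a % 2 == 0 && b % 2 == 0) ? -1.0 : +1.0;
            agent.observe(a, b, reward);
        }
        System.out.println("adds 3 + 5? " + agent.decidesToAdd(3, 5)); // typically true
        System.out.println("adds 2 + 4? " + agent.decidesToAdd(2, 4)); // typically false
    }
}

Notice that there is no place in this sketch where you can "just tell it not to do that": the learned boundary is statistical, holds only as well as the reward signal and the features allow, and can drift under further training. That, roughly, is the kind of system the drives argument is concerned with, rather than a program with an if-statement.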