I'm worried that XiXi posted this link expressly as an example of the sort of thing that the SIAI should be engaging in, and then, when the author came over here, his comments were quickly downvoted. This is not an effective recipe for engagement.
Shane Legg is not "from the Singularity Institute". He is currently a postdoctoral research fellow at the Gatsby Computational Neuroscience Unit in London.
My view is that the problem here is a disconnect between the practical and the theoretical viewpoints.
The practical view of computers is likely to commit pc-morphism, that is, to assume that any computer systems of the future will be like current PCs in the way they are programmed and the way they act. This is not unreasonable if you haven't been exposed to things like cellular automata and have a lot of evidence of computers being PC-like.
The theoretical view looks at the entire world as a computer (computable physics, etc.) and so has grander views of what ...
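To make the cellular-automaton point above concrete, here is a minimal sketch (my own illustration, not from the comment) of Wolfram's Rule 110, an elementary cellular automaton that is known to be Turing-complete yet involves no PC-style programming, only a single local update rule applied everywhere at once:

```python
# Minimal sketch of Rule 110: a system that computes, yet has no
# "program" in the PC sense -- no instruction stream, just a rule
# mapping each cell's neighborhood to its next state.

RULE = 110

def step(cells):
    """Apply Rule 110 to one generation of cells (list of 0/1), wrapping at the edges."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        pattern = (left << 2) | (center << 1) | right  # neighborhood as 0..7
        out.append((RULE >> pattern) & 1)              # look up bit in the rule number
    return out

cells = [0] * 63 + [1]  # start from a single live cell
for _ in range(30):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```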
I am by no means an expert, but I see a problem with this passage:
...Wanted behaviors are rewarded, unwanted are punished, and the subject is basically trained to do something based on this feedback. It’s a simple and effective method since you’re not required to communicate the exact details of a task with your subject. Your subject might not even be human, and that’s ok because eventually, after enough trial and error, he’ll get the idea of what he should be doing to avoid a punishment and receive the reward. But while you’re plugging into the existing be...
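The training scheme that passage describes is essentially reinforcement learning. As a concrete illustration (a minimal sketch of my own, not the blogger's code), here is tabular Q-learning on a toy corridor world, where the agent is never told what the task is and learns purely from reward and punishment:

```python
import random

# Toy 5-state corridor: the agent is never told the goal, only rewarded
# for reaching the rightmost state and mildly punished for wasted steps.
N_STATES, ACTIONS = 5, (-1, +1)        # actions: move left / move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)                 # explore occasionally
        else:
            a = max(ACTIONS, key=lambda a: q[(s, a)])  # exploit current estimates
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else -0.01
        # Update the value estimate from the scalar feedback alone.
        best_next = max(q[(s_next, b)] for b in ACTIONS)
        q[(s, a)] += ALPHA * (reward + GAMMA * best_next - q[(s, a)])
        s = s_next

# Learned policy: should map every non-terminal state to +1 (move right).
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)})
```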
Part of the disagreement here seems to arise from disjoint models of what a powerful AI would consist of.
You seem to imagine something like an ordinary computer, which receives its instructions in some high-level imperative language, and then carries them out, making use of a huge library of provably correct algorithms.
Other people imagine something like a neural net containing more 'neurons' than the human brain - a device born with little more hardwired programming than the general guidance that 'learning is good' and 'hurting people is bad', together with a high-speed internet connection and the URL for Wikipedia. Training such an AI might well be a bit like training your pets.
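To make the second model concrete, here is a toy sketch (my own illustration; none of the details come from the thread) of a system whose behavior is trained rather than programmed: a tiny neural network given only examples and an error signal, with nothing in the code stating the task it ends up performing:

```python
import numpy as np

# Nothing in this code says "compute XOR"; the mapping emerges
# from weight updates driven by an error signal alone.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)    # hidden layer weights
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)    # output layer weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)    # hidden activations
    out = sigmoid(h @ W2 + b2)  # predictions
    # Backpropagate the error signal and nudge the weights.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # typically approaches [0, 1, 1, 0]
```

The learned mapping lives in the weights rather than in any legible instruction sequence, which is what makes 'training', rather than 'commanding', the apt metaphor for this kind of system.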
It is not clear to me which kind of AI will reach a human level of intelligence first. But if I had to bet, I would guess the second. And therein lies the danger.
ETA: But even the first kind of AI can be dangerous, because sooner or later someone is going to issue a command with unforeseen consequences.
Having read the piece, I was not impressed. I became even less impressed when I read his criticism of Legg's piece. It seems to basically come down to "computers can't do things that humans can. And they never will be able to. So there."
"imagined by the author as a combination of whatever a popular science site reported"
I've heard this argument from non-singularitarians from time to time. It bothers me because of the problem of conservation of expected evidence. What are the blogger's priors for taking an argument seriously if the topic under discussion reminds him of something he's heard about in a pop-sci piece?
We all know that popular sci/tech reporting isn't the greatest, but if you have low confidence about SIAI-type AI and hearing about it reminds you of some secondhand pop reporting...
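For reference, conservation of expected evidence says the prior must equal the expectation of the posterior over the possible observations. A quick numerical sketch (my numbers, purely illustrative) of why "it sounds like pop-sci" can only lower your credence if its absence would correspondingly raise it:

```python
# Illustrative numbers only: suppose pop-sci resemblance is treated
# as evidence *against* the hypothesis, i.e. it is more likely when
# the hypothesis is false than when it is true.
p_h = 0.20              # prior P(H)
p_e_given_h = 0.6       # P(sounds like pop-sci | H)
p_e_given_not_h = 0.9   # P(sounds like pop-sci | not H)

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
post_if_e = p_e_given_h * p_h / p_e                    # credence drops to ~0.14
post_if_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)    # credence rises to 0.50

expected_posterior = post_if_e * p_e + post_if_not_e * (1 - p_e)
print(post_if_e, post_if_not_e, expected_posterior)    # last value equals the 0.20 prior
```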
As the SIAI gains publicity, more people are reviewing its work. I am not sure how popular this blog is, but judging by its about page, the author writes for some high-profile blogs. His latest post takes on Omohundro's "Basic AI Drives":
Link: worldofweirdthings.com/2011/01/12/why-training-a-i-isnt-like-training-your-pets/
I posted a few comments but do not think I am the right person to continue that discussion. So if you believe that what other people think about the SIAI is important, and you want to improve its public relations, there is your chance. I am myself interested in the answers to his objections.