Louie comments on Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Check out SIAI's publications page. Kaj's most recent paper (published at ECAP '10) is a good 2 page summary of why AGI can be an x-risk for anyone who is uninformed of SIAI's position:
"From mostly harmless to civilization-threatening: pathways to dangerous artificial general intelligences"
A recent paper showed that 'Striatal Volume Predicts Level of Video Game Skill Acquisition'. A valid inference would be that an AGI with the computational equivalent of a higher striatal volume would possess superior cognitive flexibility, at least when it comes to gaming. But what could it actually accomplish? I'm playing a game called Trackmania, an arcade racing game. The top players are already so close to the ideal line, and therefore to the fastest possible time, that a superhuman AI could indeed beat them, but only by a few milliseconds. Each further millisecond might demand an order of magnitude more skill, but that doesn't matter: first, there is an absolute limit; second, the gain provides no serious advantage. The same may very well be true of physics. There is no guarantee that faster thinking or increased working-memory capacity will ever yield anything genuinely new without a lot of dumb luck, if at all. It is unlikely that a superhuman AI would come up with faster-than-light propulsion or disprove Gödel's incompleteness theorems.
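To make the diminishing-returns point concrete, here is a small numerical sketch. The factor of ten per millisecond is the hypothetical "order of magnitude more skill" figure from the paragraph above, not a measured value:

```python
# Hypothetical illustration: if shaving each further millisecond off a lap
# time costs ten times the optimization effort of the previous one, total
# effort grows geometrically while the benefit grows only linearly.
def effort_for_gain(ms, base=1.0, factor=10.0):
    # geometric series: effort for the 1st, 2nd, ..., ms-th millisecond
    return sum(base * factor**i for i in range(ms))

for ms in range(1, 6):
    print(ms, effort_for_gain(ms))
# 1 ms of gain costs 1 unit; 5 ms already cost 11111 units.
```

So even an agent with vastly more optimization power buys only a bounded, linearly valuable improvement at an exponentially growing price, which is the sense in which the advantage "doesn't matter".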
Of course, we should be careful, and it is absolutely justified that an organisation like the SIAI gets money to do research on these questions. But there is not enough evidence to outweigh the doubt and justify impeding AI research. We will actually need research on real AGI to answer some of the open questions.
Regarding self-improvement, I'm very doubtful too. Human indecision and fuzziness of thinking might very well be a feature. A superhuman AI might well beat us at Go or on the stock exchange, as long as it deals with its own kind rather than the irrational agents that we are, but that doesn't mean it will be able to deal with natural problems orders of magnitude more efficiently than we do.
Most of the risks from superhuman AI are associated with advanced nanotechnology. Without it, the AI will be impotent. Can it solve nanotechnology, if that is possible at all? And can it implement the results if it does? Without nanotechnology, self-improvement will be very hard. What will be even harder is creating copies of itself without first building the necessary infrastructure for the computational substrates.
Could an AGI take over the Internet? This is very unlikely. There are spare resources, but not that many, and there is no reason to expect they would even be suitable as a computational substrate. And how would it make use of them before crude measures are taken to shut it down? Many open questions, much speculation.
Paperclipping is another very speculative idea. Is a superhuman artificial general intelligence possible that is mistakenly equipped with the incentive to turn the universe into paperclips? I guess it is possible, but not without this incentive being hard-coded deliberately and with great care.
Kaj's paper relies very heavily on Omohundro's paper from AGI '08. Check out the reply that I presented/published at BICA '08 which (among other things) summarizes why the assumptions that Kaj relies upon are probably incorrect:
Discovering the Foundations of a Universal System of Ethics
Two things surprised me in your argument. One is that you seemed to assume that features of human ethics (which you attribute to our having evolved as social animals) would be universal in the sense that they would also apply to AIs which did not evolve and which aren't necessarily social.
The second is that although you pay lip service to game theory, you don't seem to be aware of any game-theoretic research on ethics deeper than Axelrod (1984) and the Tit-for-Tat experiments. You ought to at least peruse Binmore's "Natural Justice", even if you don't want to plow through the two volumes of "Game Theory and the Social Contract".
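For readers unfamiliar with the reference: Tit-for-Tat is the strategy that won Axelrod's iterated prisoner's dilemma tournaments. A minimal sketch (standard payoff values; the opponent strategy shown is just one illustrative example):

```python
# Iterated prisoner's dilemma with the standard payoff matrix:
# mutual cooperation 3/3, mutual defection 1/1, sucker 0, temptation 5.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first, then simply copy the opponent's last move.
    return opponent_history[-1] if opponent_history else 'C'

def always_defect(opponent_history):
    return 'D'

def play(p1, p2, rounds=10):
    h1, h2, s1, s2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = p1(h2), p2(h1)  # each player sees the opponent's history
        h1.append(m1); h2.append(m2)
        r1, r2 = PAYOFF[(m1, m2)]
        s1 += r1; s2 += r2
    return s1, s2

print(play(tit_for_tat, always_defect))  # → (9, 14): one exploited round, then mutual defection
print(play(tit_for_tat, tit_for_tat))    # → (30, 30): sustained cooperation
```

The point of Axelrod's result is that such a simple, retaliating-but-forgiving rule does well across a population of strategies; Binmore's work goes considerably beyond this into the evolutionary foundations of fairness norms.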
Not really - the paper is about ways by which an AGI might become more powerful than humanity (corresponding to premise 3 in Ben's reconstructed version of the SIAI argument). You can combine it with Omohundro-like arguments, and I do briefly mention that connection in the conclusions, but the core content of the paper is an independent and separate issue from AI drives, universal ethics or any such issue.
From a quick read, it seems to rely on the assumption that a superhuman AI couldn't rely on its ability to destroy humanity.
Omohundro's paper was about The Basic AI Drives. The abstract says: "We identify a number of “drives” that will appear in sufficiently advanced AI systems of any design".
Social drives are arguably not very "basic" - since they only show up in social situations.
I'm sure such machines would also have a "drive to swim" - if immersed in water - and a "drive to escape" - if encased by crushing jaws - but these "drives" were judged not sufficiently "basic" to go into Omohundro's paper.
Doesn't that assume what it is trying to prove - by starting out with:
"The main reason to be worried about greater-than-human intelligence is because it is hard for humans to anticipate and control."
...? From the perspective of technological determinism, "controlling" the machines should probably not be our aim. Our more plausible options are more along the lines of joining with them - or being interesting enough to keep around in their historical simulations.
To me it rather looks like the paper in question is trying to summarize conclusions that follow from the premise that greater-than-human intelligence is possible. I don't dismiss any of the mentioned possibilities, but I'm wary of using inferences derived from reasonable but unproven hypotheses as foundations for further speculative thinking.

Although the paper does a good job of stating reasons to justify the existence of and support for an organisation such as the SIAI, it does not substantiate the initial premise to an extent that would let one draw conclusions about the probability of the associated risks. Nevertheless, such estimates are given, for example that there is a high likelihood of humanity's demise if we develop superhuman artificial general intelligence without first defining mathematically how to prove its benevolence. This, I believe, is an unsatisfactory conclusion, as it lacks justification. This is not to say that it is wrong to state probability estimates and update them given new evidence, but that they are not compelling and therefore should not be used to justify any mandatory restrictions on artificial intelligence research, although those ideas can very well serve as a call for caution.