mwaser comments on Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) - Less Wrong

32 Post author: ciphergoth 30 October 2010 09:31AM


Comment author: mwaser 31 October 2010 07:18:17PM *  0 points [-]

Kaj's paper relies very heavily on Omohundro's paper from AGI '08. Check out the reply that I presented/published at BICA '08, which (among other things) summarizes why the assumptions that Kaj relies upon are probably incorrect:

Discovering the Foundations of a Universal System of Ethics

Comment author: Perplexed 31 October 2010 10:41:57PM 7 points [-]

Two things surprised me in your argument. The first is that you seem to assume that features of human ethics (which you attribute to our having evolved as social animals) would be universal, in the sense that they would also apply to AIs which did not evolve and which aren't necessarily social.

The second is that although you pay lip service to game theory, you don't seem to be aware of any game-theoretic research on ethics deeper than Axelrod (1984) and the Tit-for-Tat experiments. You ought to at least peruse Binmore's "Natural Justice", even if you don't want to plow through the two volumes of "Game Theory and the Social Contract".

Comment author: Kaj_Sotala 01 November 2010 04:45:45PM *  3 points [-]

Not really - the paper is about ways by which an AGI might become more powerful than humanity (corresponding to premise 3 in Ben's reconstructed version of the SIAI argument). You can combine it with Omohundro-like arguments, and I do briefly mention that connection in the conclusions, but the core content of the paper is an issue independent and separate from AI drives, universal ethics, or any such topic.

Comment author: hairyfigment 31 October 2010 09:15:20PM *  3 points [-]

From a quick read, it seems to rely on the assumption that a superhuman AI couldn't rely on its ability to destroy humanity.

Comment author: timtyler 31 October 2010 09:59:04PM *  2 points [-]

Omohundro's paper was about The Basic AI Drives. The abstract says: "We identify a number of 'drives' that will appear in sufficiently advanced AI systems of any design."

Social drives are arguably not very "basic" - since they only show up in social situations.

I'm sure such machines would also have a "drive to swim" - if immersed in water - and a "drive to escape" - if caught in crushing jaws - but such "drives" were evidently judged not sufficiently "basic" to go into Omohundro's paper.