hairyfigment comments on Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) - Less Wrong

32 Post author: ciphergoth 30 October 2010 09:31AM



Comment author: mwaser 31 October 2010 07:18:17PM 0 points

Kaj's paper relies very heavily on Omohundro's paper from AGI '08. Check out the reply that I presented and published at BICA '08, which (among other things) summarizes why the assumptions Kaj relies upon are probably incorrect:

Discovering the Foundations of a Universal System of Ethics

Comment author: hairyfigment 31 October 2010 09:15:20PM 3 points

From a quick read, it seems to rest on the assumption that a superhuman AI couldn't rely on its ability to destroy humanity.