Emile comments on Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It)

Post author: ciphergoth 30 October 2010 09:31AM 32 points

Comment author: Emile 30 October 2010 02:07:21PM 3 points

He wrote Ethical Issues in Advanced Artificial Intelligence, which does caution against non-friendly AGI:

For all of these reasons, one should be wary of assuming that the emergence of superintelligence can be predicted by extrapolating the history of other technological breakthroughs, or that the nature and behaviors of artificial intellects would necessarily resemble those of human or other animal minds.

Comment author: anonym 30 October 2010 11:06:24PM 2 points

The question is not whether Bostrom urges caution (which Goertzel and many others also urge), but whether Bostrom agrees that the Scary Idea is true: that projects like Ben's will probably end the human race if developed without a pre-existing FAI theory, and that the only (or most promising) way to avoid an extremely high risk of wiping out humanity is to develop FAI theory first.

Comment author: Vladimir_Nesov 30 October 2010 03:11:34PM 0 points

Right, forgot about that.