jmmcd comments on Nils Nilsson's AI History: The Quest for Artificial Intelligence - Less Wrong

Post author: CarlShulman 31 October 2010 07:33PM


Comment author: jmmcd 04 November 2010 03:05:17AM

I'm curious about the extent to which LW/OB/SIAI-style concerns about unfriendly AI are of interest in the academic AI community. This book provides one data point:

It mentions the singularity and the SIAI (pp. 646-648):

Some people have pointed out that HLAI [human-level AI] necessarily implies superhuman-level intelligence [...]

In 2004, The Singularity Institute for Artificial Intelligence (SIAI) was formed "to confront this urgent challenge, both the opportunity and the risk." Its Director of Research, Ben Goertzel, is also chair of an organization called the “Artificial General Intelligence Research Institute” [...]

[Note that what "the risk" is is not spelt out. The passage also gives the impression that Ben Goertzel is in charge of SIAI -- if I remember rightly, he never was, and has since left entirely.]

As far as I can tell, there is no mention of friendliness, even under other names (did anyone find one?). It's not that such questions would be considered off-topic -- the book does discuss criticisms of AI on normative grounds:

Besides the criticisms of AI based on what people claim it cannot do, there are also criticisms based on what people claim it should not do. Some of the “should-not” people mention the inappropriateness of machines attempting to perform tasks that are inherently human-centric, such as teaching, counseling, and rendering judicial opinions. Others, such as the Computer Professionals for Social Responsibility mentioned previously, don’t want to see AI technology (or any other technology for that matter) used in warfare or for surveillance or for tasks that require experience-based human judgment. In addition, there are those who, like the Luddites of 19th century Britain, are concerned about machines replacing humans and thereby causing unemployment and economic dislocation. Finally, there are those who worry that AI and other computer technology would dehumanize people, reduce the need for person-to-person contact, and change what it means to be human.

(p. 393)