Wei_Dai comments on AI Risk and Opportunity: Humanity's Efforts So Far - Less Wrong Discussion

28 Post author: lukeprog 21 March 2012 02:49AM

Comment author: Wei_Dai 21 March 2012 04:41:33AM 13 points [-]

I would give Vernor Vinge a bit more credit. In 1993 he was a professor of computer science as well as a novelist. His "A Fire Upon the Deep", published in 1992, featured a super-intelligent AI (called the Blight) that posed an existential risk to a galactic civilization. (I wonder whether, had Eliezer been introduced to the Singularity through that book instead of "True Names", he would have invented the FAI idea several years earlier.)

To quote Schmidhuber:

It was Vinge, however, who popularized the technological singularity and significantly elaborated on it, exploring pretty much all the obvious related topics, such as accelerating change, computational speed explosion, potential delays of the singularity, obstacles to the singularity, limits of predictability and negotiability of the singularity, evil vs benign super-intelligence, surviving the singularity, etc.

Comment author: lukeprog 21 March 2012 05:22:05AM 2 points [-]

Updated.