Wei_Dai comments on against "AI risk" - Less Wrong Discussion

24 Post author: Wei_Dai 11 April 2012 10:46PM

Comment author: Wei_Dai 12 April 2012 02:40:57AM 2 points

Apparently it's also common not to include uploads in the definition of AI. For example, here's Eliezer:

Perhaps we would rather take some other route than AI to smarter-than-human intelligence - say, augment humans instead? To pick one extreme example, suppose the one says: The prospect of AI makes me nervous. I would rather that, before any AI is developed, individual humans are scanned into computers, neuron by neuron, and then upgraded, slowly but surely, until they are super-smart; and that is the ground on which humanity should confront the challenge of superintelligence.

Comment author: CarlShulman 12 April 2012 02:44:48AM 5 points

Yeah, there's a distinction between things targeting a broad audience, where people describe WBE as a form of AI, versus some "inside baseball" talk in which it is used to contrast against WBE.

Comment author: Wei_Dai 12 April 2012 03:20:12AM 3 points

That paper was written for the book "Global Catastrophic Risks" which I assume is aimed at a fairly general audience. Also, looking at the table of contents for that book, Eliezer's chapter was the only one talking about AI risks, and he didn't mention the three listed in my post that you consider to be AI risks.

Do you think I've given enough evidence to support the position that many people, when they say or hear "AI risk", are either explicitly thinking of something narrower than your definition of "AI risk", or have not explicitly considered how to define "AI" but are still thinking of a fairly narrow range of scenarios?

Besides that, can you see my point that an outsider/newcomer who looks at the public materials put out by SI (such as Eliezer's paper and Luke's Facing the Singularity website) and typical discussions on LW would conclude that we're focused on a fairly narrow range of scenarios, which we call "AI risk"?

Comment author: CarlShulman 12 April 2012 03:35:17AM 1 point

explicitly thinking of something narrower than your definition of "AI risk", or have not explicitly considered how to define "AI" but are still thinking of a fairly narrow range of scenarios?

Yes.