Wei_Dai comments on against "AI risk" - Less Wrong Discussion

24 Post author: Wei_Dai 11 April 2012 10:46PM

Comment author: Wei_Dai 12 April 2012 12:57:12AM, 5 points

> I have the opposite perception, that "Singularity" is worse than "artificial intelligence."

I see... I'm not sure what to suggest then. Anyone else have ideas?

> I'm also not sure exactly what you mean by the "single scenario" getting privileged, or where you would draw the lines.

I think the scenario that "AI risk" tends to bring to mind is a de novo or brain-inspired AGI (excluding uploads) rapidly destroying human civilization. Here are a couple of recent posts along these lines that use the phrase "AI risk".

Comment author: steven0461 12 April 2012 01:04:27AM, 1 point

"Posthumanity" or "posthuman intelligence" or something of the sort might be an accurate summary of the class of events you have in mind, but it sounds a lot less respectable than "AI". (Though maybe not less respectable than "Singularity"?)