Vladimir_Nesov comments on Q&A with experts on risks from AI #2 - Less Wrong Discussion

15 Post author: XiXiDu 09 January 2012 07:40PM

Comment author: Vladimir_Nesov 09 January 2012 09:55:57PM * 16 points

XiXiDu: You should clarify this "human-level intelligence" concept; it seems to be systematically causing trouble. For example:

"By AI having 'human-level intelligence' we mean that it's a system that's about as good or better (perhaps unevenly) than humans (or small groups of humans) at activities such as programming, engineering and research."

The science-fiction-inspired notion of "human-level intelligence" as a somewhat human-like AI is pervasive enough that when better-informed people hear the term, they round it up to this cliché and proceed to criticize it.

Comment author: torekp 10 January 2012 02:39:00AM 9 points

Agreed. But not all respondents trash the question just because it is poorly phrased. Nils Nilsson writes:

I'll rephrase your question to be: When will AI be able to perform around 80% of these jobs as well or better than humans perform?

I really like this guy.

Comment author: Manfred 09 January 2012 10:19:03PM * 0 points

I agree. I think the most important use of the concept is in question 3, and so for timeline purposes we can rephrase "human-level intelligence" as "human-level competence at improving its source code, combined with a structure that allows general intelligence."

Question 3 would then read "What probability do you assign to the possibility of an AGI with human-level competence at improving its source code being able to self-modify its way up to massively superhuman skills in many areas within a matter of hours/days/< 5 years?"

Comment author: jsteinhardt 10 January 2012 05:00:54PM 2 points

I don't think most AI researchers think of "improving its source code" as one of the benchmarks of an AI research program. Whether or not you think it should be, asking them about a benchmark they've actually thought about (I really like Nilsson's 80% of human jobs, especially since it jibes well with a Hansonian singularity) seems more likely to get an informative response.

Comment author: TheOtherDave 09 January 2012 10:30:39PM 2 points

It might be worth specifying whether "human-level competence at improving its source code" here means "as good at improving source code as an average professional programmer," "as good at improving source code as an average human," "as good at improving source code as the best professional programmer," or something else.