Manfred comments on Q&A with experts on risks from AI #2 - Less Wrong Discussion

Post author: XiXiDu | 09 January 2012 07:40PM

Comments (28)

Comment author: Manfred | 09 January 2012 10:19:03PM | 0 points

I agree. I think the most important use of the concept is in question 3, and so for timeline purposes we can rephrase "human-level intelligence" as "human-level competence at improving its source code, combined with a structure that allows general intelligence."

Question 3 would then read "What probability do you assign to the possibility of an AGI with human-level competence at improving its source code being able to self-modify its way up to massively superhuman skills in many areas within a matter of hours/days/< 5 years?"

Comment author: jsteinhardt | 10 January 2012 05:00:54PM | 2 points

I don't think most AI researchers think of "improving its source code" as one of the benchmarks in an AI research program. Whether or not you think it is, asking them about a benchmark that they've actually thought about (I really like Nilsson's 80% of human jobs, especially since it jibes well with a Hansonian singularity) seems more likely to get an informative response.

Comment author: TheOtherDave | 09 January 2012 10:30:39PM | 2 points

It might be worth specifying whether "human-level competence at improving its source code" here means "as good at improving source code as an average professional programmer," "as good at improving source code as an average human," "as good at improving source code as the best professional programmer," or something else.