Eliezer_Yudkowsky comments on LINK: Google research chief: 'Emergent artificial intelligence? Hogwash!' - Less Wrong

7 Post author: shminux 17 May 2013 07:45PM


Comment author: Eliezer_Yudkowsky 18 May 2013 05:14:37AM, 11 points

I've only read the LW post, not the original (which tells you something about how concerned I am), but I'll briefly remark that adding humans to something does not make it safe.

Comment author: shminux 18 May 2013 05:19:04AM, 2 points

Indeed it doesn't, but constraining something by human power makes it less powerful and hence potentially less unsafe. Though that's probably not what Spector wants to do.

Comment author: ChristianKl 18 May 2013 11:42:28AM, 2 points

Just because humans are involved doesn't mean that the whole system is constrained by the human element.

Comment author: Benja 18 May 2013 10:34:59AM, 1 point

Voted back up to zero because this seems true as far as it goes. The problem is that if he succeeds in building something that has a useful AGI component at all, that makes it a lot more likely (at least according to how my brain models things) that a system which doesn't need a human in the loop will appear soon after, whether through a modification of the original system, as a new system designed by the original system, or simply as a new system inspired by the insights behind the original system.

Comment author: buybuydandavis 18 May 2013 08:13:21PM, 0 points

I think so too. The comment on safety was a non sequitur, confusing human-in-the-loop in the Department of Defense sense with human-in-the-loop as a sensor/actuator for the Google AI.

But adding a billion humans as intelligent trainers is a powerful way to train an AI. Google consistently looks for ways to leverage customer usage for value; other companies don't seem to get that as much.