Vladimir_Nesov comments on LINK: Google research chief: 'Emergent artificial intelligence? Hogwash!' - Less Wrong

Post author: shminux 17 May 2013 07:45PM


Comment author: Vladimir_Nesov 17 May 2013 09:47:33PM, 5 points

His approach, if workable, also appears safe: it requires human feedback in the loop.

Human feedback doesn't help with "safe". (For example, complex values can't be debugged by human feedback, and the behavior of a sufficiently complicated agent won't "resemble" its idealized values; its pattern of behavior might just be chosen because it is instrumentally useful.)

Comment author: shminux 17 May 2013 09:56:00PM, 2 points

I agree that human feedback does not ensure safety. What I meant is that if it is necessary for functioning, it restricts how smart or powerful an AI can become.

Comment author: Eliezer_Yudkowsky 18 May 2013 04:10:37PM, 7 points

Necessary-at-stage-1 is not the same as necessary-at-stage-2: feedback an AI depends on early in its development may no longer constrain it once it is more capable. A lot of people seem to use the word "safety" in conjunction with a single medium-level obstacle to one slice out of the total risk pie.

Comment author: ikrase 18 May 2013 05:49:37AM, -1 points

Agreed. (Alternatively, this could end up like obedient AI, maybe? Not sure.)