SilentCal comments on [link] New essay summarizing some of my latest thoughts on AI safety - Less Wrong

Post author: Kaj_Sotala, 01 November 2015 08:07AM (14 points)




Comment author: SilentCal, 03 November 2015 09:59:12PM, 4 points

I think it would help the discussion to distinguish more clearly between knowing what human values are and caring about them -- that is, between acquiring human values instrumentally and acquiring them terminally. The "human enforcement" section touches on this, but I think too weakly: it seems indisputable that an AI trained naively via a reward button would acquire human values only instrumentally, and would drop them as soon as it could control the button. If the Value Learning Thesis is interpreted as referring to terminal values, this is a counterexample to it.

An obvious programmer strategy would be to first cause the AI to acquire our values as instrumental values, and then try to modify the AI so that those values become terminal.