Vladimir_Nesov comments on Against utility functions - Less Wrong

Post author: Qiaochu_Yuan 19 June 2014 05:56AM (40 points)




Comment author: Vladimir_Nesov 19 June 2014 08:11:38AM 3 points

It seems worth reflecting on the fact that the foundational LW material discussing utility functions was meant to make people better at reasoning about AI behavior, not about human behavior.

For the value extrapolation problem, you need to consider both what an AI could do with a goal (how to use it, what kind of thing it is), and which goal represents humane values (how to define it).

Comment author: Qiaochu_Yuan 19 June 2014 04:55:25PM 5 points

I still think there's too much confusion between ethics-for-AI and ethics-for-humans discussions here. There's no particular reason that a conceptual apparatus suited to the former should also be suited to the latter.

Comment author: David_Gerard 21 June 2014 09:53:15PM 0 points

Yep. Particularly as humans are observably not human-friendly, even to the extent of preserving human notions of value — plenty of humans go dangerously nuts.