Qiaochu_Yuan comments on Against utility functions - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
For the value extrapolation problem, you need to consider both what an AI could do with a goal (how to use it, what kind of thing it is) and which goal represents humane values (how to define it).
I still think there's too much confusion between ethics-for-AI and ethics-for-humans discussions here. There's no particular reason that a conceptual apparatus suited to the former discussion should also be suited to the latter.
Yep. Particularly as humans are observably not human-friendly. (Even to the extent of preserving human notions of value - plenty of humans go dangerously nuts.)