
Stuart_Armstrong comments on Oracle AI: Human beliefs vs human values - Less Wrong Discussion

Post author: Stuart_Armstrong 22 July 2015 11:54AM




Comment author: Stuart_Armstrong 03 August 2015 11:22:47AM 0 points

It's clear that people have ordinal preferences over certain world-states, and that many of these preferences are quite stable from day to day. People also have some ability to trade these outcomes off against probabilities, suggesting cardinal preferences as well. It seems correct and useful to refer to these preferences as "values", at least approximately.
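The move from ordinal to cardinal preferences via probability trade-offs is the standard von Neumann-Morgenstern construction. A minimal sketch of the idea (the outcomes and indifference probabilities below are hypothetical illustrations, not anything from the comment):

```python
def cardinal_utilities(outcomes, indifference_p):
    """Derive cardinal utilities from ordinal preferences plus lotteries.

    outcomes: list ordered best-first (the agent's ordinal ranking).
    indifference_p: for each intermediate outcome B, the probability p at
    which the agent is indifferent between getting B for sure and a lottery
    giving the best outcome with probability p, else the worst outcome.
    By the von Neumann-Morgenstern construction, u(B) = p, with the scale
    fixed at u(best) = 1 and u(worst) = 0.
    """
    best, worst = outcomes[0], outcomes[-1]
    u = {best: 1.0, worst: 0.0}
    for outcome, p in indifference_p.items():
        u[outcome] = p
    return u

# Hypothetical example: an agent ranks vacation > workday > toothache,
# and is indifferent between a sure workday and a 70% chance of vacation
# (otherwise toothache). That 0.7 becomes the workday's cardinal utility.
prefs = ["vacation", "workday", "toothache"]
utilities = cardinal_utilities(prefs, {"workday": 0.7})
# utilities == {"vacation": 1.0, "toothache": 0.0, "workday": 0.7}
```

The sketch shows why probability trade-offs matter: a bare ranking fixes only the order, but the indifference probabilities pin down a numerical scale (up to the choice of endpoints).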

On the other hand, it's clear that our brains do not implement some function that assigns a real number to world-states. That's one of the reasons it's so hard to distinguish human values from human beliefs in the first place.