wiresnips comments on [Altruist Support] How to determine your utility function - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I think that would just yield your revealed preference function. As I said, trying to optimize that is like a falling apple trying to optimize "falling". It doesn't describe what you want to do; it describes what you're going to do next no matter what.
If we accept that what someone 'wants' can be distinct from their behaviour, then "what do I want?" and "what will I do?" are two different questions (unless you're perfectly rational). Presumably, an FAI scanning a brain could answer either question.