Giles comments on [Altruist Support] How to determine your utility function - Less Wrong

7 Post author: Giles 01 May 2011 06:33AM


Comment author: Giles 01 May 2011 03:28:52PM -1 points [-]

I think that would just yield your revealed preference function. As I said, trying to optimize that is like a falling apple trying to optimize "falling". It doesn't describe what you want to do; it describes what you're going to do next no matter what.

Comment author: wedrifid 01 May 2011 06:15:10PM 6 points [-]

> I think that would just yield your revealed preference function.

No, it wouldn't. It would read the brain and resolve it into a utility function. If it resolves into a revealed preference function instead, then the FAI is bugged, because I told it to deduce a utility function.

Comment author: wiresnips 01 May 2011 05:55:30PM 4 points [-]

If we accept that what someone 'wants' can be distinct from their behaviour, then "what do I want?" and "what will I do?" are two different questions (unless you're perfectly rational). Presumably, a FAI scanning a brain could answer either question.
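The distinction can be made concrete with a toy sketch (hypothetical, not from the discussion): an agent with an explicit utility function whose actual choices are systematically biased, so the answers to "what do I want?" and "what will I do?" come apart. All names and numbers here are illustrative assumptions.

```python
# Toy agent: "what it wants" is given by a utility function, but "what it
# will do" (its revealed preference) is distorted by a systematic bias,
# i.e. the agent is not perfectly rational.

utility = {"exercise": 10, "browse_web": 3}  # what the agent wants

def revealed_choice(options):
    """What the agent actually does: a short-term bias favours browse_web."""
    bias = {"exercise": -8, "browse_web": 0}  # hypothetical akrasia penalty
    return max(options, key=lambda o: utility[o] + bias[o])

options = ["exercise", "browse_web"]
wants = max(options, key=lambda o: utility[o])  # answer to "what do I want?"
does = revealed_choice(options)                 # answer to "what will I do?"
print(wants, does)  # → exercise browse_web
```

A scan that recovers `utility` answers the first question; a scan that only fits the agent's observed behaviour recovers `utility + bias` and answers the second. For a perfectly rational agent the bias term is zero and the two coincide.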