taw comments on Morality is not about willpower - Less Wrong
A person's behavior can always be understood as optimizing a utility function; it's just that if they are irrational (as in the Allais paradox), the utility functions start to look ridiculously complex. If all else fails, a utility function can be used that depends on time in whatever way is required to match the subject's observed behavior: "The subject had a strong preference for sneezing at 3:15:03pm on October 8, 2011."
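To make that concrete, here is a minimal sketch of the degenerate construction (in Python; the trace format and names are invented for illustration): a "utility function" that simply rewards whatever the subject was observed doing at each instant, and so rationalizes any behavior at all.

```python
# A minimal sketch: any behavior trace can be "rationalized" by a
# degenerate utility function that rewards the observed action at each
# timestamp. The trace format here is purely illustrative.

from typing import Dict

# Observed behavior: timestamp -> action actually taken.
observed: Dict[str, str] = {
    "2011-10-08 15:15:03": "sneeze",
    "2011-10-08 15:15:04": "blink",
}

def degenerate_utility(timestamp: str, action: str) -> float:
    """Assign utility 1 to whatever the subject actually did, 0 otherwise."""
    return 1.0 if observed.get(timestamp) == action else 0.0

# The observed behavior maximizes this "utility function" by construction,
# which is exactly why the explanation is vacuous.
assert degenerate_utility("2011-10-08 15:15:03", "sneeze") == 1.0
assert degenerate_utility("2011-10-08 15:15:03", "cough") == 0.0
```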
From the point of view of someone who wants to get FAI to work, the important question is: if the FAI obeys the axioms required by utility theory, and you don't obey those axioms for any simple utility function, are you better off if:

- the FAI ascribes to you some mixture of possible complex utility functions and helps you to achieve that, or
- the FAI uses a better explanation of your behavior, perhaps one of those alternative theories listed in the Wikipedia article, and helps you to achieve some component of that explanation?
I don't understand the alternative theories well enough to know if the latter option even makes sense.
Models relying on expected utility make extremely strong assumptions about the treatment of probabilities (utility must be strictly linear in probability), and these assumptions can very easily be demonstrated to be wrong.
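As an illustration, here is a small Python sketch of the Allais paradox mentioned above. The gamble payoffs follow the standard Allais setup; the brute-force grid search is just one way to check the claim that no utility assignment reproduces the choice pattern most people exhibit, precisely because expected utility is linear in the probabilities:

```python
# The common preference pattern (1A over 1B, and 2B over 2A) is
# inconsistent with *any* expected-utility assignment.

import itertools

def eu(gamble, u):
    """Expected utility: linear in probability by definition."""
    return sum(p * u[outcome] for p, outcome in gamble)

# Outcomes in millions of dollars.
g1a = [(1.00, 1)]                        # $1M for sure
g1b = [(0.10, 5), (0.89, 1), (0.01, 0)]  # small risk of nothing
g2a = [(0.11, 1), (0.89, 0)]
g2b = [(0.10, 5), (0.90, 0)]

# Search a grid of utility assignments with u(0) = 0 and u(1M) <= u(5M).
found = False
for u1, u5 in itertools.product([i / 1000 for i in range(1, 1001)], repeat=2):
    if u1 > u5:
        continue
    u = {0: 0.0, 1: u1, 5: u5}
    if eu(g1a, u) > eu(g1b, u) and eu(g2b, u) > eu(g2a, u):
        found = True
        break

# Algebraically: 1A > 1B implies 0.11*u(1M) > 0.10*u(5M), while
# 2B > 2A implies 0.10*u(5M) > 0.11*u(1M) -- a contradiction.
print(found)  # False: no utility function reproduces the typical choices
```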
They also assume that many situations are equivalent (paying $50 for a 50% chance to win $100 versus accepting $50 for a 50% chance of losing $100), where all experiments show otherwise.
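A sketch of why those two framings diverge empirically: both gambles have the same net outcomes (+$50 or -$50, each at 50%), so any expected-utility model must value them identically, but a loss-averse value function applied to the outcomes as framed does not. The value-function parameters below are the Tversky-Kahneman (1992) estimates, and valuing the framed components separately is an illustrative assumption, not the only possible editing rule:

```python
# Identical net distributions, different framed evaluations under a
# prospect-theory-style value function (loss-averse, concave for gains).

def value(x: float) -> float:
    """Tversky-Kahneman value function: losses loom larger than gains."""
    alpha, lam = 0.88, 2.25
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

# Frame A: a sure payment of $50 plus a 50% chance to win $100.
frame_a = value(-50) + 0.5 * value(100)

# Frame B: a sure receipt of $50 plus a 50% chance to lose $100.
frame_b = value(50) + 0.5 * value(-100)

print(f"frame A: {frame_a:.1f}, frame B: {frame_b:.1f}")  # the values differ
```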
Utility theory without these assumptions predicts nothing whatsoever.