Liron comments on VNM expected utility theory: uses, abuses, and interpretation - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (48)
It's not really an issue of impracticality. It's just that any time you have a higher-level class of utility, the lower-level classes of utility stop being relevant to your decisions, no matter how precise the algorithm is. That's why I say it's extra complexity with no optimization benefit. Since the extra structure doesn't even map better to my intuition about preference, I just Occam-shave it away.
Wait... certainly, if you lexicographically value (brightness, redness) of a light, and somehow manage to be in a scenario where you can't make the light brighter, and somehow manage to know that, then the redness value becomes relevant.
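To make the tie-breaking concrete, here's a minimal sketch of that scenario. The `(brightness, redness)` representation and the `prefer` helper are my own illustration, not anything from the post; it just exploits the fact that Python tuples compare lexicographically, so the second component can only matter when the first is tied.

```python
# Hypothetical sketch: lexicographic preference over (brightness, redness).
# Python tuples compare lexicographically, so redness (the second component)
# only breaks ties when brightness (the first) is equal.

def prefer(light_a, light_b):
    """Return the preferred light; each light is a (brightness, redness) pair."""
    return max(light_a, light_b)

# Redness is ignored whenever brightness differs...
assert prefer((2, 0), (1, 9)) == (2, 0)
# ...and only decides the choice when brightness is capped at the same level,
# i.e. the scenario where you can't make the light any brighter.
assert prefer((2, 1), (2, 5)) == (2, 5)
```

The point of the example: the redness value sits idle in every comparison except the exact case where the brightness values coincide.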
What I mean is that the environment itself makes such precise situations rare (a non-practical issue), and an imprecise algorithm makes it hard to detect when, if ever, they occur (a practical issue).