Academian comments on VNM expected utility theory: uses, abuses, and interpretation - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Wait... certainly, if you lexicographically value the (brightness, redness) of a light, and you somehow end up in a scenario where you can't make the light any brighter, and somehow manage to know that, then the redness value becomes relevant.
What I mean is that the environment itself makes such precise situations rare (a non-practical issue), and an imprecise algorithm makes it hard to detect when, if ever, they occur (a practical issue).
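To make the tie-breaking structure concrete, here is a minimal sketch (function and variable names are illustrative, not from the post): Python tuples already compare lexicographically, so redness is consulted only at an exact brightness tie, the knife-edge case the comment argues is both rare in practice and hard for an imprecise algorithm to detect.

```python
def prefer(light_a, light_b):
    """Return the lexicographically preferred (brightness, redness) pair.

    Python compares tuples element by element, so the second component
    (redness) matters only when the first (brightness) is exactly equal.
    """
    return max(light_a, light_b)

# While brightness differs, redness is irrelevant:
assert prefer((0.9, 0.1), (0.8, 1.0)) == (0.9, 0.1)

# Only at an exact brightness tie does redness decide the choice:
assert prefer((0.9, 0.1), (0.9, 0.7)) == (0.9, 0.7)
```

With real-valued measurements, exact equality of the first coordinate almost never occurs, which is one way to see why the second value rarely gets the chance to matter.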