Is your process something like: "compare each option against the next until you find the worst and best?"
Yes, approximately.
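That process can be sketched as a single pass that keeps running extremes. A minimal sketch, assuming options are compared with a hypothetical pairwise `prefer(a, b)` function (not something from the thread):

```python
def find_worst_and_best(options, prefer):
    """Compare each option against the running worst and best.

    `prefer(a, b)` is an assumed pairwise comparison returning True
    when a is preferred to b.
    """
    worst = best = options[0]
    for opt in options[1:]:
        if prefer(opt, best):
            best = opt
        elif prefer(worst, opt):
            worst = opt
    return worst, best

# With numeric stand-in "utilities" and prefer = greater-than:
find_worst_and_best([3, 1, 4, 1, 5], lambda a, b: a > b)  # (1, 5)
```

Note that this scan only gives a sensible answer if the preferences are transitive, which is exactly the property at issue below.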
It is becoming clear from this and other comments that you consider at least the transitivity property of VNM to be axiomatic.
I consider all the axioms of VNM to be totally reasonable. I don't think the human decision system follows the VNM axioms. Hence the project of defining and switching to this VNM thing; it's not what we already use, but we think it should be.
If VNM is required, it seems sort of hard to throw it out after the fact if it causes too much trouble.
VNM is required to use VNM, but if you encounter a circular preference and decide you value running in circles more than the benefits of VNM, then you throw out VNM. You can't throw it out from the inside; you can only decide from outside whether it's right.
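A circular preference of this kind can be checked for mechanically. A sketch, assuming preferences are given as a hypothetical list of `(preferred, dispreferred)` pairs:

```python
def has_preference_cycle(pairs):
    """Detect a cycle (a transitivity violation) in pairwise preferences
    via depth-first search over the 'is preferred to' graph."""
    graph = {}
    for better, worse in pairs:
        graph.setdefault(better, []).append(worse)

    visiting, done = set(), set()

    def dfs(node):
        if node in done:
            return False
        if node in visiting:
            return True  # reached an ancestor: a preference cycle
        visiting.add(node)
        if any(dfs(nxt) for nxt in graph.get(node, [])):
            return True
        visiting.remove(node)
        done.add(node)
        return False

    return any(dfs(n) for n in graph)

# A > B > C > A is exactly the circular preference VNM rules out:
has_preference_cycle([("A", "B"), ("B", "C"), ("C", "A")])  # True
```

The check itself is outside VNM, which is the point: you can notice the cycle and then decide whether to keep it or repair it.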
What is the point of ranking other stuff relative to the 0 and 1 anchor if you already know the 1 anchor is your optimal choice?
Expectation. VNM isn't really useful without uncertainty. Without uncertainty, transitive preferences are enough.
If being a whale has utility 1, getting nothing has utility 0, and getting a sandwich has utility 1/500, but the whale-deal only pays off with probability 1/400 (nothing otherwise), then I don't know until I take the expectation that the 1/400 EU from the whale-deal beats the 1/500 EU from the sandwich.
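The arithmetic in that example, spelled out (numbers from the comment above):

```python
# Utilities anchored at whale = 1, nothing = 0, sandwich = 1/500.
u_whale, u_nothing, u_sandwich = 1.0, 0.0, 1 / 500

# The whale-deal: whale with probability 1/400, nothing otherwise.
p = 1 / 400
eu_whale_deal = p * u_whale + (1 - p) * u_nothing  # 1/400 = 0.0025
eu_sandwich = u_sandwich                           # 1/500 = 0.002

# Only after taking the expectation is the comparison visible:
eu_whale_deal > eu_sandwich  # True
```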
If you don't conform to VNM, you don't have a utility function.
I think you mean to refer to your decision algorithms.
No, I mean if my utility function violates transitivity or other axioms of VNM, I more want to fix it than to throw out VNM as being invalid.