Apologies if this is answered elsewhere and I couldn't find it. In my AI reading I keep coming across an agent's utility function, $U$, mapping world-states to real numbers.
The existence of $U$ is justified by the VNM utility theorem. The first axiom required for VNM utility is 'Completeness' -- in the context of AI this means that for every pair of world-states $A$ and $B$, the agent knows $A \succ B$, $B \succ A$, or $A \sim B$.
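For reference, a sketch of the standard form of the axiom, which uses a single weak-preference relation $\succeq$ (in the original theorem it ranges over lotteries on outcomes; the AI papers apply it directly to world-states):

$$\text{Completeness:}\quad \forall A, B:\; A \succeq B \ \text{ or } \ B \succeq A, \qquad \text{with } A \sim B \text{ exactly when both hold.}$$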
Completeness over world-states seems like a huge assumption. Every agent we make this assumption for must already have the tools to compare 'world where, all else equal, the only food is peach ice cream' vs. 'world where, all else equal, Shakespeare never existed.'* I have no idea how I'd reliably make that comparison as a human, and being unable to compare at all is a far cry from '$\sim$', being indifferent between the options.
Am I missing something that makes the completeness assumption reasonable? Is 'world-state' used loosely, referring to a point in a vastly smaller space, with the exact space never being specified? Essentially, I'm confused -- can anyone help me out?
*If it's important I can try to cook up better-defined difficult comparisons; 'all else equal' is totally under-specified... where does the ice cream come from?
I'm not sure about the first case:
I don't see why this is true. While "VNM utility function => safe from wandering Bayesians" may hold, it's not clear to me that "no VNM utility function => vulnerable to wandering Bayesians" follows. I suspect the vulnerability to wandering Bayesians comes from failing to satisfy Transitivity rather than from failing to satisfy Completeness, though I haven't done the math on that.
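As a rough sketch of why Transitivity looks like the load-bearing axiom here (not a proof; assume an agent that will pay some small $\epsilon$ to swap to anything it strictly prefers): give it a strict cycle $A \succ B \succ C \succ A$ and it can be led around the loop

$$C \;\xrightarrow{\text{pay } \epsilon}\; B \;\xrightarrow{\text{pay } \epsilon}\; A \;\xrightarrow{\text{pay } \epsilon}\; C,$$

ending up holding what it started with but $3\epsilon$ poorer, as many times as you like. An agent with incomplete but transitive preferences can just decline to trade when two options are incomparable, so it's not obvious the same exploit goes through.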
But the general point, about approximation, I like. Utility functions in game theory (decision theory?) problems normally involve only a small space. I think completeness is an entirely safe assumption when talking about humans deciding which route to take to their destination, or what bets to make in a specified game. My question comes from the use of VNM utility in AI papers like this one: http://intelligence.org/files/FormalizingConvergentGoals.pdf, where agents have a utility function over possible states of the universe (with the restriction that the space is finite).
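For contrast, here's a minimal sketch (hypothetical scenario, made-up numbers) of the kind of small, fully specified space where I'd call completeness safe: once every state is assigned a real number, any two states are comparable because the reals are totally ordered.

```python
# Toy example: a utility function over a small, explicitly enumerated state space.
# The states and the numbers are invented purely for illustration.

ROUTES = ["highway", "back_roads", "train"]

UTILITY = {
    "highway": 3.0,     # fastest, but tolls
    "back_roads": 2.5,  # slower, nicer drive
    "train": 4.0,       # can read on the way
}

def compare(a: str, b: str) -> str:
    """Report the preference between two states induced by UTILITY."""
    if UTILITY[a] > UTILITY[b]:
        return f"{a} > {b}"
    if UTILITY[a] < UTILITY[b]:
        return f"{a} < {b}"
    return f"{a} ~ {b}"

# Completeness holds trivially here: every pair of states gets one of >, <, or ~.
for a in ROUTES:
    for b in ROUTES:
        if a != b:
            print(compare(a, b))
```

The paper instead puts the numbers over (a finite set of) possible states of the universe, and my question is whether that assignment is still a reasonable idealization.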
Is the assumption that an AGI reasoning about universe-states has a utility function an example of a reasonable use, in your view?