Followup to: Logic as Probability
If we design a robot that acts as if it's uncertain about mathematical statements, that violates some desiderata for probability. But realistic robots cannot prove all theorems; they have to be uncertain about hard math problems.
In the name of practicality, we want a foundation for decision-making that captures what it means to make a good decision even with limited resources. Here "good" means that even though our real-world robot can't make decisions well enough to satisfy Savage's theorem, we want to approximate that ideal rather than throw it out. Although I don't have the one best answer to give you, in this post we'll take some steps forward.
Part of the sequence Logical Uncertainty
Previous post: Logic as Probability
Next post: Solutions and Open Problems
Hmm, no, I was trying to make a different point. Okay, let's back up a little. Can you spell out what you think are the assumptions and conclusions of Savage's theorem with your proposed changes? I have some vague idea of what you might say, and I suspect that the conclusions don't follow from the assumptions because the proof stops working, but by now we seem to misunderstand each other so much that I have to be sure.
I am proposing no changes. My claim is that even though we use English words like "event-space" or "actions" when describing Savage's theorem, the things that actually have the relevant properties in the AMD problem are the strategies.
Cribbing from the paper I linked, the key property of "actions" is that they are functions from the set of "states of the world" (also somewhat mutable) to the set of consequences (the things I have a utility function over). If the state is "I'm at the first intersection" and I take the action (no quotes, actual action) of "go straight," that does return a consequence.
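For concreteness, here's a minimal Python sketch of the framing in the paragraph above: actions as functions from states of the world to consequences, with utility defined over consequences. It assumes AMD refers to the absent-minded driver problem, and the particular state names, consequence labels, and payoff numbers are illustrative choices, not details taken from this post or the linked paper.

```python
# A minimal sketch of the Savage-style setup: "actions" are functions from
# states of the world to consequences, and utility is defined over consequences.
# The specific states, consequences, and payoffs below are illustrative
# assumptions for the intersection example, not anything from the linked paper.

# States of the world: which intersection the driver is actually at.
STATES = ["first intersection", "second intersection"]

# Utility is defined over consequences, not over states or actions.
UTILITY = {
    "exit too early": 0.0,
    "arrive home": 4.0,
    "drive past and sleep at the motel": 1.0,
}

# An "action" in the Savage sense: a function from states to consequences.
def turn_off(state):
    return "exit too early" if state == "first intersection" else "arrive home"

def go_straight(state):
    # Reading "go straight" as continuing at every intersection, so that
    # applying it to either state yields a terminal consequence.
    return "drive past and sleep at the motel"

# Applying an action to a state does return a consequence, as claimed above.
consequence = go_straight("first intersection")
print(consequence, "-> utility", UTILITY[consequence])

# Given beliefs over states, expected utility of an action follows directly.
def expected_utility(action, belief):
    return sum(p * UTILITY[action(s)] for s, p in belief.items())

belief = {s: 0.5 for s in STATES}
print("turn off:", expected_utility(turn_off, belief))
print("go straight:", expected_utility(go_straight, belief))
```

Note that treating "go straight" as returning a terminal consequence only works if it is read as a whole strategy, i.e. what to do at every intersection, which is the sense in which the strategies, rather than the individual moves, play the role of Savage's "actions" here.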