Followup To: Logic as Probability
If we design a robot that acts as if it's uncertain about mathematical statements, that violates some desiderata for probability. But realistic robots cannot prove all theorems; they have to be uncertain about hard math problems.
In the name of practicality, we want a foundation for decision-making that captures what it means to make a good decision, even with limited resources. "Good" means that even though our real-world robot can't make decisions well enough to satisfy Savage's theorem, we want to approximate that ideal, not throw it out. Although I don't have the one best answer to give you, in this post we'll take some steps forward.
Part of the sequence Logical Uncertainty
Previous Post: Logic as Probability
Next post: Solutions and Open Problems
Lol, that is a nice story in that link, but it isn't a Dutch book. The bet in it isn't set up to measure subjective probability either, so I don't really see what lesson it holds for logical probability.
Say that instead of the digits of pi, we were betting on the contents of some boxes. For concreteness, let there be three boxes, one of which contains a prize. Say also that you have looked inside the boxes and know exactly where the prize is. I have some subjective probability P( X_i | I_mine ) that the prize is inside box i; for you, every such probability is either zero or one, since you know perfectly well where the prize is. However, as long as my beliefs about where the prize is follow the probability calculus correctly, you still cannot Dutch book me, even though you know where the prize is and I don't.
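To make that concrete, here is a minimal sketch (hypothetical stakes and credences, not anything from the post) that grid-searches for a combination of bets, priced at my credences of 1/3 per box, that would profit in every possible world:

```python
from itertools import product

# My credences over which of three boxes holds the prize (hypothetical numbers; they sum to 1).
p = [1/3, 1/3, 1/3]

def buyer_profit(stakes, true_box):
    """The opponent buys a bet of size stakes[i] on each box i at my prices; one box pays off."""
    cost = sum(s * pi for s, pi in zip(stakes, p))
    return stakes[true_box] - cost

# Search a grid of stake combinations for one that profits in *every* possible world,
# i.e. a Dutch book against me (negative stakes mean the opponent sells me that bet instead).
grid = [-1.0, -0.5, 0.0, 0.5, 1.0]
dutch_books = [stakes for stakes in product(grid, repeat=3)
               if all(buyer_profit(stakes, box) > 1e-9 for box in range(3))]

print(dutch_books)  # [] -- no combination wins in all three worlds
```

The search comes up empty because my prices sum to 1 over an exhaustive partition; if they summed to less than 1, buying all three bets would already be a guaranteed win, which is exactly the kind of incoherence a Dutch book exposes.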
So, how is the scenario about the digits of pi different from this? Do you have some example of an actual Dutch book that I would accept if I were to allow logical uncertainty?
edit:
Ok well I thought of what seems to be a typical Dutch book scenario, but it has made me yet more confused about what is special about the logical uncertainty case. So, let me present two scenarios, and I wonder if you can tell me what the difference is:
Consider two propositions, A and B. Let it be the case that A->B. However, say that we do not realise this, and say we assign the following probabilities to A and B:
P(A) = 0.5
P(B) = 0.5
P(B|A) = P(B)
P(A & B) = 0.25
indicating that we think A and B are independent. Based on these probabilities, we should accept the following arrangement of bets:
Sell bet for $0.50 that A is false, payoff $1 if correct
Sell bet for $0.25 that A & B are both true, payoff $1 if correct
The expected amount we must pay out is 0.5 × $1 + 0.25 × $1 = $0.75, which is how much we are selling the bets for, so everything seems fair to us.
Someone who understands that A->B will happily buy these bets from us, since he knows that "not A" and "A & B" are actually equivalent to "not A" and "A", i.e. he knows P(not A) + P(A & B) = 1, so he wins $1 from us no matter what the case is, making a profit of $0.25. So that seems to show that we are being incoherent if we don't know that A->B.
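For concreteness, here is a small sketch (hypothetical, just redoing the arithmetic above) that enumerates the buyer's net profit in every world consistent with A->B:

```python
# We sold two bets: $0.50 on "not A" and $0.25 on "A and B", each paying $1 if correct.
PRICE_PAID = 0.50 + 0.25  # what the buyer hands over up front

def buyer_profit(a: bool, b: bool) -> float:
    payout = (1.0 if not a else 0.0) + (1.0 if (a and b) else 0.0)
    return payout - PRICE_PAID

# A -> B rules out the world (A=True, B=False).
for a, b in [(False, False), (False, True), (True, True)]:
    print(f"A={a}, B={b}: buyer profit = {buyer_profit(a, b):+.2f}")
# Prints +0.25 in every case: the buyer is guaranteed our money.
```

The buyer nets $0.25 in all three worlds, which is what makes this a genuine Dutch book rather than an ordinary favourable bet.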
But now consider the following scenario: instead of having the logical relation that A->B, say that our opponent just has some extra empirical information D that we do not have, so that for him P(B|A,D) = 1. For him, then, he would still say that
P(not A | D) + P(A & B | D) = P(not A | D) + P(B|A,D)*P(A|D) = P(not A|D) + P(A|D) = 1
so that we, who do not know D, could still be screwed by the same kind of trade as in the first example. But then, this is sort of obviously possible, since having more information than your opponent should give you a betting advantage. Both situations seem equally bad for us, though, so why are we being incoherent in the first example but not in the second? Or am I still missing something?
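In case it helps to lay the two cases side by side, here is the same arithmetic for the second scenario (again a hypothetical sketch), this time enumerating all four truth assignments, since without D we still treat "A and not B" as a live possibility:

```python
# Same two bets as before, but now A -> B is not a logical truth; it only holds
# given data D that the buyer has and we lack.  From our point of view all four
# truth assignments are still possible, so enumerate the buyer's profit in each.
PRICE_PAID = 0.50 + 0.25

def buyer_profit(a: bool, b: bool) -> float:
    payout = (1.0 if not a else 0.0) + (1.0 if (a and b) else 0.0)
    return payout - PRICE_PAID

for a, b in [(False, False), (False, True), (True, False), (True, True)]:
    print(f"A={a}, B={b}: buyer profit = {buyer_profit(a, b):+.2f}")
# Worlds consistent with D still pay the buyer +0.25, but in (A=True, B=False),
# a world we have not ruled out, the buyer would lose $0.75.
```

The enumeration just spells out the two setups; whether that difference between "worlds the buyer can rule out" and "worlds we can rule out" matters is exactly the question above.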