Can anyone point me toward work that's been done on the five-and-ten problem? Or does someone want to discuss it here? Specifically, I don't understand why it is a problem for probabilistic algorithms. I would reason:
There is a high probability that I prefer $10 to $5. Therefore, with low probability, I will decide to choose $5.
And there's nowhere to go from there. If I try to use the fact that I chose $5 to prove that $5 was the better choice all along (because I'm rational), I get something like:
The probability that I prefer $5 to $10 is low. But I have very high confidence in my rationality, meaning that I assign high probability, a priori, to any choice I make being the choice I prefer. Therefore, given that I choose $5, the probability that I prefer $5 is high. So $5 doesn't seem like a bad choice, since I'll probably end up with what I prefer.
But things still turn out right, because:
However, the probability that I prefer $10, given that I choose $10, is even higher, because the probability that I prefer $10 was high to begin with. Therefore, $10 is a better choice than $5, because the probability that (I prefer $10 to $5 given that I choose $10) is higher than the probability that (I prefer $5 to $10 given that I choose $5).
So unless I'm missing something, the five-and-ten problem is just a problem of overconfidence.
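The comparison above can be sketched numerically with Bayes' rule. The specific numbers (a 0.9 prior on preferring $10, a 0.99 "rationality" likelihood) are illustrative assumptions, not anything stated in the post:

```python
# Illustrative sketch of the argument above. The priors and the
# "rationality" likelihood are assumed for the example.
PRIOR_PREFER_10 = 0.9   # P(I prefer $10 to $5)
RATIONALITY = 0.99      # P(I choose X | I prefer X)

def posterior_prefer_given_choose(prior_prefer):
    """P(prefer X | choose X), by Bayes' rule."""
    p_choose = (RATIONALITY * prior_prefer
                + (1 - RATIONALITY) * (1 - prior_prefer))
    return RATIONALITY * prior_prefer / p_choose

p10 = posterior_prefer_given_choose(PRIOR_PREFER_10)      # P(prefer $10 | choose $10)
p5 = posterior_prefer_given_choose(1 - PRIOR_PREFER_10)   # P(prefer $5 | choose $5)

# Both posteriors are high (so $5 doesn't look crazy after the fact),
# but the $10 posterior is higher, matching the conclusion above.
print(f"P(prefer $10 | choose $10) = {p10:.4f}")
print(f"P(prefer $5  | choose $5)  = {p5:.4f}")
assert p10 > p5
```

With these numbers, both conditional probabilities come out well above one half, but the one for $10 is larger, which is exactly the shape of the argument.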
The problem is how the classical material conditional works. The statement "If A then B" translates more precisely as "~(A and ~B)".
Thus, we get valid logical statements that look bizarre to humans: "If Paris is the capital of France, then Rome is the capital of Italy" seems untrue in a causal sense (if we changed the capital of France, we would not change the capital of Italy, and vice versa) but it is true in a logical sense: A is true and B is true, so "A and ~B" is "true and false", which is false, and ~false is true.
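The equivalence can be checked exhaustively; a minimal sketch over all four truth assignments:

```python
from itertools import product

# Check that the material conditional "If A then B" (standardly
# defined as ~A or B) agrees with the translation "~(A and ~B)"
# on every truth assignment.
for a, b in product([True, False], repeat=2):
    implies = (not a) or b            # material conditional A -> B
    rewritten = not (a and (not b))   # the translation above
    assert implies == rewritten

# The Paris/Rome case is the row A=True, B=True:
# not (True and not True) == not False == True.
print("equivalent on all four assignments")
```

In particular, the conditional is true whenever A and B are both true, regardless of any causal connection between them, which is why the Paris/Rome sentence comes out true.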
That example seems just silly, but...