I would like to share a doubt with you. Peter Achinstein, in his The Book of Evidence, considers two probabilistic views about the conditions that must be satisfied for e to be evidence that h. The first says that e is evidence that h when adding e to some background information b increases the probability of h:
(Increase in Probability) e is evidence that h iff P(h|e&b) > P(h|b).
The second one says that e is evidence that h when the probability of h conditional on e is higher than some threshold k:
(High Probability) e is evidence that h iff P(h|e) > k.
A plausible way of interpreting the second definition is to fix k = 1/2. With that value, P(h|e) > k has the same truth-conditions as P(h|e) > P(~h|e), since P(~h|e) = 1 - P(h|e) - at least assuming that P is a function obeying Kolmogorov's axioms of the probability calculus. Now, Achinstein takes P(h|e) > k to be a necessary but insufficient condition for e to be evidence that h, while he claims that P(h|e&b) > P(h|b) is neither necessary nor sufficient. That may seem shocking to those who take the condition in (Increase in Probability) to be at least necessary for evidential support (I take it that the claim that it is necessary and sufficient is far from generally accepted - presumably one also wants to require that e be true, or known, or justifiably believed, etc.). So I would like to examine one of Achinstein's counterexamples to the claim that increase in probability is necessary for evidential support.
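Since P(~h|e) = 1 - P(h|e) for any probability function, the inequality P(h|e) > 1/2 holds exactly when P(h|e) > P(~h|e). A minimal numerical sketch of this equivalence (the grid of candidate values is just an illustration, using exact rational arithmetic):

```python
from fractions import Fraction

# Check, over a grid of candidate values for P(h|e), that
# P(h|e) > 1/2 holds exactly when P(h|e) > P(~h|e).
for i in range(101):
    p = Fraction(i, 100)   # candidate value of P(h|e)
    p_not = 1 - p          # P(~h|e), by the complement rule
    assert (p > Fraction(1, 2)) == (p > p_not)
print("the two inequalities agree on the whole grid")
```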
The relevant example is as follows:
The lottery counterexample
Suppose one has the following background b and piece of evidence e1:
b: This is a fair lottery in which one ticket drawn at random will win.
e1: The New York Times reports that Bill Clinton owns all but one of the 1000 lottery tickets sold in a lottery.
Further, one also learns e2:
e2: The Washington Post reports that Bill Clinton owns all but one of the 1000 lottery tickets sold in a lottery.
So, one has evidence in favor of
h: Bill Clinton will win the lottery.
The point now is that, although it seems right to regard e2 as being evidence in favor of h, it fails to increase h's probability conditional on (b&e1) - at least so says Achinstein. According to his example, the following is true:
P(h|b&e1&e2) = P(h|b&e1) = 999/1000.
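Read the way Achinstein seems to intend, e1 is treated as settling g outright, so conditioning on a second report of the very same fact cannot move h's probability. A minimal sketch of that limiting case (the modelling choices here - treating a report as conclusive, and P(h|~g) = 1/1000 - are my illustrative assumptions, not Achinstein's):

```python
from fractions import Fraction

# Limiting-case assumption: a newspaper report is conclusive, so after
# e1 it is certain that Clinton owns 999 of the 1000 tickets.
p_g_given_e1 = Fraction(1)
p_g_given_e1_e2 = Fraction(1)      # a second report cannot raise certainty

p_h_given_g = Fraction(999, 1000)  # fair lottery, 999 of 1000 tickets
p_h_given_not_g = Fraction(1, 1000)

def p_h(p_g):
    # total probability over g and ~g
    return p_g * p_h_given_g + (1 - p_g) * p_h_given_not_g

# Both conditional probabilities come out as 999/1000, as Achinstein says.
assert p_h(p_g_given_e1) == p_h(p_g_given_e1_e2) == Fraction(999, 1000)
```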
Well, I have my doubts about this counterexample. The problem, it seems to me, is that e1 and e2 are being treated as the same piece of evidence. Let me explain. If e1 and e2 increase the probability of h, that is because they increase the probability of a further proposition:
g: Bill Clinton owns all but one of the 1000 lottery tickets sold in a lottery,
and, as it happens, g increases the probability of h. That The New York Times reports g increases the probability of g, assuming the Times is reliable - and the same can be said of The Washington Post's report. But the counterexample seems to assume that e1 and e2 are each equivalent to g, and they are not. It is clear that P(h|b&g) = P(h|b&g&g), but that does not show that e2 fails to increase h's probability on (b&e1). So, if e2 increases the probability of g conditional on e1 - that is, if P(g|e1&e2) > P(g|e1) - and if g increases the probability of h, then e2 also increases the probability of h. I may be missing something, but this reasoning sounds right to me - if so, the example is not a counterexample. What do you think?
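The objection can be made concrete with a toy Bayesian model in which the two papers report independently given g, and neither report is conclusive. All the numbers below (the prior, the reliability figures, and P(h|~g)) are illustrative assumptions of my own, not Achinstein's:

```python
from fractions import Fraction as F

# Illustrative assumptions (mine, not Achinstein's):
p_g = F(1, 2)                 # prior probability of g
r = F(9, 10)                  # P(a paper reports g | g)
f = F(1, 10)                  # P(a paper reports g | ~g)
p_h_given_g = F(999, 1000)    # Clinton holds 999 of 1000 tickets
p_h_given_not_g = F(1, 1000)  # say he holds just one ticket otherwise

def posterior_g(n):
    """P(g | n reports), assuming the reports are independent given g."""
    num = p_g * r**n
    return num / (num + (1 - p_g) * f**n)

def p_h(n):
    """P(h | n reports), by total probability over g and ~g."""
    pg = posterior_g(n)
    return pg * p_h_given_g + (1 - pg) * p_h_given_not_g

print(p_h(1))  # P(h | b & e1)
print(p_h(2))  # P(h | b & e1 & e2)
assert posterior_g(2) > posterior_g(1)  # e2 raises the probability of g
assert p_h(2) > p_h(1)                  # and thereby raises that of h
```

As long as a paper is more likely to report g when g is true than when it is false (r > f), the second report pushes P(g) higher, and P(h) rises with it - which is just the point of the objection above.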
Memorizing a book might make you more knowledgeable, but memorizing the same book twice will just make you more bored.
Similarly, acquiring new information about the world will improve your probability estimates, but acquiring the same information in two different ways will just make your equations more complicated.