I would like to share a doubt with you. In The Book of Evidence, Peter Achinstein considers two probabilistic views about the conditions that must be satisfied for e to be evidence that h. The first says that e is evidence that h when e increases the probability of h when added to some background information b:
(Increase in Probability) e is evidence that h iff P(h|e&b) > P(h|b).
The second one says that e is evidence that h when the probability of h conditional on e is higher than some threshold k:
(High Probability) e is evidence that h iff P(h|e) > k.
A plausible way of interpreting the second definition is to set k = 1/2. With k fixed at that value, P(h|e) > k has the same truth-conditions as P(h|e) > P(~h|e), at least if we assume that P is a function obeying Kolmogorov's axioms of the probability calculus. Now, Achinstein takes P(h|e) > k to be a necessary but insufficient condition for e to be evidence that h, while he claims that P(h|e&b) > P(h|b) is neither necessary nor sufficient for e to be evidence that h. That may seem shocking to those who take the condition fleshed out in (Increase in Probability) to be at least a necessary condition for evidential support (the claim that it is necessary and sufficient is, I take it, far from generally accepted; presumably one also wants to require that e be true, or known, or justifiably believed, etc.). So I would like to examine one of Achinstein's counterexamples to the claim that increase in probability is a necessary condition for evidential support.
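The equivalence between the two formulations is immediate from the complement rule; a quick derivation, using nothing beyond the Kolmogorov axioms (so that P(~h|e) = 1 - P(h|e)):

```latex
P(h \mid e) > \tfrac{1}{2}
\iff P(h \mid e) > 1 - P(h \mid e)
\iff P(h \mid e) > P(\lnot h \mid e)
```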
The relevant example is as follows:
The lottery counterexample
Suppose one has the following background b and piece of evidence e1:
b: This is a fair lottery in which one ticket drawn at random will win.
e1: The New York Times reports that Bill Clinton owns all but one of the 1000 lottery tickets sold in a lottery.
Further, one also learns e2:
e2: The Washington Post reports that Bill Clinton owns all but one of the 1000 lottery tickets sold in a lottery.
So, one has evidence in favor of
h: Bill Clinton will win the lottery.
The point now is that, although it seems right to regard e2 as evidence in favor of h, it fails to increase h's probability conditional on (b&e1), or at least so Achinstein says. According to his example, the following holds:
P(h|b&e1&e2) = P(h|b&e1) = 999/1000.
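Achinstein's equality is easy to reproduce if one treats e1 as already settling the question of ticket ownership outright; a minimal sketch of that assumption (the figures come from the example itself):

```python
# Sketch: if e1 is treated as establishing outright that Clinton owns 999 of
# the 1000 tickets, then given b (a fair lottery with one winner drawn at
# random) his chance of winning is just his share of the tickets, and
# conditioning additionally on e2 changes nothing.
tickets_total = 1000
tickets_clinton = 999

p_h_given_b_e1 = tickets_clinton / tickets_total      # 999/1000
p_h_given_b_e1_e2 = tickets_clinton / tickets_total   # still 999/1000

assert p_h_given_b_e1 == p_h_given_b_e1_e2
```

This is exactly the move the objection below targets: the computation only goes through if e1 is silently identified with g itself.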
Well, I have my doubts about this counterexample. The problem with it seems to me to be this: e1 and e2 are treated as the same piece of evidence. Let me explain. If e1 and e2 increase the probability of h, that is because they increase the probability of a further proposition:
g: Bill Clinton owns all but one of the 1000 lottery tickets sold in a lottery,
and, as it happens, g increases the probability of h. That The New York Times reports g increases the probability of g, assuming the Times is reliable, and the same can be said about The Washington Post reporting g. But the counterexample seems to assume that both e1 and e2 are equivalent to g, and they are not. Now, it is clear that P(h|b&g) = P(h|b&g&g), but this does not show that e2 fails to increase h's probability conditional on (b&e1). So, if e2 increases the probability of g conditional on e1, that is, if P(g|e1&e2) > P(g|e1), and if g increases the probability of h, then e2 also increases the probability of h. I may be missing something, but this reasoning sounds right to me; the example would not be a counterexample. What do you think?
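The point can be made concrete with a small Bayesian sketch. All the numbers below (the prior on g, the reliability of the newspapers) are stipulated purely for illustration; the only structural assumptions are that the two reports are fallible, conditionally independent signals about g, and that if g is false Clinton holds just one ticket. Under those assumptions the second report does raise the probability of g, and hence of h:

```python
# Hedged sketch: model e1 and e2 as fallible, conditionally independent
# reports about g. If the papers are not perfectly reliable, the second
# report strictly increases P(g), and therefore P(h).

def posterior_g(prior_g, n_reports, p_report_if_g=0.9, p_report_if_not_g=0.05):
    """P(g | n independent reports), by Bayes' theorem (illustrative rates)."""
    like_g = p_report_if_g ** n_reports
    like_not_g = p_report_if_not_g ** n_reports
    return like_g * prior_g / (like_g * prior_g + like_not_g * (1 - prior_g))

def p_win(p_g):
    """P(h): 999/1000 if g holds; assume 1/1000 otherwise (one ticket)."""
    return p_g * (999 / 1000) + (1 - p_g) * (1 / 1000)

p_h_given_e1 = p_win(posterior_g(0.01, 1))       # after the Times report
p_h_given_e1_e2 = p_win(posterior_g(0.01, 2))    # after both reports

assert p_h_given_e1_e2 > p_h_given_e1
```

On these stipulated numbers the first report lifts P(h) to roughly 0.15 and the second to roughly 0.77, so P(h|b&e1&e2) > P(h|b&e1), contrary to the equality in the counterexample. The equality only holds in the limiting case where a single report already makes g certain.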
While I find this particular re-definition a bit silly, that doesn't mean that in general having more succinct definitions isn't a good thing.
If the second definition of evidence were used, "collecting evidence" would be a fundamentally different activity from what it would be under the first. In the second case, "evidence" is completely relative to what is already known, and lots of new material would not count as evidence at all.
So if I go out and ask people to "collect evidence", their actions should be different depending on which definition we collectively used.
In addition, the definitions would lead to interesting differences in quantities. Under the first definition, my having "lots of evidence" could mean having lots of redundant evidence for one small part of something (of course, it would also help to quantify "lots", but I believe that, or something like it, could be done). Under the second definition, I imagine the quantity would track much more closely what I actually want.
This new definition makes "evidence" much more tightly coupled to the resulting probabilities, which could in itself be a good thing. However, it seems like an unintuitive stretch of my current understanding of the word, so rather than re-defining the word, I would prefer that a qualifier were used: for example, "updating evidence" for the second definition.
Thanks, that's interesting. The exercise of thinking about how people would gather evidence with the two probabilistic definitions in mind gives food for thought. Specifically, I'm thinking that if we were to tell people: "Look for evidence in favor of h and, remember, evidence is that which ...", substituting for '...' the relevant definition of evidence, they would gather evidence differently from the way we naturally look for evidence for a hypothesis. The agents given that advice would have reflexive access t...