Evidence and counterexample to positive relevance
I would like to share a doubt with you. Peter Achinstein, in his book The Book of Evidence, considers two probabilistic views about the conditions that must be satisfied in order for e to be evidence that h. The first says that e is evidence that h when e increases the probability of h when added to some background information b:
(Increase in Probability) e is evidence that h iff P(h|e&b) > P(h|b).
The second one says that e is evidence that h when the probability of h conditional on e is higher than some threshold k:
(High Probability) e is evidence that h iff P(h|e) > k.
A plausible way of interpreting the second definition is to set k = 1/2. When k takes that fixed value, it turns out that P(h|e) > k has the same truth-conditions as P(h|e) > P(~h|e) - at least if we assume that P is a function obeying Kolmogorov's axioms of the probability calculus. Now, Achinstein takes P(h|e) > k to be a necessary but insufficient condition for e to be evidence that h, while he claims that P(h|e&b) > P(h|b) is neither necessary nor sufficient for e to be evidence that h. That may seem shocking to those who take the condition fleshed out in (Increase in Probability) to be at least a necessary condition for evidential support (I take it that the claim that it is necessary and sufficient is far from accepted - presumably one also wants to require that e be true, or known, or justifiably believed, etc.). So I would like to examine one of Achinstein's counterexamples to the claim that increase in probability is a necessary condition for evidential support.
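The equivalence claimed above is easy to check mechanically. Here is a minimal sketch (the sampling loop is mine, not anything in Achinstein):

```python
import random

# Under Kolmogorov's axioms, P(~h|e) = 1 - P(h|e), so the conditions
# P(h|e) > 1/2 and P(h|e) > P(~h|e) must agree for every value of P(h|e).
for _ in range(10_000):
    p_h_given_e = random.random()      # some value of P(h|e)
    p_not_h_given_e = 1 - p_h_given_e  # P(~h|e)
    assert (p_h_given_e > 0.5) == (p_h_given_e > p_not_h_given_e)
print("the two conditions agree on all sampled values")
```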
The relevant example is as follows:
The lottery counterexample
Suppose one has the following background b and piece of evidence e1:
b: This is a fair lottery in which one ticket drawn at random will win.
e1: The New York Times reports that Bill Clinton owns all but one of the 1000 lottery tickets sold in a lottery.
Further, one also learns e2:
e2: The Washington Post reports that Bill Clinton owns all but one of the 1000 lottery tickets sold in a lottery.
So, one has evidence in favor of
h: Bill Clinton will win the lottery.
The point now is that, although it seems right to regard e2 as evidence in favor of h, it fails to increase h's probability conditional on (b&e1) - at least so says Achinstein. According to his example, the following is true:
P(h|b&e1&e2) = P(h|b&e1) = 999/1000.
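On the idealizing reading under which a report settles g outright, the 999/1000 figure is just the fair-lottery calculation. A minimal sketch (treating the reports as conclusive is an assumption of this reading, not something argued for in the text):

```python
from fractions import Fraction

# Fair lottery (b): 1000 tickets, one drawn at random.
# g: Clinton owns all but one of them.
tickets_total = 1000
tickets_clinton = tickets_total - 1

# P(h | b & g): the probability that one of Clinton's tickets is drawn.
p_h_given_bg = Fraction(tickets_clinton, tickets_total)
print(p_h_given_bg)  # 999/1000

# If conditioning on e1 is treated as settling g outright, then
# conditioning further on e2 changes nothing:
# P(h|b&e1&e2) = P(h|b&e1) = P(h|b&g) = 999/1000.
```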
Well, I have my doubts about this counterexample. The problem, it seems to me, is that e1 and e2 are treated as the same piece of evidence. Let me explain. If e1 and e2 increase the probability of h, that is because they increase the probability of a further proposition:
g: Bill Clinton owns all but one of the 1000 lottery tickets sold in a lottery,
and, as it happens, g increases the probability of h. That The New York Times reports g, assuming that the New York Times is reliable, increases the probability of g - and the same can be said of The Washington Post reporting g. But the counterexample seems to assume that e1 and e2 are each equivalent to g, and they are not. Now, it is clear that P(h|b&g) = P(h|b&g&g), but this does not show that e2 fails to increase h's probability on (b&e1). So, if e2 increases the probability of g conditional on e1, that is, if P(g|e1&e2) > P(g|e1), and if g in turn increases the probability of h, then e2 also increases the probability of h - at least on the natural assumption that the reports bear on h only by way of g. I may be missing something, but this reasoning sounds right to me - the example wouldn't be a counterexample. What do you think?
Two puzzles on the rationality of defeat
I present here two puzzles about rationality that you LessWrongers may think are worth dealing with. The first looks more amenable to a simple solution, while the second has drawn the attention of a number of contemporary epistemologists (Cargile, Feldman, Harman) and does not look that simple when it comes to a solution. So, let's go to the puzzles!
Puzzle 1
At t1 I justifiably believe that theorem T is true, on the basis of a complex argument I have just validly reasoned through from the likewise justified premises P1, P2 and P3.
So, at t1 I reason from the premises:
(R1) P1, P2, P3
To the justified conclusion:
(T) T is true
At t2, Ms. Math, a well-known authority on the subject matter of which my reasoning and my theorem are a part, tells me I'm wrong. She tells me the theorem is false, and convinces me of that on the basis of a valid piece of reasoning with at least one false premise, the falsity of that premise being unknown to us both.
So, at t2 I reason from the premises (Reliable Math and Testimony of Math):
(RM) Ms. Math is a reliable mathematician, and an authority on the subject matter surrounding (T),
(TM) Ms. Math tells me T is false, and shows me how that is so, on the basis of a valid piece of reasoning from F, P1, P2 and P3,
(R2) F, P1, P2 and P3
To the justified conclusion:
(~T) T is not true
Some epistemologists would say that (~T) defeats my previous belief (T). Is it rational for me to proceed this way? Am I taking the correct direction of defeat? Wouldn't it also be rational if (~T) were defeated by (T)? Why does (~T) defeat (T), and not vice versa? Is it just because (~T)'s justification was obtained at a later time?
Puzzle 2
At t1 I know that theorem T is true, on the basis of a complex argument I have just validly reasoned through from the known premises P1, P2 and P3. So, at t1 I reason from the known premises:
(R1) P1, P2, P3
To the known conclusion:
(T) T is true
Besides, I also reason from known premises:
(ME) If there is any evidence against something that is true, then it is misleading evidence (evidence for something that is false)
(T) T is true
To the conclusion (anti-misleading evidence):
(AME) If there is any evidence against (T), then it is misleading evidence
At t2 the same Ms. Math tells me the same thing. So at t2 I reason from the premises (Reliable Math and Testimony of Math):
(RM) Ms. Math is a reliable mathematician, and an authority on the subject matter surrounding (T),
(TM) Ms. Math tells me T is false, and shows me how that is so, on the basis of a valid piece of reasoning from F, P1, P2 and P3,
But then I reason from:
(F*) F, RM and TM are evidence against (T), and
(AME) If there is any evidence against (T), then it is misleading evidence
To the conclusion:
(MF) F, RM and TM are misleading evidence
And then I continue to know T, losing no knowledge, because I know/justifiably believe that the counter-evidence I have just met is misleading. Is it rational for me to act this way?
I know (T) and I know (AME) at t1 on the basis of valid reasoning. Then I am exposed to the misleading evidence (Reliable Math), (Testimony of Math) and (F). The evidentialist scheme (and maybe other schemes too) supports the thesis that (RM), (TM) and (F) DEFEAT my justification for (T) instead, so that whatever I inferred from (T) is no longer known. However, given my previous knowledge of (T) and (AME), I could know that (MF): F is misleading evidence. Can it still be said that (RM), (TM) and (F) DEFEAT my justification for (T), given that (MF) DEFEATS my justification for (RM), (TM) and (F)?