Thanks. I would say that what we have in front of us are clear cases where someone has evidence for something. In the example given, we have in front of us that both e1 and e2 (together with the assumption that the NYT and WP are reliable) are evidence for g. So, presumably, there is agreement among people offering truth-conditions for 'e is evidence that h' about the range of cases where there is evidence - while there is no agreement among people answering the question about the sound of the tree, because they don't agree on the range of cases where sound occurs. Otherwise, there would be no counterexamples such as the one that Achinstein tried to offer. If I offer a set of truth-conditions for Fa, and one of the data I use to explain what it is for something to be F is the range of cases where F applies, then if you present me with a case where F applies but the truth-conditions I offered are not satisfied, I will think that there is something wrong with those truth-conditions.
Trying to flesh out truth-conditions for a certain type of sentence is not the same thing as giving a definition. I'm not saying you're completely wrong about this; I just really think that this is not a merely verbal dispute. As to what I would expect to accomplish by finding the best set of truth-conditions for 'e is evidence that h', I would say that a certain concept used in law, natural science and philosophy would then have clear boundaries, and if some charlatan offers an argument in a public space for some conclusion of his interest, I can argue with him that he has no evidence for his claims.
Thanks for the reference to the fortitudinence concept - I wasn't familiar with it.
I would like to share a doubt with you. Peter Achinstein, in his The Book of Evidence, considers two probabilistic views about the conditions that must be satisfied for e to be evidence that h. The first says that e is evidence that h when e increases the probability of h when added to some background information b:
(Increase in Probability) e is evidence that h iff P(h|e&b) > P(h|b)
The second one says that e is evidence that h when the probability of h conditional on e is higher than some threshold k:
e is evidence that h iff P(h|e) > k
A plausible way of interpreting the second definition is by taking k = 1/2. When k is fixed at that value, it turns out that P(h|e) > k has the same truth-conditions as P(h|e) > P(~h|e) - at least if we assume that P is a function obeying Kolmogorov's axioms of the probability calculus. Now, Achinstein takes P(h|e) > k to be a necessary but insufficient condition for e to be evidence that h - while he claims that P(h|e&b) > P(h|b) is neither necessary nor sufficient for e to be evidence that h. That may seem shocking to those who take the condition fleshed out in (Increase in Probability) to be at least a necessary condition for evidential support (I take it that the claim that it is necessary and sufficient is far from accepted - presumably one would also require e to be true, or known, or justifiably believed, etc.). So I would like to examine one of Achinstein's counterexamples to the claim that increase in probability is a necessary condition for evidential support.
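The equivalence claimed here can be checked in a line, using only the Kolmogorov consequence that P(h|e) + P(~h|e) = 1:

```latex
\[
P(h\mid e) > P(\lnot h\mid e)
\iff P(h\mid e) > 1 - P(h\mid e)
\iff 2\,P(h\mid e) > 1
\iff P(h\mid e) > \tfrac{1}{2}.
\]
```

So with k = 1/2 the threshold condition and the "more probable than its negation" condition pick out exactly the same cases.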
The relevant example is as follows:
b: background information, including that The New York Times and The Washington Post are reliable sources
e1: The New York Times reports that g
e2: The Washington Post reports that g
h: a hypothesis whose probability is increased by g
The point now is that, although it seems right to regard e2 as being evidence in favor of h, it fails to increase h's probability conditional on (b&e1) - at least so says Achinstein. According to his example, the following is true:
P(h|b&e1&e2) = P(h|b&e1)
Well, I have my doubts about this counterexample. The problem with it, it seems to me, is this: e1 and e2 are treated as the same piece of evidence. Let me explain. If e1 and e2 increase the probability of h, that is because they increase the probability of a further proposition:
g: the proposition that both newspapers report
and, as it happens, g increases the probability of h. That The New York Times reports g, assuming the New York Times is reliable, increases the probability of g - and the same can be said about The Washington Post's reporting g. But the counterexample seems to assume that both e1 and e2 are equivalent to g, and they're not. Now, it is clear that P(h|b&g) = P(h|b&g&g), but this does not show that e2 fails to increase h's probability conditional on (b&e1). So, if it is true that e2 increases the probability of g conditional on e1, that is, if P(g|e1&e2) > P(g|e1), and if it is true that g increases the probability of h, then it is also true that e2 increases the probability of h. I may be missing something, but this reasoning sounds right to me - the example wouldn't be a counterexample. What do you think?
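A toy model makes the point concrete. The numbers below are my own assumptions, not Achinstein's: g is the reported event, e1 and e2 are conditionally independent noisy reports of g, and h depends on g alone. Under these assumptions, e2 still raises h's probability given e1, because the second report further raises the probability of g:

```python
from itertools import product

# Hypothetical parameters (illustrative assumptions, not from the book):
P_G = 0.5                             # prior probability of g
P_REPORT = {True: 0.9, False: 0.1}    # P(a paper reports g | g), P(... | not g)
P_H = {True: 0.8, False: 0.2}         # P(h | g), P(h | not g)

def joint(g, e1, e2, h):
    """Joint probability: e1, e2 independent given g; h depends only on g."""
    p = P_G if g else 1 - P_G
    for e in (e1, e2):
        p *= P_REPORT[g] if e else 1 - P_REPORT[g]
    p *= P_H[g] if h else 1 - P_H[g]
    return p

def prob(event):
    """Probability of an event, by summing the joint over all 16 worlds."""
    return sum(joint(*w) for w in product([False, True], repeat=4) if event(*w))

p_h_given_e1 = prob(lambda g, e1, e2, h: h and e1) / prob(lambda g, e1, e2, h: e1)
p_h_given_e1_e2 = (prob(lambda g, e1, e2, h: h and e1 and e2)
                   / prob(lambda g, e1, e2, h: e1 and e2))

print(p_h_given_e1)      # 0.74
print(p_h_given_e1_e2)   # ~0.7927, strictly larger
```

If instead e1 and e2 are collapsed into g itself, conditioning on the "second" report adds nothing, which is just Achinstein's equality P(h|b&g) = P(h|b&g&g) in disguise.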