Yes, we agree on that. Here is an example with the structure you just mentioned. Suppose that
h: I will get rid of the flu
e1: I took Fluminex
e2: I took Fluminalva
b: Fluminex and Fluminalva cancel each other's effect against flu
Now suppose that both Fluminex and Fluminalva are effective against the flu. Given this setting, P(h|b&e1)>P(h|b) and P(h|b&e2)>P(h|b), but P(h|b&e1&e2)<P(h|b). If the use of the background b is bothering you, just embed the information about the canceling of effects in each of the pieces of evidence.
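To check that such a probability assignment really exists, here is a minimal numerical model. All the numbers are made up for illustration, every probability is implicitly conditional on b, and I assume (hypothetically) that taking each drug is an independent event with probability 0.2:

```python
from itertools import product

p_e = 0.2  # assumed probability of taking each drug, independently (given b)

def p_h_given(e1, e2):
    """P(h | e1, e2) under background b: each drug alone helps, together they cancel."""
    if e1 and e2:
        return 0.3   # effects cancel: back to the baseline recovery chance
    if e1 or e2:
        return 0.8   # exactly one drug: effective against the flu
    return 0.3       # no drug: baseline recovery chance

def expectation(f):
    """Sum f(e1, e2) weighted by the probability of each drug-taking world."""
    total = 0.0
    for e1, e2 in product([True, False], repeat=2):
        w = (p_e if e1 else 1 - p_e) * (p_e if e2 else 1 - p_e)
        total += w * f(e1, e2)
    return total

p_h       = expectation(lambda e1, e2: p_h_given(e1, e2))            # P(h|b)
p_h_e1    = expectation(lambda e1, e2: p_h_given(e1, e2) if e1 else 0) / p_e  # P(h|b&e1)
p_h_e1_e2 = p_h_given(True, True)                                    # P(h|b&e1&e2)

print(p_h, p_h_e1, p_h_e1_e2)  # 0.46 0.7 0.3
assert p_h_e1 > p_h and p_h_e1_e2 < p_h  # each drug confirms h, both together disconfirm it
```

By symmetry P(h|b&e2) equals P(h|b&e1), so both inequalities in the example hold at once.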
But that these are the truth conditions for the evidential support relation does not mean that only tautologies can be evidence, nor that only sets of tautologies can make up one's background. If you prefer, this is supposed to be a 'test' for checking whether particular bits of information are evidence for something else. So I agree that the backgrounds in people's minds are one of the things we ought to be interested in, as long as we want to say something about rationality. I just don't think that this kills the usefulness of the test (the new truth conditions). =]
Thanks. I would say that what we have in front of us are clear cases where someone has evidence for something else. In the example given, we have in front of us that both e1 and e2 (together with the assumption that the NYT and WP are reliable) are evidence for g. So, presumably, there is agreement among people offering truth conditions for 'e is evidence that h' about the range of cases where there is evidence - while there is no agreement among people answering the question about the sound of the tree, because they don't agree on the range of ...
Right, so, one thing that is left open by both definitions is the interpretation given to the function P. Is it supposed to be interpreted as a (rational) credence function? If so, the Positive Relevance account would say that e is evidence that h when one is rational in having a higher credence in h when one has e as evidence than when one does not. For some, though, it would seem that in our case the agent who already knows b and e1 wouldn't be rational in having a higher credence that Bill will win the lottery if she learns ...
This is not a case where we have two definitions talking about two different things (like sound waves versus the perception of sound waves). This is a case where we have two rival mathematical definitions meant to account for the relation of evidential support. You seem to think that the answer to questions about disputes over distinct definitions is in the post you are referring to. I read the post, and I didn't find the answer to the question I'm interested in answering - which is not even that of deciding between two rival definitions.
So, I'll kind of second the observation in the comment above. It seems to me that, from the fact that reading the same story in the Washington Post does not improve your epistemic situation, it does not follow that the Post story is not evidence that Bill will win the lottery. That is: from the fact that a certain piece of evidence is swamped by another piece of evidence in a certain situation, it does not follow that the former is not evidence. We can see that it is evidence by following your own steps: we conceive another situation where I didn't re...
Thanks. Your first question describes a case where the evidential support of e1 is swamped by the evidential support of g, right? It seems that, if I have g as evidence, e1 doesn't change my epistemic situation as regards the proposition that Bill will win the lottery. So if we answer that e1 is not evidence that h in this case, we are assuming that if one piece of evidence is swamped by another, it is not evidence anymore. I wouldn't go that way (would you?), because in a situation where I hadn't seen Bill buying the tickets, I would still have e2 as evid...
Thanks, Vaniver. Doesn't your example show something unsatisfactory about the High Probability interpretation as well? Given that P(A or ~A|My socks are white)>1/2, that my socks are white would also count as evidence that A or ~A. Your point seems to suggest that the evidence and the hypothesis must have something in common concerning their content.
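The arithmetic behind the worry is trivial to check: under any probability function, a tautology gets probability 1 conditional on anything, so the High Probability reading ("e is evidence that h iff P(h|e) > 1/2") counts every proposition as evidence for A or ~A. A sketch with an arbitrary, made-up credence:

```python
p_A_given_socks = 0.37  # arbitrary credence in A given "my socks are white"

# By finite additivity, P(A or ~A | socks) = P(A|socks) + P(~A|socks) = 1,
# whatever value p_A_given_socks takes.
p_tautology_given_socks = p_A_given_socks + (1 - p_A_given_socks)

print(p_tautology_given_socks > 0.5)  # True, for any choice of p_A_given_socks
```

So on that account, that my socks are white is "evidence" for every tautology, which is what makes a shared-content requirement look attractive.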
Thanks, that's interesting. The exercise of thinking about how people would act to gather evidence with the two probabilistic definitions in mind gives food for thought. Specifically, I'm thinking that if we were to tell people: "Look for evidence in favor of h and, remember, evidence is that which ...", where we substitute '...' with the relevant definition of evidence, they would gather evidence differently from the way we naturally look for evidence for a hypothesis. The agents to whom that advice was given would have reflexive access t...
I agree that some philosophical searches for analyses of concepts end up generating endless, fruitless sequences of counterexamples and new definitions. However, it is not the case that whenever we try to find the truth conditions for something we are engaged in that kind of unproductive thinking. As long as we care about what it is for something to be evidence for something else (and we may care about this because we want to understand what gives support to scientific theories, etc.), it seems legitimate to look for satisfactory truth conditions for 'e is evidence that h'. Making the boundaries of our concepts clear is also part of the project of optimizing our rationality.
So, I would like to thank you guys for the hints and critical comments here - you are helping me a lot! I'll read what you recommended in order to investigate the epistemological properties of the degree-of-belief version of Bayesianism. For now, I'm just full of doubts: "does Bayesianism really stand as a normative theory of rational doxastic attitudes?"; "what is the relation between degrees of belief and evidential support?"; "is it correct to say that people reason in accordance with probability principles when they reason correctly?"; "is the idea of defeating evidence an illusion?"; and still others. =]
OK, got it, thank you. I have two doubts. (i) Why is a belief with degree 1 not affected by new information that is counter-evidence to that belief? Does it mean that every belief with degree 1 I have now will never be lost/defeated/changed? (ii) The difference between what you call traditional epistemology and Bayesianism involves many things. I think one of them is their objectives - the traditional epistemologist and the Bayesian generally have different goals. The first is interested in stating the correct norms of reasoning and other sources of be...
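For what it's worth, the arithmetic behind (i) can be checked directly: if P(h)=1, conditionalization by Bayes' theorem leaves it at 1 no matter how strongly the new information tells against h. A small sketch with made-up likelihoods:

```python
def conditionalize(p_h, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: return P(h|e) from the prior and the two likelihoods."""
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)  # total probability of e
    return p_e_given_h * p_h / p_e

# A prior of exactly 1 is immovable, even by strong counter-evidence
# (the ~h branch gets weight 1 - p_h = 0, so it can never contribute):
print(conditionalize(1.0, 0.01, 0.99))   # 1.0

# The same evidence moves a prior that falls just short of 1:
print(conditionalize(0.95, 0.01, 0.99))  # ~0.16
```

So on the standard Bayesian picture the answer to your first doubt is "yes": a degree-1 belief can never be revised by conditionalization, which is exactly why many Bayesians reserve degree 1 for tautologies.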
Thank you, Zed. You are right: I didn't specify the meaning of 'misleading evidence'. It means evidence to believe something that is false (whether or not the cognitive agent receiving such evidence knows it is misleading). Now, maybe I'm missing something, but I don't see any silliness in thinking in terms of "belief A defeats belief B". On the basis of experiential evidence, I believe there is a tree in front of me. But then I discover I've been drugged with LSD (a friend of mine put it in my coffee earlier, unbeknownst to me). This ne...
Thank you! Well, you didn't answer the puzzle. The puzzles are not showing that my reasoning is broken because I have evidence to believe T and ~T. The puzzles ask what the rational thing to do is in such a case - what the right choice is from the epistemological point of view. So, when you answer in puzzle 1 that believing (~T) is the rational thing to do, you must explain why that is so. The same applies to puzzle 2. I don't think that degrees of belief, expressed as probabilities, can solve the problem. Whether my belief is rational or not ...
Good afternoon, morning or night! I'm a graduate student in Epistemology. My research is about epistemic rationality, logic and AI. I'm currently investigating the general pattern of epistemic norms and their nature - whether these norms must actually be accessed by the cognitive agent in order to do their job; whether these norms in fact optimize the epistemic goal of having true beliefs and avoiding false ones, or rather just appear to do so; and still other questions. I was browsing the web looking for web-based software to ...
Me neither - but I don't think it is a good idea to divorce h from b.
Just a technical point: P(x) = P(x|b)P(b) + P(x|~b)P(~b)
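A quick numerical check of that identity (the law of total probability), with made-up numbers:

```python
# Law of total probability: P(x) = P(x|b)P(b) + P(x|~b)P(~b)
p_b = 0.6              # arbitrary prior for the background b
p_x_given_b = 0.9      # arbitrary likelihood of x given b
p_x_given_not_b = 0.2  # arbitrary likelihood of x given ~b

p_x = p_x_given_b * p_b + p_x_given_not_b * (1 - p_b)
print(p_x)  # ≈ 0.62
```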