All of fsopho's Comments + Replies

Me neither - but I don't think it is a good idea to divorce h from b.

Just a technical point: P(x) = P(x|b)P(b) + P(x|~b)P(~b)
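For instance, with invented numbers P(b)=0.3, P(x|b)=0.5, P(x|~b)=0.1, a quick sketch of why the weights matter:

```python
# Invented numbers, just to show why the weights P(b) and P(~b) matter.
p_b = 0.3
p_x_given_b, p_x_given_not_b = 0.5, 0.1

p_x = p_x_given_b * p_b + p_x_given_not_b * (1 - p_b)
print(p_x)                            # 0.22
print(p_x_given_b + p_x_given_not_b)  # 0.6 -- the unweighted sum is not P(x)
```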

0Decius
Given a deck of cards shuffled and arranged in a circle, the probability of the northernmost card being the Ace of Spades should be 1/52. h = the northernmost card is the Ace of Spades (AoS). Turning over a card at random which is neither the AoS nor the northernmost card is evidence for h. Omega providing the true statement "The AoS is between the KoD and 5oC" is not evidence for or against, unless the card we turned over is either adjacent to the northernmost card or one of the referenced cards. If we select another card at random, we can update again - either to 2%, 50%, 1, or 0. (2% if none of the referenced cards are shown, 50% if an adjacent card is picked and it is either the KoD or the 5oC, 1 if the northernmost card is picked and it is the AoS, and 0 if one of the referenced cards turns up where it shouldn't be.) That seems enough proof that evidence can alter the evidential value of other evidence.
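A quick Monte Carlo sketch of that first update (illustrative Python; card 0 stands in for the AoS and index 0 for the northernmost position):

```python
import random

def estimate(trials=200_000):
    """P(northernmost card is the AoS | a randomly flipped other card isn't the AoS)."""
    hits = kept = 0
    for _ in range(trials):
        deck = list(range(52))             # 0 = Ace of Spades
        random.shuffle(deck)               # index 0 = northernmost card
        flipped = random.randrange(1, 52)  # flip a card other than the northernmost
        if deck[flipped] == 0:
            continue                       # flipped card was the AoS: outside the condition
        kept += 1
        hits += (deck[0] == 0)
    return hits / kept

print(estimate())  # ~0.0196 = 1/51, up from the prior 1/52 ~ 0.0192
```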

Yes, we agree on that. Here is an example with the structure you just mentioned. Suppose that

h: I will get rid of the flu

e1: I took Fluminex

e2: I took Fluminalva

b: Fluminex and Fluminalva cancel each other's effect against flu

Now suppose that both Fluminex and Fluminalva are effective against the flu. Given this setting, P(h|b&e1)>P(h|b) and P(h|b&e2)>P(h|b), but P(h|b&e1&e2)<P(h|b). If the use of background b is bothering you, just embed the information about the canceling of effects in each of the pieces of eviden... (read more)
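A toy joint distribution exhibiting exactly that structure (the numbers are invented; the canceling information in b is built into the recovery probabilities):

```python
from itertools import product

# Invented numbers: each drug is taken independently with probability 1/2,
# each drug alone raises the chance of recovery, the two together cancel.
P_H_GIVEN = {(True, False): 0.9,   # Fluminex only
             (False, True): 0.9,   # Fluminalva only
             (True, True): 0.2,    # both: effects cancel
             (False, False): 0.1}  # neither

def p_h(e1=None, e2=None):
    """P(h | the stated evidence), marginalizing over unspecified pieces."""
    num = den = 0.0
    for t1, t2 in product([True, False], repeat=2):
        if (e1 is not None and t1 != e1) or (e2 is not None and t2 != e2):
            continue
        weight = 0.25                  # P(e1=t1, e2=t2)
        num += weight * P_H_GIVEN[(t1, t2)]
        den += weight
    return num / den

print(p_h())                   # P(h|b)       = 0.525
print(p_h(e1=True))            # P(h|b&e1)    = 0.55  > P(h|b)
print(p_h(e2=True))            # P(h|b&e2)    = 0.55  > P(h|b)
print(p_h(e1=True, e2=True))   # P(h|b&e1&e2) = 0.20  < P(h|b)
```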

0Decius
I don't understand what it would mean to divorce a hypothesis h from the background b. Suppose you have the flu (background b); there is zero chance that you don't have the flu, so P(~b)=0 and P(x&~b)=0, therefore P(x|~b)=0 (or undefined, but it can be treated as zero for these purposes). Since P(x)=P(x|b)+P(x|~b), P(x)=P(x|b). EDIT: As pointed out below, P(x)=P(x|b)P(b)+P(x|~b)P(~b). This changes nothing else. If we change the background information, we change b and are dealing with a new hypothetical universe (for example, one in which taking both Fluminex and Fluminalva increases the duration of a flu). In that universe, you need prior beliefs about whether you are taking Fluminex and whether you are taking Fluminalva (and whether you are taking both, if those aren't independent), as well as about their effectiveness separately and together, in order to come to a conclusion. P, h, and e are all dependent on the universe b existing, and a different universe (even one that only varies in a tiny bit of information) means a different h, even if the same words are used to describe it. Evidence exists only in the (possibly hypothetical) universe that it actually exists in.

So, he claims that it is just a necessary condition - not a sufficient one. I didn't reach the point where he offers the further conditions that, together with high probability, are supposed to be sufficient for evidential support.

p.s: still, you earned a point for the comment =|

But that these are the truth conditions for evidential support relations does not mean that only tautologies can be evidence, nor that only sets of tautologies can be one's background. If you prefer, this is supposed to be a 'test' for checking whether particular bits of information are evidence for something else. So I agree that backgrounds in minds are among the things we have to be interested in, as long as we want to say something about rationality. I just don't think that the usefulness of the test (the new truth-conditions) is killed. =]

All right, I see. I agree that order does not determine evidential support relations.

It seems to me that the relevant sentence is either not meaningful or false.

0Decius
I think that we agree that neither of the definitions offered in the post is correct. Can you see any problem with "e is evidence of h iff P(h|e) > P(h)", other than cases where evidence interacts in some complex manner such that P(h|e1)>P(h); P(h|e2)>P(h); but P(h|e1&e2)<P(h)? (I'm not sure that is even possible, but I think it can be done with three mutually exclusive hypotheses.)

Actually, Achinstein's claim is that the first one does not need to be satisfied - the probability of h does not need to be increased by e in order for e to be evidence that h. He gives up the first condition because of the counterexamples.

0pragmatist
Well, duh. You're right, the post was pretty clear about this. I need to read more carefully. So does he believe that the second condition is both necessary and sufficient? That seems prone to a bunch of counterexamples also.

Thanks. I would say that what we have in front of us are clear cases where someone has evidence for something else. In the example given, we have in front of us that both e1 and e2 (together with the assumption that the NYT and WP are reliable) are evidence for g. So, presumably, there is agreement between people offering the truth conditions for 'e is evidence that h' about the range of cases where there is evidence - while there is no agreement between people answering the question about the sound of the tree, because they don't agree on the range of ... (read more)

Right, so, one thing that is left open by both definitions is the kind of interpretation given to the function P. Is it supposed to be interpreted as a (rational) credence function? If so, the Positive Relevance account would say that e is evidence that h when one is rational in having a bigger credence in h when one has e as evidence than when one does not have e as evidence. For some, though, it would seem that in our case the agent who already knows b and e1 wouldn't be rational in having a bigger credence that Bill will win the lottery if she learns ... (read more)

1Vaniver
It's not clear to me why exactly you want the definition of evidence to not rely on the particular background of the mind where the P resides. If you limit b to tautologies, you kill its usefulness. "This is a fair lottery in which one ticket drawn at random will win" isn't a tautology.

This is not a case where we have two definitions talking about two sorts of things (like sound waves versus perception of sound waves). This is a case where we have two rival mathematical definitions to account for the relation of evidential support. You seem to think that the answer to questions about disputes over distinct definitions is in that post you are referring to. I read the post, and I didn't find the answer to the question I'm interested in answering - which is not even that of deciding between two rival definitions.

1Richard_Kennaway
What is this "relation of evidential support", that is a given thing in front of us? From your paraphrase of Achinstein, and the blurb of his book, it is clear that there is no such thing, any more than "sound" means something distinct from either "vibrations" or "aural perceptions". "Sound" is a word that covers both of these, and since both are generally present when we ordinarily talk of sound, the unheard falling tree appears paradoxical, leading us to grasp around for something else that "sound" must mean. "Evidence" is a word that covers both of the two definitions offered, and several others, but the fact that our use of the word does not seem to match any one of them does not mean that there must be something else in the world that is the true meaning of "evidence". The analogy with unheard falling trees is exact. What would you expect to accomplish by discovering whether some particular e really is "evidence" for some h, that would not be accomplished by discovering whether each of the concrete definitions is satisfied? If you know whether e is "fortitudinence" for h (increases its probability), and you know whether e is "veritescence" for h (gives a posterior probability above 1/2), what else do you want to know? BTW, around here "fortitudinence" is generally called "Bayesian evidence" for reasons connected with Bayes theorem, but again, that's just a definition. There are reasons why that is an especially useful concept, but however strong those reasons, one is not discovering what the word "evidence" "really means".
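A minimal sketch, with invented numbers, of how those two labelled relations can come apart for the same e and h:

```python
def is_bayesian_evidence(p_h, p_h_given_e):    # "fortitudinence": e raises P(h)
    return p_h_given_e > p_h

def gives_high_probability(p_h_given_e):       # "veritescence": P(h|e) > 1/2
    return p_h_given_e > 0.5

# e raises P(h) from 0.001 to 0.01: Bayesian evidence, but the posterior stays low.
print(is_bayesian_evidence(0.001, 0.01), gives_high_probability(0.01))  # True False

# e lowers P(h) from 0.9 to 0.8: not Bayesian evidence, but the posterior stays high.
print(is_bayesian_evidence(0.9, 0.8), gives_high_probability(0.8))      # False True
```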

Yeah, one of the problems of the example is that it seems to take for granted that both the NYT and the WP are 100% reliable.

So, I'll kind of second the observation in the comment above. It seems to me that, from the fact that reading the same story in the Washington Post does not make your epistemic situation better, it does not seem to follow that the Post story is not evidence that Bill will win the lottery. That is: from the fact that a certain piece of evidence is swamped by another piece of evidence in a certain situation, it does not follow that the former is not evidence. We can see that it is evidence just by following your steps: we conceive another situation where I didn't re... (read more)

Thanks. Your first question shows a case where the evidential support of e1 is swamped by the evidential support of g, right? It seems that, if I have g as evidence, e1 doesn't change my epistemic situation as regards the proposition that Bill will win the lottery. So if we answer that e1 is not evidence that h in this case, we are assuming that if one piece of evidence is swamped by another, it is not evidence anymore. I wouldn't go that way (would you?), because in a situation where I didn't see Bill buying the tickets, I still would have e2 as evid... (read more)

0Decius
My first case is where g is given - you know it as well as you know anything else, including the other givens. I would not say that whether a particular fact is evidence depends on the order in which you consider them. Do you concur that p(h|~b) - "It is 75% likely that Bill will win the lottery given 'it is not the case that this is a fair lottery in which one ticket drawn at random will win. . . .'" - is not a meaningful statement?

Yes I did - but thanks for the tip anyway.

2Richard_Kennaway
Well, it's a complete answer to the conundrum.

Thanks Vaniver. Doesn't your example show something unsatisfactory about the High Probability interpretation also? Given that P(A or ~A|My socks are white)>1/2, that my socks are white would also count as evidence that A or ~A. Your point seems to suggest that there must be something having to do with content in common between the evidence and the hypothesis.

Thanks, that's interesting. The exercise of thinking how people would act to gather evidence having in mind the two probabilistic definitions gives food for thought. Specifically, I'm thinking that, if we were to tell people: "Look for evidence in favor of h and, remember, evidence is that which ...", where we substitute the relevant definition of evidence for '...', they would gather evidence in a different way from the way we naturally look for evidence for some hypotheses. The agents to whom that advice was given would have a reflexive access t... (read more)

I agree that some philosophical searches for analyses of concepts turn into endless, fruitless sequences of counterexamples and new definitions. However, it is not the case that, whenever we are trying to find out the truth conditions for something, we are engaged in that kind of unproductive thinking. As long as we care about what it is for something to be evidence for something else (we may care about this because we want to understand what gives support to scientific theories, etc.), it seems legitimate for us to look for satisfactory truth conditions for 'e is evidence that h'. Trying to make the boundaries of our concepts clear is also part of the project of optimizing our rationality.

So, I would like to thank you guys for the hints and critical comments here - you are helping me a lot! I'll read what you recommended in order to investigate the epistemological properties of the degree-of-belief version of Bayesianism. For now, I'm just full of doubts: "does Bayesianism really stand as a normative theory of rational doxastic attitudes?"; "what is the relation between degrees of belief and evidential support?", "is it correct to say that people reason in accordance with probability principles when they reason correctly?", "is the idea of defeating evidence an illusion?", and still others. =]

I can't believe people apply Bayes' theorem when confronted with counter-evidence. What evidence do we have to believe that Bayesian probability theories describe the way we reason inductively?

8Manfred
Oh, if you want to model what people actually do, I agree it's much more complicated. Merely doing things correctly is quite simple by comparison.
6[anonymous]
It doesn't necessarily describe the way we actually reason (because of cognitive biases that affect our ability to make inferences), but it does describe the way we should reason.

We are not justified in assigning probability 1 to the belief that 'A=A' or to the belief that 'p -> p'? Why not?

1argumzio
Those are only beliefs that are justified given certain prior assumptions and conventions. In another system, such statements might not hold. So, from a meta-logical standpoint, it is improper to assign probabilities of 1 or 0 to personally held beliefs. However, the functional nature of the beliefs does not itself figure in how the logical operators function, particularly in the case of necessary reasoning. Necessary reasoning is a brick wall that cannot be overcome by alternative belief, especially when one is working under specific assumptions. If one denies the assumptions and conventions one set for oneself, one is no longer working within the space of those assumptions or conventions. Thus, within those specific conventions, those beliefs would indeed hold to the nature of deduction (be either absolutely true or absolutely false), but beyond that they may not.
1[anonymous]
Short answer: Because if you assign probability 1 to a belief, then it is impossible for you to change your mind even when confronted with a mountain of opposing evidence. For the full argument, see Infinite Certainty.

OK, got it, thank you. I have two doubts. (i) Why is a belief with degree 1 not affected by new information which is counter-evidence to that belief? Does it mean every belief with degree 1 I have now will never be lost/defeated/changed? (ii) The difference between what you call traditional epistemology and Bayesianism involves lots of things. I think one of them is their objectives - the traditional epistemologist and the Bayesian in general have different goals. The first one is interested in posing the correct norms of reasoning and other sources of be... (read more)

3AlexSchell
(i) That remark concerns a Bayesian agent, or more specifically an agent who updates by conditionalization. It's a property of conditionalization that no amount of evidence that an agent updates upon can change a degree of belief of 0 or 1. Intuitively, the closer a probability gets to 1, the less it will decrease in its absolute value in response to a given strength of counterevidence. 1 corresponds to the limit at which it won't decrease at all from any counterevidence. (ii) I'm well aware that the aims of most epistemologists and most Bayesian philosophers diverge somewhat, but there is substantial overlap even within philosophy (i.e. applying Bayesianism to norms of belief change); furthermore, Bayesianism is very much applicable (and in fact applied) to norms of belief change, your puzzles being examples of questions that wouldn't even occur to a Bayesian.
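A small conditionalization sketch with toy likelihoods makes this concrete: a degree of belief of exactly 1 is a fixed point of Bayes' rule, and the closer the prior is to 1, the smaller the absolute drop from the same counterevidence:

```python
def conditionalize(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(h|e) by Bayes' theorem."""
    num = prior * p_e_given_h
    return num / (num + (1 - prior) * p_e_given_not_h)

# Strong counterevidence: e is 1000 times more likely if h is false.
for prior in (1.0, 0.999999, 0.99):
    print(prior, "->", round(conditionalize(prior, 0.001, 1.0), 6))
# 1.0      -> 1.0       (a prior of exactly 1 never moves)
# 0.999999 -> 0.999001  (drops by only ~0.001)
# 0.99     -> 0.090082  (drops by ~0.9)
```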
7prase
That's how degree 1 is defined: so strong a belief that no evidence can persuade one to abandon it. (You shouldn't have such beliefs, needless to say.) I don't see the difference. Bayesian epistemology is a set of prescriptive norms of reasoning. Bayesianism explains the problem away - the problem is there only if you use notions like defeat or knowledge and insist on building your epistemology on them. Your puzzle shows that it is impossible. The fact that Bayesianism is free of Gettier problems is an argument for Bayesianism and against "traditional epistemology". To make an imprecise analogy, ancient mathematicians long wondered what the infinite sum 1-1+1-1+1-1... is equal to. When calculus was invented, people saw that this was just a confused question. Some puzzles are best answered by rejecting the puzzle altogether.

I didn't downvote! And I am not shooting the messenger, as I am also sure it is not an argument about Gettier problems. I am sorry if the post offended you - maybe it is better not to mix different views of something.

3AlexSchell
I believe Manfred is referring to downvoting your post, you being the messenger, etc.

Well, puzzle 2 is a puzzle with a case of knowledge: I know (T). Changing to probabilities does not solve the problem; it only changes it!

6[anonymous]
But that's the thing: you don't "know" (T). You have a certain degree of belief, represented by a real number between 0 and 1, that (T) is true. You can then update this degree of belief based on (RM) and (TM).

Thank you, Zed. You are right: I didn't specify the meaning of 'misleading evidence'. It means evidence to believe something that is false (whether or not the cognitive agent receiving such evidence knows it is misleading). Now, maybe I'm missing something, but I don't see any silliness in thinking in terms of "belief A defeats belief B". On the basis of experiential evidence, I believe there is a tree in front of me. But then, I discover I'm drugged with LSD (a friend of mine put it in my coffee earlier, unbeknownst to me). This ne... (read more)

1Zed
If you're certain that belief A holds you cannot change your mind about that in the future. The belief cannot be "defeated", in your parlance. So given that you can be exposed to information that will lead you to change your mind, we conclude that you weren't absolutely certain about belief A in the first place. So how certain were you? Well, this is something we can express as a probability. You're not 100% certain a tree in front of you is, in fact, really there exactly because you realize there is a small chance you're drugged or otherwise cognitively incapacitated. So as you come into contact with evidence that contradicts what you believe you become less certain your belief is correct, and as you come into contact with evidence that confirms what you believe you become more confident your belief is correct. Apply Bayes' rule for this (for links to Bayes and Bayesian reasoning see other comments in this thread). I've just read a couple of pages of Defeasible Reasoning by Pollock and it's a pretty interesting formal model of reasoning. Pollock argues, essentially, that Bayesian epistemology is incompatible with deductive reasoning (pg 15). I semi-quote: "[...] if Bayesian epistemology were correct, we could not acquire new justified beliefs by reasoning from previously justified beliefs" (pg 17). I'll read the paper, but this all sounds pretty ludicrous to me.

Thank you! Well, you didn't answer the puzzle. The puzzles are not showing that my reasoning is broken because I have evidence to believe T and ~T. The puzzles are asking what is the rational thing to do in such a case - what is the right choice from the epistemological point of view. So, when you answer in puzzle 1 that believing (~T) is the rational thing to do, you must explain why that is so. The same applies to puzzle 2. I don't think that degrees of belief, expressed as probabilities, can solve the problem. Whether my belief is rational or not ... (read more)

7Manfred
So, in order to answer the puzzles, you have to start with probabilistic beliefs, rather than with binary true-false beliefs. The problem is currently somewhat like the question "is it true or false that the sun will rise tomorrow." To a very good approximation, the sun will rise tomorrow. But the earth's rotation could stop, or the sun could get eaten by a black hole, or several other possibilities that mean that it is not absolutely known that the sun will rise tomorrow. So how can we express our confidence that the sun will rise tomorrow? As a probability - a big one, like 0.999999999999. Why not just round up to one? Because although the gap between 0.999999999999 and 1 may seem small, it actually takes an infinite amount of evidence to bridge that gap. You may know this as the problem of induction. So anyhow, let's take problem 1. How confident are you in P1, P2, and P3? Let's say about 0.99 each - you could make a hundred such statements and only get one wrong, or so you think. So how about T? Well, if it follows from P1, P2 and P3, then you believe it with degree about 0.97. Now Ms. Math comes and tells you you're wrong. What happens? You apply Bayes' theorem. When something is wrong, Ms. Math can spot it 90% of the time, and when it's right, she only thinks it's wrong 0.01% of the time. So Bayes' rule says to multiply your probability of ~T by 0.9/(0.03×0.9 + 0.97×0.0001), giving an end result of T being true with probability only about 0.005. Note that at no point did any beliefs "defeat" other ones. You just multiplied them together. If Ms. Math had talked to you first, and then you had gotten your answer after, the end result would be the same. The second problem is slightly trickier because not only do you have to apply probability theory correctly, you have to avoid applying it incorrectly. Basically, you have to be good at remembering to use conditional probabilities when applying (AME). I suspect that you only conceive that you can conceive of that
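A short script reproducing that arithmetic, with the same assumed numbers:

```python
p_T = 0.99 ** 3                  # ~0.970: prior confidence in T, from P1, P2, P3 at 0.99 each
p_not_T = 1 - p_T                # ~0.030

p_flag_given_not_T = 0.9         # Ms. Math spots a wrong result 90% of the time
p_flag_given_T = 0.0001          # and flags a right one 0.01% of the time

# Bayes' theorem, conditioning on her saying "you're wrong":
posterior_not_T = (p_not_T * p_flag_given_not_T) / (
    p_not_T * p_flag_given_not_T + p_T * p_flag_given_T)
print(1 - posterior_not_T)       # P(T | she says it's wrong) ~ 0.004, the "about 0.005" above
```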
3AlexSchell
Well, in that case, learning RM & TM leaves these degrees of belief unchanged, as an agent who updates via conditionalization cannot change a degree of belief that is 0 or 1. That's just an agent with an unfortunate prior that doesn't allow him to learn. More generally, I think you might be missing the point of the replies you're getting. Most of them are not-very-detailed hints that you get no such puzzles once you discard traditional epistemological notions such as knowledge, belief, justification, defeaters, etc. (or change the subject from them) and adopt Bayesianism (here, probabilism & conditionalization & algorithmic priors). I am confident this is largely true, at least for your sorts of puzzles. If you want to stick to traditional epistemology, a reasonable-seeming reply to puzzle 2 (more within the traditional epistemology framework) is here: http://www.philosophyetc.net/2011/10/kripke-harman-dogmatism-paradox.html

Good afternoon, morning or night! I'm a graduate student in Epistemology. My research is about epistemic rationality, logic and AI. I'm currently investigating the general pattern of epistemic norms and their nature - whether these norms must actually be accessed by the cognitive agent to do their job or not; whether these norms in fact optimize the epistemic goal of having true beliefs and avoiding false ones, or rather just appear to do so; and still other questions. I was browsing the web and looking for web-based software to ... (read more)