Comment author: AlexSchell 12 December 2011 06:07:57PM *  2 points [-]

I can conceive the puzzle as one where all the relevant beliefs - (R1), (T), (AME), etc, - have degree 1.

Well, in that case, learning RM & TM leaves these degrees of belief unchanged, as an agent who updates via conditionalization cannot change a degree of belief that is 0 or 1. That's just an agent with an unfortunate prior that doesn't allow him to learn.
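A two-line sketch makes this concrete (the likelihoods here are arbitrary numbers I chose for illustration): conditionalization moves an ordinary prior a lot, but leaves a prior of exactly 1 untouched no matter how strong the counter-evidence.

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E), where
# P(E) = P(E|H) * P(H) + P(E|~H) * (1 - P(H)).
def update(prior, likelihood_h, likelihood_not_h):
    """Posterior probability of H after conditionalizing on evidence E."""
    p_e = likelihood_h * prior + likelihood_not_h * (1 - prior)
    return likelihood_h * prior / p_e

# Strong counter-evidence: E is 99x more likely if H is false.
print(update(0.9, 0.01, 0.99))  # a 0.9 prior drops to ~0.083
print(update(1.0, 0.01, 0.99))  # a prior of exactly 1 stays at 1.0
```

With prior 1 the `(1 - prior)` term vanishes, so the evidence can never register: that is the "unfortunate prior" in code.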

More generally, I think you might be missing the point of the replies you're getting. Most of them are not-very-detailed hints that you get no such puzzles once you discard traditional epistemological notions such as knowledge, belief, justification, defeaters, etc. (or change the subject from them) and adopt Bayesianism (here, probabilism & conditionalization & algorithmic priors). I am confident this is largely true, at least for your sorts of puzzles. If you want to stick to traditional epistemology, a reasonable-seeming reply to puzzle 2 (more within the traditional epistemology framework) is here: http://www.philosophyetc.net/2011/10/kripke-harman-dogmatism-paradox.html

Comment author: fsopho 12 December 2011 06:28:47PM 1 point [-]

OK, got it, thank you. I have two doubts. (i) Why is a belief with degree 1 not affected by new information that is counter-evidence to that belief? Does that mean every belief with degree 1 I have now will never be lost/defeated/changed? (ii) The difference between what you call traditional epistemology and Bayesianism involves many things. I think one of them is their objectives - the traditional epistemologist and the Bayesian in general have different goals. The first is interested in stating the correct norms of reasoning and of other sources of belief (perception, memory, etc.). The second is perhaps more interested in modelling rational structures for a variety of purposes. That being the case, the puzzles I brought may not be of interest to Bayesians - but that does not mean Bayesianism solves the question of what is the correct thing to do in such cases. Thanks for the link (I already know Harman's approach, which is heavily criticized by Conee and others).

Comment author: Manfred 12 December 2011 04:32:04PM *  4 points [-]

If you downvoted, maybe offer constructive criticism? I feel like you're shooting the messenger, when we should really be shooting (metaphorically) mainstream philosophy for not recognizing the places where these questions have already been solved, rather than publishing more arguments about Gettier problems.

Comment author: fsopho 12 December 2011 05:50:45PM 0 points [-]

I didn't downvote! And I am not shooting the messenger, as I am also sure it is not an argument about Gettier problems. I am sorry if the post offended you - maybe it is better not to mix different views of something.

Comment author: [deleted] 12 December 2011 02:54:53PM *  4 points [-]

Both of these puzzles fall apart if you understand the concepts in Argument Screens Off Authority, A Priori, and Bayes' Theorem. Essentially, the notion of "defeat" is extremely silly. In Puzzle 1, for example, what you should really be doing is updating your level of belief in T based on the mathematician's argument. The order in which you heard the arguments doesn't matter--the two Bayesian updates will still give you the same posterior regardless of which one you update on first.
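That commutativity is easy to check numerically. A sketch, with likelihood ratios I made up for illustration (one piece of evidence favoring T, one against), using the odds form of Bayes' rule:

```python
def update(prior, lr):
    """Conditionalize via odds: posterior odds = prior odds * likelihood ratio."""
    odds = prior / (1 - prior) * lr
    return odds / (1 + odds)

p0 = 0.5           # prior in T
lr_proof = 20.0    # my own argument: evidence for T
lr_math = 0.1      # Ms. Math's testimony: evidence against T

a = update(update(p0, lr_proof), lr_math)  # argument first, testimony second
b = update(update(p0, lr_math), lr_proof)  # testimony first, argument second
print(a, b)  # same posterior either way
```

Because the updates just multiply likelihood ratios onto the odds, and multiplication commutes, the order of the evidence cannot matter.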

Puzzle 2 is similarly confused about "defeat"; the notion of "misleading evidence" in Puzzle 2 is also wrong. If you look at things in terms of probabilities instead of the "known/not known" dichotomy presented in the puzzle, there is no confusion. Just update on the mathematician's argument and be done with it.

In response to comment by [deleted] on two puzzles on rationality of defeat
Comment author: fsopho 12 December 2011 05:39:42PM -1 points [-]

Well, puzzle 2 is a puzzle with a case of knowledge: I know (T). Changing to probabilities does not solve the problem, it only changes it!

Comment author: Zed 12 December 2011 03:14:13PM *  8 points [-]

I second Manfred's suggestion about the use of beliefs expressed as probabilities.

In puzzle (1) you essentially have a proof for T and a proof for ~T. We don't wish the order in which we're exposed to the evidence to influence us, so the correct conclusion is that you should simply be confused*. Thinking in terms of "Belief A defeats belief B" is a bit silly, because you then get situations where you're certain T is true, and the next day you're certain ~T is true, and the day after that you're certain again that T is true after all. So should beliefs defeat each other in this manner? No. Is it rational? No. Does the order in which you're exposed to evidence matter? No.

In puzzle (2) the subject is certain a proposition is true (even though he's still free to change his mind!). However, accepting contradicting evidence leads to confusion (as in puzzle 1), and to mitigate this the construct of "Misleading Evidence" is introduced that defines everything that contradicts the currently held belief as Misleading. This obviously leads to Status Quo Bias of the worst form. The "proof" that comes first automatically defeats all evidence from the future, therefore making sure that no confusion can occur. It even serves as a Universal Counterargument ("If that were true I'd believe it and I don't believe it therefore it can't be true"). This is a pure act of rationalization, not of rationality.

*) meaning that you're not completely confident of T and ~T.

Comment author: fsopho 12 December 2011 05:36:25PM 0 points [-]

Thank you, Zed. You are right: I didn't specify the meaning of 'misleading evidence'. It means evidence to believe something that is false (whether or not the cognitive agent receiving such evidence knows it is misleading). Now, maybe I'm missing something, but I don't see any silliness in thinking in terms of "belief A defeats belief B". On the basis of experiential evidence, I believe there is a tree in front of me. But then I discover I'm drugged with LSD (a friend of mine put it in my coffee earlier, unknown to me). This new piece of information defeats the justification I had for believing there is a tree in front of me - my evidence no longer supports this belief. There is good material on defeasible reasoning and justification on John Pollock's website: http://oscarhome.soc-sci.arizona.edu/ftp/publications.html#reasoning

Comment author: Manfred 12 December 2011 02:53:54PM *  19 points [-]

Yo, deductive logic is a special case of probabilistic logic in the limit that your probabilities for things go to 0 and 1, i.e. you're really sure of things. If I'm really sure that Socrates is a man, and I'm really sure that all men are mortal, then I'm really sure that Socrates is mortal. However, if I am 20% sure that Socrates is a space alien, my information is no longer well-modeled by deductive logic, and I have to use probabilistic logic.

The point is that the conditions for deductive logic have always broken down if you can deduce both T and ~T. This breakdown doesn't (always) mean you can no longer reason. It does mean you should stop trying to use deductive logic, and use probabilistic logic instead. Probabilistic logic is, for various reasons, the right way to reason from incomplete information - deductive logic is just an approximation for when you're really sure of things. Try phrasing your problems with degrees of belief expressed as probabilities, follow the rules, and you will find that the apparent problem has vanished into thin air.
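One way to see deductive logic as the limiting case: a conclusion that follows deductively from two premises inherits, at worst, probability P(premise1) + P(premise2) - 1 (a Fréchet-style bound). A sketch with illustrative numbers:

```python
def conclusion_lower_bound(p_premise1, p_premise2):
    """Worst-case probability of a conclusion that follows
    deductively from two premises (Frechet inequality)."""
    return max(0.0, p_premise1 + p_premise2 - 1.0)

# Near-certain premises: deductive logic is a fine approximation.
print(conclusion_lower_bound(0.99, 0.99))  # ~0.98
# 20% chance Socrates is a space alien: the guarantee weakens.
print(conclusion_lower_bound(0.80, 0.99))  # ~0.79
```

As both premise probabilities go to 1, the bound goes to 1 and you recover ordinary deduction; away from that limit you have to track the probabilities explicitly.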

Welcome to LessWrong!

Comment author: fsopho 12 December 2011 05:16:20PM 0 points [-]

Thank you! Well, you didn't answer the puzzle. The puzzles are not showing that my reasoning is broken because I have evidence to believe T and ~T. The puzzles are asking what is the rational thing to do in such a case - what is the right choice from the epistemological point of view. So, when you answer in puzzle 1 that believing (~T) is the rational thing to do, you must explain why that is so. The same applies to puzzle 2. I don't think that degrees of belief, expressed as probabilities, can solve the problem. Whether my belief is rational or not doesn't seem to depend on its degree. There are cases in which the degree of my belief that P is very low and yet I am rational in believing that P. There are cases where I infer a proposition from a long argument, have no counter-evidence to any premise or to the support relation between premises and conclusion, but still have a low degree of confidence in the conclusion. Degree of belief is a psychological matter, or at least so it appears to me. Nevertheless, even accepting the degree-of-belief model of rational doxastic change, I can conceive the puzzle as one where all the relevant beliefs - (R1), (T), (AME), etc. - have degree 1. Can you explain what is the rational thing to do in each case, and why?

two puzzles on rationality of defeat

4 fsopho 12 December 2011 02:17PM

I present here two puzzles of rationality that you LessWrongers may think are worth dealing with. Maybe the first one looks more amenable to a simple solution, while the second has drawn the attention of a number of contemporary epistemologists (Cargile, Feldman, Harman) and does not look that simple when it comes to a solution. So, let's go to the puzzles!

 

Puzzle 1 

At t1 I justifiably believe theorem T is true, on the basis of a complex argument I have just validly reasoned from the also-justified premises P1, P2 and P3.
So, at t1 I reason from the premises:
 
(R1) P1, P2 ,P3
 
To the known conclusion:
 
(T) T is true
 
At t2, Ms. Math, a well-known authority on the subject matter of which my reasoning and my theorem are just a part, tells me I’m wrong. She tells me the theorem is false, and convinces me of that on the basis of a valid reasoning with at least one false premise, the falsity of that premise being unknown to us.
So, at t2 I reason from the premises (Reliable Math and Testimony of Math):
 
(RM) Ms. Math is a reliable mathematician, and an authority on the subject matter surrounding (T),
 
(TM) Ms. Math tells me T is false, and shows me how that is so, on the basis of a valid reasoning from F, P1, P2 and P3,
 
(R2) F, P1, P2 and P3
 
To the justified conclusion:
 
(~T) T is not true
 
It could be said by some epistemologists that (~T) defeats my previous belief (T). Is it rational for me to reason this way? Am I taking the correct direction of defeat? Wouldn’t it also be rational if (~T) were defeated by (T)? Why does (~T) defeat (T), and not vice-versa? Is it just because (~T)’s justification was obtained at a later time?


Puzzle 2

At t1 I know theorem T is true, on the basis of a complex argument I have just validly reasoned from the known premises P1, P2 and P3. So, at t1 I reason from the known premises:
 
(R1) P1, P2 ,P3
 
To the known conclusion:
 
(T) T is true
 
Besides, I also reason from known premises:
 
(ME) If there is any evidence against something that is true, then it is misleading evidence (evidence for something that is false)
 
(T) T is true
 
To the conclusion (anti-misleading evidence):
 
(AME) If there is any evidence against (T), then it is misleading evidence
 
At t2 the same Ms. Math tells me the same thing. So at t2 I reason from the premises (Reliable Math and Testimony of Math):
 
(RM) Ms. Math is a reliable mathematician, and an authority on the subject matter surrounding (T),
 
(TM) Ms. Math tells me T is false, and shows me how that is so, on the basis of a valid reasoning from F, P1, P2 and P3,
 
But then I reason from:
 
(F*) F, RM and TM are evidence against (T), and
 
(AME) If there is any evidence against (T), then it is misleading evidence
 
To the conclusion:
 
(MF) F, RM and TM are misleading evidence
 
And then I continue to know T and I lose no knowledge, because I know/justifiably believe that the counter-evidence I have just encountered is misleading. Is it rational for me to act this way?
I know (T) and I know (AME) at t1 on the basis of valid reasoning. Then I am exposed to the misleading evidence (Reliable Math), (Testimony of Math) and (F). The evidentialist scheme (and maybe other schemes) supports the thesis that (RM), (TM) and (F) DEFEAT my justification for (T) instead, so that whatever I inferred from (T) is no longer known. However, given my previous knowledge of (T) and (AME), I could know that (MF): F is misleading evidence. Can it still be said that (RM), (TM) and (F) DEFEAT my justification for (T), given that (MF) DEFEATS my justification for (RM), (TM) and (F)?

Comment author: fsopho 07 December 2011 06:29:50PM 5 points [-]

Good afternoon, morning or night! I'm a graduate student in Epistemology. My research is about epistemic rationality, logic and AI. I'm currently investigating the general pattern of epistemic norms and their nature - whether these norms must actually be accessed by the cognitive agent to do their job or not; whether these norms in fact optimize the epistemic goal of having true beliefs and avoiding false ones, or rather just appear to do so; and still other questions. I was navigating the web looking for web-based software to calculate probabilities, and so I found LW, and guess what! I started to read it and couldn't stop - each link sounds exciting and interesting (bias, probability, belief, bayesianism...). So, I happily made an account, and I'm eager to discuss with you guys! Hope I can contribute to LW in some way. We (me and my research partners) have a blog (https://fsopho.wordpress.com) on epistemology and reasoning. We're all together in the search for knowledge, fighting bias and requiring evidence! see ya =]
