Yo, deductive logic is a special case of probabilistic logic in the limit that your probabilities for things go to 0 and 1, i.e. you're really sure of things. If I'm really sure that Socrates is a man, and I'm really sure that all men are mortal, then I'm really sure that Socrates is mortal. However, if I am 20% sure that Socrates is a space alien, my information is no longer well-modeled by deductive logic, and I have to use probabilistic logic.
The point is that the conditions for deductive logic have broken down whenever you can deduce both T and ~T. This breakdown doesn't (always) mean you can no longer reason. It does mean you should stop trying to use deductive logic and use probabilistic logic instead. Probabilistic logic is, for various reasons, the right way to reason from incomplete information - deductive logic is just an approximation for when you're really sure of things. Try phrasing your problems with degrees of belief expressed as probabilities, follow the rules, and you will find that the apparent problem has vanished into thin air.
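To make the probabilistic version concrete, here is a minimal sketch of the update in puzzle 1. The numbers are made up for illustration, and the `update` helper is hypothetical, not an established API: it just applies Bayes' rule to a prior credence in T and the evidence of a reliable expert's testimony that T is false.

```python
def update(prior, p_evidence_given_T, p_evidence_given_not_T):
    """Bayes' rule: P(T | E) = P(E | T) P(T) / P(E)."""
    numerator = p_evidence_given_T * prior
    denominator = numerator + p_evidence_given_not_T * (1 - prior)
    return numerator / denominator

prior_T = 0.95               # quite confident my proof of T is sound
p_testimony_if_T = 0.05      # a reliable expert rarely denies a true theorem
p_testimony_if_not_T = 0.90  # she very likely denies a false one

posterior_T = update(prior_T, p_testimony_if_T, p_testimony_if_not_T)
# Confidence in T drops substantially but not to zero: neither belief
# "defeats" the other outright; both pieces of evidence are weighed.
print(posterior_T)
```

On these (assumed) numbers the testimony cuts my credence in T roughly in half rather than flipping it to certainty in ~T, which is the probabilistic answer to "which belief defeats which": neither does, they are simply combined.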
Welcome to LessWrong!
Thank you! Well, you didn't answer the puzzles. The puzzles are not showing that my reasoning is broken because I have evidence to believe both T and ~T. The puzzles are asking what the rational thing to do is in such a case - what the right choice is from the epistemological point of view. So, when you answer in puzzle 1 that believing (~T) is the rational thing to do, you must explain why that is so. The same applies to puzzle 2. I don't think that degrees of belief, expressed as probabilities, can solve the problem. Whether my belief is rational or not ...
I present here two puzzles of rationality that you LessWrongers may think are worth dealing with. Maybe the first one looks more amenable to a simple solution, while the second has caught the attention of a number of contemporary epistemologists (Cargile, Feldman, Harman) and does not look that simple when it comes to a solution. So, let's go to the puzzles!
Puzzle 1
At t1 I justifiably believe theorem T is true, on the basis of a complex argument I have just validly reasoned through from the justified premises P1, P2 and P3.
So, at t1 I reason from premises:
(R1) P1, P2, P3
To the known conclusion:
(T) T is true
At t2, Ms. Math, a well-known authority on the subject matter of which my reasoning and my theorem are just a part, tells me I’m wrong. She tells me the theorem is just false, and convinces me of that on the basis of a valid reasoning with at least one false premise, the falsity of that premise being unknown to us.
So, at t2 I reason from premises (Reliable Math and Testimony of Math):
(RM) Ms. Math is a reliable mathematician, and an authority on the subject matter surrounding (T),
(TM) Ms. Math tells me T is false, and shows me how that is so, on the basis of a valid reasoning from F, P1, P2 and P3,
(R2) F, P1, P2 and P3
To the justified conclusion:
(~T) T is not true
It could be said by some epistemologists that (~T) defeats my previous belief (T). Is it rational for me to reason this way? Am I taking the correct direction of defeat? Wouldn’t it also be rational if (~T) were defeated by (T)? Why does (~T) defeat (T), and not vice versa? Is it just because (~T)’s justification was obtained at a later time?
Puzzle 2
At t1 I know theorem T is true, on the basis of a complex argument I have just validly reasoned through, with known premises P1, P2 and P3. So, at t1 I reason from known premises:
(R1) P1, P2, P3
To the known conclusion:
(T) T is true
Besides, I also reason from known premises:
(ME) If there is any evidence against something that is true, then it is misleading evidence (evidence for something that is false)
(T) T is true
To the conclusion (anti-misleading evidence):
(AME) If there is any evidence against (T), then it is misleading evidence
At t2 the same Ms. Math tells me the same thing. So at t2 I reason from premises (Reliable Math and Testimony of Math):
(RM) Ms. Math is a reliable mathematician, and an authority on the subject matter surrounding (T),
(TM) Ms. Math tells me T is false, and shows me how that is so, on the basis of a valid reasoning from F, P1, P2 and P3,
But then I reason from:
(F*) F, RM and TM are evidence against (T), and
(AME) If there is any evidence against (T), then it is misleading evidence
To the conclusion:
(MF) F, RM and TM are misleading evidence
And then I continue to know T and I lose no knowledge, because I know/justifiably believe that the counter-evidence I just met is misleading. Is it rational for me to act this way?
I know (T) and I know (AME) at t1 on the basis of valid reasoning. Then I am exposed to the misleading evidence (Reliable Math), (Testimony of Math) and (F). The evidentialist scheme (and maybe still other schemes) supports the thesis that (RM), (TM) and (F) DEFEAT my justification for (T) instead, so that whatever I inferred from (T) is no longer known. However, given my previous knowledge of (T) and (AME), I could know that (MF): F is misleading evidence. Can it still be said that (RM), (TM) and (F) DEFEAT my justification for (T), given that (MF) DEFEATS my justification for (RM), (TM) and (F)?