"If P, then Q. P. Therefore, Not-Q." is just as basic and elemental an error as "If P, then Q. Q. Therefore, P." is.
I'm not sure I'd grant that. The second can be sneaky, in that you can encounter countless arguments of that form with true premises and a true conclusion. In the first example, on the other hand, true premises guarantee that the conclusion is false.
I'm not sure if there's a word for the latter category, but there probably should be. "The conjunction of the premises is inconsistent with the conclusion" is not nearly as awesome as, say, "Antivalid".
In Bayesian reasoning,
The probability of Q given P is 1. The probability of Q given Not-P is less than 1. The prior probability of P is not 0 and not 1. Q. Therefore, the posterior probability of P is higher than the prior probability of P.
and
The probability of Q given P is greater than 0. The probability of Q given Not-P is 0. The prior probability of P is not 0 and not 1. Not-Q. Therefore, the posterior probability of P is lower than the prior probability of P.
are valid.
The probability of Q given P is 1. The probability of Q given Not-P is less than 1. The prior probability of P is not 0 and not 1. Q. Therefore, the posterior probability of P is lower than the prior probability of P.
and
The probability of Q given P is greater than 0. The probability of Q given Not-P is 0. The prior probability of P is not 0 and not 1. Not-Q. Therefore, the posterior probability of P is higher than the prior probability of P.
are invalid and antivalid.
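To sketch why the first of these is valid (a standard derivation, writing P(Q|P) for the probability of Q given P and Not-P for the negation of P): Bayes' theorem gives

P(P|Q) = P(Q|P)P(P) / [P(Q|P)P(P) + P(Q|Not-P)(1 - P(P))] = P(P) / [P(P) + P(Q|Not-P)(1 - P(P))]

when P(Q|P) = 1. Since P(Q|Not-P) < 1 and 0 < P(P) < 1, the denominator is strictly less than P(P) + (1 - P(P)) = 1, so P(P|Q) > P(P). The second valid form is the same argument with P and Q replaced by Not-P and Not-Q (raising the posterior probability of Not-P is the same as lowering the posterior probability of P), and the two antivalid forms simply assert the opposite of what the valid ones establish.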
Is "If P, then Q. P. Therefore, Not-Q." also just as basic and elemental an error as "P is Fermat's Last Theorem. Therefore, P is false."?
Related: Absence of Evidence is Evidence of Absence or Absence of Evidence Is Too Evidence of Absence (Usually) and Conservation of Expected Evidence.
Is "If P, then Q. P. Therefore, Not-Q." also just as basic and elemental an error as "P is Fermat's Last Theorem. Therefore, P is false."?
No, it's far more basic. "Fermat's Last Theorem" is a very complicated concept which is only being referenced here. The full logical description of the concept - which is what's necessary to evaluate the argument - would be much longer.
In the words of a well known amateur pianist:
If P is true, then Q is true.
Q is true.
Therefore, P becomes more plausible.
But Annoyance was talking about logic, not plausible reasoning or probability theory, right? In terms of Aristotelian deductive logic the two errors quoted are pretty much equivalent.
countless arguments of that form with true premises and a true conclusion.
That's why I edited the post (before your comment) to change "P" to "Therefore, P".
Subtle difference: if P is true, "P" (the statement that P is true) is true. But "Therefore, P" isn't necessarily true, because it references a preceding argument that may (or may not) permit the valid derivation of the conclusion.
In that particular argument, the conclusion does NOT follow from the premises and is false regardless of the value of P.
That's not really how "therefore" is usually used - it's a marker to show where the conclusion is, like drawing a line or using ∴. The point of having the distinction of invalidity is to understand where something might be wrong with an argument even if it has true premises and conclusion; taking "therefore" to mean something in a formal argument doesn't seem fruitful.
It just makes explicit what the other formulation implies: that the statement is said to follow from the premises.
The structure of the argument asserts that in the first version. The structure of the statement asserts it in the second. In terms of the totality of the argument, the two versions mean the same thing, but the shape of their presentation of truth is slightly different. Hopefully the second version reduces the tendency of people to confuse the truth of the statement with the truth of the conclusion.
"If P, then Q. Q. Therefore, P."
I have taught this as a fallacy for the last two semesters in a propositional logic section of my course. However, given the prevalence of this fallacy, as well as the observed resistance of students to assimilating it, I would like to propose that this really isn't a fallacy but a disconnect between what we often naturally mean by "p implies q" and what it technically means in propositional logic.
I argue that what we tend to understand by the statement "p implies q" is that p is the set of things that result in q. This can be tested by the fact that if Q is known to have two causes P1 and P2, and someone says that P1 implies Q, it would be rather natural and logical for another person to "correct" them by adding that P2 also causes Q. But the word "correct" is too strong ... it is rather an augmentation.
I think that, like many statements in ordinary usage, "P implies Q" is a fundamentally Bayesian statement; a person means: "in my experience, P is the set of things that cause Q". The 'in my experience' means that it is possible (but not observed) that Q might not happen after P, and also that other things besides P might cause Q, but again this hasn't been observed or isn't recalled at the moment. A person is normally willing to be flexible on both points in the face of new evidence, but is taught in logic class to be absolute for a moment while considering the truth or falsity of the statement.
So finally, when an untrained person is told that "p implies q" is a true logical statement, in their efforts to be "absolute" they may think that both implied meanings are true: p always causes q AND only p causes q. It takes some training (deprogramming of what it has seemed to mean their whole natural lives) to understand that only the first is intended (p always causes q) when the statement is "True" in formal logic.
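To make the disconnect concrete, here is the truth table for the material conditional (a standard illustration, using the same arrow notation as elsewhere in this thread):

P       Q       P -> Q
true    true    true
true    false   false
false   true    true
false   false   true

The third row is the troublesome one: "P -> Q" and "Q" are both true while "P" is false, so "If P, then Q. Q. Therefore, P." has a counterexample and is invalid. On the natural reading described above - p is the whole set of things that result in q - that row could not occur, and the inference would seem to go through.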
Calling the logical arrow that means "not P or Q" by the name "implication", even if you say "material implication", might've been a bad idea to begin with.
Perhaps a better example of a "fallacy" that is really just a mismatch between expected versus actual meanings is the example of how many doctors fail to accurately estimate the probability of false positives given a positive. It's called the base rate fallacy and works like this (from here):
Given that a woman has a positive result, what is the probability that she actually has breast cancer?
It turns out that the probability she has cancer is only 7.76%, but many doctors would over-estimate this.
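Working through the arithmetic as a rough check (the linked problem's figures: a 1% base rate of breast cancer, an 80% chance of a positive result given cancer, and a 9.6% chance of a positive result given no cancer):

P(cancer | positive) = (0.01 × 0.8) / (0.01 × 0.8 + 0.99 × 0.096) = 0.008 / 0.10304 ≈ 7.76%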
Well, I argue that this is a mismatch between the intellectual way a mathematician would present the problem and the way a doctor experiences medicine. A doctor who frequently gives mammograms would certainly learn over time that most women who have a positive result don't have breast cancer. From their point of view, the occurrence of "false positives" -- i.e., results that were positive but false -- is 92%. Yet on a pen-and-paper test they are told that the rate of false positives is 9.6% and they misinterpret this.
On the one hand, you could just explain very clearly what is meant by this other rate of false positives. Doctors are generally intelligent and can understand this little circumlocution. On the other hand, you could instead give them the more natural figure -- the one that jibes with experience: that 92% of positives are false -- and remove the fallacy altogether.
I think that mathematicians have an obligation to use definitions that jibe with experience (good mathematics always does?), rather than calling common sense "fallacious" when it is actually just being more Bayesian than frequentist.
The page you link includes the "9.6% false positive" usage, but that terminology is preceded by,
9.6% of women without breast cancer will also get positive mammographies
making the interpretation of the phrase clear.
The mismatch isn't intellectual versus experiential in the way you claim. Most people get the problem right when the numbers are stated as frequencies relative to some large number instead of probabilities or percentages, i.e., when the wording primes people to think about counting members in a class.
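For instance, restated as counts with the same mammography figures: out of 10,000 women screened, about 100 have cancer and roughly 80 of those test positive, while of the 9,900 without cancer roughly 950 also test positive. Of the roughly 1,030 positives, only about 80 - under 8% - reflect actual cancer. Framed as counts like this, most people land near the right answer; framed as conditional probabilities, most don't.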
Most people get the problem right when the numbers are stated as frequencies relative to some large number instead of probabilities or percentages, i.e., when the wording primes people to think about counting members in a class.
It's still pretty scary that doctors would have to be primed to get basic statistical inference right (a skill that's pretty essential to what they claim to do). The real world doesn't hand you problems in neat, well-defined packages. You can guess the teacher's password, but not Nature's.
After I got into a warm discussion with some other members of the speech and debate club in high school, I started doing a little research into the field of medicine and its errors.
Long story short: doctors are not the experts most people (including many of them) believe them to be, our system of medicine is really screwed up, and it's not even obvious that we derive a net benefit from medical intervention considered overall.
(It's pretty obvious that some specific interventions are extremely important, but they're quite basic and do not make up the majority of all interventions.)
I was about to lecture you on how wrong you are, until I realized I've never encountered a counterexample.
Please note that I do not rule out the possibility that we derive a net benefit. It's just that it isn't obvious that we do.
A counterexample of my being right? Or a counterexample relating to medicine?
A counterexample of my being right? Or a counterexample relating to medicine?
As in, "I have never encountered a doctor that actually understood the limits of his knowledge and how to appropriately use it, nor a clinical practice that wasn't basically the blind leading the blind."
Okay. I was unsure if your statement was meant to be a personal insult or a comment about medicine - your comments have cleared that up for me.
If I may offer a suggestion:
Access NewsBank from your local library, go to the "search America's newspapers" option, and do some searching for the phrase "nasal radium". There will be lots of duplication. You may find it useful to only search for articles written between 1990 and 1995, just to get a basic understanding of what it was.
Then realize that the vast majority of surgical treatments were introduced in pretty much the same way, and had the same amount of pre-testing, as nasal radium.
I don't infer doctors' actual performances from their responses to a word problem, so I'm not that scared. I don't think byrnema was wrong to claim that
A doctor who frequently gives mammograms would certainly learn over time that most women who have a positive result don't have breast cancer.
Er, the whole point of statistical inference (and intelligence more generally) is that you can get the most knowledge from the least data. In other words, so you can figure stuff out before learning it "the hard way". If doctors "eventually figure out" that most positives don't actually mean cancer, that means poor performance (judged against professionals in general), not good performance!
"Eventually" was byrnema's usage -- I'd bet doctors are probably told outright the positive and negative predictive values of the tests by the test designers.
I see no disagreement. You are describing another way that the numbers could be presented so that they would be understood. I am not suggesting that doctors literally confuse the two ways of defining "false positive", but that the given definition of false positive is apparently so far outside of experience that they are confused/mistaken about how to apply it correctly. My point is that if they actually needed it outside the exam once or twice (i.e., if the result was connected enough with experience to identify the correct or incorrect answer) they would readily learn how to do it.
You have asserted that the reason doctors can accurately tell patients their chances after a diagnostic test even if they perform poorly on the word problem is because they are confused about the term "false positive". But the problem can be phrased without using the word "positive" at all, and people will still get it wrong if it's phrased in terms of probabilities and get it right if it's phrased in terms of relative frequencies. So the fact that doctors can tell patients their chances after a diagnostic test even if they perform poorly on the word problem has nothing to do with them being confused about false positives.
I argue that what we tend to understand by the statement "p implies q" is that p is the set of things that result in q.
But since that's not what the words mean even in standard English, it's clearly a misunderstanding on the part of the students.
since that's not what the words mean even in standard English
Doesn't it depend upon the context?
Suppose the context is some event P. Then we can talk about what things are implied by P, and "P implies Q" has the standard/technical logical meaning. (If P implies Q1, Q2 and Q3 we naturally but not logically expect all 3 in a "true" answer: P -> Q1 ^ Q2 ^ Q3.)
On the other hand, if the context is Q, and we ask "what implies Q?" then we expect a fuller answer for P; P is the set of all things that imply Q: P1 v P2 v P3 -> Q.
Perhaps, generally writing P -> Q as (P v S) <=> (Q ^ T) would more accurately capture all intended meanings (technical and natural). It would be understood that S and T are sets that complete the intended sets on each side as necessary for the "if and only if", and that they could possibly be empty.
(Alas, this would make it no easier to teach. I just stress in class that "P implies Q" means P is one example of things that imply Q, and Q, likewise, need only be one example of things implied by P.)
Doesn't it depend upon the context?
No. "P implies Q", even in regular, everyday English, does not suggest that P is the set of all possible causes for Q. Context doesn't matter.
So I would guess you don't understand why people make the mistake that "if not Q, then also not P". Do you have another hypothesis for the origin of this mistake? (Perhaps there is more than one cause, ha ha.)
Later edit: The first sentence had an obvious error. In the quotes, I meant to write, "if Q, then P" -- or, more symmetrically, "if not P, then also not Q" as the mistake that is often made from "if p then q".
I'm actually in large agreement with you about what "p implies q" means in ordinary English, but can wobble back and forth with some effort. Let me try a little harder to convince you of the interpretation I've been arguing.
Let's suppose you are told, "if P then Q". In everyday life, you can usually take this to mean that if Q then P because P would have caused Q. If Q could instead have been caused by R and R was likely, then why didn't the person say so? Why didn't the person say "if R or P then Q"?
why people make the mistake that "if not Q, then also not P".
Um... I don't think that's a mistake. Given "If P, then Q", the non-existence or falsehood of Q requires that P also not exist / be false. It leads to contradiction, otherwise.
Do you have another hypothesis for the origin of this mistake?
Perhaps people are just not good at processing asymmetrical relations. They may naturally assume, for any relation R, that aRb has the same meaning as bRa. They may not notice that the conclusions they make from the mistake at this level of abstraction contradict their correct understanding at a lower level of abstraction that includes the actual definition of implication.
Interesting, but this doesn't seem true in general. People are pretty good at not confusing aRb and bRa when R is something like "has more status than", for example.
Good point. When the relation is obviously asymmetric, where aRb implies not bRa, this is enough to make people realize it is not symmetric.
So when a therapist demonstrated to a patient that some of their beliefs were incompatible or their arguments were contradictory, the patient might assert that the therapist was the one who had the irrational concern or obsession.
Labeling this as projection seems overbroad. If you and I are arguing and it's pretty clear you get my position/premises, and I assume that I'm totally rational, then you must be irrational. Totally valid argument, though of course the premises may be false.
If I believe non-sentient things are acting "irrationally," i.e. they should be acting the way I want, but they aren't, then "projection" seems legitimate. But it seems wrong to call it "projecting" my irrationality by believing those who disagree with me to be irrational. After all, my reasoning is (potentially) valid, it just might be unsound.
When the patient is utterly unable to produce a rational justification for their behavior, and the therapist has asked reasonable questions based on logically-derived premises, the assertion becomes extremely unreasonable.
When the issue isn't rationality per se, but other concerns - and people begin to insist that others around them have the motivations that their own actions strongly indicate they themselves have - projection seems to be quite obvious.
The formal concept of the fallacious argument was born as the twin of logic itself. When the ancient Greeks first began to systematically examine the natural arguments people made as they sought to demonstrate the truth of propositions, they noted that certain types of arguments were vulnerable to counterexamples while others were not. The vulnerable ones did not deliver what was claimed for them - when they were said to justify a conclusion, they could not rule out the alternatives - and so were identified as fallacious.
Logic can determine whether an argument is valid, but it does not particularly distinguish one fallacy from another - as far as logic is concerned, all invalid arguments are equally invalid. It is a curious fact, despite this, that some fallacies are made by human beings much more frequently than others.
For example, "If P, then Q. P. Therefore, Not-Q." is just as basic and elemental an error as "If P, then Q. Q. Therefore, P." is. But the first fallacy is hardly ever found (humans being what they are, there's probably no mistake within our reach that is never made) while the second is extraordinarily common.
It is in fact generally true that we often confuse a unidirectional implication with a bidirectional one. If something implies another thing, we leap to the conclusion that the second also implies the first, that the connection is equally strong each way, even though it is fairly trivial to demonstrate that this isn't necessarily the case. This error seems to be inherent to human intuition, as it occurs across contexts and subjects quite regularly, even when people are aware that it's logically invalid; only careful examination counters this tendency.
Much later, Sigmund Freud began to identify ways that people would deny assertions that they found emotionally threatening - what we now call 'psychological defense mechanisms'. The flaws in Freud's work as a whole are not directly relevant to this discussion and are beyond the scope of this site in any case. Suffice it to say that therapists and psychologists do not consider his theories to be either true or useful, that they do consider them to be unscientific and a self-reinforcing belief system, and that many of the concepts he introduced that have been taken up into the culture at large are invalid. Not all of his work is so flawed, though - particularly his early ideas.
There is a peculiar relationship between the nature of those defense mechanisms and the intuitive fallacies.
When confronted with a contradiction in their emotionally-charged arguments, people who normally reasoned quite appropriately would suddenly begin to fall into fallacies. What's more, they would be unable to see the errors in their reasoning. Even more extraordinarily, they would often reach conclusions which were superficially related to the correct ones, but which were applied to the wrong concepts, situations, and individuals. In projection, for example, motives and traits belonging to the patients are instead asserted to belong to others or even the world itself; properties within the psyche are "projected" outward. So when a therapist demonstrated to a patient that some of their beliefs were incompatible or their arguments were contradictory, the patient might assert that the therapist was the one who had the irrational concern or obsession.
In such cases it seems clear that there is an awareness of some kind that the unpleasant conclusion must be reached, but not of where the property must be attributed. Accusing the therapist of possessing the unacceptable property seems to reduce tension of some sort - it's a relief that people actively seek out and will vehemently, even violently, defend.
Guilt, hate, fear, forbidden joys and loves - there are countless ways people will deny that they possess them. But they all tend to fall into certain predictable patterns, just as the wild diversity of snowflakes still showcases repeating and similar forms.
Why is this the case? Ultimately, it took research into concept formation before psychology could really produce an answer to that question.
Next time: associational thought and the implications for rationality.