Less Wrong is a community blog devoted to refining the art of human rationality.

Fallacies as weak Bayesian evidence

59 Kaj_Sotala 18 March 2012 03:53AM

Abstract: Exactly what is fallacious about a claim like "ghosts exist because no one has proved that they do not"? And why does a claim with the same logical structure, such as "this drug is safe because we have no evidence that it is not", seem more plausible? Looking at various fallacies (the argument from ignorance, circular arguments, and the slippery slope argument), we find that they can be analyzed in Bayesian terms, and that people are generally more convinced by arguments which provide greater Bayesian evidence. Arguments which provide only weak evidence, though often evidence nonetheless, are considered fallacies.

As a Nefarious Scientist, Dr. Zany is often teleconferencing with other Nefarious Scientists. Negotiations about things such as "when we have taken over the world, who's the lucky bastard who gets to rule over Antarctica" will often turn tense and stressful. Dr. Zany knows that stress makes it harder to evaluate arguments logically. To make things easier, he would like to build a software tool that would monitor the conversations and automatically flag any fallacious claims as such. That way, if he's too stressed out to realize that an argument offered by one of his colleagues is actually wrong, the software will work as backup to warn him.

Unfortunately, it's not easy to define what counts as a fallacy. At first, Dr. Zany tried looking at the logical form of various claims. An early example that he considered was "ghosts exist because no one has proved that they do not", which felt clearly wrong, an instance of the argument from ignorance. But when he programmed his software to warn him about sentences like that, it ended up flagging the claim "this drug is safe, because we have no evidence that it is not". Hmm. That claim felt somewhat weak, but it didn't feel obviously wrong the way that the ghost argument did. Yet they shared the same structure. What was the difference?

The argument from ignorance

Related posts: Absence of Evidence is Evidence of Absence, But Somebody Would Have Noticed!

One kind of argument from ignorance is based on negative evidence. It assumes that if the hypothesis of interest were true, then experiments made to test it would show positive results. If a drug were toxic, tests of toxicity would reveal this. Whether or not this argument is valid depends on whether the tests would indeed show positive results, and with what probability.

With some thought and help from AS-01, Dr. Zany identified three intuitions about this kind of reasoning.

1. Prior beliefs influence whether or not the argument is accepted.

A) I've often drunk alcohol, and never gotten drunk. Therefore alcohol doesn't cause intoxication.

B) I've often taken Acme Flu Medicine, and never gotten any side effects. Therefore Acme Flu Medicine doesn't cause any side effects.

Both of these are examples of the argument from ignorance, and both seem fallacious. But B seems much more compelling than A, since we know that alcohol causes intoxication, while we also know that not all kinds of medicine have side effects.

2. The more evidence found that is compatible with the conclusions of these arguments, the more acceptable they seem to be.

C) Acme Flu Medicine is not toxic because no toxic effects were observed in 50 tests.

D) Acme Flu Medicine is not toxic because no toxic effects were observed in 1 test.

C seems more compelling than D.

3. Negative arguments are acceptable, but they are generally less acceptable than positive arguments.

E) Acme Flu Medicine is toxic because a toxic effect was observed (positive argument)

F) Acme Flu Medicine is not toxic because no toxic effect was observed (negative argument, the argument from ignorance)

Argument E seems more convincing than argument F, but F is somewhat convincing as well.

"Aha!" Dr. Zany exclaims. "These three intuitions share a common origin! They bear the signatures of Bayonet reasoning!"

"Bayesian reasoning", AS-01 politely corrects.

"Yes, Bayesian! But, hmm. Exactly how are they Bayesian?"
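The three intuitions can be reproduced with a quick numerical sketch. All of the probabilities below (test sensitivity, false-positive rate, priors) are illustrative assumptions, not figures from the post; the point is only the qualitative pattern.

```python
# Bayesian analysis of the argument from ignorance, under assumed numbers:
# s = P(test shows toxicity | drug is toxic)      (test sensitivity, assumed 0.30)
# f = P(test shows toxicity | drug is not toxic)  (false-positive rate, assumed 0.05)

def p_toxic_after_negatives(prior, n, s=0.30, f=0.05):
    """Posterior P(toxic) after n tests that all came back negative."""
    like_toxic = (1 - s) ** n   # a toxic drug evades all n tests
    like_safe = (1 - f) ** n    # a safe drug passes all n tests
    return prior * like_toxic / (prior * like_toxic + (1 - prior) * like_safe)

def p_toxic_after_one_positive(prior, s=0.30, f=0.05):
    """Posterior P(toxic) after a single positive test."""
    return prior * s / (prior * s + (1 - prior) * f)

# Intuition 1: the prior matters. The same five negative tests leave a
# high-prior hypothesis (alcohol intoxicates) still plausible, while
# pushing a low-prior one (this medicine has side effects) much lower.
print(p_toxic_after_negatives(prior=0.9, n=5))
print(p_toxic_after_negatives(prior=0.1, n=5))

# Intuition 2: more negative tests give stronger evidence of safety.
print(p_toxic_after_negatives(prior=0.5, n=1))
print(p_toxic_after_negatives(prior=0.5, n=50))

# Intuition 3: one positive test moves the posterior further than one
# negative test, because its likelihood ratio (0.30 / 0.05 = 6) is much
# larger than the negative test's (0.95 / 0.70, about 1.36).
print(p_toxic_after_one_positive(prior=0.5))
print(p_toxic_after_negatives(prior=0.5, n=1))
```

In this framing the argument from ignorance is not invalid, just weak: a negative test is genuine Bayesian evidence of safety, but with a likelihood ratio close to 1 it takes many such tests, or a favorable prior, before the conclusion becomes compelling.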


Privileging the Hypothesis

57 Eliezer_Yudkowsky 29 September 2009 12:40AM

Suppose that the police of Largeville, a town with a million inhabitants, are investigating a murder in which there are few or no clues—the victim was stabbed to death in an alley, and there are no fingerprints and no witnesses.

Then, one of the detectives says, "Well... we have no idea who did it... no particular evidence singling out any of the million people in this city... but let's consider the hypothesis that this murder was committed by Mortimer Q. Snodgrass, who lives at 128 Ordinary Ln.  It could have been him, after all."

I'll label this the fallacy of privileging the hypothesis.  (Do let me know if it already has an official name—I can't recall seeing it described.)

Now the detective may perhaps have some form of rational evidence which is not legal evidence admissible in court—hearsay from an informant, for example.  But if the detective does not have some justification already in hand for promoting Mortimer to the police's special attention—if the name is pulled entirely out of a hat—then Mortimer's rights are being violated.

And this is true even if the detective is not claiming that Mortimer "did" do it, but only asking the police to spend time pondering that Mortimer might have done it—unjustifiably promoting that particular hypothesis to attention.  It's human nature to look for confirmation rather than disconfirmation.  Suppose that three detectives each suggest their hated enemies, as names to be considered; and Mortimer is brown-haired, Frederick is black-haired, and Helen is blonde.  Then a witness is found who says that the person leaving the scene was brown-haired.  "Aha!" say the police.  "We previously had no evidence to distinguish among the possibilities, but now we know that Mortimer did it!"
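The size of the unjustified promotion can be made concrete in bits. The sketch below assumes a flat prior over Largeville's million residents (an illustrative assumption, not a claim from the post) and measures how much evidence it would take to honestly raise one resident to even modest suspicion.

```python
import math

def bits_of_evidence(prior, posterior):
    """Bits of evidence needed to move the odds from prior to posterior."""
    prior_odds = prior / (1 - prior)
    post_odds = posterior / (1 - posterior)
    return math.log2(post_odds / prior_odds)

population = 1_000_000
prior = 1 / population  # flat prior: any resident equally likely

# Treating Mortimer as even a 1-in-10 suspect implicitly claims
# roughly 17 bits of evidence -- evidence the detective does not have.
print(bits_of_evidence(prior, 0.1))
```

Singling out one name from a million is itself a claim of about 20 bits of evidence, which is why pulling the name out of a hat, rather than any later reasoning about Mortimer, is where the fallacy occurs.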


She Blinded Me With Science

13 Jonathan_Graehl 04 August 2009 07:10PM

Scrutinize claims of scientific fact in support of opinion journalism.

Even with honest intent, it's difficult to apply science correctly, and it's rare that dishonest uses are punished. Citing a scientific result gives an easy patina of authority, which is rarely scratched by a casual reader. Without actually lying, the arguer may select from dozens of studies only the few with the strongest effect in their favor, when the overall body of evidence may point at no effect or even in the opposite direction. The reader only sees "statistically significant evidence for X". In some fields, the majority of published studies claim unjustified significance in order to gain publication, inciting these abuses.
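The arithmetic behind this selection effect is simple. Assuming a conventional p &lt; 0.05 threshold and a pool of independent studies of a genuinely null effect (both assumptions for illustration), the chance of finding at least one "significant" result to cite grows quickly with the size of the pool:

```python
def p_at_least_one_significant(n_studies, alpha=0.05):
    """Chance that at least one of n independent studies of a null
    effect crosses the p < alpha threshold by luck alone."""
    return 1 - (1 - alpha) ** n_studies

# With 20 null studies to pick from, the odds of finding at least one
# "statistically significant" result are close to two in three.
print(p_at_least_one_significant(20))
```

So an arguer who can select from a few dozen studies can nearly always produce "statistically significant evidence for X" even when the true effect is zero, which is exactly why the overall body of evidence, not the cited study, is what matters.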

Here are two recent examples:

Women are often better communicators because their brains are more networked for language. The majority of women are better at "mind-reading," than most men; they can read the emotions written on people's faces more quickly and easily, a talent jump-started by the vast swaths of neural real estate dedicated to processing emotions in the female brain.

- Susan Pinker, a psychologist, in the NYT's "Do Women Make Better Bosses?"

Twin studies and adoptive studies show that the overwhelming determinant of your weight is not your willpower; it's your genes. The heritability of weight is between .75 and .85. The heritability of height is between .9 and .95. And the older you are, the more heritable weight is.

- Megan McArdle, linked from the LW article The Obesity Myth


Avoiding Failure: Fallacy Finding

10 Patrick 03 July 2009 05:59PM

When I was in high school, one of the exercises we did was to take a newspaper column and find all of the fallacies it employed. It was a fun thing to do, and is good awareness raising for critical thinking, but it probably wouldn't be enough to stave off being deceived by an artful propagandist unless I did it until it was reflexive. To catch a fallacy being committed, I usually have to read a sentence three or four times to see the underlying logic behind it and remember why the logic is invalid. When I'm confronted by something as fallacy-ridden as an ad for the Love Calculator, I just give up in exhaustion. Worse, when I'm watching television, I can't even rewind to see what they said (I suspect the fallacy count is higher too).

To counter this, (and to further hone my fallacy finding skills), I've extended the fallacy finding exercise to work on video. Take a video from a genre that generally has a high fallacy per minute ratio (e.g. Campaign ads, political debates, speeches, regular ads, Oprah) and edit the video to play a klaxon sound whenever someone commits a logical fallacy or gets a fact wrong, followed by the name of the fallacy they committed flashing on screen.

EDIT: I've made one of these and uploaded it to YouTube. Thank you Eliezer and CannibalSmith for the encouragement. You can find other debates at CNN, and YouTube lets you do annotations, so no editing software is technically required. I'll be posting further videos to this post as I make/find them.

Catchy Fallacy Name Fallacy (and Supporting Disagreement)

23 JGWeissman 21 May 2009 06:01AM

Related: The Pascal's Wager Fallacy Fallacy, The Fallacy Fallacy

Inspired by:

We need a catchy name for the fallacy of being over-eager to accuse people of fallacies that you have catchy names for.

 

When you read an argument you don't like, but don't know how to attack on its merits, there is a trick you can turn to. Just say it commits1 some fallacy, preferably one with a clever name. Others will side with you, not wanting to associate themselves with a fallacy. Don't bother to explain how the fallacy applies, just provide a link to an article about it, and let stand the implication that people should be able to figure it out from the link. It's not like anyone would want to expose their ignorance by asking for an actual explanation.

What a horrible state of affairs I have described in the last paragraph. It seems, if we follow that advice, that every fallacy we even know the name of makes us stupider. So, I present a fallacy name that I hope will exactly counterbalance the effects I described. If you are worried that you might defend an argument that has been accused of committing some fallacy, you should be equally worried that you might support an accusation that commits the Catchy Fallacy Name Fallacy. Well, now that you have that problem either way, you might as well try to figure out whether the argument did indeed commit the fallacy, by examining the actual details of the fallacy and whether they actually describe the argument.

But, what is the essence of this Catchy Fallacy Name Fallacy? The problem is not the accusation of committing a fallacy itself, but that the accusation is vague. The essence is "Don't bother to explain". The way to avoid this problem is to entangle your counterargument, whether it makes a fallacy accusation or not, with the argument you intend to refute. Your counterargument should distinguish good arguments from bad arguments, in that it specifies criteria that systematically apply to a class of bad arguments but not to good arguments. And those criteria should be matched up with details of the allegedly bad argument.

The wrong way:

It seems that you've committed the Confirmation Bias.

The right way:

The Confirmation Bias is when you find only confirming evidence because you only look for confirming evidence. You looked only for confirming evidence by asking people for stories of their success with Technique X.

Notice how the right way would seem very out of place when applied against an argument it does not fit. This is what I mean when I say the counterargument should distinguish the allegedly bad argument from good arguments.

And, if someone commits the Catchy Fallacy Name Fallacy in trying to refute your arguments, or even someone else's, call them on it. But don't just link here, you wouldn't want to commit the Catchy Fallacy Name Fallacy Fallacy. Ask them how their counterargument distinguishes the allegedly bad argument from arguments that don't have the problem.

 

1 Of course, when I say that an argument commits a fallacy, I really mean that the person who made that argument, in doing so, committed the fallacy.