Abstract: Exactly what is fallacious about a claim like "ghosts exist because no one has proved that they do not"? And why does a claim with the same logical structure, such as "this drug is safe because we have no evidence that it is not", seem more plausible? Looking at various fallacies - the argument from ignorance, circular arguments, and the slippery slope argument - we find that they can be analyzed in Bayesian terms, and that people are generally more convinced by arguments which provide greater Bayesian evidence. Arguments which provide only weak evidence, though often evidence nonetheless, are considered fallacies.

As a Nefarious Scientist, Dr. Zany is often teleconferencing with other Nefarious Scientists. Negotiations about things such as "when we have taken over the world, who's the lucky bastard who gets to rule over Antarctica" will often turn tense and stressful. Dr. Zany knows that stress makes it harder to evaluate arguments logically. To make things easier, he would like to build a software tool that would monitor the conversations and automatically flag any fallacious claims as such. That way, if he's too stressed out to realize that an argument offered by one of his colleagues is actually wrong, the software will work as backup to warn him.

Unfortunately, it's not easy to define what counts as a fallacy. At first, Dr. Zany tried looking at the logical form of various claims. An early example that he considered was "ghosts exist because no one has proved that they do not", which felt clearly wrong, an instance of the argument from ignorance. But when he programmed his software to warn him about sentences like that, it ended up flagging the claim "this drug is safe, because we have no evidence that it is not". Hmm. That claim felt somewhat weak, but it didn't feel obviously wrong the way that the ghost argument did. Yet they shared the same structure. What was the difference?

The argument from ignorance

Related posts: Absence of Evidence is Evidence of Absence, But Somebody Would Have Noticed!

One kind of argument from ignorance is based on negative evidence. It assumes that if the hypothesis of interest were true, then experiments made to test it would show positive results. If a drug were toxic, tests of toxicity would reveal this. Whether or not this argument is valid depends on whether the tests would indeed show positive results, and with what probability.

With some thought and help from AS-01, Dr. Zany identified three intuitions about this kind of reasoning.

1. Prior beliefs influence whether or not the argument is accepted.

A) I've often drunk alcohol, and never gotten drunk. Therefore alcohol doesn't cause intoxication.

B) I've often taken Acme Flu Medicine, and never gotten any side effects. Therefore Acme Flu Medicine doesn't cause any side effects.

Both of these are examples of the argument from ignorance, and both seem fallacious. But B seems much more compelling than A, since we know that alcohol causes intoxication, while we also know that not all kinds of medicine have side effects.

2. The more evidence found that is compatible with the conclusions of these arguments, the more acceptable they seem to be.

C) Acme Flu Medicine is not toxic because no toxic effects were observed in 50 tests.

D) Acme Flu Medicine is not toxic because no toxic effects were observed in 1 test.

C seems more compelling than D.

3. Negative arguments are acceptable, but they are generally less acceptable than positive arguments.

E) Acme Flu Medicine is toxic because a toxic effect was observed (positive argument)

F) Acme Flu Medicine is not toxic because no toxic effect was observed (negative argument, the argument from ignorance)

Argument E seems more convincing than argument F, but F is somewhat convincing as well.

"Aha!" Dr. Zany exclaims. "These three intuitions share a common origin! They bear the signatures of Bayonet reasoning!"

"Bayesian reasoning", AS-01 politely corrects.

"Yes, Bayesian! But, hmm. Exactly how are they Bayesian?"


Note: To keep this post as accessible as possible, I attempt to explain the underlying math without actually using any math. If you would rather see the math, please see the paper referenced at the end of the post.

As a brief reminder, the essence of Bayes' theorem is that we have different theories about the world, and the extent to which we believe in these theories varies. Each theory also has implications about what you expect to observe in the world (or at least it should have such implications). The extent to which an observation makes us update our beliefs depends on how likely our theories say the observation should be. Dr. Zany has a strong belief that his plans will basically always succeed, and this theory says that his plans are very unlikely to fail. Therefore, when they do fail, he should revise his belief in the "I will always succeed" theory down. (So far he hasn't made that update, though.) If this isn't completely intuitive to you, I recommend komponisto's awesome visualization.
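(For readers who do want to see a few numbers after all, here is a minimal sketch of a single update in Python. The helper function and all of the probabilities are invented purely for illustration.)

```python
def update(prior, p_obs_given_h, p_obs_given_not_h):
    """One Bayesian update: return P(H | observation)."""
    numerator = p_obs_given_h * prior
    return numerator / (numerator + p_obs_given_not_h * (1 - prior))

# Invented numbers: Dr. Zany starts out 90% confident that his plans always succeed.
# That theory says a failed plan is very unlikely (5%); the rival theory says 60%.
posterior = update(prior=0.9, p_obs_given_h=0.05, p_obs_given_not_h=0.60)
print(posterior)  # ~0.43 -- one observed failure should substantially lower his confidence
```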

Now let's look at each of the above intuitions in terms of Bayes' theorem.

1. Prior beliefs influence whether or not the argument is accepted. This is pretty straightforward - the expression "prior beliefs" is even there in the description of the intuition. Suppose that we hear the argument, "I've often drunk alcohol, and never gotten drunk. Therefore alcohol doesn't cause intoxication". The fact that this person has never gotten drunk from alcohol (or at least claims that he hasn't) is evidence for alcohol not causing any intoxication, but we still have a very strong prior belief for alcohol causing intoxication. Updating on this evidence, we find that our beliefs in both the theory "this person is mistaken or lying" and the theory "alcohol doesn't cause intoxication" have become stronger. Due to its higher prior probability, "this person is mistaken or lying" seems the more plausible of the two, so we do not consider this a persuasive argument for alcohol not being intoxicating.
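As a rough sketch of why the prior wins here (all priors and likelihoods invented), we can compare the posterior odds of the two explanations for the same report:

```python
# Evidence E: "I've often drunk alcohol and never gotten drunk."
# Invented numbers, just to show the shape of the comparison.
p_mistaken_or_lying = 0.05   # prior: people are sometimes mistaken or lying about this sort of thing
p_alcohol_harmless  = 1e-6   # prior: alcohol doesn't cause intoxication

p_E_if_mistaken = 0.5        # such a report is fairly likely if they're mistaken or lying
p_E_if_harmless = 0.9        # and almost guaranteed if alcohol really were harmless

posterior_odds = (p_mistaken_or_lying * p_E_if_mistaken) / (p_alcohol_harmless * p_E_if_harmless)
print(posterior_odds)  # ~28,000 to 1 in favour of "mistaken or lying"
```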

2. The more evidence found that is compatible with the conclusions of these arguments, the more acceptable they seem to be. This too is a relatively straightforward consequence of Bayes' theorem. In terms of belief updating, we might encounter 50 pieces of evidence, one at a time, and make 50 small updates. Or we might encounter all of the 50 pieces of evidence at once, and perform one large update. The end result should be the same. More evidence leads to larger updates.
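A quick sketch of this equivalence (the likelihood ratio of 1.2 per test is an invented number):

```python
def update(prior, likelihood_ratio):
    """Update P(H) given a likelihood ratio P(E|H) / P(E|not-H)."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

prior, lr = 0.5, 1.2   # each negative toxicity test is modest evidence for "not toxic"

sequential = prior
for _ in range(50):                  # 50 small updates, one test at a time
    sequential = update(sequential, lr)

batch = update(prior, lr ** 50)      # one large update on all 50 tests at once

print(sequential, batch)             # both ~0.9999 -- the end result is the same
```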

3. Negative arguments are acceptable, but they are generally less acceptable than positive arguments. This one needs a little explaining, and here we need the concepts of sensitivity and specificity. A test for something (say, a disease) is sensitive if it always gives a positive result when the disease is present, and specific if it only gives a positive result when the disease is present. There's a trade-off between these two. For instance, an airport metal detector is designed to alert its operators if a person carries dangerous metal items. It is sensitive, because nearly any metal item will trigger an alarm - but it is not very specific, because even non-dangerous items will trigger an alarm.

A test which is both extremely sensitive and extremely non-specific is not very useful, since it will give more false alarms than true ones. An easy way of creating an extremely sensitive "test for disease" is to simply always say that the patient has the disease. This test has 100% sensitivity (it always gives a positive result, so it always gives a positive result when the disease is present, as well), but its specificity is very low - equal to the prevalence rate of the disease. It provides no information, and therefore isn't really a test at all.
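Here is a sketch of why the "always say yes" test is useless (the prevalence is an invented number): a positive result is equally likely whether or not the disease is present, so the likelihood ratio is 1 and the posterior equals the prior.

```python
def p_disease_given_positive(prior, p_pos_if_disease, p_pos_if_healthy):
    """P(disease | positive test result), via Bayes' theorem."""
    num = p_pos_if_disease * prior
    return num / (num + p_pos_if_healthy * (1 - prior))

prior = 0.02  # invented prevalence

# The "always say yes" test: a positive result is certain either way.
print(p_disease_given_positive(prior, 1.0, 1.0))    # 0.02 -- the prior is unchanged

# A test that is sensitive and also reasonably specific, for contrast.
print(p_disease_given_positive(prior, 0.9, 0.05))   # ~0.27
```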

How is this related to our intuition about negative and positive arguments? In short, our environment is such that, like the airport metal detector, negative evidence often has high sensitivity but low specificity. We intuitively expect that a test for toxicity might not always reveal a drug to be toxic, but if it does, then the drug really is toxic. A lack of a "toxic" result is what we would expect if the drug weren't toxic, but it's also what we would expect in a lot of cases where the drug was toxic. Thus, negative evidence is evidence, but it's usually much weaker than positive evidence.
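With invented numbers for a single toxicity test, the asymmetry looks like this:

```python
# Invented: a toxic drug produces a visible toxic effect in 30% of tests,
# a safe drug produces a false alarm in 1% of tests.
p_effect_if_toxic = 0.30
p_effect_if_safe  = 0.01

lr_positive = p_effect_if_toxic / p_effect_if_safe                 # evidence for "toxic"
lr_negative = (1 - p_effect_if_safe) / (1 - p_effect_if_toxic)     # evidence for "not toxic"

print(lr_positive)  # 30.0  -- a positive result is strong evidence of toxicity
print(lr_negative)  # ~1.4  -- a negative result is evidence of safety, but much weaker
```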

"So, umm, okay", Dr. Zany says, after AS-01 has reminded him of the way Bayes' theorem works, and helped him figure out how his intuitions about the fallacies have Bayes-structure. "But let's not lose track of what we were doing, which is to say, building a fallacy-detector. How can we use this to say whether a given claim is fallacious?"

"What this suggests is that we judge a claim to be a fallacy if it's only weak Bayesian evidence", AS-01 replies. "A claim like 'an unreliable test of toxicity didn't reveal this drug to be toxic, so it must be safe' is such weak evidence that we consider it fallacious. Also, if we have a very strong prior belief against something, and a claim doesn't shift this prior enough, then we might call it a 'fallacy' to believe in the thing on the basis of that claim. That was the case with the 'I've had alcohol many times and never gotten drunk, so alcohol must not be intoxicating' claim."

"But that's not what I was after at all! In that case I can't program a simple fallacy-detector: I'd have to implement a full-blown artificial intelligence that could understand the conversation, analyze the prior probabilities of various claims, and judge the weight of evidence. And even if I did that, it wouldn't help me figure out what claims were fallacies, because all of my AIs only want to eradicate the color blue from the universe! Hmm. But maybe the appeal from ignorance was a special case, and other fallacies are more accomodating. How about circular claims? Those must surely be fallacious?"

Circularity

A. God exists because the Bible says so, and the Bible is the word of God.

B. Electrons exist because we can see 3-cm tracks in a cloud chamber, and 3-cm tracks in cloud chambers are signatures of electrons.

"Okay, we have two circular claims here", AS-01 notes. "Their logical structure seems to be the same, but we judge one of them to be a fallacy, while the other seems to be okay."

"I have a bad feeling about this", Dr. Zany says.

The argument for the fallaciousness of the above two claims is that they presume the conclusion in the premises. That is, it is presumed that the Bible is the word of God, but that is only possible if God actually exists. Likewise, if electrons don't exist, then whatever we see in the cloud chamber isn't the signature of electrons. Thus, in order to believe the conclusion, we need to already believe it as an implicit premise.

But from a Bayesian perspective, beliefs aren't binary propositions: we can tentatively believe in a hypothesis, such as the existence of God or electrons. In addition to this tentative hypothesis, we have sense data about the existence of the Bible and the 3-cm tracks. This data we take as certain. We also have a second tentative belief, the ambiguous interpretation of this sense data as the word of God or the signature of electrons. The sense data is ambiguous in the sense that it might or might not be the word of God. So we have three components in our inference: the evidence (Bible, 3-cm tracks), the ambiguous interpretation (the Bible is the word of God, the 3-cm tracks are signatures of electrons), and the hypothesis (God exists, electrons exist).

We can conjecture a causal connection between these three components. Let's suppose that God exists (the hypothesis). This then causes the Bible to exist as his word (ambiguous interpretation), which in turn gives rise to the actual document in front of us (sense data). Likewise, if electrons exist (hypothesis), then this can give rise to the predicted signature effects (ambiguous interpretation), which become manifest as what we actually see in the cloud chamber (sense data).

The "circular" claim reverses the direction of the inference. We have sense data, which we would expect to see if the ambiguous interpretation was correct, and we would expect the interpretation to be correct if the hypothesis were true. Therefore it's more likely that the hypothesis is true. Is this allowed? Yes! Take for example the inference "if there are dark clouds in the sky, then it will rain, in which case the grass will be wet". The reverse inference, "the grass is wet, therefore it has rained, therefore there have been dark clouds in the sky" is valid. However, the inference "the grass is wet, therefore the sprinkler has been on, thefore there is a sprinkler near this grass" may also be a valid inference. The grass being wet is evidence for both the presence of dark clouds and for a sprinkler having been on. Which hypothesis do we judge to be more likely? That depends on our prior beliefs about the hypotheses, as well as the strengths of the causal links (e.g. "if there are dark clouds, how likely is it that it rains?", and vice versa).

Thus, the "circular" arguments given above are actually valid Bayesian inferences. But there is a reason that we consider A to be a fallacy, while B sounds valid. Since the intepretation (the Bible is the word of God, 3-cm tracks are signatures of electrons) logically requires the hypothesis, the probability of the interpretation cannot be higher than the probability of the hypothesis. If we assign the existence of God a very low prior belief, then we must also assign a very low prior belief to the interpretation of the Bible as the word of God. In that case, seeing the Bible will not do much to elevate our belief in the claim that God exists, if there are more likely hypotheses to be found.

"So you're saying that circular reasoning, too, is something that we consider fallacious if our prior belief in the hypothesis is low enough? And recognizing these kinds of fallacies is AI-complete, too?" Dr. Zany asks.

"Yup!", AS-01 replies cheerfully, glad that for once, Dr. Zany gets it without a need to explain things fifteen times.

"Damn it. But... what about slippery slope arguments? Dr. Cagliostro claims that if we let minor supervillains stake claims for territory, then we would end up letting henchmen stake claims for territory as well, and eventually we'd give the right to people who didn't even participate in our plans! Surely that must be a fallacy?"

Slippery slope

Slippery slope arguments are often treated as fallacies, but they might not be. There are cases where the stipulated "slope" is what would actually (or likely) happen. For instance, take a claim saying "if we allow microbes to be patented, then that will lead to higher life-forms being patented as well":

There are cases in law, for example, in which a legal precedent has historically facilitated subsequent legal change. Lode (1999, pp. 511–512) cites the example originally identified by Kimbrell (1993) whereby there is good reason to believe that the issuing of a patent on a transgenic mouse by the U.S. Patent and Trademark Office in the year 1988 is the result of a slippery slope set in motion with the U.S. Supreme court’s decision Diamond v. Chakrabarty. This latter decision allowed a patent for an oil-eating microbe, and the subsequent granting of a patent for the mouse would have been unthinkable without the chain started by it.  (Hahn & Oaksford, 2007)

So again, our prior beliefs, here ones about the plausibility of the slope, influence whether or not the argument is accepted. But there is also another component that was missing from the previous fallacies. Because slippery slope arguments are about actions, not just beliefs, the principle of expected utility becomes relevant. A slippery slope argument will be stronger (relative to its alternative) if it invokes a more undesirable potential consequence, if that consequence is more probable, and if the expected utility of the alternatives is smaller.

For instance, suppose for the sake of argument that both increased heroin consumption and increased reggae music consumption are equally likely consequences of cannabis legalization:

A. Legalizing cannabis will lead to an increase in heroin consumption.

B. Legalizing cannabis will lead to an increase in listening to reggae music.

Yet A would feel like a stronger argument against the legalization of cannabis than argument B, since increased heroin consumption feels like it would have lower utility. On the other hand, if the outcome is shared, then the stronger argument seems to be the one where the causal link is more probable:

C. Legalizing Internet access would lead to an increase in the number of World of Warcraft addicts.

D. Legalizing video rental stores would lead to an increase in the number of World of Warcraft addicts.
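Putting invented probabilities and utilities on the four examples above makes the comparison explicit: the strength of the argument tracks the expected disutility of the feared outcome.

```python
# Invented numbers only; the point is the shape of the comparison, not the values.
slopes = {
    "A: cannabis -> more heroin use":       (0.05,  -1000),
    "B: cannabis -> more reggae listening": (0.05,  -1),
    "C: internet -> more WoW addicts":      (0.30,  -100),
    "D: video rentals -> more WoW addicts": (0.001, -100),
}

for name, (probability, utility) in slopes.items():
    print(name, probability * utility)
# A (-50) beats B (-0.05) because the outcome is worse;
# C (-30) beats D (-0.1) because the causal link is more probable.
```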

"Gah. So a strong slippery slope argument is one where both the utility of the outcome, and the outcome's probability is high? So the AI would not only need to evaluate probabilities, but expected utilities as well?"

"That's right!"

"Screw it, this isn't going anywhere. And here I thought that this would be a productive day."

"They can't all be, but we tried our best. Would you like a tuna sandwich as consolation?"

"Yes, please."


Because this post is already unreasonably long, the above discussion only covers the theoretical reasons for thinking about fallacies as weak or strong Bayesian arguments. For math, experimental studies, and two other subtypes of the argument from ignorance (besides negative evidence), see:

Hahn, U. & Oaksford, M. (2007) The Rationality of Informal Argumentation: A Bayesian Approach to Reasoning Fallacies. Psychological Review, vol. 114, no. 3, 704-732. 

42 comments

I don't agree with your analysis of circular arguments. It seems to be saying electrons are more likely than God because we have a higher prior for electrons than for God. I don't even think this is true; certainly before the discovery of both electrons and the bible, not many people would have put a higher prior on tiny points of negative charge than on a divine creator.

The real difference is not how well the evidence demonstrates the theories (as all that can really be said is that it's "consistent" with them), but what the alternatives are and how well the evidence disproves them. In the case of God, the bible could well exist in its current form even without God, as evidenced by the existence of other religious books inconsistent with it (at least most of which must therefore have been created by man). On the other hand, without electrons there simply is no other explanation for the tracks. In this case the experiment knocks out all simpler alternatives. That's the important difference - it's in the alternatives, not the theory.

On the other hand, without electrons there simply is no other explanation for the tracks.

Oh, I'm sure there are other explanations, such as "cloud imps" -- but none that explain the tracks, and all of the other evidence, better than electrons do.

The circular argument about electrons sounds like something a poor science teacher or textbook writer would say. One who didn't understand much about physics or chemistry but was good enough at guessing the teacher's password to acquire a credential.

It glosses over all the physics and chemistry that went into specifying what bits of thing-space are clumped into the identifier "electron", and why physicists who searched for them believed that items in that thing space would leave certain kinds of tracks in a cloud chamber under various conditions. There was a lot of evidence based on many real experiments about electricity that led them to the implicit conditional probability estimates which make that inference legitimate.

The argument itself provides no evidence whatsoever, and encountering sentences like that in science literature is possibly the most frustrating thing about learning settled science for an aspiring rationalist. It simply assumes (and hides!) the science we are supposed to learn, and thus merely gives us another password to guess.

Downvoted: exaggeration - "no evidence whatsoever", "most frustrating thing".


The so-called fallacies desperately need a Bayesian re-analysis. This is a good start.

Arguments which provide only weak evidence, though often evidence nonetheless, are considered fallacies.

They are (or should be) considered fallacies when they are presented as deductions rather than well calibrated calculations of increased probability.

The familiarity of the recurring characters is a good delivery system for new but related facts; reminding me of the quality of AS-01's tuna sandwiches also reminds me of the previous article's lessons.

I don't think this is an adequate rendition of a circular argument. A circular argument is one that contains a conclusion that is identical to a premise; it should in principle be very easy to detect, provided your argument-evaluator is capable of comprehending language efficiently.

"God exists because the Bible says so, and the Bible is the word of God," is circular, because the Bible can't be the word of God unless God exists. This is not actually the argument you evaluate however; the one you evaluate is, "The bible exists and claims to be the word of God; therefore it is more likely that God exists." That argument is not circular (though it is not very strong).

The other argument is just... weirdly phrased. Cloud-trails are caused by things. Significant other evidence suggests those things also have certain properties. We call those things "electrons." There's nothing circular about that. You've just managed to phrase it in an odd manner that is structurally similar to a circular argument by ignoring the vast network of premises that underlies it.

Similarly, slippery slope arguments simply fail because they don't articulate probabilities or they assign far higher probabilities than are justified by the evidence. "Legalizing marijuana may herald the Apocalypse" is true for certain, extremely small values of "may." If you say it will do so, then your argument should fail because it simply lacks supporting evidence. I'm not sure there's as much action here as you say.

why does a claim with the same logical structure, such as "this drug is safe because we have no evidence that it is not", seem more plausible?

Whoa whoa whoa WHOA whoa whoa. If you find a pill on a table, taped to a copy of that argument, DO NOT TAKE THE PILL.

I disagree with this post because it totally ignores the whole "therefore" part of these fallacies. This note says there isn't any evidence that the pill is unsafe (true), therefore you should take it if I offered you a dollar (false). There was an experiment that demonstrated psychic powers (true), therefore you should behave as if humans have psychic powers (false).

What's wrong with these examples? In the case of the pill, the "therefore" is not merited by the preceding argument - you should not take the pill, and that's a simple fact about most people's utility functions vis-à-vis anaphylactic shock. It's not about whether the evidence is weak or strong; the truth of the argument is a well-defined truth value of a claim made about the evidence. The second argument ignores the mountains of evidence that we have no psychic powers. The "therefore" that you should act upon has to be based off of all the evidence, which then has to pass certain marks in order to merit terms like "exists" or "safe for human consumption." Trying to ignore some of the evidence when setting up people's "therefores" isn't weak evidence, it's lying.

In the section on circularity, I felt that you were a bit blinded by the fact that electrons actually do exist. The argument about cloud chambers is even more circular than the argument for God. I think it's just that "and therefore, I win the argument and you should all pray facing Mecca" happens to be a false statement about what it takes to convince people of the correctness of a particular religion.

Weak Bayesian evidence is neither necessary nor sufficient for a fallacy in this framework.

There are arguments which provide strong evidence that are still fallacious. As an example: 1% of the population considered is B. 90% of A are B. Therefore you should be 99% certain that X is a B, because you know that X is an A.

There are arguments which provide weak evidence that are not fallacious. As an example: 1% of the population considered is B. 25% of A are B. If you learn that X is an A, you should adjust your probability that X is a B upward.

The key to many fallacies is not weak evidence. The key to fallacies is evidence being treated as stronger than it is. This has the interesting implication that most arguments that claim complete certainty are fallacious.
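To put rough numbers on the first example (a sketch, using only the figures given there):

```python
# From the first example: P(B) = 0.01 and P(B | X is an A) = 0.90.
p_b, p_b_given_a = 0.01, 0.90

prior_odds     = p_b / (1 - p_b)                  # ~0.01
posterior_odds = p_b_given_a / (1 - p_b_given_a)  # 9.0
print(posterior_odds / prior_odds)                # ~891 -- learning "X is an A" is strong evidence

print(p_b_given_a)  # but the warranted credence is 0.90, not the claimed 0.99
```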

I like three of your examples and the Dr. Zany frame device. But I have some issues with the circularity part. First off, I just don't know that I understood it. Second, I don't see why you picked the specific example you used. I doubt that we have many goddists left, but it still seems like you could have used an example that isn't easy to view as an insult to a real group. Lastly, I suspect that your explanation is wrong. However, this could just be because of point one. So, I'm going to wait and see if you clarify things before I go into detail.

Can someone provide the full text of this?

Slippery slope arguments (SSAs) have a bad philosophical reputation. They seem, however, to be widely used and frequently accepted in many legal, political, and ethical contexts. Hahn and Oaksford (2007) argued that distinguishing strong and weak SSAs may have a rational basis in Bayesian decision theory. In this paper three experiments investigated the mechanism of the slippery slope showing that they may have an objective basis in category boundary re-appraisal.

Also this:

...he argued that the very reasons that can make SSAs strong arguments mean that we should be poor at abiding by the distinction between good and bad SSAs, making SSAs inherently undesirable. We argue that Enoch’s meta-level SSA fails on both conceptual and empirical grounds.

It seems to me that you're trying to bridge the gap between arguments which are logically false no matter what (A implies B, therefore B implies A) and arguments which require some knowledge of the world in order to evaluate them.

The argument about ghosts is a fallacy if there's no solid evidence of any ghosts ever. The argument about the safety of a drug is stronger than the ghost argument (though weaker than a good argument) if safe drugs are known to exist. By bringing in more of the real world (that drug's been carefully tested, it's been in use for a long time, and no serious side effects have been observed), you've got as good an argument as is possible for the drug being safe.

A) I've often drunk alcohol, and never gotten drunk. Therefore alcohol doesn't cause intoxication.

B) I've often taken Acme Flu Medicine, and never gotten any side effects. Therefore Acme Flu Medicine doesn't cause any side effects.

The real world strikes again! These arguments can only be evaluated if you know something about human variation.


It seems to me that you're trying to bridge the gap between arguments which are logically false no matter what (A implies B, therefore B implies A) and arguments which require some knowledge of the world in order to evaluate them.

The answer to this observation and the seeming impossibility of bridging the gap, I think, is that the pure formal validity of an argument manifests only in artificial languages. The "fallacies" are part of the study of informal reasoning. But as such, their acceptability always depends on background knowledge. The strictures of "informal logic" should be applied (and in ordinary rational discourse, are implied) in a more graded, Bayesian fashion; but they were developed assuming a closer relation than really exists between formal and informal reasoning.

Awesome post.

The circularity part is a little confusing though. Specifically, the theistic example argument could actually be one of two different theistic arguments, and the more common interpretation of that argument doesn't seem to be the one you are considering. This confused me because I was thinking of the more common argument.

Said in detail, the argument "God exists because the Bible says so, and the Bible is the word of God." could mean either:

1) God exists because the Bible exists, and the Bible was made by God.

Here, it doesn't really matter what the Bible says; the evidence we are using is that whatever was said was said by God. This argument seems to be the one you are addressing, and put in those terms, it doesn't even look like a circular argument.

or,

2) God exists because the Bible says he exists, and the Bible can't be wrong because it was written by God (and he never lies).

This is the argument I thought you were talking about, so I was confused at first while reading the circularity section. Here, our sense data is not just that the Bible exists, but also what the Bible says. (In some versions of the argument, the only evidence being used is what the Bible says, and the arguer doesn't use the mere existence of the Bible.)

Regarding the argument about what the Bible says, there does seem to be a bit of valid reasoning there. Namely, if we assume that God exists, wrote the Bible, and never lies, then we would be kind of confused if the Bible said that God didn't exist. In other words, the fact that the Bible says God exists is at least evidence of self-consistency.

Overall, it seems that a lot of common "fallacies" are actually just weak Bayesian evidence. (Again, this was an awesome post and I found it informative, especially the section on the argument from ignorance.) But it also seems that sometimes people just make mistakes in their reasoning, honest and otherwise.

For example, I think the argument-from-Bible-contents is sometimes used among children and teenagers, and sometimes takes the form "Of course God exists: he wrote the Bible and he said he does. You don't think God would lie, do you?" This is somewhat confused reasoning, and I expect that most or all of the children in that debate are genuinely failing to notice that it's not an issue of whether God is lying, it's an issue of whether he exists and said anything at all. Nor do I expect they've noticed the inconsistency in the idea "God exists and said he exists, but he was lying (and really doesn't exist)."

I was confused by this:

If we assign the existence of God a very low prior belief, then we must also assign a very low prior belief to the interpretation of the Bible as the word of God. In that case, seeing the Bible will not do much to elevate our belief in the claim that God exists, if there are more likely hypotheses to be found.

Then I worked out that the likelihood ratio P(S|H) / P(S|¬H) = ( P(S|A)P(A|H) + P(S|¬A)P(¬A|H) ) / ( P(S|A)P(A|¬H) + P(S|¬A)P(¬A|¬H) ) depends only on our conditional probabilities, not on our prior probabilities. (Here S = "We observe the Bible", H = "God exists", and A = "The Bible is the word of God", as in Hahn & Oaksford.)

So the existence of the Bible can be strong evidence for the existence of God if we use likelihood ratio as a measure of strength of evidence. On the other hand, if we start with a very low prior for God, then even somewhat strong evidence will not be enough to convince us of His existence.

Put another way, the Bible can shift log(odds ratio) by quite a bit, independently of our prior for God; but if we have a sufficiently low prior for God, our posterior credence in God won't be much higher.
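A quick numeric check, with all conditionals invented for illustration:

```python
# S = "we observe the Bible", A = "the Bible is the word of God", H = "God exists".
# A requires H, so P(A | not-H) = 0. All numbers below are invented.
p_s_given_a, p_s_given_not_a = 1.0, 0.1
p_a_given_h, p_a_given_not_h = 0.8, 0.0

lr = ((p_s_given_a * p_a_given_h + p_s_given_not_a * (1 - p_a_given_h)) /
      (p_s_given_a * p_a_given_not_h + p_s_given_not_a * (1 - p_a_given_not_h)))
print(lr)  # 8.2 -- the same whatever our prior for H is

for prior in (0.001, 0.5):
    odds = prior / (1 - prior) * lr
    print(prior, odds / (1 + odds))  # posteriors: ~0.008 vs ~0.89
```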

The conditional probabilities are doing a lot of work here, and it seems that in many cases our estimates of them are strongly dependent on our priors.

What are our estimates for P(S|A) or P(S|notA), and how do we work them out? Clearly P(S|A) is high, since "The Bible is the word of God" directly implies that the bible exists, so it is at least possible to observe. If our prior for A is very low, then that implies that our estimate of P(S|notA) must also be high, given that we do in fact observe the bible (or we must separately have a well-founded explanation of the truth of S despite its low probability).

Since having P(S|A) = P(S|notA) in your formula cancels the right side out to 1/1, giving P(S|H) = P(S|notH), we find that as S weakens as evidence for or against A, it also weakens as evidence for or against H by this argument.

So the problem with the circular argument is apparent in Bayesian terms. In the absence of some information that is outside the circular argument, the lower the prior probability, the weaker the argument. That's not the way an evidential argument is supposed to work.

Even in the case where our prior is higher, the argument isn't actually doing any work, it is what our prior does to our estimate of those conditionals that makes the likelihood ratio higher. If we've estimated those conditionals in a way which causes a fully circular argument to move the estimate away from our prior, then we have to be doing something wrong, because we don't have any new information.

If we have independent estimates of those various conditionals, then we would be able to make a non-circular argument. OTOH, we can make a circular argument for anything, no matter what is going on in reality; that's why a circular argument is a true and complete fallacy: it provides no evidence whatsoever for or against a premise.

If our prior for A is very low, then that implies that our estimate of P(S|notA) must also be high, given that we do in fact observe the bible

What? That's an argument for P(S|¬A∧S) being high, not an argument for P(S|¬A) being high.

Paragraph 1 is quite largely a repeat of the abstract in more cutesy terms. I found it somewhat annoying to read the latter right after the former. (I liked the article in general.)

Good point.

Scientific papers are usually written with a structure where the abstract and actual text are independent of each other, i.e. the paper doesn't presume that you've read the abstract and often ends up repeating some of its content in the introduction. I imitated that structure out of habit, but I'm not sure whether it's a good structure to use for blog posts.

It didn't bother me. Though this may just be because I'm already habituated to ignoring it after having read many journal articles.

its specificity is very low - equal to the prevalence rate of the disease.

It’s actually 0. The specificity of a test is the probability of a negative test given the absence of disease. Since the probability of a negative test is 0 in this example, it is also 0 given the absence of disease.

This is clear, entertaining and to the point. Thank you.

A nitpick:

So a strong slippery slope argument is one where both the utility of the outcome, and the outcome's probability is high

You may have meant "disutility".

re: pill.

The important thing is that you should expect, with very good confidence, to have found the toxic effects of the drug if there were any. If so, then not having found such effects is good evidence. You do not expect to have a proof that ghosts do not exist even if they don't; that's what makes the 'because' a fallacy. You do not expect to have a proof that the pills are unsafe before you have done proper testing, either; and even after testing they may easily be unsafe, and a certain risk remains. The reasoning about pills used to be every bit as fallacious as the reasoning about ghosts - and far more deadly, before the mid 20th century or so, from which point we arranged the testing so as to expect to find most of the toxic effects before approving drugs.

re: circularity

Well, if there weren't any other claims about electrons or god, those two claims would not even be claims but simply word definitions. The 'entity we think we have seen everywhere and call electrons leaves tracks because of such and such' is the real argument, and 'god the creator of the universe personally wrote the bible' is the real argument. If we actually expected the bible to be less likely to exist without God, then the bible would have been evidence, but as it is I'd say the likelihood of a bible-like religious text is at very best entirely unrelated to the existence or non-existence of god.

That's btw the way my atheism is, except the other way around: nothing in religion is coupled in any way whatsoever to the existence or non-existence of god; I don't need to know if god exists or not to entirely reject religion and be strongly anti-religious. If anything, the existence of multiple religions, the evil things that religions did, and how they clashed with each other, would seem like the kind of thing that would be less likely in a universe that has a creator god who watches over it, and would constitute (weak) evidence against the existence of god (and fairly strong evidence against the existence of a god that intervenes, etc.). For me the existence of religions (the way they are) is weak evidence that God does not exist.

Slippery slope arguments are often treated as fallacies, but they might not be.

Who the hell thinks that slippery slopes are a fallacy? You got it backwards - slippery slopes are a basic fact of life; pretending they don't exist is a fallacy.

Who the hell thinks that slippery slopes are a fallacy?

Anyone who finds "the slippery slope fallacy" a useful soldier in their argument.

Anyone who finds "the slippery slope fallacy" a useful soldier in their argument.

I find that soldier far less effective and versatile than actual fallacious slippery slope arguments! Even acausal slippery slopes!

Careful! The slope can slip both ways!

A good rule of thumb for determining which way the slope is slipping, is to see which side is arguing for a change from the status quo.

Who the hell thinks that slippery slopes are a fallacy?

My undergraduate philosophy (Critical Thinking) professors and the accompanying textbooks for a start.

They are wrong, of course. At least to the extent that they try to generalize the fallacy. Slippery slope arguments are fallacious only to the extent that they draw conclusions beyond what is specified by a correct Bayesian update on available evidence.

Unfortunately, explaining this to people who have basic training in logic but limited exposure to rational thinking philosophy is rather difficult.

The "slippery slope fallacy fallacy" (i.e. the fallacy of claiming that slippery slope is a fallacy) is mostly confusion of short-term tactical goals with long-term strategic goals, and pretending that just because a certain group only focuses on its short-term tactical goals at the moment, that they won't continue further towards their long-term strategic goals once their short-term goals are achieved.

There are multiple independent mechanisms by which tactical goals (which you might find unproblematic, or usually at least less problematic and not worth the effort of opposing actively) will help them pursue their strategic goals (which can often be horrible, but are much more easily achieved step by step than all at once), but the mechanism is pretty much irrelevant (there was a paper about this in the legal context, I'm sure you've read it; I don't think focusing on the mechanism makes much sense) - the pattern is just too ubiquitous and works the same way regardless of the mechanism in the particular case.

Summary of this post: heuristics differ from biases in amount (of predictive power), not in kind.

Or perhaps they differ by some combination of predictive power, utility, and directness of relation to their prediction (susceptibility to being screened off).

"Just as language and auditory centers must work together to understand the significance of speech sounds, so both deductive and inductive centers must work together to construct and evaluate complex inferences".

I have to ask. Am I the only one who really liked this footnote?

Somehow I get the feeling that most commenters haven't yet read the actual paper. This would clear up a lot of the confusion.

Footnote 1 seems to be missing.

Thanks, there isn't supposed to be one - the reference was a leftover from an earlier draft. Deleted it.

(I originally had a separate footnote mentioning that the "argument from ignorance" discussion in this post only discusses the "negative evidence" aspect, with there also existing two other aspects (epistemic closure and shifting the burden of proof) that can be found in the original paper. But then I compressed the footnote into just the "For math, experimental studies, and two other subtypes of the argument from ignorance..." sentence in the current version.)

Upvoted; it was nice to have all three fallacies discussed in a long post instead of splitting it into several short ones and making us wait.

If fallacies are weak Bayesian evidence, and given that for just about any fallacy there is a fallacy that simply negates the output of that fallacy (a fancy fallacious reasoner's fallacy), how come fallacies don't (mostly) cancel out as evidence?

e.g. practical example: "correlation implies [direct] causation" is the simple fallacy, and "correlation doesn't imply any causation" is the corresponding fancy fallacy. This is also wrong because in our universe, unless it's QM, if a strongly correlates with b, then either a causes b, b causes a, or c causes both a and b - it's not that there's no causation, it's that the causation may not be the one you privileged. Usually when you try to teach everyone about some fallacy, you end up creating an opposite fallacy (or a bias).

One needs to somehow gauge the 'fallaciousness' of opposite fallacies.

"One needs to somehow gauge the 'fallaciousness' of opposite fallacies."

Isn't that exactly what the Hahn-Oaksford paper does? I doubt I'm as intelligent as most people on this site, but I was under the impression that this was all about using Bayesian methods to measure the probable "fallaciousness" of certain informal fallacies.

I think what happens is that informal and fallacious reasoning rapidly (exponentially or super-exponentially in the number of steps) diverges from making sense, so its weight as evidence is typically extremely close to zero.