Cross-posted from my blog

I'd like to coin a term. The Sally-Anne fallacy is the mistake of assuming that someone believes something, simply because that thing is true.[1]

The name comes from the Sally-Anne test, used in developmental psychology to detect theory of mind. Someone who lacks theory of mind will fail the Sally-Anne test, thinking that Sally knows where the marble is. The Sally-Anne fallacy is also a failure of theory of mind.

In internet arguments, this will often come up as part of a chain of reasoning, such as: you think X; X implies Y; therefore you think Y. Or: you support X; X leads to Y; therefore you support Y.[2]
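To make the structure explicit, here is one way to write the two schemas side by side. This is my own gloss, not notation from the post: read $B_p\,\varphi$ as "person $p$ believes $\varphi$".

$$\frac{B_p X \qquad B_p(X \to Y)}{B_p Y} \quad \text{(valid, provided $p$ actually draws the inference)}$$

$$\frac{X \qquad X \to Y}{B_p Y} \quad \text{(the Sally-Anne fallacy)}$$

The fallacious version quietly substitutes the truth of $X$ for $p$'s belief in $X$; and even the valid version assumes $p$ has noticed that $X$ implies $Y$.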

So for example, we have this complaint about the words "African dialect" used in Age of Ultron. The argument goes: a dialect is a variation on a language, therefore Marvel thinks "African" is a language.

You think "African" has dialects; "has dialects" implies "is a language"; therefore you think "African" is a language.

Or maybe Marvel just doesn't know what a "dialect" is.

This is also a mistake I was pointing at in Fascists and Rakes. You think it's okay to eat tic-tacs; tic-tacs are sentient; therefore you think it's okay to eat sentient things. Versus: you think I should be forbidden from eating tic-tacs; tic-tacs are nonsentient; therefore you think I should be forbidden from eating nonsentient things. No, in both cases the defendant is just wrong about whether tic-tacs are sentient.

Many political conflicts include arguments that look like this. You fight our cause; our cause is the cause of [good thing]; therefore you oppose [good thing]. Sometimes people disagree about what's good, but sometimes they just disagree about how to get there, and think that a cause is harmful to its stated goals. Thus, liberals and libertarians symmetrically accuse each other of not caring about the poor.[3]

If you want to convince someone to change their mind, it's important to know what they're wrong about. The Sally-Anne fallacy causes us to mistarget our counterarguments, and to mistake potential allies for inevitable enemies.


[1] From the outside, this looks like "simply because you believe that thing".

[2] Another possible misunderstanding here is if you agree that X leads to Y and that Y is bad, but still think X is worth it.

[3] Of course, sometimes people will pretend not to believe the obvious truth so that they can further their dastardly ends. But sometimes they're just wrong. And sometimes they'll be right, and the obvious truth will be untrue.

Comments

A special case of this fallacy that you often see is

Your Axioms (+ My Axioms) yield a bald contradiction. Therefore, your position isn't even coherent!

This is a special case of the fallacy because the charge of self-contradiction could stick only if the accused person really subscribed to both Your Axioms and My Axioms. But this is only plausible because of an implicit argument: "My Axioms are true, so obviously the accused believes them. The accused just hasn't noticed the blatant contradiction that results."

I'm curious whether there's a term for a variant where you assume that, because someone definitely shares the information you have, they have also seen the implications of that information.

That is, where you and someone else have the same information, but only one of you has worked through what it implies.

In the past I've made too many assumptions about what others have concluded from information available to both of us, and skipped over nodes in the reasoning chain because I believed the other person had already passed them, which can lead to some confused conversations and backtracking.

Examples would be where the details of some chemical reaction, or the rules of some system or state machine, have been laid out and imply a conclusion, but the other person hasn't followed the implication.

Or where I know person A is in situation X, and person B knows that person A is in situation X, and I talk to B on the assumption that person A will be taking the most obvious responses to X.

My SO has rightly scolded me in the past for assuming too much about what will be obvious to the people I'm dealing with, and about what they will have considered if they're domain experts, particularly when dealing with anything financial, because it's come back to bite us in the past.

Val:

I've often encountered (when discussing politics, theology, or similar subjective topics) a fallacy which is similar to this one, or which can maybe be seen as its reverse.

  • A: Ice is hot, therefore 2+2=4.
  • B: No, ice is not hot, but even if it were, it still wouldn't be a good proof that 2+2=4.
  • A: So you don't believe in the obvious truth that 2+2=4?

Also, sometimes A might try to prove 2+2=5 with the same strategy.

"Versus: you think I should be forbidden from eating tic-tacs; tic-tacs are nonsentient; therefore you think I should be forbidden from eating nonsentient things"

Isn't that a completely different fallacy?

In case (C1), "You think it's okay to eat tic-tacs; tic-tacs are sentient; therefore you think it's okay to eat sentient things," you say: "Doing f to A is OK; A is a B; therefore you think it is okay to do f to a B." (Because if not, you could not do f to A, since A is a B.)

But in case (C2), "you think I should be forbidden from eating tic-tacs; tic-tacs are nonsentient; therefore you think I should be forbidden from eating nonsentient things," the form is: "You think I should not do f to A; A is a B; therefore you think I should not do f to any B." That is wrong!

If there exist A and C which are both members of B, then in case C1 we can say: because f may be done to A, it is not true that f may never be done to a B, or else A would not be a member of B (which it is). But case C2 reasons the wrong way around: from "f may not be done to A" and "A is a B" it concludes "f may not be done to any B", and saying something about some members of B does not say something about all members of B.
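In quantifier notation (my paraphrase, writing $\mathrm{OK}(f,x)$ for "it is okay to do $f$ to $x$"), the asymmetry is:

$$a \in B,\ \mathrm{OK}(f,a) \ \vdash\ \exists x \in B.\ \mathrm{OK}(f,x) \quad \text{(valid)}$$

$$a \in B,\ \lnot\mathrm{OK}(f,a) \ \nvdash\ \forall x \in B.\ \lnot\mathrm{OK}(f,x) \quad \text{(invalid)}$$

The existential conclusion $\exists x \in B.\ \lnot\mathrm{OK}(f,x)$ does follow, which is the reading gjm defends below.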

...

Basically, this is pedantry about negative statements about groups defaulting to "none of" readings, while positive (or at least non-negative) statements about groups default to "some of" readings.

Hiding behind an anon account because I didn't want to go through signup hoops before losing my train of thought.

gjm:

Isn't that a completely different fallacy?

I took the meaning to be "therefore you think there are some nonsentient things I should be forbidden to eat". I agree that as written the other meaning is a more natural interpretation, but in the context of the rest of the article I think my interpretation is more likely (exactly because otherwise it would involve an entirely different logical error). philh, would you like to confirm or refute?

[EDITED to fix an idiotic mistake: for some reason I thought Elo, not philh, was the author. My apologies to both.]

Yes, that's what I was going for.

gjm:

My apologies for writing "Elo" where I meant "philh" in the grandparent of this comment. I've fixed it now.

Elo:

Can I take the credit for writing things I did not write? Cause that would be sweet.

Great post. Another issue is why B doesn't believe Y in spite of believing X and in spite of A believing that X implies Y. Some mechanisms:

a) B rejects that X implies Y, for reasons that are good or bad, or somewhere in between. (Last case: reasonable disagreement.)

b) B hasn't even considered whether X implies Y. (Is not logically omniscient.)

c) Y only follows from X given some additional premises Z, which B either rejects (for reasons that are good or bad or somewhere in between) or hasn't entertained. (What Tyrrell McAllister wrote.)

d) B is confused over the meaning of X, and hence is confused over what X implies. (The dialect case.)

gjm:

I see this fallacy all the time, and it's good to have a name for it.

Usually when I see it[1] it's in a political context where employing the fallacy is an effective rhetorical move (because it lets you switch from "Hated enemy group X disagrees with us about a controversial question of fact" to "Hated enemy group X wants cute babies to die in agony"). I suspect few people in such a dispute are (1) willing to listen to a complaint that the argument is fallacious and also (2) willing to accept a framing with "because that thing is true" as part of it.

So I guess this is best used "internally", as a tool for thinking about one's own mental mistakes. (Like most things in the field of cognitive biases, probably.)

[1] Or maybe it's just when I notice it.

The term "Sally-Anne fallacy" can be applied to many situations, as indicated in your article.

I find your article really interesting and very informative. Of course, anyone can argue that a thing isn't true simply because you believe it. The Sally-Anne fallacy is indeed a cause of mistargeted counterarguments if it isn't addressed.

A worthwhile read!

I thought this fallacy was going to be about how you can convince people to marry them by talking about muskrats.

Glad to have this term. I do think there's a non-fallacious, superficially similar argument that goes something like this:

"X leads to Y. This is obvious, and the only way you could doubt it would be some sort of motivated reasoning--motivated by something other than preventing Y. Therefore, if you don't think X leads to Y, you aren't very motivated to prevent Y."

It's philosophically valid, but requires some very strong claims. I also suspect it's prone to causing circular reasoning, where you've 'proven' that no one who cares about Y thinks X doesn't lead to Y and then use that belief to discredit new arguments that X doesn't lead to Y.

This looks like a special case of a failure of intentionality. If a child knows where the marble is, they've managed first-order intentionality, but if they don't realize that Sally doesn't know where the marble is, they've failed at second order.

The orders go higher, though, and it's not obvious how much higher humans can naturally go. If

Bob thinks about "What does Alice think about Bob?" and on rare occasions "What does Alice think Bob thinks about Alice?" but will not organically reason "What does Alice think Bob thinks about Alice's model of Bob?"

then Bob can handle second and third but can't easily handle fourth order intentionality.

It may be a useful writing skill to be comfortable with intentionality at one level higher than your audience.

but will not organically reason "What does Alice think Bob thinks about Alice's model of Bob?"

That's actually pretty easy: Alice doesn't :-)

Obligatory reference: Battle of Wits.

That's true! In life it's best to try to understand why someone is acting a particular way: what causes that behaviour, and whether it is desirable, before advising the person and changing his or her mind by knowing what they're wrong about. By doing this you can explain things to that person, offering good ideals and giving advice on the areas where they can improve. I think the best thing I have learnt from here is knowing people's faults before trying to correct them; in this way you can determine the specific areas of their lives that need improvement, and change their minds and attitudes positively.

I'm not sure this is a single fallacy. It's more a mix of affective fallacy (things I don't like are false) and strawmanning an argument so you can disagree with the easy part.

Mixed in with the human tribal instinct to reinforce one's own conclusions rather than look for reasons to change them (confirmation bias), this leads to making unpersuasive arguments. This is because politics isn't about policy - most people making these bad arguments aren't actually planning, or even hoping, to persuade. They're hoping to reinforce their position.

Hmm. Maybe I'm saying "this isn't a fallacy". It's not an actual false belief that anyone has - almost nobody has a reflective belief that this is the reason someone on the other side disagrees. It's more a bias - a mode of behavior based on heuristics and unstated goals, rather than a common reasoning falsehood.

I think you're saying that all the cases described above could be expressed as a mix of other fallacies, and that therefore it's not a distinct fallacy in its own right?

I think a better question is: "If we think of this class of mistake as a specific named fallacy, will it help us to spot errors of reasoning that we would otherwise have missed? Or alternatively, will it help us to talk about errors of reasoning that we've noticed?"

If it can be expressed in terms of other fallacies, but these mistakes aren't immediately obvious as examples of those fallacies, then it can be worth giving them their own label, as philh suggests.

Ultimately, different people will find that different tools and explanations work well for them. While two explanations might be logically equivalent, some people will find that one makes more sense to them, and some people will find that the other makes more sense.

It seems like a useful fallacy to me (so to speak), and I intend to keep an eye out for it.

I agree very much that this is a thing that happens, but I don't think it needs to be a named fallacy. There is even standard nomenclature - failure of theory of mind (it's more general, but it works).