Carl Sagan once told a parable of someone who comes to us and claims: “There is a dragon in my garage.” Fascinating! We reply that we wish to see this dragon—let us set out at once for the garage! “But wait,” the claimant says to us, “it is an invisible dragon.”
Now as Sagan points out, this doesn’t make the hypothesis unfalsifiable. Perhaps we go to the claimant’s garage, and although we see no dragon, we hear heavy breathing from no visible source; footprints mysteriously appear on the ground; and instruments show that something in the garage is consuming oxygen and breathing out carbon dioxide.
But now suppose that we say to the claimant, “Okay, we’ll visit the garage and see if we can hear heavy breathing,” and the claimant quickly says no, it’s an inaudible dragon. We propose to measure carbon dioxide in the air, and the claimant says the dragon does not breathe. We propose to toss a bag of flour into the air to see if it outlines an invisible dragon, and the claimant immediately says, “The dragon is permeable to flour.”
Carl Sagan used this parable to illustrate the classic moral that poor hypotheses need to do fast footwork to avoid falsification. But I tell this parable to make a different point: The claimant must have an accurate model of the situation somewhere in their mind, because they can anticipate, in advance, exactly which experimental results they’ll need to excuse.
Some philosophers have been much confused by such scenarios, asking, “Does the claimant really believe there’s a dragon present, or not?” As if the human brain only had enough disk space to represent one belief at a time! Real minds are more tangled than that. There are different types of belief; not all beliefs are direct anticipations. The claimant clearly does not anticipate seeing anything unusual upon opening the garage door. Otherwise they wouldn’t make advance excuses. It may also be that the claimant’s pool of propositional beliefs contains the free-floating statement There is a dragon in my garage. It may seem, to a rationalist, that these two beliefs should collide and conflict even though they are of different types. Yet it is a physical fact that you can write “The sky is green!” next to a picture of a blue sky without the paper bursting into flames.
The rationalist virtue of empiricism is supposed to prevent us from making this class of mistake. We’re supposed to constantly ask our beliefs which experiences they predict, make them pay rent in anticipation. But the dragon-claimant’s problem runs deeper, and cannot be cured with such simple advice. It’s not exactly difficult to connect belief in a dragon to anticipated experience of the garage. If you believe there’s a dragon in your garage, then you can expect to open up the door and see a dragon. If you don’t see a dragon, then that means there’s no dragon in your garage. This is pretty straightforward. You can even try it with your own garage.
No, this invisibility business is a symptom of something much worse.
Depending on how your childhood went, you may remember a time period when you first began to doubt Santa Claus’s existence, but you still believed that you were supposed to believe in Santa Claus, so you tried to deny the doubts. As Daniel Dennett observes, where it is difficult to believe a thing, it is often much easier to believe that you ought to believe it. What does it mean to believe that the Ultimate Cosmic Sky is both perfectly blue and perfectly green? The statement is confusing; it’s not even clear what it would mean to believe it—what exactly would be believed, if you believed. You can much more easily believe that it is proper, that it is good and virtuous and beneficial, to believe that the Ultimate Cosmic Sky is both perfectly blue and perfectly green. Dennett calls this “belief in belief.”1
And here things become complicated, as human minds are wont to do—I think even Dennett oversimplifies how this psychology works in practice. For one thing, if you believe in belief, you cannot admit to yourself that you merely believe in belief. What’s virtuous is to believe, not to believe in believing; and so if you only believe in belief, instead of believing, you are not virtuous. Nobody will admit to themselves, “I don’t believe the Ultimate Cosmic Sky is blue and green, but I believe I ought to believe it”—not unless they are unusually capable of acknowledging their own lack of virtue. People don’t believe in belief in belief, they just believe in belief.
(Those who find this confusing may find it helpful to study mathematical logic, which trains one to make very sharp distinctions between the proposition P, a proof of P, and a proof that P is provable. There are similarly sharp distinctions between P, wanting P, believing P, wanting to believe P, and believing that you believe P.)
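(For the formally inclined, here is one standard way to write those three objects, taking Prov to be the usual arithmetized provability predicate; this is a notational sketch only, not anything the argument depends on:)

```latex
% A minimal sketch, assuming the standard provability predicate Prov:
%   P                -- the proposition P itself
%   \vdash P         -- "there is a proof of P"
%   \vdash Prov(P)   -- "there is a proof that P is provable"
\[
  P,
  \qquad
  \vdash P,
  \qquad
  \vdash \mathrm{Prov}\bigl(\ulcorner P \urcorner\bigr)
\]
```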
There are different kinds of belief in belief. You may believe in belief explicitly; you may recite in your deliberate stream of consciousness the verbal sentence “It is virtuous to believe that the Ultimate Cosmic Sky is perfectly blue and perfectly green.” (While also believing that you believe this, unless you are unusually capable of acknowledging your own lack of virtue.) But there are also less explicit forms of belief in belief. Maybe the dragon-claimant fears the public ridicule that they imagine will result if they publicly confess they were wrong.2 Maybe the dragon-claimant flinches away from the prospect of admitting to themselves that there is no dragon, because it conflicts with their self-image as the glorious discoverer of the dragon, who saw in their garage what all others had failed to see.
If all our thoughts were deliberate verbal sentences like philosophers manipulate, the human mind would be a great deal easier for humans to understand. Fleeting mental images, unspoken flinches, desires acted upon without acknowledgement—these account for as much of ourselves as words.
While I disagree with Dennett on some details and complications, I still think that Dennett’s notion of belief in belief is the key insight necessary to understand the dragon-claimant. But we need a wider concept of belief, not limited to verbal sentences. “Belief” should include unspoken anticipation-controllers. “Belief in belief” should include unspoken cognitive-behavior-guiders. It is not psychologically realistic to say, “The dragon-claimant does not believe there is a dragon in their garage; they believe it is beneficial to believe there is a dragon in their garage.” But it is realistic to say the dragon-claimant anticipates as if there is no dragon in their garage, and makes excuses as if they believed in the belief.
You can possess an ordinary mental picture of your garage, with no dragons in it, which correctly predicts your experiences on opening the door, and never once think the verbal phrase There is no dragon in my garage. I even bet it’s happened to you—that when you open your garage door or bedroom door or whatever, and expect to see no dragons, no such verbal phrase runs through your mind.
And to flinch away from giving up your belief in the dragon—or flinch away from giving up your self-image as a person who believes in the dragon—it is not necessary to explicitly think I want to believe there’s a dragon in my garage. It is only necessary to flinch away from the prospect of admitting you don’t believe.
If someone believes in their belief in the dragon, and also believes in the dragon, the problem is much less severe. They will be willing to stick their neck out on experimental predictions, and perhaps even agree to give up the belief if the experimental prediction is wrong.3 But when someone makes up excuses in advance, it would seem to require that belief and belief in belief have become unsynchronized.
1 Daniel C. Dennett, Breaking the Spell: Religion as a Natural Phenomenon (Penguin, 2006).
2 Although, in fact, a rationalist would congratulate them, and others are more likely to ridicule the claimant if they go on claiming there's a dragon in their garage.
3 Although belief in belief can still interfere with this, if the belief itself is not absolutely confident.
Late to the game, but I'm precisely in this boat.
I don't have faith -- if I did, I'd have no qualms whatsoever about facts and arguments presented by atheists. I wouldn't be nervously claiming that the dragon is invisible. (Some people who think the apocalypse is nigh actually do stockpile canned food. That's faith; they believe in Revelation the same way I believe in physics.) I don't have faith, because I'm actually frightened that some archaeologist will find evidence that there wasn't any Exodus, for instance. And the fear is really that changing my religious beliefs will make me a worse person. Less grateful? Less reverent? Less respectful? That's the basic idea, but I'm not sure those words convey it.
To give a non-religious analogy, take the question of whether men have evolved to be irresponsible fathers. That's an empirical question. But a man can be afraid of believing that he is, indeed, biologically designed to be an irresponsible father, because he fears that such a belief will make him actually treat his children poorly. A rational man, we'd hope, would decide "I'll be a good father, whatever the evolutionary biologists say." But he can only do that if he has some independent reason to be a good father, and is aware that he has one.
A religious person wants to be a good person, and wants to have the right sort of attitude to the world. But all his reasons and motivations come from God. He could fear not believing in God because he fears not being good. Presumably, he has some other, non-God motivations for wanting to be good; but let's say that he doesn't know what they are. Then his fear might be justified. With no God and no principles, his behavior might actually change.
It may help to consider the question: what would you do without morality? (Also see the follow-up: The Moral Void.)