Related to Belief In Belief

Suppose that a neighbor comes to you one day and tells you, “There’s a dragon in my garage!” Since all of us have been through this before at one point or another, you may be inclined to save time and ask, “Is the dragon by any chance invisible, inaudible, intangible, and does it convert oxygen to carbon dioxide when it breathes?”

The neighbor, however, is a scientifically minded fellow and responds, “Yes, yes, no, and maybe, I haven’t checked. This is an idea with testable consequences. If I try to touch the dragon it gets out of the way, but it leaves footprints when I sprinkle flour on the garage floor, and whenever it gets hungry, it comes out of my garage and eats a nearby animal. It always chooses something weighing over thirty pounds, and you can see the animals get snatched up and mangled to a pulp in its invisible jaws. It’s actually pretty horrible. You may have noticed that there have been fewer dogs around the neighborhood lately.”

This triggers a tremendous number of your skepticism filters, and so the only thing you can think of to say is “I think I’m going to need to see this.”

“Of course,” replies the neighbor, and he sets off across the street, opens the garage door, and is promptly eaten by the invisible dragon.

Tragic though it is, his death provides a useful lesson. He clearly believed that there was an invisible dragon in his garage, and he was willing to stick his neck out and make predictions based on it. However, he hadn’t internalized the idea that there was a dragon in his garage; otherwise he would have stayed the hell away to avoid being eaten. Humans have a fairly general weakness when it comes to internalizing beliefs whose immediate consequences we don’t have to come face to face with on a regular basis.

You might believe, for example, that starvation is the single greatest burden on humanity, and that giving money to charities that aid starving children in underdeveloped countries has higher utility than any other use of your surplus funds. You might even be able to make predictions based on that belief. But if you see a shirt you really like that’s on sale, you’re almost certainly not going to think “How many people whom I could have fed will go hungry if I buy this?” It’s not a weakness of willpower that causes you to choose the shirt over the starving children; they simply don’t impinge on your consciousness at that level.

When you consider whether you really, properly hold a belief, it’s worth asking not only how it controls your anticipations, but whether your actions make sense in light of a gut-level acceptance of its truth. Do you merely expect to see footprints in flour, or do you move out of the house to avoid being eaten?

This reminds me of the classic industrial accident involving a large, pressurised storage tank. There is a man-sized door to allow access for maintenance, and a pressure gauge. The maintenance man is supposed to wait for the pressure to fall to zero before he undoes the heavy steel latches. It is a big tank and he gets bored with waiting for the pressure to vent. The gauge says one pound per square inch. One pound doesn't sound like much, so the man undoes the latches. But since that one pound acts on every square inch of the door, the total force is several hundred times larger than he expects. The heavy door flies open irresistibly and kills the man.
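To make "several hundred times larger" concrete, here is a quick back-of-the-envelope check; the door dimensions are my own assumption, not part of the original account:

```python
# Back-of-the-envelope check of the pressure-vessel accident (assumed door size).
door_width_in = 30            # assumed width of a man-sized door, in inches
door_height_in = 72           # assumed height, in inches
area_sq_in = door_width_in * door_height_in      # 2160 square inches

gauge_pressure_psi = 1                           # "one pound per square inch"
force_lbf = gauge_pressure_psi * area_sq_in      # total force on the door

print(f"Force on the door: {force_lbf} lbf")     # 2160 lbf, over a (short) ton
```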

I'm not seeing how the parable helps one be less wrong in real life. In the parable the victim has seen a dog taken by the dragon. If the maintenance man had seen an apprentice crushed in an earlier similar accident, the experience would scar him mentally and he would always be wary of pressure vessels. I worry that the parable is cruder than the problems we face in real life.

I don't know more than I've already said about pressure vessel accidents. Is there an underlying problem of crying wolf; too many warning messages obscure the ones that are really matters of life and death? Is it a matter of incentives; the manager gets a bonus if he encourages the maintenance team to work quickly, but doesn't go to jail when cutting corners leads to a fatal accident? Is it a matter of education; the maintenance man just didn't get pressure? Is it a matter of labeling; why not label the gauge by the door with the force per door area? Is it a matter of class; the safety officer is middle class, the maintenance man is working class, the working class distrust the middle class and don't much believe what they say?

Is there an underlying problem

I don't know either, but I do know that an internalised, correct understanding of pressure and one of its measures 'pounds per square inch' would be a sufficient condition to save his life. The parable of the pressure vessel seems to be a case of an incorrect belief (one pound isn't much), whereas the parable of the invisible dragon seems to be a case of a correct belief (invisible dragon in my garage) that hasn't been internalised, and so hasn't produced the gut-level belief it ought to (invisible DRAGON IN MY GARAGE!)

I don't know either, but I do know that an internalised, correct understanding of pressure and one of its measures 'pounds per square inch' would be a sufficient condition to save his life.

I think this is a good opportunity to point out that many people haven't internalized what it means to say "the atmosphere's pressure is about 15 psi". It implies that, if you were to lie face down and someone like me stood on your back, eliciting excruciating, "GET OFF ME!" pain on your end, they would only have increased the pressure on your back by maybe 30% of what has been on it your entire life, even though it may seem like much more than that.
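Here's a rough sketch of where a figure like 30% could come from; the body weight and foot contact area below are illustrative guesses of mine, not figures from the comment:

```python
# Rough sketch of the ~30% figure (weight and contact area are assumed values).
atmospheric_psi = 15          # approximate sea-level atmospheric pressure
person_weight_lb = 150        # assumed weight of the person standing on your back
foot_contact_sq_in = 35       # assumed total contact area of both feet

added_pressure_psi = person_weight_lb / foot_contact_sq_in   # ~4.3 psi
relative_increase = added_pressure_psi / atmospheric_psi     # ~0.29

print(f"Added pressure: {added_pressure_psi:.1f} psi "
      f"(~{relative_increase:.0%} of atmospheric)")
```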

Indeed, when I visited the Boston LW meetup, a few people there initially refused to believe the implications of 15 psi atmospheric pressure, apparently never having connected that figure to everyday experience.

Fortunately, there are easy experiments to impress people with. As a kid, my favorite one was laying a ruler on a table so half of it stuck out perpendicularly over the edge, putting a few layers of newspaper over the other half, and then quickly hitting the exposed half downwards - and failing to knock it off the table, because atmospheric pressure helped hold the covered half down.

(At least, I think this is how it went. It was a long time ago. I'm sure there are other nifty experiments.)

I suppose that in doing it in the form of a parable (or this parable, anyway) I erred on the side of being memorable rather than clear, but that was what I had in mind when I wrote it. A dragon in one's garage is something it's intuitively obvious you don't want to go near, once you internalize the fact that it's really there. That's the kind of mistake that we've had millions of years of evolution to prepare us against making. Opening up the garage door to investigate is the sort of behavior that only makes sense when you haven't internalized the idea that there's really something in there that's liable to eat you.

Realistically, the man would probably already be terrified if he had seen it eat other animals, but I threw that in to make the parable flow better. The invisibility and inaudibility probably wouldn't be enough on their own to keep the belief from sinking in, given that, but they're stand-in qualities for the sort of remove that might prevent one from internalizing a belief.

I upvoted because I immediately understood what you meant; I am humble enough to believe that is a fact about the post and not about my skill at understanding.

Is there an underlying problem of crying wolf; too many warning messages obscure the ones that are really matters of life and death?

This is certainly an enormous problem for interface design in general for many systems where there is some element of danger. The classic "needle tipping into the red" is an old and brilliant solution for some kinds of gauges - an analogue meter where you can see the reading tipping toward a brightly marked "danger zone", usually with a 'safe' zone and an intermediate zone also marked, has surely prevented many accidents. If the pressure gauge on the door had such a meter where green meant "safe to open hatches" and red meant "terribly dangerous", that might have been a better design than just raw numbers.

I haven't worked with pressure doors but I have worked with large vacuum systems, cryogenic systems, labs with lasers that could blind you or x-ray machines that can be dangerously intense, and so on. I can attest that the designers of physics lab equipment do indeed put a good deal of thought and effort into various displays that indicate when the equipment is in a dangerous state.

However, when there are /many/ things that can go dangerously wrong, it becomes very difficult to avoid cluttering the sensorium of the operator with various warnings. The classic examples are the control panels for vehicles like airplanes or space ships; you can see a beautiful illustration of the 'indicator clutter problem' in the movie "Airplane!".

I'm not seeing how the parable helps one be less wrong in real life. In the parable the victim has seen a dog taken by the dragon.

I must admit I up-voted the post mostly because I thought the parable was funny.

The gauge says one pound per square inch. One pound doesn't sound like much, so the man undoes the latches. [...] I'm not seeing how the parable helps one be less wrong in real life.

Easy - not using the metric system will kill you.

Seventy grams per square centimeter sounds like even less, though.
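A quick unit check (rounding is my own) shows the two readings describe essentially the same pressure, which is rather the point:

```python
# Check that 1 psi and ~70 g/cm^2 are roughly the same pressure.
PSI_TO_PA = 6894.76            # pascals per psi
G = 9.81                       # standard gravity, m/s^2

one_psi_pa = 1 * PSI_TO_PA                 # ~6895 Pa
seventy_g_cm2_pa = 0.070 * G / 1e-4        # 70 g resting on 1 cm^2, in Pa: ~6867 Pa

print(one_psi_pa, seventy_g_cm2_pa)        # nearly identical
```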

I'm not entirely sure how belief in belief fits in here. The dragon's unlucky host doesn't merely believe in belief: as you go out of your way to point out, he has excellent evidence of the creature's existence and can make predictions based on it. His fatal error is of a different category: rather than adopting a belief for signaling reasons and constructing a model which excuses him from providing empirical evidence for it, he's constructed a working empirical model and failed to note some of its likely consequences.

An imperfect model of an empirical reality can show fatal gaps when applied to the real world. But that's not the error of a tithing churchgoer whose concern for his immortal soul disappears in the face of a tempting Tag Heuer watch; it's the error of a novice pilot who fails to pull out of a tailspin, or of a novice chemist who mistakenly attempts to douse a sodium fire with a water-based foam. A level-one error, in other words, whereas belief in belief would be level zero or off the scale entirely.

The post was inspired by a comment which I felt confused lack of internalization with belief in belief. On reflection, I probably didn't establish the connection sufficiently.

Yeah, that clarifies some things. Reading over the OP, I note with some embarrassment that you never used the phrase "belief in belief" in the body text -- but I also note that Mass_Driver didn't, either.

"Understanding Your Understanding" does a pretty good job of illustrating the levels of belief, but now I'm starting to think that it might be a good idea to look at the same scale from the perspective of expected error types, not just the warning signs the article already includes.

See also Reason as Memetic Immune Disorder, which discusses failure to internalize as a way to protect us from the consequences of false belief.

In the LW entry on RationalWiki they make fun of Eliezer for the Roko incident.

I always felt like this was unfair, because it amounts to attacking him for actually believing the things he talks about. That is, it's okay to talk about an immensely-powerful AI happening in the future, it's just not okay to act on that belief.

If you don't disagree with someone's beliefs, don't chastise that person for acting consistently with them.

This is because of spill-over from 'religious tolerance'. Most people will feel uncomfortable mocking a ridiculous belief; they have the "everyone is entitled to their own opinion" meme in mind. This makes people express disagreement with others' beliefs much less than they ought to.

People are much more comfortable mocking ridiculous actions (because everyone is not entitled to their own facts) - which is why evangelists are scorned where the average religious person wouldn't be, despite evangelists acting consistently and the average religious person acting inconsistently on beliefs that the mocker doesn't disagree with.

I think the people who wrote the entry probably do disagree with Eliezer's beliefs in this regard. They seem to be mocking his beliefs, not just the actions he takes based on them.

That's not to say that there's any shortage of people who do take issue with, or even outright mock people, for acting on beliefs they do not disagree with.

Perhaps I should have said that I detect both: mock the belief, but additionally mock that it's acted on.

I suspect (*) the principle is: sincere stupidity is no less stupid than insincere stupidity.

It is important here to note that it's a silly wiki of no importance that doesn't pretend to be of any importance. (Many readers aren't happy with its obnoxiousness.) It just happens to be one of the few places on the Internet with a detailed article on LessWrong.

If this is considered a problem, the solution would be a publicity push for LessWrong to get it to sufficient third-party notability for a Wikipedia article or something. The question then is whether that would be good for the mission: "refining the art of human rationality." I'd suggest seeing how the influx of HP:MoR readers affects things. A small September before the large one.

(*) "I suspect" as I'm not going to be so foolish as to claim the powers of a spokesman.

I always felt like this was unfair, because it amounts to attacking him for actually believing the things he talks about. That is, it's okay to talk about an immensely-powerful AI happening in the future, it's just not okay to act on that belief.

"That isn't" the belief being mocked.

Why should the guy have anticipated that the dragon would eat him? He's been poking around in the garage doing various experiments, and during all that time the dragon has merely stomped around and avoided him, showing no interest in eating him at all.

Also: why does he call it a dragon, and not just an "invisible creature"? Dragon is a pretty narrow category.

The dragon didn't eat him before because it wasn't hungry. If there's a tiger in your garage, it probably won't attack you as soon as it sees you, but the longer you spend in its vicinity, the greater your chances of being mauled.

The footprints were dragon-shaped, and it preferentially targeted the types of dogs that dragons most like to eat.

[...] it preferentially targeted the types of dogs that dragons most like to eat.

But that doesn't exclude other creatures. For example, the Giant Chupacabra is known to have similar preferences for the kinds of dogs it eats (when it can't find goats, its preferred meal).

I've heard this contrasted as 'knowledge', where you intellectually assent to something and can make predictions from it, and 'belief', where you order your life according to that knowledge, but this distinction is certainly not made in normal speech.

A common illustration of this distinction (often told by preachers) is that Blondin the tightrope walker asked the crowd if they believed he could safely carry someone across Niagara Falls on a tightrope, and almost the whole crowd shouted 'yes'. Then he asked for a volunteer to become the first man ever so carried, at which point the crowd shut up. In the end the only person he could find to accept was his manager.

Would they be safe? Probably. Would they enjoy the experience? Probably not...

Yeah, that is a problem with the illustration. However, I don't think it's completely devoid of use.

Taking a risk based on some knowledge is a very strong sign of having internalised that knowledge.

Risking one's life to make a point requires not just belief but an extreme degree of belief, which the crowd was not asked to express.

A common illustration of this distinction (often told by preachers) is that Blondin the tightrope walker asked the crowd if they believed he could safely carry someone across Niagara Falls on a tightrope, and almost the whole crowd shouted 'yes'. Then he asked for a volunteer to become the first man ever so carried, at which point the crowd shut up. In the end the only person he could find to accept was his manager.

Which is, of course, followed by handing out buckets of stones and pointing out suitable targets of righteous retribution. Adulterers, people who eat beetles, anyone who missed the sermon...

Is that a knee-jerk insult pointed at religion? If so, you're the AI Professor who takes cheap shots at Republicans.

If not, apologies, I must have missed the point.

Is that a knee-jerk insult pointed at religion? If so, you're the AI Professor who takes cheap shots at Republicans.

Not remotely, and that labeling strikes me as decidedly out of place and mildly objectionable.

I've seen you delete comments that received objecting responses a few times now. Why do you do that?

Which is, of course, followed by handing out buckets of stones and pointing out suitable targets of righteous retribution

I sense much anger in you....

people who eat beetles

I've not heard that one. Why are they regarded as suitable targets for religious wrath?

Back when I was around 17 I remember being in this situation. I had a not very close friend (I don't remember his name) who claimed to have been in a cave where he encountered a malign ghost or demon. I was uncomfortable explaining that while I thought his claim was almost certainly false, I wasn't going to go and check in any event, since whether the claim was true or false, the best thing to do would be not to go to the cave and check.

Along with the other physics-related examples here, Richard Dawkins' pendulum video seems relevant here: http://www.youtube.com/watch?v=Bsk5yPFm5NM

The next step in improving the sanity of your decision algorithm. Observations control what you should expect, and expectations control how you should act.

Somewhat related: at least one person has internalized their belief about the Singularity in a way that appears at least as weird as our hypothetical neighbor boarding up his garage and moving house.

I wanted to add that because it is important to note that the answer to

if you really, properly hold a belief ... [do] your actions make sense in light of a gut-level acceptance of its truth[?].

is automatically going to be "of course they do!", and that link has a situation that ought to challenge you on this topic.

I'm quite prepared to admit that there are many cases where my actions do not make sense in light of a gut level acceptance of my beliefs. I may not think donating money to a charity to support starving children is the highest utility use of my money, but even in spite of my own experiences with starvation, starving children are very much an invisible dragon to me.

There's a big gap between being a strong enough rationalist to acknowledge that one's actions don't make sense, and being a strong enough rationalist to reverse the situation, but at least the understanding can't hurt in making a start.

I'm quite prepared to admit that there are many cases where my actions do not make sense in light of a gut level acceptance of my beliefs.

Excellent, you are ahead of me. My initial reaction to the post was to run through a list of my prominent beliefs to see if they all made sense. They all did, and I was only just barely able to catch myself in time to think "What a coincidence, every single one?". Then the "Singularity as retirement plan" quote occurred to me.

starving children are very much an invisible dragon to me.

I support this 'x is my invisible dragon' turn of phrase!

I support this 'x is my invisible dragon' turn of phrase!

I thought it would be a good figure of speech too, but I'm afraid if I used it outside the context of this thread, people would think of Sagan's dragon, not mine. This parable would have to become a lot more famous for people to start to get it.

This parable would have to become a lot more famous for people to start to get it.

This is the process I am trying to kickstart by throwing my support behind the phrase.

The two concepts could serve as a rhetorical crowbar:

Is this the kind of invisible dragon that isn't really there but you're in denial? ...or the kind that IS really there but you're in denial?

This in turn makes me think that there are some kinds of evidence that affect our behavior, and other kinds that affect our beliefs, and only partial overlap. (E.g. you know the dragon is there, but you're not evolved to be as afraid as you should be, because you can't see, hear, or smell it.)

This in turn makes me think that there are some kinds of evidence that affect our behavior, and other kinds that affect our beliefs, and only partial overlap. (E.g. you know the dragon is there, but you're not evolved to be as afraid as you should be, because you can't see, hear, or smell it.)

The standard LW terminology for this is near and far modes of thought.

My invisible dragons:

Preventative medicine (sanitizing things, flu shots, drinking adequate water, etc.). Risk prevention in general (backing up files, locking my possessions, not going out after dark). I probably don't do enough of that stuff compared to how bad I'd feel if the risks actually occurred. Proper diet probably also belongs among the things I would do differently if I successfully internalized what I believe in principle.

Upvoted for

"What a coincidence, every single one?"

but while I'm here,

I support this 'x is my invisible dragon' turn of phrase!

me too.

Somewhat related: at least one person has internalized their belief about the Singularity in a way that appears at least as weird as our hypothetical neighbor boarding up his garage and moving house.

I'm not so sure. Retirement plans are far, boarding up the garage is near.

Yes, but "cancelling your 401k, not getting an IRA, minimum legal contributions to your pension, etc" seem like near-thinking reactions to the concept of the Singularity.

Yes, and slightly more specifically, an expected Singularity within one's own lifetime. Not unusual among those who expect a Singularity at all, but at least not universal. People who expect a Singularity in, say, 200 years, and who also think that systems such as the 401k will maintain relevance throughout their lifetime, may still go with the 401k.

My apologies, I should have said "near-thinking reactions to his personal beliefs about the Singularity". The quote in that link is clearly from somebody who believes the Singularity will happen with high probability before he retires, so making it sound like it's true of any understanding of the Singularity is quite false.

My apologies, I should have said "near-thinking reactions to his personal beliefs about the Singularity". The quote in that link is clearly from somebody who believes the Singularity will happen with high probability before he retires

Thanks, I was looking at just the more local context, so I responded literally.

One way to think about this is that in both failure of internalization and belief in belief, the believer's brain is in the same state. The only difference between the two cases is whether the belief in question corresponds to the territory.

In particular, there is no way to tell these two cases apart by introspection.

Imagine a raffle where the winner is chosen by some quantum process. Presumably under the many worlds interpretation you can see it as a way of shifting money from lots of your potential selves to just one of them. If you have a goal you are absolutely determined to achieve and a large sum of money would help towards it, then it might make a lot of sense to take part, since the self that wins will also have that desire, and could be trusted to make good use of that money.

Now, I wonder if anyone would take part in such a raffle if all the entrants who didn't win were killed on the spot. That would mean that everyone would win in some universe, and cease to exist in the other universes where they entered. Could that be a kind of intellectual assent vs belief test for Many Worlds?

Now, I wonder if anyone would take part in such a raffle if all the entrants who didn't win were killed on the spot. That would mean that everyone would win in some universe, and cease to exist in the other universes where they entered. Could that be a kind of intellectual assent vs belief test for Many Worlds?

No. No. No!

Quantum Sour-Grapes (i.e. what you described) could be the result of a technically coherent value system, but not a sane one. Unless there is some kind of physical or emotional torture involved, dying doesn't make things better, regardless of QM!

Could that be a kind of intellectual assent vs belief test for Many Worlds?

No, because it assumes you're indifferent to any effects you have on worlds that you don't personally get to experience.

I suppose that, to make the scenario work, the goal you were going to spend the money on would have to be of sufficient utility, if achieved, to offset that. Maybe saving the world, or creating lots of happy simulations of yourself, or finding a way to communicate between them.

In that case it sounds like an obviously legit test. Someone disagree?

In that case, what do 'Quantum' and/or many worlds have to do with this?