I think I understand Moore's Paradox a bit better now, after reading some of the comments on Less Wrong.  Jimrandomh suggests:

Many people cannot distinguish between levels of indirection. To them, "I believe X" and "X" are the same thing, and therefore, reasons why it is beneficial to believe X are also reasons why X is true.

I don't think this is correct—relatively young children can understand the concept of having a false belief, which requires separate mental buckets for the map and the territory.  But it points in the direction of a similar idea:

Many people may not consciously distinguish between believing something and endorsing it.

After all—"I believe in democracy" means, colloquially, that you endorse the concept of democracy, not that you believe democracy exists.  The word "belief", then, has more than one meaning.  We could be looking at a confused word that causes confused thinking (or maybe it just reflects pre-existing confusion).

So: in the original example, "I believe people are nicer than they are", she came up with some reasons why it would be good to believe people are nice—health benefits and such—and since she now had some warm affect on "believing people are nice", she introspected on this warm affect and concluded, "I believe people are nice".  That is, she mistook the positive affect attached to the quoted belief, as signaling her belief in the proposition.  At the same time, the world itself seemed like people weren't so nice.  So she said, "I believe people are nicer than they are."

And that verges on being an honest mistake—sort of—since people are not taught explicitly how to know when they believe something.  As in the parable of the dragon in the garage; the one who says "There is a dragon in my garage—but it's invisible", does not recognize his anticipation of seeing no dragon, as indicating that he possesses an (accurate) model with no dragon in it.

It's not as if people are trained to recognize when they believe something.  It's not like they're ever taught in high school:  "What it feels like to actually believe something—to have that statement in your belief pool—is that it just seems like the way the world is.  You should recognize this feeling, which is actual (unquoted) belief, and distinguish it from having good feelings about a belief that you recognize as a belief (which means that it's in quote marks)."

This goes a long way toward making this real-life case of Moore's Paradox seem less alien, and providing another mechanism whereby people can be simultaneously right and wrong.

Likewise Kurige, who wrote:

I believe that there is a God—and that He has instilled a sense of right and wrong in us by which we are able to evaluate the world around us.  I also believe a sense of morality has been evolutionarily programmed into us—a sense of morality that is most likely a result of the formation of meta-political coalitions in Bonobo communities a very, very long time ago.  These two beliefs are not contradictory, but the complexity lies in reconciling the two.

I suspect, Kurige, you have decided that you have reasons to endorse the quoted belief that God has instilled a sense of right and wrong in us.  And also that you have reasons to endorse the verdict of science.  They both seem like good communities to join, right?  There are benefits to both sets of beliefs?  You introspect and find that you feel good about both beliefs?

But you did not say:

"God instilled a sense of right and wrong in us, and also a sense of morality has been evolutionarily programmed into us.  The two states of reality are not inconsistent, but the complexity lies in reconciling the two."

If you're reading this, Kurige, you should very quickly say the above out loud, so you can notice that it seems at least slightly harder to swallow—notice the subjective difference—before you go to the trouble of rerationalizing.

This is the subjective difference between having reasons to endorse two different beliefs, and your mental model of a single world, a single way-things-are.

Comments (24)

The reason why saying "There is a God and He instilled..." is harder than saying "I believe that there is a God and He instilled..." is because the words "I believe that" are weasel words. The literal meaning of "I believe that" is irrelevant; any other weasel words would have the same effect. Consider the same sentence, but replace "I believe that" with "It is likely that", or "Evidence indicates that", or any similar phrase, and it's just as easy.

Just because people are aware of a concept, and have words which ought to refer to that concept, does not mean that they consistently connect the two. The best example of this comes from the way people refer to things as [good] and [bad]. When people dislike something, but don't know why, they generate exemplars of the concept "bad", and call it evil, ugly, or stupid. This same mechanism led to the widespread use of "gay" as a synonym for "bad", and to racial slurs directed at anonymous online rivals who are probably the wrong race for the slur. I think that confidence markers are subject to the same linguistic phenomenon.

People think with sentences like "That's a [good] car" or "[Weasel] God exists". The linguistic parts of their mind expand them to "That's a sweet car" and "I believe God exists" when speaking, and perform the inverse operation when listening. They don't think about how the car tastes, and they don't think about beliefs, even though a literal interpretation of what they say would indicate that they do.

Erik

Ah, but the point is that "believe" is the weaseliest of words. I know a few, and would guess there are quite a lot more, intelligent people who readily state "I believe that there is a God" but who would be very hesitant if you asked them to use "Evidence indicates that".

I would say that what you call weasel words occupy a scale, and that it's not just as easy to use them all in any given context, at least not for reasonably intelligent people.

[anonymous]

There certainly is a right weasel word for a context.

Weasel words, as you call them, are a necessary part of any rational discussion. The scientific equivalent would be, "evidence indicates" or "statistics show".

I'm afraid I must disagree, kurige, for two reasons. The first is that they smack of false modesty, a way of insuring yourself against the social consequences of failure without actually taking care not to fail. The second is that such terms don't really convey any new information, and they require the use of the passive voice, which is bad style.

"Evidence indicates an increase in ice cream sales" really isn't good science writing, because the immediate question is "What evidence?". It's much better to say "ice cream sales have increased by 15%" and point to the relevant statistics.

On this we agree. If we have 60% confidence that a statement is correct, we would be misleading others if we asserted that it was true in a way that signalled a much higher confidence. Our own beliefs are evidence for others, and we should be careful not to communicate false evidence.

Stripped down to essentials, Eliezer is asking you to assert that God exists with more confidence than it sounds like you have. You are not willing to say it without weasel words because to do so would be to express more certainty than you actually have. Is that right?

Can you offer any evidence that weasel words are necessary to rational discussion? I can imagine that weasel words are common to scientific discussions, as well as discussions regarding faith. However, I don't see any barriers to people eschewing them.

kurige

If you're reading this, Kurige, you should very quickly say the above out loud, so you can notice that it seems at least slightly harder to swallow - notice the subjective difference - before you go to the trouble of rerationalizing.

There seems to be some confusion here concerning authority. I have the authority to say "I like the color green." It would not make sense for me to say "I believe I like the color green" because I have first-hand knowledge concerning my own likes and dislikes and I'm sufficiently confident in my own mental capacities to determine whether or not I'm deceiving myself concerning so simple a matter as my favorite color.

I do not have the authority to say, "Jane likes the color green." I may know Jane quite well, and the probability of my statement being accurate may be quite high, but my saying it is so does not make it so.

I chose to believe in the existence of God - deliberately and consciously. This decision, however, has absolutely zero effect on the actual existence of God.

Critical realism shows us that the world and our perception of the world are two different things. Ideally any rational thinker should have a close correlation between their perception of the world and reality, but outside of first-hand knowledge they are never equivalent.

You are correct - it is harder for me to say "God exists" than it is for me to say "I believe God exists" for the same reason it is harder for a scientist to say "the Higgs boson exists" than it is to say "according to our model, the Higgs boson should exist."

The scientist has evidence that such a particle exists, and may strongly believe in its existence, but he does not have the authority to say definitively that it exists. It may exist, or not exist, independent of any such belief.

Correct me if I'm wrong, but from a Bayesian perspective the only difference between first-hand knowledge and N-th hand knowledge (where N>1) is the numbers. There is nothing special about first-hand.

Suppose you see a dog in the street, and formulate this knowledge to yourself. What just happened? Photons from the sun (or other light sources) hit the dog, bounced, hit your eye, initiated a chemical reaction, etc. Your knowledge is neither special nor trivial, but is a chain of physical events.

Now, what happens when your friend tells you he sees a dog? He had to form the knowledge in his head. Then he vocalized it, sound waves moved through the air, hit your ear drum, initiated chemical reactions... supposing he is a truth-sayer, the impact on you, evidence-wise, is almost exactly the same. Simply, the chain of events leading to your own knowledge is longer, but that's the only difference. Once again, there is no magic here. Your friend is just another physical link in the chain.
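A minimal sketch of that point, with entirely made-up numbers (the `update` helper and every probability below are illustrative assumptions, not anything from the comment): seeing the dog yourself and hearing about it from a friend feed into the same Bayesian update, just with different likelihood ratios.

```python
# Illustrative sketch: direct observation and a friend's report both update
# belief the same way -- via Bayes' rule -- they just carry different
# likelihood ratios. All numbers are made up.

def update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return P(H | evidence) given a prior P(H) and the two likelihoods."""
    numerator = prior * p_evidence_given_h
    denominator = numerator + (1 - prior) * p_evidence_given_not_h
    return numerator / denominator

prior = 0.01  # prior probability that there's a dog in the street

# Direct observation: your eyes are fairly reliable (assumed numbers).
p_see_dog = update(prior, p_evidence_given_h=0.95, p_evidence_given_not_h=0.02)

# Friend's report: one more (noisier) link in the chain, so weaker likelihoods.
p_friend_says_dog = update(prior, p_evidence_given_h=0.85, p_evidence_given_not_h=0.10)

print(f"P(dog | you see it)     = {p_see_dog:.3f}")
print(f"P(dog | friend says so) = {p_friend_says_dog:.3f}")
```

Under these assumed numbers both pieces of evidence raise the probability of the dog; the friend's report simply moves it less, because the extra link in the chain adds noise.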

(A corollary is that introspection in humans is broken. Often we honestly say we want X, but do Y. The various manifestations of this phenomenon have been talked about extensively on OB. It is conceivable that in the future scientists would be able to predict our behavior better than ourselves, by studying our brains directly. So we don't really have any special authority over ourselves.)

If there was an agent known to be a perfect pipe of evidence, we should treat its words as direct observations. People are not perfect pipes of evidence, so that complicates issues. However, some things are pretty clear even though I have no first-hand knowledge of them:

Los Angeles exists. (Note that I've never been to the Americas.)
It is now night-time in Los Angeles. (About 2 AM, to be precise.)
In 2006, the city of Los Angeles had a population of approximately 3.8 million.

And so on, until:

Modern evolutionary theory is generally true.
There is no God.

And if I am wrong, it is simply because I failed my math, not because I lack in "authority". So there, I said it. Your turn :-)

[anonymous]

Like it!

The scientist who says "according to our model M, the Higgs boson should exist" has, as his actual beliefs, a wider distribution of hypotheses than model M. He thinks model M could be right, but he is not sure -- his actual beliefs are that there's a certain probability of {M and Higgs bosons}, and another probability of {not M}.
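To make that concrete with made-up numbers (none of these figures come from the comment itself): the scientist's overall confidence in the boson is the model's probability times the boson's probability under the model, plus whatever probability the boson retains if the model is wrong.

```python
# Illustrative numbers only: overall belief in the boson, marginalizing
# over whether model M is correct (law of total probability).
p_model = 0.8               # P(M): confidence that model M is right
p_higgs_given_model = 0.95  # P(Higgs | M)
p_higgs_given_not_m = 0.10  # P(Higgs | not M): it could exist for other reasons

p_higgs = p_model * p_higgs_given_model + (1 - p_model) * p_higgs_given_not_m
print(p_higgs)  # 0.78 -- broader than "M says it exists", short of certainty
```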

Is something analogous true for your belief in God? I mean, are you saying "There's this framework I believe in, and, if it's true, then God is true... but that framework may or may not be true?"

Does anyone have a good model of what people in fact do, when they talk about "choosing" a particular belief? At least two possibilities come to mind:

(1) Choosing to act and speak in accordance with a particular belief.

(2) Choosing to "lie" to other parts of one's mind -- to act and speak in accordance with a particular belief internally, so that one's emotional centers, etc., get at least some of their inputs "as though" one held that belief.

Is "choosing to trust someone" any more compatible with lack of self-deception than "choosing" a particular belief?

How about "choosing to have such-and-such a preference/value", or "choosing to regard such-and-such a part of myself as 'the real me, who I should align with'"?

Also, is there a line between self-deception and playing useful tricks on the less rational parts of oneself? An example of a useful trick that doesn't bother me is visualizing ice cream as full of worms, or otherwise disgusting, if I don't want to want to eat it. "Chosen beliefs" do bother me in a way the ice cream trick doesn't -- but my best guess is that the difference is just how intelligent/reason-able a portion of oneself one is lying to.

I don't think there's one model that covers 1) and 2) like you're saying. I think two very different mental processes are going on, and we only use the term "belief" for both of them because we've committed the fallacy of compression.

That is, "I believe (in) X" can mean either

1) My mental model of reality includes X.

or

2) I affiliate with a group that centers around professing X [so I've got a gang watching out for me and if you're part of it we have a basis for cooperating].

So, I don't think there's one answer for your question, because you're describing two different processes, with different methods and goals. Choosing beliefs type 1) is the process of seeking actual truth, while type 2) is the process of gaining power through group affiliation.

Or maybe Robin_Hanson's cynicism is rubbing off on me.

"I chose to believe in the existance of God - deliberately and conciously."

I cannot conceive of how it is possible to deliberately and consciously choose to believe in something.

I grew up in a religious family. I served as a missionary for my church. I married a woman of the same religion. For most of my first 28 years I believed not only that there was a God but that he had established the church of which I and my family were all members.

But once I started examining my beliefs more closely, I realized that I was engaging in the most dishonest sort of special pleading in their favor. And then it was no longer possible to continue believing.

Erik

Is it harder for you to say "Evidence indicates that God exists" than for you to say "I believe God exists"? Just curious, it's a bit of a pet theory of mine. If you don't want to expend energy just to provide another data point for me, no hard feelings.

If you would be really kind, you could try to indicate how comfortable you are with different qualifiers jimrandomh gave.

There seems to be some confusion here concerning authority. I have the authority to say "I like the color green." It would not make sense for me to say "I believe I like the color green" because I have first-hand knowledge concerning my own likes and dislikes and I'm sufficiently confident in my own mental capacities to determine whether or not I'm deceiving myself concerning so simple a matter as my favorite color.

I do not have the authority to say, "Jane likes the color green." I may know Jane quite well, and the probability of my statement being accurate may be quite high, but my saying it is so does not make it so.

You do not cause yourself to like the color green merely by saying that you do. You are describing yourself, but the act of description does not make the description correct. You could speak falsely, but doing so would not change your preferences as to color.

There are some sentence-types that correspond to your concept of "authority." If I accept your offer to create a contract by saying, "we have a contract," I have in fact made it so by speaking. Likewise, "I now pronounce you man and wife." See J.L. Austin's "How to Do Things With Words" for more examples of this. The philosophy of language term for talking like this is that you are making a "performative utterance," because by speaking you are in fact performing an act, rather than merely describing the world.

But our speech conventions do not require us to speak performatively in order to make flat assertions. If it is raining outside, I can say, "it is raining," even though my saying so doesn't make it so. I think the mistake you are making is in assuming that we cannot assert that something is true unless we are 100% confident in our assertion.

I presume that you use the Higgs boson example because the boson hasn't been experimentally observed? In other words, the Higgs boson is an example where the evidence for existence is from reasoning to the most likely hypothesis, i.e. abduction.

If your belief in God is similar, that means you adopt the hypothesis that God exists because it better explains the available data. The physicist, of course, has access to much stronger evidence than abduction, for instance the LHC experiments, and will give much more weight to such evidence. That's an example of induction, which is key to hypothesis confirmation. Once the LHC results are in, the physicist fully expects to be saying either "the Higgs boson exists" or "the Higgs boson doesn't exist, or if it does it isn't the same thing we thought it was". However, he may well expect with 95% probability to be saying the former and not the latter.

I propose that you hesitate to say X when you have no inductive evidence that X. I also venture that in the case of the proposition "God exists", your belief is qualitatively different from that of pre-modern Christians, in that you are less likely to accept 'tests' of God's existence as valid. The medieval church, for instance, thought heliocentrism was heretical, in that it explicitly contradicted Christianity. This amounts to saying that a proof that the Earth orbits the Sun would be a disproof of Christianity, whereas I don't believe that you would see any particular material fact as evidence against God's existence.

There are languages where those meanings have different terms. It's that simple.

Good point. So do the native speakers of such languages not make this mistake?

[anonymous]

Danish is a nice example of that, actually (even though I don't use my native tongue as much as would be beneficial). To say "I believe/endorse X" you usually use the religious-belief/personal-credibility wording "tror på X", while if you are going to be nitty-gritty and discuss subjective credence, a good formulation is the sense related to being convinced, "overbevist om X," which transliterates roughly to "overmuch burdened by evidence of X".

This post is also a followup to Beware "I Believe". Here is what I've learned.

Thinking about "believing in X" triggers positive affect, so one says "I believe in X". The process that forms the "I believe in" thoughts is separate from the process that analyzes the content of propositions about the territory.

The "I believe in" process can really mess with one's map. This happens in two ways:

  1. It sticks post-it notes over sections of one's already-formed map that sever those sections' coherence with the rest of the map, e.g. a Christian who believes evolution happened but that at various points God came in and did stuff that's responsible for morality.

  2. It tampers with one's already bias-ridden, fragile belief-forming methodology. One might transform incoming explanations of events so as to reconcile them with "I believe in X", which probably won't increase the map's entanglement with the territory.

And as Eliezer pointed out in a comment to Robin's post, the interference between the separate belief and "belief in" processes is paralleled by the confusing English word "believe" which refers to both processes. And there is no adequate synonym for saying something like "I believe in democracy."

It's not as if people are trained to recognize when they believe something.  It's not like they're ever taught in high school:  "What it feels like to actually believe something—to have that statement in your belief pool—is that it just seems like the way the world is.  You should recognize this feeling, which is actual (unquoted) belief, and distinguish it from having good feelings about a belief that you recognize as a belief (which means that it's in quote marks)."

In contrast to something like "this is interesting" or "this feels like an interesting thing to say."

[anonymous]

Interestingly enough, the similar statement "It's raining outside but I don't know that it is" is completely consistent. It can easily be applied to events that are likely but not certain (e.g. "This die will come up a 2 or higher, but I don't know that it will").

There has been sufficient evidence (in the form of my own experiences) to say that "a thing is true." Based upon my own education, wherein "sufficient evidence" is described as the summary of a study, or a line in a textbook, or the words of a teacher, my own experiences showing that "a thing is true" are far more real, and so far more evidence than is required.