The Two-Party Swindle
The Robbers Cave Experiment had as its subject 22 twelve-year-old boys, selected from 22 different schools in Oklahoma City, all doing well in school, all from stable middle-class Protestant families. In short, the boys were as similar to each other as the experimenters could arrange, though none started out knowing any of the others. The experiment, conducted in the aftermath of WWII, was meant to investigate conflicts between groups. How would the scientists spark an intergroup conflict to investigate? Well, the first step was to divide the 22 boys into two groups of 11 campers -
- and that was quite sufficient. There was hostility almost from the moment each group became aware of the other group's existence. Though they had not needed any name for themselves before, they named themselves the Eagles and the Rattlers. After the researchers (disguised as camp counselors) instigated contests for prizes, rivalry reached a fever pitch and all traces of good sportsmanship disintegrated. The Eagles stole the Rattlers' flag and burned it; the Rattlers raided the Eagles' cabin, stole the blue jeans of the group leader, painted them orange, and carried them as a flag the next day.
Each group developed a stereotype of itself and a contrasting stereotype of the opposing group (though the boys had been initially selected to be as similar as possible). The Rattlers swore heavily and regarded themselves as rough-and-tough. The Eagles swore off swearing, and developed an image of themselves as proper-and-moral.
Consider, in this light, the episode of the Blues and the Greens in the days of Rome. Since the time of the ancient Romans, and continuing into the era of Byzantium and the Eastern Roman Empire, the Roman populace had been divided into the warring Blue and Green factions. Blues murdered Greens and Greens murdered Blues, despite all attempts at policing. They died in single combats, in ambushes, in group battles, in riots.
Why Our Kind Can't Cooperate
Previously in series: Rationality Verification
From when I was still forced to attend, I remember our synagogue's annual fundraising appeal. It was a simple enough format, if I recall correctly. The rabbi and the treasurer talked about the shul's expenses and how vital this annual fundraiser was, and then the synagogue's members called out their pledges from their seats.
Straightforward, yes?
Let me tell you about a different annual fundraising appeal. One that I ran, in fact; during the early years of a nonprofit organization that may not be named. One difference was that the appeal was conducted over the Internet. And another difference was that the audience was largely drawn from the atheist/libertarian/technophile/sf-fan/early-adopter/programmer/etc crowd. (To point in the rough direction of an empirical cluster in personspace. If you understood the phrase "empirical cluster in personspace" then you know who I'm talking about.)
I crafted the fundraising appeal with care. By my nature I'm too proud to ask other people for help; but I've gotten over around 60% of that reluctance over the years. The nonprofit needed money and was growing too slowly, so I put some force and poetry into that year's annual appeal. I sent it out to several mailing lists that covered most of our potential support base.
And almost immediately, people started posting to the mailing lists about why they weren't going to donate. Some of them raised basic questions about the nonprofit's philosophy and mission. Others talked about their brilliant ideas for all the other sources that the nonprofit could get funding from, instead of them. (They didn't volunteer to contact any of those sources themselves; they just had ideas for how we could do it.)
Now you might say, "Well, maybe your mission and philosophy did have basic problems—you wouldn't want to censor that discussion, would you?"
Hold on to that thought.
Because people were donating. We started getting donations right away, via Paypal. We even got congratulatory notes saying how the appeal had finally gotten them to start moving. A donation of $111.11 was accompanied by a message saying, "I decided to give **** a little bit more. One more hundred, one more ten, one more single, one more dime, and one more penny. All may not be for one, but this one is trying to be for all."
But none of those donors posted their agreement to the mailing list. Not one.
Perpetual Motion Beliefs
Followup to: The Second Law of Thermodynamics, and Engines of Cognition
Yesterday's post concluded:
To form accurate beliefs about something, you really do have to observe it. It's a very physical, very real process: any rational mind does "work" in the thermodynamic sense, not just the sense of mental effort... So unless you can tell me which specific step in your argument violates the laws of physics by giving you true knowledge of the unseen, don't expect me to believe that a big, elaborate clever argument can do it either.
One of the chief morals of the mathematical analogy between thermodynamics and cognition is that the constraints of probability are inescapable; probability may be a "subjective state of belief", but the laws of probability are harder than steel.
People learn under the traditional school regimen that the teacher tells you certain things, and you must believe them and recite them back; but if a mere student suggests a belief, you do not have to obey it. They map the domain of belief onto the domain of authority, and think that a certain belief is like an order that must be obeyed, but a probabilistic belief is like a mere suggestion.
They look at a lottery ticket, and say, "But you can't prove I won't win, right?" Meaning: "You may have calculated a low probability of winning, but since it is a probability, it's just a suggestion, and I am allowed to believe what I want."
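To make "low probability" concrete, consider a hypothetical pick-6-of-49 lottery (an illustration of mine, not an example from the post):

    from math import comb

    # Hypothetical pick-6-of-49 lottery (illustrative numbers, not from the post).
    # The jackpot requires matching all 6 numbers drawn from 1..49.
    n_tickets = comb(49, 6)          # number of possible tickets
    p_win = 1 / n_tickets

    print(f"Possible tickets: {n_tickets:,}")   # 13,983,816
    print(f"P(jackpot) = {p_win:.2e}")          # about 7.15e-08

The rule that you won't win is "merely" probabilistic, but the probability is about one in fourteen million; that is not a suggestion you can opt out of.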
Here's a little experiment: Smash an egg on the floor. The rule that says that the egg won't spontaneously reform and leap back into your hand is merely probabilistic. A suggestion, if you will. The laws of thermodynamics are probabilistic, so they can't really be laws, the way that "Thou shalt not murder" is a law... right?
So why not just ignore the suggestion? Then the egg will unscramble itself... right?
A Crash Course in the Neuroscience of Human Motivation
[PDF of this article updated Aug. 23, 2011]
Whenever I write a new article for Less Wrong, I'm pulled in two opposite directions.
One force pulls me toward writing short, exciting posts with lots of brain candy and just one main point. Eliezer has done that kind of thing very well many times: see Making Beliefs Pay Rent, Hindsight Devalues Science, Probability is in the Mind, Taboo Your Words, Mind Projection Fallacy, Guessing the Teacher's Password, Hold Off on Proposing Solutions, Applause Lights, Dissolving the Question, and many more.
Another force pulls me toward writing long, factually dense posts that fill in as many pieces of a particular argument as possible in one fell swoop. This is largely because I want to write about the cutting edge of human knowledge, but I keep realizing that the inferential gap is larger than I had anticipated, and I want to fill in that inferential gap quickly so I can get to the cutting edge.
For example, I had to draw on dozens of Eliezer's posts just to say I was heading toward my metaethics sequence. I've also published 21 new posts (many of them quite long and heavily researched) written specifically because I need to refer to them in my metaethics sequence.1 I tried to make these posts interesting and useful on their own, but my primary motivation for writing them was that I need them for my metaethics sequence.
And now I've written only four posts2 in my metaethics sequence and already the inferential gap to my next post in that sequence is huge again. :(
So I'd like to try an experiment. I won't do it often, but I want to try it at least once. Instead of writing 20 more short posts between now and the next post in my metaethics sequence, I'll attempt to fill in a big chunk of the inferential gap to my next metaethics post in one fell swoop by writing a long tutorial post (a la Eliezer's tutorials on Bayes' Theorem and technical explanation).3
So if you're not up for a 20-page tutorial on human motivation, this post isn't for you, but I hope you're glad I bothered to write it for the sake of others. If you are in the mood for a 20-page tutorial on human motivation, please proceed.
Timeless Identity
Followup to: No Individual Particles, Identity Isn't In Specific Atoms, Timeless Physics, Timeless Causality
People have asked me, "What practical good does it do to discuss quantum physics or consciousness or zombies or personal identity? I mean, what's the application for me in real life?"
Before the end of today's post, we shall see a real-world application with practical consequences, for you, yes, you in today's world. It is built upon many prerequisites and deep foundations; you will not be able to tell others what you have seen, though you may (or may not) want desperately to tell them. (Short of having them read the last several months of OB.)
In No Individual Particles we saw that the intuitive conception of reality as little billiard balls bopping around is entirely and absolutely wrong; the basic ontological reality, to the best of anyone's present knowledge, is a joint configuration space. These configurations have mathematical identities like "A particle here, a particle there", rather than "particle 1 here, particle 2 there", and the difference is experimentally testable. What might appear to be a little billiard ball, like an electron caught in a trap, is actually a multiplicative factor in a wavefunction that happens to approximately factor. The factorization of 18 includes two factors of 3, not one factor of 3, but this doesn't mean the two 3s have separate individual identities—quantum mechanics is sort of like that. (If that didn't make any sense to you, sorry; you need to have followed the series on quantum physics.)
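If a toy count helps, here is a minimal sketch (my illustration, assuming two particles and two traps; none of this code is from the original post) of how the labeled-particle picture and the configuration picture count states differently:

    from itertools import combinations_with_replacement, product

    positions = ["here", "there"]

    # "Billiard ball" picture: particle 1 and particle 2 have separate identities,
    # so ("here", "there") and ("there", "here") are distinct states.
    labeled_states = list(product(positions, repeat=2))

    # Configuration picture: only occupancy matters ("a particle here, a particle
    # there"), so swapping the labels does not produce a new configuration.
    configurations = list(combinations_with_replacement(positions, 2))

    print(len(labeled_states), labeled_states)   # 4 states
    print(len(configurations), configurations)   # 3 configurations

Three configurations instead of four: loosely speaking, this difference in counting is the kind of thing quantum statistics can test.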
In Identity Isn't In Specific Atoms, we took this counterintuitive truth of physical ontology, and proceeded to kick hell out of an intuitive concept of personal identity that depends on being made of the "same atoms"—the intuition that you are the same person, if you are made out of the same pieces. But because the brain doesn't repeat its exact state (let alone the whole universe), the joint configuration space which underlies you is nonoverlapping from one fraction of a second to the next. Or even from one Planck interval to the next. I.e., "you" of now and "you" of one second later do not have in common any ontologically basic elements with a shared persistent identity.
0 And 1 Are Not Probabilities
Followup to: Infinite Certainty
1, 2, and 3 are all integers, and so is -4. If you keep counting up, or keep counting down, you're bound to encounter a whole lot more integers. You will not, however, encounter anything called "positive infinity" or "negative infinity", so these are not integers.
Positive and negative infinity are not integers, but rather special symbols for talking about the behavior of integers. People sometimes say something like, "5 + infinity = infinity", because if you start at 5 and keep counting up without ever stopping, you'll get higher and higher numbers without limit. But it doesn't follow from this that "infinity - infinity = 5". You can't count up from 0 without ever stopping, and then count down without ever stopping, and then find yourself at 5 when you're done.
From this we can see that infinity is not only not-an-integer, it doesn't even behave like an integer. If you unwisely try to mix up infinities with integers, you'll need all sorts of special new inconsistent-seeming behaviors which you don't need for 1, 2, 3 and other actual integers.
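The post goes on to cash out this analogy with log odds: transform a probability p into log(p / (1 - p)), and the scale runs over all the real numbers, with 0 and 1 sitting out at the two infinities. A minimal sketch:

    import math

    def log_odds(p):
        """Probability -> log odds. Certainties (0 and 1) have no finite image."""
        return math.log(p / (1 - p))

    for p in [0.0001, 0.5, 0.9999]:
        print(f"p = {p}: log odds = {log_odds(p):+.3f}")

    # log_odds(1.0) divides by zero, and log_odds(0.0) asks for log(0):
    # like "positive infinity" among the integers, 0 and 1 are not points
    # on the scale but special symbols for its limiting behavior.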
Expecting Short Inferential Distances
Homo sapiens' environment of evolutionary adaptedness (aka EEA or "ancestral environment") consisted of hunter-gatherer bands of at most 200 people, with no writing. All inherited knowledge was passed down by speech and memory.
In a world like that, all background knowledge is universal knowledge. All information not strictly private is public, period.
In the ancestral environment, you were unlikely to end up more than one inferential step away from anyone else. When you discover a new oasis, you don't have to explain to your fellow tribe members what an oasis is, or why it's a good idea to drink water, or how to walk. Only you know where the oasis lies; this is private knowledge. But everyone has the background to understand your description of the oasis, the concepts needed to think about water; this is universal knowledge. When you explain things in an ancestral environment, you almost never have to explain your concepts. At most you have to explain one new concept, not two or more simultaneously.
Strategic ignorance and plausible deniability
This is the third part in a mini-sequence presenting material from Robert Kurzban's excellent book Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind.
The press secretary of an organization is tasked with presenting outsiders with the best possible image of the organization. While they're not supposed to outright lie, they do use euphemisms and try to only mention the positive sides of things.
A plot point in the TV series West Wing is that the President of the United States has a disease which he wants to hide from the public. The White House Press Secretary is careful to ask whether there's anything she needs to know about the President's health, instead of whether there's anything she should know. As the President's disease is technically something she should know but not something she needs to know, this allows the President to hide the disease from her without lying to her (and by extension, to the American public). As she then doesn't need to lie either, she can do her job better.
If our minds are modular, critical information can be kept away from the modules that are associated with consciousness and speech production. It can often be better if the parts of the system that exist to deal with others are blissfully ignorant, or even actively mistaken, about information that exists in other parts of the system.
In one experiment, people could choose between two options. Choosing option A meant they got $5, and someone else also got $5. Option B meant that they got $6 and the other person got $1. About two thirds were generous and chose option A.
A different group of people played a slightly different game. As before, they could choose between $5 or $6 for themselves, but they didn't know how their choice would affect the other person's payoff. They could find out, however – if they just clicked a button, they'd be told whether the choice was between $5/$5 and $6/$1, or $5/$1 and $6/$5. From a subject's point of view, clicking a button might tell them that picking the option they actually preferred meant they were costing the other person $4. Not clicking meant that they could honestly say that they didn't know what their choice cost the other person. It turned out that about half of the people refused to look at the other player's payoffs, and that many more subjects chose $6/? than $5/?.
There are many situations where not knowing something means you can avoid a lose-lose situation. If you know your friend is guilty of a serious crime and you are called to testify in court, you must either betray your friend or commit perjury. If you see a building on fire, and a small boy comes to tell you that a cat is caught in the window, your options are to either risk yourself to save the cat, or take the reputational hit of neglecting a socially perceived duty to rescue it. (Footnote in the book: "You could kill the boy, but then you've got other problems.") In the trolley problem, many people will consider both options wrong: in one setup, 87% of the people who were asked thought that pushing a man onto the tracks to save five was wrong, and 62% said that not pushing him was wrong. Better to never see the people on the tracks. In addition to having your reputation besmirched by not trying to save someone, many nations have actual "duty to rescue" laws which require you to act if you see someone in serious trouble.
In general, people (and societies) often believe that if you know about something bad, you have a duty to stop it. If you don't know about something, then obviously you can't be blamed for not stopping it. So we should expect that part of our behavior is designed to avoid finding out information that would impose an unpleasant duty on us.
I personally tend to notice this conflict when I see people in public places who look like they might be sleeping or passed out. Most likely, they're just sleeping and don't want to be bothered. If they're drunk or on drugs, they could even be aggressive. But then there's always the chance that they have some kind of condition and need medical assistance. Should I go poke them to make sure? You can't be blamed if you act like you didn't notice them, some part of me whispers. Remember the suggestion that you can fight the bystander effect by singling out a person and asking them directly for help? You can't pretend you haven't noticed a duty if the duty is pointed out to you directly. As for the bystander effect in general, there's less of a perceived duty to help if everyone else ignores the person, too. (But then this can't be the sole explanation, because people are most likely to act when they're alone and there's nobody else around to know about their duty. The bystander effect isn't actually discussed in the book; this paragraph is my own speculation.)
The police may also prefer not to know about some minor crime that is being committed. If it's known that they're ignoring drug use (say), they lose some of their authority and may end up punished by their superiors. If they don't ignore it, they may spend all of their time doing minor busts instead of concentrating on more serious crime. Parents may also pretend that they don't notice their kids engaging in some minor misbehavior, if they don't want to lose their authority but don't feel like interfering either.
In effect, the value of ignorance comes from the costs of others seeing you know something that puts you in a position in which you are perceived to have a duty and must choose to do one of two costly acts – punish, or ignore. Kurzban writes: "In my own lab, we have found that people know this. When our subjects are given the opportunity to punish someone who has been unkind in an economic game, they do so much less when their punishment won't be known by anyone. That is, they decline to punish when the cloak of anonymity protects them."
The (soon-to-expire) "don't ask, don't tell" policy of the United States military can be seen as an institutionalization of this rule. Soldiers are forbidden from revealing information about their sexuality, which would force their commanders to discharge them. On the other hand, commanders are also forbidden from inquiring into the matter and finding out.
A related factor is the desire for plausible deniability. A person who wants to have multiple sexual partners may resist getting himself tested for sexually transmitted diseases. If he got tested, he might find out he had a disease, and then he'd be accused of knowingly endangering others if he didn't tell them about it. If he isn't tested, he'll only be accused of not finding out that information, which is often considered less serious.
These are examples of situations where it's advantageous to be ignorant of something. But there are also situations where it is good to be actively mistaken. More about them in the next post.
Better Disagreement
Honest disagreement is often a good sign of progress.
- Gandhi
Now that most communication is remote rather than face-to-face, people are comfortable disagreeing more often. How, then, can we disagree well? If the goal is intellectual progress, those who disagree should aim not for name-calling but for honest counterargument.
To be more specific, we might use a disagreement hierarchy. Below is the hierarchy proposed by Paul Graham (with DH7 added by Black Belt Bayesian).1
DH0: Name-Calling. The lowest form of disagreement, this ranges from "u r fag!!!" to "He’s just a troll" to "The author is a self-important dilettante."
DH1: Ad Hominem. An ad hominem ('against the man') argument won’t refute the original claim, but it might at least be relevant. If a senator says we should raise the salary of senators, you might reply: "Of course he’d say that; he’s a senator." That might be relevant, but it doesn’t refute the original claim: "If there’s something wrong with the senator’s argument, you should say what it is; and if there isn’t, what difference does it make that he’s a senator?"
DH2: Responding to Tone. At this level we actually respond to the writing rather than the writer, but we're responding to tone rather than substance. For example: "It’s terrible how flippantly the author dismisses theology."
DH3: Contradiction. Graham writes: "In this stage we finally get responses to what was said, rather than how or by whom. The lowest form of response to an argument is simply to state the opposing case, with little or no supporting evidence." For example: "It’s terrible how flippantly the author dismisses theology. Theology is a legitimate inquiry into truth."
DH4: Counterargument. Finally, a form of disagreement that might persuade! Counterargument is "contradiction plus reasoning and/or evidence." Still, counterargument is often directed at a minor point, or turns out to be an example of two people talking past each other, as in the parable about a tree falling in the forest.
DH5: Refutation. In refutation, you quote (or paraphrase) a precise claim or argument by the author and explain why the claim is false, or why the argument doesn’t work. With refutation, you're sure to engage exactly what the author said, and offer a direct counterargument with evidence and reason.
DH6: Refuting the Central Point. Graham writes: "The force of a refutation depends on what you refute. The most powerful form of disagreement is to refute someone’s central point." A refutation of the central point may look like this: "The author’s central point appears to be X. For example, he writes 'blah blah blah.' He also writes 'blah blah.' But this is wrong, because (1) argument one, (2) argument two, and (3) argument three."
DH7: Improve the Argument, then Refute Its Central Point. Black Belt Bayesian writes: "If you’re interested in being on the right side of disputes, you will refute your opponents' arguments. But if you're interested in producing truth, you will fix your opponents' arguments for them. To win, you must fight not only the creature you encounter; you [also] must fight the most horrible thing that can be constructed from its corpse."2 Also see: The Least Convenient Possible World.
Having names for biases and fallacies can help us notice and correct them, and having labels for different kinds of disagreement can help us zoom in on the parts of a disagreement that matter.