Causal Diagrams and Causal Models
Suppose a general-population survey shows that people who exercise less weigh more. You don't have any known direction of time in the data - you don't know which came first, the increased weight or the diminished exercise. And you didn't randomly assign half the population to exercise less; you just surveyed an existing population.
The statisticians who discovered causality were trying to find a way to distinguish, within survey data, the direction of cause and effect - whether, as common sense would have it, more obese people exercise less because they find physical activity less rewarding; or whether, as in the virtue theory of metabolism, lack of exercise actually causes weight gain due to divine punishment for the sin of sloth.
[Two causal diagrams: Weight → Exercise vs. Exercise → Weight]
The usual way to resolve this sort of question is by randomized intervention. If you randomly assign half your experimental subjects to exercise more, and afterward the increased-exercise group doesn't lose any weight compared to the control group [1], you could rule out causality from exercise to weight, and conclude that the correlation between weight and exercise is probably due to physical activity being less fun when you're overweight [3]. The question is whether you can get causal data without interventions.
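To make the logic of the randomized intervention concrete, here is a minimal simulation sketch (my illustration, not from the post; the weights, sample size, and data-generating process are all hypothetical). Because treatment is assigned by coin flip, it cannot be correlated with anything that came before it, so any difference between the groups can only come from the treatment itself:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Hypothetical world in which exercise has NO causal effect on weight,
# matching the experimental outcome described above.
baseline_weight = rng.normal(80, 10, size=n)   # kg, illustrative numbers
assigned_exercise = rng.random(n) < 0.5        # random assignment (coin flip)
true_effect = 0.0                              # assumed exercise -> weight effect, kg
final_weight = baseline_weight - true_effect * assigned_exercise

diff = final_weight[assigned_exercise].mean() - final_weight[~assigned_exercise].mean()
print(f"exercise group minus control: {diff:+.2f} kg")  # ~0.00: no causal effect
```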
For a long time, the conventional wisdom in philosophy was that this was impossible unless you knew the direction of time and knew which event had happened first. Among some philosophers of science, there was a belief that the "direction of causality" was a meaningless question, and that in the universe itself there were only correlations - that "cause and effect" was something unobservable and undefinable, that only unsophisticated non-statisticians believed in due to their lack of formal training:
"The law of causality, I believe, like much that passes muster among philosophers, is a relic of a bygone age, surviving, like the monarchy, only because it is erroneously supposed to do no harm." -- Bertrand Russell (he later changed his mind)
"Beyond such discarded fundamentals as 'matter' and 'force' lies still another fetish among the inscrutable arcana of modern science, namely, the category of cause and effect." -- Karl Pearson
The famous statistician Fisher, who was also a smoker, testified before Congress that the correlation between smoking and lung cancer couldn't prove that the former caused the latter. Remnants of this type of reasoning survive in the old-school dictum "Correlation does not imply causation", quoted without the now-standard appendix, "but it sure is a hint".
This skepticism was overturned by a surprisingly simple mathematical observation.
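The observation, sketched minimally below (my code, not from the original exposition; the linear-Gaussian setup is an illustrative assumption), is that different causal graphs over three variables imply different conditional independencies, and those independencies are visible in purely observational data. A chain X → Y → Z makes X and Z independent once you condition on Y, while a collider X → Z ← Y does the reverse, making independent causes dependent once you condition on their common effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

def partial_corr(a, b, given):
    """Correlation between a and b after controlling for `given`
    (exact for the linear-Gaussian case simulated here)."""
    rab, rag, rbg = corr(a, b), corr(a, given), corr(b, given)
    return (rab - rag * rbg) / np.sqrt((1 - rag**2) * (1 - rbg**2))

# Chain: X -> Y -> Z. Correlated, but conditioning on Y screens X off from Z.
x = rng.normal(size=n)
y = 0.8 * x + rng.normal(size=n)
z = 0.8 * y + rng.normal(size=n)
print(corr(x, z))             # ~0.45: dependent
print(partial_corr(x, z, y))  # ~0.00: independent given Y

# Collider: X -> Z <- Y. Independent, until you condition on the common effect.
x2, y2 = rng.normal(size=n), rng.normal(size=n)
z2 = x2 + y2 + rng.normal(size=n)
print(corr(x2, y2))              # ~0.00: independent
print(partial_corr(x2, y2, z2))  # ~-0.50: dependent given Z
```

Which pattern of conditional (in)dependence actually holds in the survey data then favors one causal diagram over another, with no intervention and no time stamps required.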
Initiation Ceremony
The torches that lit the narrow stairwell burned intensely and in the wrong color, flame like melting gold or shattered suns.
192... 193...
Brennan's sandals clicked softly on the stone steps, snicking in sequence, like dominos very slowly falling.
227... 228...
Half a circle ahead of him, a trailing fringe of dark cloth whispered down the stairs, the robed figure itself staying just out of sight.
239... 240...
Not much longer, Brennan predicted to himself, and his guess was accurate:
Sixteen times sixteen steps was the number, and they stood before the portal of glass.
The great curved gate had been wrought with cunning, humor, and close attention to indices of refraction: it warped light, bent it, folded it, and generally abused it, so that there were hints of what was on the other side (stronger light sources, dark walls) but no possible way of seeing through—unless, of course, you had the key: the counter-door, thick for thin and thin for thick, in which case the two would cancel out.
From the robed figure beside Brennan, two hands emerged, gloved in reflective cloth to conceal skin's color. Fingers like slim mirrors grasped the handles of the warped gate—handles that Brennan had not guessed; in all that distortion, shapes could only be anticipated, not seen.
"Do you want to know?" whispered the guide; a whisper nearly as loud as an ordinary voice, but not revealing the slightest hint of gender.
Brennan paused. The answer to the question seemed suspiciously, indeed extraordinarily obvious, even for ritual.
Petrov Day is September 26
On September 26th, 1983, the world was nearly destroyed by nuclear war. September 26 is now Petrov Day, a yearly commemoration named for the man who averted that war. Last year, Citadel, the Boston-area rationalist house, performed a ritual on Petrov Day. We will be doing it again - and have published a revised version, for anyone else who wants to hold a Petrov Day celebration of their own.
The purpose of the ritual is to make catastrophic and existential risk emotionally salient, by putting it into historical context and providing positive and negative examples of how it has been handled. This is not for the faint of heart and not for the uninitiated; it is aimed at those who already know what catastrophic and existential risk is, have some background knowledge of what those risks are, and believe (at least on an abstract level) that preventing those risks from coming to pass is important.
Petrov Day is designed for groups of 5-10 people, and consists of a series of readings and symbolic actions which people take turns doing. It is easy to organize; you'll need a few simple props (candles and a candle-holder) and a printout of the program for each person, but other than that no preparation is necessary.
Organizer guide and program (for one-sided printing) (PDF)
Program for two-sided print and fold (PDF)
There will be a Petrov Day ritual hosted at Citadel (Boston area) and at Highgarden (New York area). If you live somewhere else, consider running one yourself!
Whining-Based Communities
Previously in series: Selecting Rationalist Groups
Followup to: Rationality is Systematized Winning, Extenuating Circumstances
Why emphasize the connection between rationality and winning? Well... that is what decision theory is for. But also to place a Go stone to block becoming a whining-based community.
Let's be fair to Ayn Rand: There were legitimate messages in Atlas Shrugged that many readers had never heard before, and this lent the book a part of its compelling power over them. The message that it's all right to excel—that it's okay to be, not just good, but better than others—of this the Competitive Conspiracy would approve.
But this is only part of Rand's message, and the other part is the poison pill, a deadlier appeal: It's those looters who don't approve of excellence who are keeping you down. Surely you would be rich and famous and high-status like you deserve if not for them, those unappreciative bastards and their conspiracy of mediocrity.
If you consider the reasonableness-based conception of rationality rather than the winning-based conception of rationality—well, you can easily imagine some community of people congratulating themselves on how reasonable they were, while blaming the surrounding unreasonable society for keeping them down. Wrapping themselves up in their own bitterness for reality refusing to comply with the greatness they thought they should have.
But this is not how decision theory works—the "rational" strategy adapts to the other players' strategies; it does not depend on the other players being rational. If a rational agent believes the other players are irrational, then it takes that expectation into account when maximizing expected utility. A. E. van Vogt got this one right: his rationalist protagonists are formidable because they accept reality swiftly and adapt to it swiftly, without reluctance or attachment.
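A toy illustration of that claim (my example, not from the post): in rock-paper-scissors against an opponent with a known irrational bias, the expected-utility maximizer does not play the "reasonable" one-third-each Nash mixture; it best-responds to the opponent's actual strategy:

```python
MOVES = ("rock", "paper", "scissors")
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(mine, theirs):
    """+1 for a win, -1 for a loss, 0 for a tie."""
    if mine == theirs:
        return 0
    return 1 if BEATS[mine] == theirs else -1

# An irrational opponent with a known bias toward rock.
opponent = {"rock": 0.6, "paper": 0.2, "scissors": 0.2}

for mine in MOVES:
    ev = sum(p * payoff(mine, theirs) for theirs, p in opponent.items())
    print(f"{mine}: expected payoff {ev:+.1f}")
# paper comes out at +0.4; the Nash mixture (1/3 each) would average 0.0
```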
Incremental Progress and the Valley
Followup to: Rationality is Systematized Winning
Yesterday I said: "Rationality is systematized winning."
"But," you protest, "the reasonable person doesn't always win!"
What do you mean by this? Do you mean that every week or two, someone who bought a lottery ticket with negative expected value, wins the lottery and becomes much richer than you? That is not a systematic loss; it is selective reporting by the media. From a statistical standpoint, lottery winners don't exist—you would never encounter one in your lifetime, if it weren't for the selective reporting.
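For concreteness, a back-of-the-envelope version of that negative expected value (the numbers are hypothetical, roughly Powerball-shaped, and ignore smaller prizes and taxes):

```python
ticket_price = 2.00          # dollars (assumed)
jackpot      = 100_000_000   # dollars (assumed)
p_win        = 1 / 292_000_000

expected_value = p_win * jackpot - ticket_price
print(f"EV per ticket: ${expected_value:.2f}")  # about -$1.66
```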
Even perfectly rational agents can lose. They just can't know in advance that they'll lose. They can't expect to underperform any other performable strategy, or they would simply perform it.
"No," you say, "I'm talking about how startup founders strike it rich by believing in themselves and their ideas more strongly than any reasonable person would. I'm talking about how religious people are happier—"
Ah. Well, here's the thing: An incremental step in the direction of rationality, if the result is still irrational in other ways, does not have to yield incrementally more winning.
The optimality theorems that we have for probability theory and decision theory are for perfect probability theory and decision theory. There is no companion theorem which says that, starting from some flawed initial form, every incremental modification of the algorithm that takes the structure closer to the ideal must yield an incremental improvement in performance. This has not yet been proven, because it is not, in fact, true.
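One way to see why no such companion theorem can exist is a toy model of two biases that partially cancel (my construction, not from the post). An agent who is both overoptimistic about winning and loss-averse correctly declines a losing bet; cure the loss aversion alone, and the surviving optimism makes it accept:

```python
def perceived_ev(p_believed, loss_weight, win=100, loss=100):
    """Expected value of the bet as the agent perceives it."""
    return p_believed * win - (1 - p_believed) * loss_weight * loss

TRUE_P = 0.40                     # the bet is genuinely bad: true EV = -20
print(perceived_ev(TRUE_P, 1.0))  # -20.0 -> an ideal agent declines

# Biased agent: overoptimistic (believes 0.55) AND loss-averse (losses x2).
print(perceived_ev(0.55, 2.0))    # -35.0 -> declines; right answer, wrong reasons

# "Incremental progress": fix the loss aversion, keep the optimism.
print(perceived_ev(0.55, 1.0))    # +10.0 -> now accepts a losing bet
```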
"So," you say, "what point is there then in striving to be more rational? We won't reach the perfect ideal. So we have no guarantee that our steps forward are helping."
Purchase Fuzzies and Utilons Separately
Previously in series: Money: The Unit of Caring
Yesterday:
There is this very, very old puzzle/observation in economics about the lawyer who spends an hour volunteering at the soup kitchen, instead of working an extra hour and donating the money to hire someone...
If the lawyer needs to work an hour at the soup kitchen to keep himself motivated and remind himself why he's doing what he's doing, that's fine. But he should also be donating some of the hours he worked at the office, because that is the power of professional specialization and it is how grownups really get things done. One might consider the check as buying the right to volunteer at the soup kitchen, or validating the time spent at the soup kitchen.
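The arithmetic behind "the power of professional specialization", with hypothetical wage numbers (mine, not from the post):

```python
lawyer_hourly_rate  = 300  # dollars earned per extra hour of legal work (assumed)
kitchen_hourly_wage = 15   # cost to hire one hour of soup-kitchen labor (assumed)

hours_funded = lawyer_hourly_rate / kitchen_hourly_wage
print(f"One billed hour funds {hours_funded:.0f} hours of kitchen labor")  # 20
```

At those rates, volunteering the hour directly forgoes nineteen hours of help that the donated wage could have bought.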
I hold open doors for little old ladies. I can't actually remember the last time this happened literally (though I'm sure it has, sometime in the last year or so). But within the last month, say, I was out on a walk and discovered a station wagon parked in a driveway with its trunk completely open, giving full access to the car's interior. I looked in to see if there were packages being taken out, but this was not so. I looked around to see if anyone was doing anything with the car. And finally I went up to the house and knocked, then rang the bell. And yes, the trunk had been accidentally left open.
Under other circumstances, this would be a simple act of altruism, which might signify true concern for another's welfare, or fear of guilt for inaction, or a desire to signal trustworthiness to oneself or others, or finding altruism pleasurable. I think that these are all perfectly legitimate motives, by the way; I might give bonus points for the first, but I wouldn't deduct any penalty points for the others. Just so long as people get helped.
But in my own case, since I already work in the nonprofit sector, the further question arises as to whether I could have better employed the same sixty seconds in a more specialized way, to bring greater benefit to others. That is: can I really defend this as the best use of my time, given the other things I claim to believe?
Why *I* fail to act rationally
There is a lot of talk here about sophisticated rationality failures - priming, overconfidence, and so on. There is much less talk about what I think is the more common reason people fail to act rationally in the real world - something most people outside this community would agree is the most common rationality failure mode: acting emotionally. (pjeby has just begun to discuss this, but I don't think it's the main thrust of his post.)
While there can be sound evolutionary reasons for having emotions (the thirst for revenge as a Doomsday Machine being the easiest to understand), and while we certainly don't want to succumb to the fallacy that rationalists are emotionless Spock-clones, I think overcoming (or at least being able to control) emotions would, for most people, be a more important first step to acting rationally than overcoming biases.
If I could avoid saying things I'll regret later when angry, avoid putting down colleagues through jealousy, avoid procrastinating because of laziness and avoid refusing to make correct decisions because of fear, I think this would do a lot more to make me into a winner than if I could figure out how to correctly calibrate my beliefs about trivia questions, or even get rid of my unwanted Implicit Associations.
So the question is: do we have good techniques for preventing our emotions from making bad decisions for us? Something as simple as "count to ten before you say anything when angry" is useful if it works. Something as sophisticated as "become a Zen Master" is probably unattainable, but might at least point us in the right direction - and then there's everything in between.
Why Our Kind Can't Cooperate
Previously in series: Rationality Verification
From when I was still forced to attend, I remember our synagogue's annual fundraising appeal. It was a simple enough format, if I recall correctly. The rabbi and the treasurer talked about the shul's expenses and how vital this annual appeal was, and then the synagogue's members called out their pledges from their seats.
Straightforward, yes?
Let me tell you about a different annual fundraising appeal. One that I ran, in fact; during the early years of a nonprofit organization that may not be named. One difference was that the appeal was conducted over the Internet. And another difference was that the audience was largely drawn from the atheist/libertarian/technophile/sf-fan/early-adopter/programmer/etc crowd. (To point in the rough direction of an empirical cluster in personspace. If you understood the phrase "empirical cluster in personspace" then you know who I'm talking about.)
I crafted the fundraising appeal with care. By my nature I'm too proud to ask other people for help; but I've gotten over around 60% of that reluctance over the years. The nonprofit needed money and was growing too slowly, so I put some force and poetry into that year's annual appeal. I sent it out to several mailing lists that covered most of our potential support base.
And almost immediately, people started posting to the mailing lists about why they weren't going to donate. Some of them raised basic questions about the nonprofit's philosophy and mission. Others talked about their brilliant ideas for all the other sources that the nonprofit could get funding from, instead of them. (They didn't volunteer to contact any of those sources themselves, they just had ideas for how we could do it.)
Now you might say, "Well, maybe your mission and philosophy did have basic problems—you wouldn't want to censor that discussion, would you?"
Hold on to that thought.
Because people were donating. We started getting donations right away, via PayPal. We even got congratulatory notes saying how the appeal had finally gotten them to start moving. A donation of $111.11 was accompanied by a message saying, "I decided to give **** a little bit more. One more hundred, one more ten, one more single, one more dime, and one more penny. All may not be for one, but this one is trying to be for all."
But none of those donors posted their agreement to the mailing list. Not one.
Raising the Sanity Waterline
To paraphrase the Black Belt Bayesian: Behind every exciting, dramatic failure, there is a more important story about a larger and less dramatic failure that made the first failure possible.
If every trace of religion was magically eliminated from the world tomorrow, then—however much improved the lives of many people would be—we would not even have come close to solving the larger failures of sanity that made religion possible in the first place.
We have good cause to spend some of our efforts on trying to eliminate religion directly, because it is a direct problem. But religion also serves the function of an asphyxiated canary in a coal mine—religion is a sign, a symptom, of larger problems that don't go away just because someone loses their religion.
Consider this thought experiment—what could you teach people that is not directly about religion, which is true and useful as a general method of rationality, and which would cause them to lose their religions? In fact—imagine that we're going to survey all your students five years later and see how many of them have lost their religions compared to a control group; if you make the slightest move toward fighting religion directly, you will invalidate the experiment. You may not make a single mention of religion or any religious belief in your classroom; you may not even hint at it in any obvious way. All your examples must center on real-world cases that have nothing to do with religion.
If you can't fight religion directly, what do you teach that raises the general waterline of sanity to the point that religion goes underwater?
Zombies Redacted
I looked at my old post Zombies! Zombies? and it seemed to have some extraneous content. This is a redacted and slightly rewritten version.