Beware of Other-Optimizing
Previously in series: Mandatory Secret Identities
I've noticed a serious problem in which aspiring rationalists vastly overestimate their ability to optimize other people's lives. And I think I have some idea of how the problem arises.
You read nineteen different webpages advising you about personal improvement—productivity, dieting, saving money. And the writers all sound bright and enthusiastic about Their Method, they tell tales of how it worked for them and promise amazing results...
But most of the advice rings so false as to not even seem worth considering. So you sigh, mournfully pondering the wild, childish enthusiasm that people can seem to work up for just about anything, no matter how silly. Pieces of advice #4 and #15 sound interesting, and you try them, but... they don't... quite... well, it fails miserably. The advice was wrong, or you couldn't do it, and either way you're not any better off.
And then you read the twentieth piece of advice—or even more, you discover a twentieth method that wasn't in any of the pages—and STARS ABOVE IT ACTUALLY WORKS THIS TIME.
At long, long last you have discovered the real way, the right way, the way that actually works. And when someone else gets into the sort of trouble you used to have—well, this time you know how to help them. You can save them all the trouble of reading through nineteen useless pieces of advice and skip directly to the correct answer. As an aspiring rationalist you've already learned that most people don't listen, and you usually don't bother—but this person is a friend, someone you know, someone you trust and respect to listen.
And so you put a comradely hand on their shoulder, look them straight in the eyes, and tell them how to do it.
Plant Seeds of Rationality
After his wife died, Elzéard Bouffier decided to cultivate a forest in a desolate, treeless valley. He built small dams along the side of the nearby mountain, thus creating new streams that ran down into the valley. Then, he planted one seed at a time.
After four decades of steady work, the valley throbbed with life. You could hear the buzzing of bees and the tweeting of birds. Thousands of people moved to the valley to enjoy nature at its finest. The government assumed the regrowth was a strange natural phenomenon, and the valley's inhabitants were unaware that their happiness was due to the selfless deeds of one man.
This is The Man Who Planted Trees, a popular inspirational tale.
But it's not just a tale. Abdul Kareem cultivated a forest on a once-desolate stretch of 32 acres along India's West Coast, planting one seed at a time. It took him only twenty years.
Like trees in the ground, rationality does not grow in the mind overnight. Cultivating rationality requires care and persistence, and there are many obstacles. You probably won't bring someone from average (ir)rationality to technical rationality in a fortnight. But you can plant seeds.
You can politely ask rationalist questions when someone says something irrational. Don't forget to smile!
You can write letters to the editor of your local newspaper to correct faulty reasoning.
You can visit random blogs, find an error in reasoning, offer a polite correction, and link back to a few relevant Less Wrong posts.
One person planting seeds of rationality can make a difference, and we can do even better if we organize. An organization called Trees for the Future has helped thousands of families in thousands of villages to plant more than 50 million trees around the world. And when it comes to rationality, we can plant more seeds if we, for example, support the spread of critical thinking classes in schools.
Do you want to collaborate with others to help spread rationality on a mass scale?
You don't even need to figure out how to do it. Just contact leaders who already know what to do, and volunteer your time and energy.
Email the Foundation for Critical Thinking and say, "How can I help?" Email Louie Helm and sign up for the Singularity Institute Volunteer Network.
Change does not happen when people gather to talk about how much they suffer from akrasia. Change happens when lots of individuals organize to make change happen.
Lost Purposes
It was in either kindergarten or first grade that I was first asked to pray, given a transliteration of a Hebrew prayer. I asked what the words meant. I was told that so long as I prayed in Hebrew, I didn't need to know what the words meant, it would work anyway.
That was the beginning of my break with Judaism.
As you read this, some young man or woman is sitting at a desk in a university, earnestly studying material they have no intention of ever using, and no interest in knowing for its own sake. They want a high-paying job, and the high-paying job requires a piece of paper, and the piece of paper requires a previous master's degree, and the master's degree requires a bachelor's degree, and the university that grants the bachelor's degree requires you to take a class in 12th-century knitting patterns to graduate. So they diligently study, intending to forget it all the moment the final exam is administered, but still seriously working away, because they want that piece of paper.
Maybe you realized it was all madness, but I bet you did it anyway. You didn't have a choice, right?
Efficient Charity: Do Unto Others...
This was originally posted as part of the efficient charity contest back in November. Thanks to Roko, multifoliaterose, Louie, jmmcd, jsalvatier, and others I forget for help, corrections, encouragement, and bothering me until I finally remembered to post this here.
Imagine you are setting out on a dangerous expedition through the Arctic on a limited budget. The grizzled old prospector at the general store shakes his head sadly: you can't afford everything you need; you'll just have to purchase the bare essentials and hope you get lucky. But what is essential? Should you buy the warmest parka, if it means you can't afford a sleeping bag? Should you bring an extra week's food, just in case, even if it means going without a rifle? Or can you buy the rifle, leave the food, and hunt for your dinner?
And how about the field guide to Arctic flowers? You like flowers, and you'd hate to feel like you're failing to appreciate the harsh yet delicate environment around you. And a digital camera, of course - if you make it back alive, you'll have to put the Arctic expedition pics up on Facebook. And a hand-crafted scarf with authentic Inuit tribal patterns woven from organic fibres! Wicked!
...but of course buying any of those items would be insane. The problem is what economists call opportunity costs: buying one thing costs money that could be used to buy others. A hand-crafted designer scarf might have some value in the Arctic, but it would cost so much it would prevent you from buying much more important things. And when your life is on the line, things like impressing your friends and buying organic pale in comparison. You have one goal - staying alive - and your only problem is how to distribute your resources to keep your chances as high as possible. These sorts of economics concepts are natural enough when faced with a journey through the freezing tundra.
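The expedition framing is, at bottom, a budgeted-allocation problem. As a toy illustration (all item names, costs, and "survival values" below are hypothetical numbers invented for this sketch, not from the essay), here is a brute-force 0/1 knapsack over the gear list:

```python
from itertools import combinations

# Hypothetical gear list: (name, cost, survival_value).
# The numbers are illustrative assumptions, not data from the essay.
gear = [
    ("parka", 300, 9),
    ("sleeping bag", 200, 8),
    ("extra food", 150, 6),
    ("rifle", 250, 7),
    ("flower guide", 100, 1),
    ("camera", 180, 1),
    ("designer scarf", 400, 2),
]
BUDGET = 900

# Try every subset of gear and keep the affordable one with the most value.
best_value, best_set = 0, ()
for k in range(len(gear) + 1):
    for subset in combinations(gear, k):
        cost = sum(c for _, c, _ in subset)
        value = sum(v for _, _, v in subset)
        if cost <= BUDGET and value > best_value:
            best_value, best_set = value, subset

print("buy:", [name for name, _, _ in best_set])
print("total survival value:", best_value)
```

With these made-up numbers, the optimizer buys the parka, sleeping bag, food, and rifle, and leaves the scarf, camera, and flower guide at the store: opportunity cost made mechanical.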
Reference Points
I just spent some time reading Thomas Schelling's "Choice and Consequence" and I heartily recommend it. Here's a Google Books link to the chapter I was reading, "The Intimate Contest for Self-Command."
It's fascinating, and if you like LessWrong, rationality, understanding things, decision theories, figuring people and the world out - well, then I think you'd like Schelling. Actually, you'll probably be amazed at how much of his stuff you're already familiar with - he established a heck of a lot of modern thinking on game theory.
Allow me to depart from Schelling for a moment and talk of Sam Snyder. He's a very intelligent guy who has lots of intelligent thoughts. Here's a link to his website - there are massive amounts of data and references there, so if you visit, I'd recommend just skimming until you find something interesting. You'll probably find something interesting pretty quickly.
I got a chance to have a conversation with him a while back, and we covered immense amounts of ground. He introduced me to a concept I've been thinking about nonstop since learning it from him - reference points.
Now, he explained it very eloquently, and I'm afraid I'm going to mangle it and fail to do justice to his explanation. But to make a long story really short: your reference points affect your motivation a lot.
An example would help.
What does the average person think of when he thinks of running? He thinks of huffing, puffing, being tired and sore, having a hard time getting going, looking fat in workout clothes, and being embarrassed at being out of shape. A lot of people try running at some point in their life, and most people don't keep doing it.
On the other hand, what does a regular runner think of? He thinks of the "runner's high" and gliding across the pavement, enjoying a great run, and feeling like a million bucks afterwards.
Since that conversation, I've been trying to change my reference points. For instance, if I feel like I'd like some fried food, I try not to imagine/reference eating the salty greased food. Yes, eating french fries and a grilled chicken sandwich will be salty and fatty and delicious. It's a superstimulus; we're not really evolved to handle that stuff appropriately.
So when most people think of the McChicken Sandwich, large fry, large drink, they think about the grease and salt and sugar and how good it'll taste.
I still like that stuff. In fact, since I quit a lot of vices, sometimes I crave even harder for the few I have left. But I was able to cut my junk food consumption way down by changing my reference point. When I start to have a desire for that sort of food, I think about how my stomach and energy levels are going to feel 90 minutes after eating it. That answer is - not too good. So I go out to a local restaurant and order plain chicken, rice, and vegetables, and I feel good later.
Beautiful Probability
Followup to: Beautiful Math, Expecting Beauty, Is Reality Ugly?
Should we expect rationality to be, on some level, simple? Should we search and hope for underlying beauty in the arts of belief and choice?
Let me introduce this issue by borrowing a complaint of the late great Bayesian Master, E. T. Jaynes (1990):
"Two medical researchers use the same treatment independently, in different hospitals. Neither would stoop to falsifying the data, but one had decided beforehand that because of finite resources he would stop after treating N=100 patients, however many cures were observed by then. The other had staked his reputation on the efficacy of the treatment, and decided he would not stop until he had data indicating a rate of cures definitely greater than 60%, however many patients that might require. But in fact, both stopped with exactly the same data: n = 100 [patients], r = 70 [cures]. Should we then draw different conclusions from their experiments?" (Presumably the two control groups also had equal results.)
According to old-fashioned statistical procedure - which I believe is still being taught today - the two researchers have performed different experiments with different stopping conditions. The two experiments could have terminated with different data, and therefore represent different tests of the hypothesis, requiring different statistical analyses. It's quite possible that the first experiment will be "statistically significant", the second not.
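To make the divergence concrete, here is a stdlib-only Python sketch. It stands in a simpler stopping rule for the second researcher's sequential test (stop at the 30th non-cure; that substitution is an assumption for illustration, not Jaynes's exact rule), and shows the frequentist p-values coming apart while the likelihood function over the cure rate stays identical:

```python
from math import comb

def binom_tail(n, p, k):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, r) * p**r * (1 - p)**(n - r) for r in range(k, n + 1))

N, CURES, P0 = 100, 70, 0.6
FAILS = N - CURES  # 30 non-cures

# Design 1: fixed n = 100. Evidence against H0 (cure rate <= 0.6) is
# "70 or more cures out of 100".
p_fixed_n = binom_tail(N, P0, CURES)

# Design 2 (illustrative stand-in for the sequential rule): sample until the
# 30th non-cure. "Still going at trial 100" means at most 29 failures in the
# first 99 trials, i.e. 70 or more cures in the first 99 trials.
p_fixed_fails = binom_tail(N - 1, P0, CURES)

print(f"fixed-n p-value:        {p_fixed_n:.4f}")
print(f"fixed-failures p-value: {p_fixed_fails:.4f}")

# Yet both designs yield the same likelihood kernel p^70 * (1-p)^30, so every
# likelihood ratio (and every Bayesian posterior) is identical across designs.
for p in (0.5, 0.6, 0.7):
    print(f"likelihood kernel at p={p}: {p**CURES * (1 - p)**FAILS:.3e}")
```

The two p-values differ even though the observed data, and hence the likelihood function, are exactly the same; that asymmetry is the heart of Jaynes's complaint.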
Whether or not you are disturbed by this says a good deal about your attitude toward probability theory, and indeed, rationality itself.
The Affect Heuristic, Sentiment, and Art
I was having a discussion with a friend and reading some related blog articles about the question of whether race affects IQ. (N.B. This post is NOT about the content of the arguments surrounding that question.) Now, like your typical LessWrong member, I subscribe to the Litany of Gendlin, I don’t want to hide from any truth, I believe in honest intellectual inquiry on all subjects. Also, like your typical LessWrong member, I don’t want to be a bigot. These two goals ought to be compatible, right?
But when I finished my conversation and went to lunch, something scary happened. Something I hesitate to admit publicly. I found myself having a negative attitude to all the black people in the cafeteria.
Needless to say, this wasn’t what I wanted. It makes no sense, and it isn’t the way I normally think. But human beings have an affect heuristic. We identify categories as broadly “good” or “bad,” and we tend to believe all good things or all bad things about a category, even when it doesn’t make sense. When we discuss the IQs of black and white people, we’re primed to think “yay white, boo black.” Even the act of reading perfectly sound research has that priming effect.
And conscious awareness and effort don’t seem to do much to fix this. The Implicit Association Test measures how quickly we group black faces with negative-affect words and white faces with positive-affect words, compared to our speed at grouping the black faces with the positive words and the white faces with the negative words. Nearly everyone, of every race, shows some implicit association of black with “bad.” And the researchers who created the test found no improvement with practice or effort.
The one thing that did reduce implicit bias scores was if test-takers primed themselves ahead of time by reading about eminent black historical figures. They were less likely to associate black with “bad” if they had just made a mental association between black and “good.” Which, in fact, was exactly how I snapped out of my moment of cafeteria racism: I recalled to my mind's ear a recording I like of Marian Anderson singing Schubert. The music affected me emotionally and allowed me to escape my mindset.
The Threat of Cryonics
It is obvious that many people find cryonics threatening. Most of the arguments encountered in debates on the topic are not calculated to persuade on objective grounds, but function as curiosity-stoppers. Here are some common examples:
- Elevated burden of proof. As if cryonics demands more than a small amount of evidence to be worth trying.
- Elevated cost expectation. Thinking that cryonics is (and could only ever be) affordable only for the very rich.
- Unresearched suspicions regarding the ethics and business practices of cryonics organizations.
- Sudden certainty that earth-shattering catastrophes are just around the corner.
- Assuming the worst about the moral attitudes of humanity's descendants towards cryonics patients.
- Associations with prescientific mummification, or sci-fi that handwaves the technical difficulties.
The question is: what causes this sensation that cryonics is a threat? What, specifically, does it threaten?
The Apologist and the Revolutionary
Rationalists complain that most people are too willing to make excuses for their positions, and too unwilling to abandon those positions for ones that better fit the evidence. And most people really are pretty bad at this. But certain stroke victims called anosognosiacs are much, much worse.
Anosognosia is the condition of not being aware of your own disabilities. To be clear, we're not talking minor disabilities here, the sort that only show up during a comprehensive clinical exam. We're talking paralysis or even blindness. Things that should be pretty hard to miss.
Take the example of the woman discussed in Lishman's Organic Psychiatry. After a right-hemisphere stroke, she lost movement in her left arm but continuously denied it. When the doctor asked her to move her arm, and she observed it not moving, she claimed that it wasn't actually her arm, it was her daughter's. Why was her daughter's arm attached to her shoulder? The patient claimed her daughter had been there in the bed with her all week. Why was her wedding ring on her daughter's hand? The patient said her daughter had borrowed it. Where was the patient's arm? The patient "turned her head and searched in a bemused way over her left shoulder".
Why won't these patients admit they're paralyzed, and what are the implications for neurotypical humans? Dr. Vilayanur Ramachandran, leading neuroscientist and current holder of the world land-speed record for hypothesis generation, has a theory.
An Especially Elegant Evpsych Experiment
Followup to: Adaptation-Executers not Fitness-Maximizers, The Evolutionary-Cognitive Boundary
"In a 1989 Canadian study, adults were asked to imagine the death of children of various ages and estimate which deaths would create the greatest sense of loss in a parent. The results, plotted on a graph, show grief growing until just before adolescence and then beginning to drop. When this curve was compared with a curve showing changes in reproductive potential over the life cycle (a pattern calculated from Canadian demographic data), the correlation was fairly strong. But much stronger - nearly perfect, in fact - was the correlation between the grief curves of these modern Canadians and the reproductive-potential curve of a hunter-gatherer people, the !Kung of Africa. In other words, the pattern of changing grief was almost exactly what a Darwinian would predict, given demographic realities in the ancestral environment... The first correlation was .64, the second an extremely high .92."
(Robert Wright, summarizing: "Human Grief: Is Its Intensity Related to the Reproductive Value of the Deceased?" Crawford, C. B., Salter, B. E., and Lang, K.L. Ethology and Sociobiology 10:297-307.)
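As a reminder of what a coefficient like .64 or .92 actually measures, here is a minimal Pearson-r computation. The two curves below are invented illustrative numbers shaped like the study's description (grief rising to adolescence then falling, tracking reproductive value), not the study's data:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical curves over child ages 0, 5, 10, 15, 20, 30, 40 —
# illustrative shapes only, not data from Crawford et al.
grief_rating = [5.0, 6.2, 7.5, 7.9, 7.0, 5.5, 3.8]
repro_value = [1.0, 1.2, 1.4, 1.5, 1.4, 1.0, 0.6]

print(f"r = {pearson_r(grief_rating, repro_value):.2f}")
```

Two curves that rise and fall together like these produce an r near 1; a .92 between real grief ratings and a hunter-gatherer reproductive-value curve is the same computation on real data.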
Disclaimer: I haven't read this paper because it (a) isn't online and (b) is not specifically relevant to my actual real job. But going on the given description, it seems like a reasonably awesome experiment. [Gated version here, thanks Benja Fallenstein. Odd, I thought I searched for that. Reading now... seems to check out on the basics. Correlations are as described, N=221.]
The most obvious inelegance of this study, as described, is that it was conducted by asking human adults to imagine parental grief, rather than asking real parents with children of particular ages. (Presumably that would have cost more / allowed fewer subjects.) However, my understanding is that the results here squared well with the data from closer studies of parental grief that were looking for other correlations (i.e., a raw correlation between parental grief and child age).
That said, consider some of this experiment's elegant aspects:
- A correlation of .92(!) This may sound suspiciously high - could evolution really do such exact fine-tuning? - until you realize that this selection pressure was not only great enough to fine-tune parental grief, but, in fact, carve it out of existence from scratch in the first place.
- People who say that evolutionary psychology hasn't made any advance predictions are (ironically) mere victims of "no one knows what science doesn't know" syndrome. You wouldn't even think of this as an experiment to be performed if not for evolutionary psychology.
- The experiment illustrates, as beautifully and as cleanly as any I have ever seen, the distinction between a conscious or subconscious ulterior motive and an executing adaptation with no realtime sensitivity to the original selection pressure that created it.