So what you probably mean is, "I intend to do school to improve my chances on the market". But this statement is still false, unless it is also true that "I intend to improve my chances on the market". Do you, in actual fact, intend to improve your chances on the market?
I expect not. Rather, I expect that your motivation is to appear to be the sort of person who you think you would be if you were ambitiously attempting to improve your chances on the market... which is not really motivating enough to actually DO the work. However, by persistently trying to do so, and presenting yourself with enough suffering at your failure to do it, you get to feel as if you are that sort of person without having to actually do the work. This is actually a pretty optimal solution to the problem, if you think about it. (Or rather, if you DON'T think about it!) -- PJ Eby
I have become convinced that problems of this kind are the number one problem humanity has. I'm also pretty sure that most people here, no matter how much they've been reading about signaling, still fail to appreciate the magnitude of the problem.
Here are two major screw-ups and one narrowly averted screw-up that I've been guilty of. See if you can find the pattern.
- When I began my university studies back in 2006, I felt strongly motivated to do something about Singularity matters. I genuinely believed that this was the most important thing facing humanity, and that it needed to be urgently taken care of. So in order to become able to contribute, I tried to study as much as possible. I had had trouble with procrastination, and so, in what has to be one of the most idiotic and ill-thought-out acts of self-sabotage possible, I taught myself to feel guilty whenever I was relaxing and not working. Combine an inability to properly relax with an attempted course load that was twice the university's recommended pace, and you can guess the results: after a year or two, I had an extended burnout that I still haven't fully recovered from. I ended up completing my Bachelor's degree in five years, which is the official target time for doing both your Bachelor's and your Master's.
- A few years later, I became one of the founding members of the Finnish Pirate Party, and on the basis of some writings the others thought were pretty good, got myself elected as the spokesman. Unfortunately – and as I should have known before taking up the post – I was a pretty bad choice for this job. I'm good at expressing myself in writing, and when I have the time to think. I hate talking with strangers on the phone, find it distracting to look people in the eyes when I'm talking with them, and have a tendency to start a sentence over two or three times before hitting on a formulation I like. I'm also bad at thinking quickly on my feet and coming up with snappy answers in live conversation. The spokesman task involved things like giving quick statements to reporters ten seconds after I'd been woken up by their phone call, and live interviews where I had to reply to criticisms so foreign to my thinking that they would never have occurred to me naturally. I was pretty terrible at the job, and finally delegated most of it to other people until my term ran out – though not before I'd already done noticeable damage to our cause.
- Last year, I was a Visiting Fellow at the Singularity Institute. At one point, I ended up helping Eliezer in writing his book. Mostly this involved me just sitting next to him and making sure he did get writing done while I surfed the Internet or played a computer game. Occasionally I would offer some suggestion if asked. Although I did not actually do much, the multitasking required still made me unable to spend this time productively myself, and for some reason it always left me tired the next day. I felt somewhat unhappy with this, in that I felt I was doing something that anyone could do. Eventually Anna Salamon pointed out to me that maybe this was something that I was more capable of doing than others, exactly because so many people would feel that "anyone" could do this and thus would prefer to do something else.
It may not be immediately obvious, but all three examples have something in common. In each case, I thought I was working for a particular goal (become capable of doing useful Singularity work, advance the cause of a political party, do useful Singularity work). But as soon as I set that goal, my brain automatically and invisibly re-interpreted it as the goal of doing something that gave the impression of doing prestigious work for a cause (spending all my waking time working, being the spokesman of a political party, writing papers or doing something else few others could do). "Prestigious work" could also be translated as "work that really convinces others that you are doing something valuable for a cause".
This is the fourth part in a mini-sequence presenting material from Robert Kurzban's excellent book Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind.
In the previous post, Strategic ignorance and plausible deniability, we discussed some ways by which people might have modules designed to keep them away from certain kinds of information. These arguments were relatively straightforward.
The next step up is the hypothesis that our "press secretary module" might be designed to contain information that is useful for certain purposes, even if other modules have information that not only conflicts with this information, but is also more likely to be accurate. That is, some modules are designed to acquire systematically biased - i.e. false - information, including information that other modules "know" is wrong.
Robert Kurzban's Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind is a book about how our brains are composed of a variety of different, interacting systems. While that premise is hardly new, many of our intuitions are still grounded in the idea of a unified, non-compartmental self. Why Everyone (Else) Is a Hypocrite takes the modular view and systematically attacks a number of ideas based on the unified view, replacing them with a theory based on the modular view. It clarifies a number of issues previously discussed on Overcoming Bias and Less Wrong, and even debunks some outright fallacious theories that we on Less Wrong have implicitly accepted. It is quite possibly the best single book on psychology that I've read. In this post and the posts that follow, I will be summarizing some of its most important contributions.
Chapter 1: Consistently Inconsistent (available for free here) presents evidence of our brains being modular, and points out some implications of this.
As previously discussed, severing the connection between the two hemispheres of a person's brain causes some odd effects. Present the left hemisphere with a picture of a chicken claw, and the right with a picture of a wintry scene. Now show the patient an array of cards with pictures of objects on them, and ask them to point (with each hand) to something related to what they saw. The hand controlled by the left hemisphere points to a chicken, the hand controlled by the right hemisphere points to a snow shovel. Fine so far.
But what happens when you ask the patient to explain why they pointed to those objects in particular? The left hemisphere is in control of the verbal apparatus. It knows that it saw a chicken claw, and it knows that it pointed at the picture of the chicken, and that the hand controlled by the other hemisphere pointed at the picture of a shovel. Asked to explain this, it comes up with the explanation that the shovel is for cleaning up after the chicken. While the right hemisphere knows about the snowy scene, it doesn't control the verbal apparatus and can't communicate directly with the left hemisphere, so this doesn't affect the reply.
Now one might ask, what did "the patient" think was going on? A crucial point of the book is that there's no such thing as the patient. "The patient" is just two different hemispheres, to some extent disconnected. You can either ask what the left hemisphere thinks, or what the right hemisphere thinks. But asking about "the patient's beliefs" is a wrong question. If you know what the left hemisphere believes, what the right hemisphere believes, and how this influences the overall behavior, then you know all that there is to know.
tl;dr: Just because it doesn't seem like we should be able to have beliefs we acknowledge to be irrational, doesn't mean we don't have them. If this happens to you, here's a tool to help conceptualize and work around that phenomenon.
There's a general feeling that by the time you've acknowledged that some belief you hold is not based on rational evidence, it has already evaporated. The very act of realizing it's not something you should believe makes it go away. If that's your experience, I applaud your well-organized mind! It's serving you well. This is exactly as it should be.
If only we were all so lucky.
Brains are sticky things. They will hang onto comfortable beliefs that don't make sense anymore, view the world through familiar filters that should have been discarded long ago, see significances and patterns and illusions even if they're known by the rest of the brain to be irrelevant. Beliefs should be formed on the basis of sound evidence. But that's not the only mechanism we have in our skulls to form them. We're equipped to come by them in other ways, too. It's been observed that believing contradictions is only bad because it entails believing falsehoods. If you can't get rid of one belief in a contradiction, and that's the false one, then believing a contradiction is the best you can do, because then at least you have the true belief too.
The mechanism I use to deal with this is to label my beliefs "official" and "unofficial". My official beliefs have a second-order stamp of approval. I believe them, and I believe that I should believe them. Meanwhile, the "unofficial" beliefs are those I can't get rid of, or am not motivated to try really hard to get rid of because they aren't problematic enough to be worth the trouble. They might or might not outright contradict an official belief, but regardless, I try not to act on them.
Interesting new study out on moral behavior. The one sentence summary of the most interesting part is that people who did one good deed were less likely to do another good deed in the near future. They had, quite literally, done their good deed for the day.
In the first part of the study, they showed that people exposed to environmentally friendly, "green" products were more likely to behave nicely. Subjects were asked to rate products in an online store; unbeknownst to them, half were in a condition where the products were environmentally friendly, and the other half in a condition where the products were not. Then they played a Dictator Game. Subjects who had seen environmentally friendly products shared more of their money.
In the second part, instead of just rating the products, they were told to select $25 worth of products to buy from the store. One in twenty-five subjects would actually receive the products they'd purchased. Then they, too, played the Dictator Game. Subjects who had bought environmentally friendly products shared less of their money.
In the third part, subjects bought products as before. Then, they participated in a "separate, completely unrelated" experiment "on perception" in which they earned money by identifying dot patterns. The experiment was designed such that participants could lie about their perceptions to earn more. People who had purchased the green products were more likely to lie.
This does not prove that environmentalists are actually bad people - remember that whether a subject purchased green products or normal products was completely randomized. It does suggest that people who have done one nice thing feel less of an obligation to do another.
This meshes nicely with a self-signalling conception of morality. If part of the point of behaving morally is to convince yourself that you're a good person, then once you're convinced, behaving morally loses a lot of its value.
It seems that back when the Prisoner's Dilemma was still being worked out, Merrill Flood and Melvin Dresher tried a 100-fold iterated PD on two smart but unprepared subjects, Armen Alchian of UCLA and John D. Williams of RAND.
The kicker being that the payoff matrix was asymmetrical, with dual cooperation awarding JW twice as many points as AA:
| (AA, JW) | JW: D | JW: C |
| --- | --- | --- |
| AA: D | (0, 0.5) | (1, -1) |
| AA: C | (-1, 2) | (0.5, 1) |
The resulting 100 iterations, with a log of comments written by both players, make for fascinating reading.
JW spots the possibilities of cooperation right away, while AA is slower to catch on.
But once AA does catch on to the possibilities of cooperation, AA goes on throwing in an occasional D... because AA thinks the natural meeting point for cooperation is a fair outcome, where both players get around the same number of total points.
JW goes on trying to enforce (C, C) - the option that maximizes total utility for both players - by punishing AA's attempts at defection. JW's log shows comments like "He's crazy. I'll teach him the hard way."
Meanwhile, AA's log shows comments such as "He won't share. He'll punish me for trying!"
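To see where the conflict comes from, here's a minimal sketch (my own illustration, not from the original experiment) that totals each player's score under the payoff matrix above. Pure mutual cooperation gives JW twice AA's total, while AA "evening things out" by defecting every other round zeroes JW out entirely:

```python
# Payoff matrix from the Flood/Dresher experiment, keyed by
# (AA's move, JW's move) -> (AA's payoff, JW's payoff).
PAYOFFS = {
    ("D", "D"): (0, 0.5),
    ("D", "C"): (1, -1),
    ("C", "D"): (-1, 2),
    ("C", "C"): (0.5, 1),
}

def totals(moves):
    """Sum each player's payoff over a sequence of (AA, JW) move pairs."""
    aa = sum(PAYOFFS[m][0] for m in moves)
    jw = sum(PAYOFFS[m][1] for m in moves)
    return aa, jw

# Pure mutual cooperation over 100 rounds: JW earns twice as much as AA.
print(totals([("C", "C")] * 100))  # (50.0, 100)

# AA "evening things out" by defecting every other round while JW
# keeps cooperating: AA pulls ahead, and JW's total drops to zero.
alternating = [("C", "C"), ("D", "C")] * 50
print(totals(alternating))  # (75.0, 0)
```

From (C, C), AA's unilateral defection gains AA only 0.5 per round while costing JW 2 per round, which is why JW concludes that punishing it ("I'll teach him the hard way") beats tolerating it, while AA reads the same punishment as JW refusing to share.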
Followup to: Pretending to be Wise
For comparison purposes, here's an essay with similar content to yesterday's "Pretending to be Wise", which I wrote in 2006 in a completely different style, edited down slightly (content has been deleted but not added). Note that the 2006 concept of "pretending to be Wise" hasn't been narrowed down as much compared to the 2009 version; also when I wrote it, I was in more urgent need of persuasive force.
I thought it would be an interesting data point to check whether this essay seems more convincing than yesterday's, following Robin's injunction "to avoid emotion, color, flash, stories, vagueness, repetition, rambling, and even eloquence" - this seems like rather the sort of thing he might have had in mind.
And conversely the stylistic change also seems like the sort of thing Orwell might have had in mind when, in "Politics and the English Language", he compared: "I returned and saw under the sun, that the race is not to the swift, nor the battle to the strong, neither yet bread to the wise, nor yet riches to men of understanding, nor yet favour to men of skill; but time and chance happeneth to them all." Versus: "Objective considerations of contemporary phenomena compel the conclusion that success or failure in competitive activities exhibits no tendency to be commensurate with innate capacity, but that a considerable element of the unpredictable must invariably be taken into account." That would be the other side of it.
At any rate, here goes Eliezer2006...
I do not fit the stereotype of the Wise. I am not Gandalf, Ged, or Gandhi. I do not sit amidst my quiet garden, staring deeply into the truths engraved in a flower or a drop of dew; speaking courteously to all who come before me, and answering them gently regardless of how they speak to me.
If I tried to look Wise, and succeeded, I would receive more respect from my fellows. But there would be a price.
To pretend to be Wise means that you must always appear to give people the benefit of the doubt. Thus people will admire you for your courtesy. But the benefit of the doubt is not always deserved.
To pretend to be Wise, you must always pretend that both sides have merit, and solemnly refuse to judge between them. For if you took one side or another, why then, you would no longer be one of the aloof Wise, but merely another partisan, on a level with all the other mere bickerers.
As one of the Wise, you are omnipotent on the condition that you never exercise your power. Otherwise people would start thinking that you were no better than they; and they would no longer hold you in awe.
Followup to: Against Maturity
"The hottest place in Hell is reserved for those who in time of crisis remain neutral."
-- Dante Alighieri, famous hell expert
-- John F. Kennedy, misquoter
A special case of adulthood-signaling worth singling out, is the display of neutrality or suspended judgment, in order to signal maturity, wisdom, impartiality, or just a superior vantage point.
An example would be the case discussed yesterday of my parents, who respond to theological questions like "Why does ancient Egypt, which had good records on many other matters, lack any records of Jews having ever been there?" with "Oh, when I was your age, I also used to ask that sort of question, but now I've grown out of it."
Another example would be the principal who, faced with two children who were caught fighting on the playground, sternly says: "It doesn't matter who started the fight, it only matters who ends it." Of course it matters who started the fight. The principal may not have access to good information about this critical fact, but if so, he should say so, not dismiss the importance of who threw the first punch. Let a parent try punching the principal, and we'll see how far "It doesn't matter who started it" gets in front of a judge. But to adults it is just inconvenient that children fight, and it matters not at all to their convenience which child started it, it is only convenient that the fight end as rapidly as possible.
A similar dynamic, I believe, governs the occasions in international diplomacy where Great Powers sternly tell smaller groups to stop that fighting right now. It doesn't matter to the Great Power who started it - who provoked, or who responded disproportionately to provocation - because the Great Power's ongoing inconvenience is only a function of the ongoing conflict. Oh, can't Israel and Hamas just get along?
This I call "pretending to be Wise". Of course there are many ways to try and signal wisdom. But trying to signal wisdom by refusing to make guesses - refusing to sum up evidence - refusing to pass judgment - refusing to take sides - staying above the fray and looking down with a lofty and condescending gaze - which is to say, signaling wisdom by saying and doing nothing - well, that I find particularly pretentious.
I remember the moment of my first break with Judaism. It was in kindergarten, when I was being forced to memorize and recite my first prayer. It was in Hebrew. We were given a transliteration, but not a translation. I asked what the prayer meant. I was told that I didn't need to know - so long as I prayed in Hebrew, it would work even if I didn't understand the words. (Any resemblance to follies inveighed against in my writings is not coincidental.)
Of course I didn't accept this, since it was blatantly stupid, and I figured that God had to be at least as smart as I was. So when I got home, I asked my parents, and they didn't bother arguing with me. They just said, "You're too young to argue with; we're older and wiser; adults know best; you'll understand when you're older."
They were right about that last part, anyway.
Of course there were plenty of places my parents really did know better, even in the realms of abstract reasoning. They were doctorate-bearing folks and not stupid. I remember, at age nine or something silly like that, showing my father a diagram full of filled circles and trying to convince him that the indeterminacy of particle collisions was because they had a fourth-dimensional cross-section and they were bumping or failing to bump in the fourth dimension.
My father shot me down flat. (Without making the slightest effort to humor me or encourage me. This seems to have worked out just fine. He did buy me books, though.)
But he didn't just say, "You'll understand when you're older." He said that physics was math and couldn't even be talked about without math. He talked about how everyone he met tried to invent their own theory of physics and how annoying this was. He may even have talked about the futility of "providing a mechanism", though I'm not actually sure if I originally got that off him or Baez.
You see the pattern developing here. "Adulthood" was what my parents appealed to when they couldn't verbalize any object-level justification. They had doctorates and were smart; if there was a good reason, they usually would at least try to explain it to me. And it gets worse...
Followup to: The Thing That I Protect
Anything done with an ulterior motive has to be done with a pure heart. You cannot serve your ulterior motive, without faithfully prosecuting your overt purpose as a thing in its own right, that has its own integrity. If, for example, you're writing about rationality with the intention of recruiting people to your utilitarian Cause, then you cannot talk too much about your Cause, or you will fail to successfully write about rationality.
This doesn't mean that you never say anything about your Cause, but there's a balance to be struck. "A fanatic is someone who can't change his mind and won't change the subject."
In previous months, I've pushed this balance too far toward talking about Singularity-related things. And this was for (first-order) selfish reasons on my part; I was finally GETTING STUFF SAID that had been building up painfully in my brain for FRICKIN' YEARS. And so I just kept writing, because it was finally coming out. For those of you who have not the slightest interest, I'm sorry to have polluted your blog with that.
There are a number of reasons for this. One of them is simply to restore the balance. Another is to make sure that a forum intended to have a more general audience doesn't narrow itself down and disappear.
But more importantly - there are certain subjects which tend to drive people crazy, even if there's truth behind them. Quantum mechanics would be the paradigmatic example; you don't have to go funny in the head but a lot of people do. Likewise Gödel's Theorem, consciousness, Artificial Intelligence -
The concept of "Friendly AI" can be poisonous in certain ways. True or false, it carries risks to mental health. And not just the obvious liabilities of praising a Happy Thing. Something stranger and subtler that drains enthusiasm.