Followup to: Fallacies of Compression
Among the many genetic variations and mutations you carry in your genome, there are a very few alleles you probably know—including those determining your blood type: the presence or absence of the A, B, and Rh (+) antigens. If you receive a blood transfusion containing an antigen your body doesn't recognize as its own, it will trigger a dangerous immune reaction. It was Karl Landsteiner's discovery of this fact, and of how to test for compatible blood types, that made it possible to transfuse blood without killing the patient. (1930 Nobel Prize in Medicine.) Also, if a mother with blood type A− (for example) bears a child with blood type A+, the mother may develop antibodies against the + antigen; if she then has another child with blood type A+, the child will be in danger unless the mother receives a suppressant treatment during pregnancy. Thus people learn their blood types before they marry.
Oh, and also: people with blood type A are earnest and creative, while people with blood type B are wild and cheerful. People with type O are agreeable and sociable, while people with type AB are cool and controlled. (You would think that O would be the absence of A and B, while AB would just be A plus B, but no...) All this, according to the Japanese blood type theory of personality. It would seem that blood type plays the role in Japan that astrological signs play in the West, right down to blood type horoscopes in the daily newspaper.
This fad is especially odd because blood types have never been mysterious, not in Japan and not anywhere. We only know blood types even exist thanks to Karl Landsteiner. No mystic witch doctor, no venerable sorcerer, ever said a word about blood types; there are no ancient, dusty scrolls to shroud the error in the aura of antiquity. If the medical profession claimed tomorrow that it had all been a colossal hoax, we layfolk would not have one scrap of evidence from our unaided senses to contradict them.
There's never been a war between blood types. There's never even been a political conflict between blood types. The stereotypes must have arisen strictly from the mere existence of the labels.
AnnaSalamon's recent post on "flinching" and "buckets" nicely complements PhilGoetz's 2009 post Reason as memetic immune disorder. (I'll be assuming that readers have read Anna's post, but not necessarily Phil's.) Using Anna's terminology, I take Phil to be talking about the dangers of merging buckets that started out as separate. Anna, on the other hand, is talking about how to deal with one bucket that should actually be several.
Phil argued (paraphrasing) that rationality can be dangerous because it leads to beliefs of the form "P implies Q". If you convince yourself of that implication, and you believe P, then you are compelled to believe Q. This is dangerous because your thinking about P might be infected by a bad meme. Now rationality has opened the way for this bad meme to infect your thinking about Q, too.
It's even worse if you reason yourself all the way to believing "P if and only if Q". Now any corruption in your thinking about either one of P and Q will corrupt your thinking about the other. In terms of buckets: If you put "Yes" in the P bucket, you must put "Yes" in the Q bucket, and vice versa. In other words, the P bucket and the Q bucket are now effectively one and the same.
In this sense, Phil was pointing out that rationality merges buckets. (More precisely, rationality creates dependencies among buckets. In the extreme case, buckets become effectively identical). This can be bad for the reasons that Anna gives. Phil argues that some people resist rationality because their "memetic immune system" realizes that rational thinking might merge buckets inappropriately. To avoid this danger, people often operate on the principle that it's suspect even to consider merging buckets from different domains (e.g., religious scripture and personal life).
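The "merged buckets" idea above can be made concrete with a toy sketch. This is purely illustrative (the class and names are my own, not from either post): once two belief-buckets are linked by "P if and only if Q", any update to either one propagates to the other, so they behave as a single bucket.

```python
# Toy model of Anna's "buckets" (belief slots) and Phil's point that a
# biconditional "P if and only if Q" effectively merges two buckets:
# once linked, any update to either one propagates to the other.
# (Illustrative sketch only; names are mine, not from either post.)

class Buckets:
    def __init__(self):
        self.values = {}     # bucket name -> True/False
        self.iff_links = []  # pairs constrained by "P iff Q"

    def link_iff(self, p, q):
        self.iff_links.append((p, q))

    def set_belief(self, name, value):
        # Setting one bucket forces every iff-linked bucket to match it.
        self.values[name] = value
        frontier = [name]
        while frontier:
            x = frontier.pop()
            for a, b in self.iff_links:
                for src, dst in ((a, b), (b, a)):
                    if src == x and self.values.get(dst) != self.values[x]:
                        self.values[dst] = self.values[x]
                        frontier.append(dst)

beliefs = Buckets()
beliefs.link_iff("P", "Q")
beliefs.set_belief("P", True)
print(beliefs.values)  # {'P': True, 'Q': True} -- the two buckets now move together
```

Note that the propagation runs in both directions: corrupting the Q bucket corrupts the P bucket just as surely as the reverse, which is exactly the danger Phil describes.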
This suggests a way in which Anna's post works at the meta-level, too.
Phil's argument is that people resist rationality because, in effect, they've identified the two buckets "Think rationally" and "Spread memetic infections". They fear that saying "Yes" to "Think rationally" forces them to say "Yes" to the dangers inherent to merged buckets.
But Anna gives techniques for "de-merging" buckets in general if it turns out that some buckets were inappropriately merged, or if one bucket should have been several in the first place.
In other words, Anna's post essentially de-merges the two particular buckets "Think rationally" and "Spread memetic infections". You can go ahead and use rational thinking, even though you will risk inappropriately merging buckets, because you now have techniques for de-merging those buckets if you need to.
In this way, Anna's post may diminish the "memetic immune system" obstacle to rational thinking that Phil observed.
"When you surround the enemy
Always allow them an escape route.
They must see that there is
An alternative to death."
—Sun Tzu, The Art of War, Cloud Hands edition
"Don't raise the pressure, lower the wall."
—Lois McMaster Bujold, Komarr
Last night I happened to be conversing with a nonrationalist who had somehow wandered into a local rationalists' gathering. She had just declared (a) her belief in souls and (b) that she didn't believe in cryonics because she believed the soul wouldn't stay with the frozen body. I asked, "But how do you know that?" From the confusion that flashed on her face, it was pretty clear that this question had never occurred to her. I don't say this in a bad way—she seemed like a nice person with absolutely no training in rationality, just like most of the rest of the human species. I really need to write that book.
Most of the ensuing conversation was on items already covered on Overcoming Bias—if you're really curious about something, you probably can figure out a good way to test it; try to attain accurate beliefs first and then let your emotions flow from that—that sort of thing. But the conversation reminded me of one notion I haven't covered here yet:
"Make sure," I suggested to her, "that you visualize what the world would be like if there are no souls, and what you would do about that. Don't think about all the reasons that it can't be that way, just accept it as a premise and then visualize the consequences. So that you'll think, 'Well, if there are no souls, I can just sign up for cryonics', or 'If there is no God, I can just go on being moral anyway,' rather than it being too horrifying to face. As a matter of self-respect you should try to believe the truth no matter how uncomfortable it is, like I said before; but as a matter of human nature, it helps to make a belief less uncomfortable, before you try to evaluate the evidence for it."
Why you should be very careful about trying to openly seek truth in any political discussion
1. Rationality considered harmful for Scott Aaronson in the great gender debate
In 2015, complexity theorist and rationalist Scott Aaronson was foolhardy enough to step into the gender-politics war on his blog, with a comment stating that the extreme feminism he had bought into made him hate himself and try to seek ways to chemically castrate himself. The feminist blogosphere got hold of this and crucified him for it, and he has written a few followup blog posts about it. Recently I saw this comment by him on his blog:
As the comment 171 affair blew up last year, one of my female colleagues in quantum computing remarked to me that the real issue had nothing to do with gender politics; it was really just about the commitment to truth regardless of the social costs—a quality that many of the people attacking me (who were overwhelmingly from outside the hard sciences) had perhaps never encountered before in their lives. That remark cheered me more than anything else at the time.
2. Rationality considered harmful for Sam Harris in the islamophobia war
I recently heard a very angry, exasperated two-hour podcast by the new atheist and political commentator Sam Harris about how badly he has been straw-manned, misrepresented and trash-talked by his intellectual rivals (whom he collectively refers to as the "regressive left"). Sam Harris likes to tackle hard questions such as when torture is justified, which religions are more or less harmful than others, the defence of freedom of speech, etc. Several times, Harris goes to the meta-level and sees clearly what is happening:
Rather than a searching and beautiful exercise in human reason to have conversations on these topics [ethics of torture, military intervention, Islam, etc], people are making it just politically so toxic, reputationally so toxic to even raise these issues that smart people, smarter than me, are smart enough not to go near these topics
Everyone on the left at the moment seems to be a mind reader... no matter how much you try to take their foot out of your mouth, the mere effort itself is going to be counted against you - you're someone who's in denial, or you don't even understand how racist you are, etc
3. Rationality considered harmful when talking to your left-wing friends about genetic modification
In the SlateStarCodex comments, I complained that many left-wing people were responding very personally (and negatively) to my political views.
One long-term friend, responding to a rational argument about why some modifications of the human germ-line may in fact be a good thing—for example, altering the germ-line via genetic engineering to permanently cure a genetic disease—openly and pointedly said that "(s)he was beginning to wonder whether we should still be friends".
A large comment thread ensued, but the best comment I got was this one:
One of the useful things I have found when confused by something my brain does is to ask what it is *for*. For example: I get angry, the anger is counterproductive, but recognizing that doesn’t make it go away. What is anger *for*? Maybe it is to cause me to plausibly signal violence by making my body ready for violence or some such.
Similarly, when I ask myself what moral/political discourse among friends is *for* I get back something like “signal what sort of ally you would be/broadcast what sort of people you want to ally with.” This makes disagreements more sensible. They are trying to signal things about distribution of resources, I am trying to signal things about truth value, others are trying to signal things about what the tribe should hold sacred etc. Feeling strong emotions is just a way of signaling strong precommitments to these positions (i.e. I will follow the morality I am signaling now because I will be wracked by guilt if I do not. I am a reliable/predictable ally.) They aren’t mad at your positions. They are mad that you are signaling that you would defect when push came to shove about things they think are important.
Let me repeat that last one: moral/political discourse among friends is for “signalling what sort of ally you would be/broadcast what sort of people you want to ally with”. Moral/political discourse probably activates specially evolved brainware in human beings; that brainware has a purpose and it isn't truthseeking. Politics is not about policy!
This post is already getting too long so I deleted the section on lessons to be learned, but if there is interest I'll do a followup. Let me know what you think in the comments!
One of the most valuable services the Less Wrong community has to offer is its meetup groups. However, it strikes me that there isn't a lot of knowledge sharing between different meetup groups. Presumably there's a lot that the groups could learn from each other -- things that can be done, experiments that have or haven't worked out, procedural and organisational tips. Hence this post. Please go ahead and write a summary about your local Less Wrong meetup below:
- What meetups do you run?
- What's worked?
- What hasn't?
- How is the group organised?
(Thread B for January is here, created as a duplicate by accident)
Hi, do you read the LessWrong website, but haven't commented yet (or not very much)? Are you a bit scared of the harsh community, or do you feel that questions which are new and interesting for you could be old and boring for the older members?
This is the place for the new members to become courageous and ask what they wanted to ask. Or just to say hi.
The older members are strongly encouraged to be gentle and patient (or just skip the entire discussion if they can't).
The long version:
A few notes about the site mechanics
A few notes about the community
If English is not your first language, don't let that make you afraid to post or comment. You can get English help on Discussion- or Main-level posts by sending a PM to one of the following users (use the "send message" link on the upper right of their user page). Either put the text of the post in the PM, or just say that you'd like English help and you'll get a response with an email address.
* Barry Cotter
A note for theists: you will find the Less Wrong community to be predominantly atheist, though not completely so, and most of us are genuinely respectful of religious people who keep the usual community norms. It's worth saying that we might think religion is off-topic in some places where you think it's on-topic, so be thoughtful about where and how you start explicitly talking about it; some of us are happy to talk about religion, some of us aren't interested. Bear in mind that many of us really, truly have given full consideration to theistic claims and found them to be false, so starting with the most common arguments is pretty likely just to annoy people. Anyhow, it's absolutely OK to mention that you're religious in your welcome post and to invite a discussion there.
A list of some posts that are pretty awesome
I recommend the major sequences to everybody, but I realize how daunting they look at first. So for purposes of immediate gratification, the following posts are particularly interesting/illuminating/provocative and don't require any previous reading:
- The Worst Argument in the World
- That Alien Message
- How to Convince Me that 2 + 2 = 3
- Lawful Uncertainty
- Your Intuitions are Not Magic
- The Planning Fallacy
- The Apologist and the Revolutionary
- Scope Insensitivity
- The Allais Paradox (with two followups)
- We Change Our Minds Less Often Than We Think
- The Least Convenient Possible World
- The Third Alternative
- The Domain of Your Utility Function
- Newcomb's Problem and Regret of Rationality
- The True Prisoner's Dilemma
- The Tragedy of Group Selectionism
- Policy Debates Should Not Appear One-Sided
More suggestions are welcome! Or just check out the top-rated posts from the history of Less Wrong. Most posts at +50 or more are well worth your time.
Welcome to Less Wrong, and we look forward to hearing from you throughout the site!
[I first posted this as a link to my blog post, but I'm reposting as a focused article here that trims some fat of the original post, which was less accessible]
I think a lot about heuristics and biases, and I admit that many of my ideas on rationality and debiasing get lost in the sea of my own thoughts. They’re accessible, if I’m specifically thinking about rationality-esque things, but often invisible otherwise.
That seems highly sub-optimal, considering that the whole point of having usable mental models isn’t to write fancy posts about them, but to, you know, actually use them.
To that end, I’ve been thinking about finding some sort of systematic way to integrate all of these ideas into my actual life.
(If you’re curious, here’s the actual picture of what my internal “concept-verse” (w/ associated LW and CFAR memes) looks like)
So I have all of these ideas, all of which look really great on paper and in thought experiments. Some of them even have some sort of experimental backing. Given this, how do I put them together into a kind of coherent notion?
Equivalently, what does it look like if I successfully implement these mental models? What sorts of changes might I expect to see? Then, knowing the end product, what kind of process can get me there?
One way of looking at it would be to say that if I implemented the techniques well, then I'd be better able to tackle my goals and get things done. Maybe my productivity would go up. That sort of makes sense. But this tells us nothing about how I'd actually go about using such skills.
We want to know how to implement these skills and then actually utilize them.
Yudkowsky gives a highly useful abstraction when he talks about the five-second level. He gives some great tips on breaking down mental techniques into their component mental motions. It’s a step-by-step approach that really goes into the details of what it feels like to undergo one of the LessWrong epistemological techniques. We’d like our mental techniques to be actual heuristics that we can use in the moment, so having an in-depth breakdown makes sense.
Here’s my attempt at a 5-second-level breakdown for Going Meta, or "popping" out of one's head to stay mindful of the moment:
- Notice the feeling that you are being mentally “dragged” towards continuing an action.
- (It can feel like an urge, or your mind automatically making a plan to do something. Notice your brain simulating you taking an action without much conscious input.)
- Remember that you have a 5-second-level series of steps to do something about it.
- Feel aversive towards continuing the loop. Mentally shudder at the part of you that tries to continue.
- Close your eyes. Take in a breath.
- Think about what 1-second action you could take to instantly cut off the stimulus from whatever loop you’re stuck in. (EX: Turning off the display, closing the window, moving to somewhere else).
- Tense your muscles and clench, actually doing said action.
- Run a search through your head, looking for an action labeled “productive”. Try to remember things you’ve told yourself you “should probably do” lately.
- (If you can’t find anything, pattern-match to find something that seems “productive-ish”.)
- Take note of what time it is. Write it down.
- Do the new thing. Finish.
- Note the end time. Calculate how long you did work.
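The last three steps of the breakdown (note the time, do the thing, note the end time, calculate the duration) are concrete enough to sketch directly. This is just an illustration of that bookkeeping, not part of the original technique:

```python
# Sketch of the timekeeping steps in the breakdown above: note the start
# time, do the new thing, note the end time, calculate how long you worked.
import time

start = time.time()           # "Take note of what time it is. Write it down."
time.sleep(0.05)              # stand-in for actually doing the productive thing
end = time.time()             # "Note the end time."
minutes = (end - start) / 60  # "Calculate how long you did work."
print(f"Worked for {minutes:.3f} minutes")
```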
Next, the other part is actually accessing the heuristic in the situations where you want it. We want it to be habitual.
After some quick searches of the existing research on habits, it appears that many of the links go to Charles Duhigg, author of The Power of Habit, or BJ Fogg of Tiny Habits. Both models focus on two things: identifying the thing you want to do, then setting triggers so you actually do it. (There's some similarity to CFAR's Trigger-Action Plans.)
Fogg's approach focuses on scaffolding new habits onto existing routines, like brushing your teeth, which are already automatic. Duhigg appears to be focused more on reinforcement and rewards, with several nods to Skinner. CFAR views actions as self-reinforcing, so the reward isn't even necessary—they see repetition itself as building automaticity.
Overlearning the material also seems to be useful in some contexts, for skills like acquiring procedural knowledge. And mental notions do seem to be more like procedural knowledge.
For these mental skills specifically, we'd want them to fire irrespective of time of day, so anchoring them to an existing routine might not be best. Having them as a response to an internal state (EX: "When I notice myself being 'dragged' into a spiral, or automatically making plans to do a thing") may be more useful.
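The shared "when trigger, then action" shape of all these models can be sketched minimally. The class and the specific trigger/action strings below are illustrative assumptions of mine, not from CFAR, Duhigg, or Fogg:

```python
# Minimal sketch of a trigger-action plan (TAP): "when <trigger>, do <action>".
# The trigger here is an internal state rather than a time of day, as
# suggested above; the specific strings are illustrative only.
from dataclasses import dataclass

@dataclass
class TriggerActionPlan:
    trigger: str  # the cue to watch for (here, a noticed internal state)
    action: str   # the small, concrete response

    def fires_on(self, observation: str) -> bool:
        return self.trigger in observation

tap = TriggerActionPlan(
    trigger="dragged into a spiral",
    action="run the 5-second-level routine",
)
print(tap.fires_on("I notice I'm being dragged into a spiral of tabs"))  # True
```

The design point is simply that the trigger matches on an internal observation rather than an external anchor like "after brushing teeth".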
(Follow-up post forthcoming on concretely trying to apply habit research to implementing heuristics.)