Today, we're going to talk about Dark rationalist techniques: productivity tools which seem incoherent, mad, and downright irrational. These techniques include:
- Willful Inconsistency
- Intentional Compartmentalization
- Modifying Terminal Goals
I expect many of you are already up in arms. It seems obvious that consistency is a virtue, that compartmentalization is a flaw, and that one should never modify one's terminal goals.
I claim that these 'obvious' objections are incorrect, and that all three of these techniques can be instrumentally rational.
In this article, I'll promote the strategic cultivation of false beliefs and condone mindhacking on the values you hold most dear. Truly, these are Dark Arts. I aim to convince you that sometimes, the benefits are worth the price.
There is a lot of bad science and controversy in the realm of how to have a healthy lifestyle. Every week we are bombarded with new studies that conflict with older ones, telling us X is good or Y is bad. Eventually we reach our psychological limit, throw up our hands, and give up. I used to do this a lot. I knew exercise was good, I knew flossing was good, and I wanted to eat better. But I never acted on any of that knowledge. I would feel guilty when I thought about this stuff, then go back to what I was doing. Unsurprisingly, this didn't really cause me to make any positive lifestyle changes.
Instead of vaguely guilt-tripping you with potentially unreliable science news, this post aims to provide an overview of lifestyle interventions that have very strong evidence behind them, along with concrete ways to implement them.
Followup to: Ask and Guess
Ask culture: "I'll be in town this weekend for a business trip. Is it cool if I crash at your place?" Response: "Yes" or "no".
Guess culture: "Hey, great news! I'll be in town this weekend for a business trip!" Response: Infer that they might be telling you this because they want something from you, conclude that they might want a place to stay, and offer your hospitality only if you want to. Otherwise, pretend you didn’t infer that.
The two basic rules of Ask Culture: 1) Ask when you want something. 2) Interpret things as requests and feel free to say "no".
The two basic rules of Guess Culture: 1) Ask for things if, and *only* if, you're confident the person will say "yes". 2) Interpret requests as expectations of "yes", and, when possible, avoid saying "no".
Both approaches come with costs and benefits. In the end, I feel pretty strongly that Ask is superior.
But these are not the only two possibilities!
"I'll be in town this weekend for a business trip. I would like to stay at your place, since it would save me the cost of a hotel, plus I would enjoy seeing you and expect we’d have some fun. I'm looking for other options, though, and would rather stay elsewhere than inconvenience you." Response: “I think I need some space this weekend. But I’d love to get a beer or something while you’re in town!” or “You should totally stay with me. I’m looking forward to it.”
There is a third alternative, and I think it's probably what rationalist communities ought to strive for. I call it "Tell Culture".
The two basic rules of Tell Culture: 1) Tell the other person what's going on in your own mind whenever you suspect you'd both benefit from them knowing. (Do NOT assume others will accurately model your mind without your help, or that it will even occur to them to ask you questions to eliminate their ignorance.) 2) Interpret things people tell you as attempts to create common knowledge for shared benefit, rather than as requests or as presumptions of compliance.
Suppose you’re in a conversation that you’re finding aversive, and you can’t figure out why. Your goal is to procure a rain check.
- Guess: *You see this annoyed body language? Huh? Look at it! If you don’t stop talking soon I swear I’ll start tapping my foot.* (Or, possibly, tell a little lie to excuse yourself. “Oh, look at the time…”)
- Ask: “Can we talk about this another time?”
- Tell: "I'm beginning to find this conversation aversive, and I'm not sure why. I propose we hold off until I've figured that out."
Here are more examples from my own life:
- "I didn't sleep well last night and am feeling frazzled and irritable today. I apologize if I snap at you during this meeting. It isn’t personal."
- "I just realized this interaction will be far more productive if my brain has food. I think we should head toward the kitchen."
- "It would be awfully convenient networking for me to stick around for a bit after our meeting to talk with you and [the next person you're meeting with]. But on a scale of one to ten, it's only about 3 useful to me. If you'd rate the loss of utility for you as two or higher, then I have a strong preference for not sticking around."
The burden of honesty is even greater in Tell culture than in Ask culture. To a Guess culture person, I imagine much of the above sounds passive-aggressive or manipulative, much worse than the rude bluntness of mere Ask. That's because Guess people aren't expecting relentless truth-telling, which is exactly what's necessary here.
If you’re occasionally dishonest and tell people you want things you don't actually care about--like their comfort or convenience--they’ll learn not to trust you, and the inherent freedom of the system will be lost. They’ll learn that you only pretend to care about them to take advantage of their reciprocity instincts, when in fact you’ll count them as having defected if they respond by stating a preference for protecting their own interests.
Tell culture is cooperation with open source code.
This kind of trust does not develop overnight. Here is the most useful Tell tactic I know of for developing that trust with a native of Ask or Guess culture. It's saved me sooooo much time and trouble, and I wish I'd thought of it earlier.
"I'm not asking because I expect you to say ‘yes’. I'm asking because I'm having trouble imagining the inside of your head, and I want to understand better. You are completely free to say ‘no’, or to tell me what you’re thinking right now, and I promise it will be fine." It is amazing how often people quickly stop looking shifty and say 'no' after this, or better yet begin to discuss further details.
As far as I can tell, most people around these parts consider the principle of charity and its super saiyan form, steelmanning, to be Very Good Rationalist Virtues. I basically agree, and in fact I now operate under these principles more or less automatically. HOWEVER, no matter how good the rule is, there are always exceptions, and I have found myself increasingly concerned about them.
This blog post that I found in the responses to Yvain's anti-reactionary FAQ argues that even though the ancient Romans had welfare, this policy was motivated not by concern for the poor or by a desire for equality, as our modern welfare policies are, but instead by something else: "the Roman dole was wrapped up in discourses about a) the might and wealth of Rome and b) goddess worship... The dole was there because it made the emperor more popular and demonstrated the wealth of Rome to the people. What’s more, the dole was personified as Annona, a goddess to be worshiped and thanked."
So let's assume this guy is right, and imagine that an ancient Roman travels through time to the present day. He reads an article by some progressive arguing (using the rationale one would typically use) that Obama should increase unemployment benefits. "This makes no sense," the Roman thinks to himself. "Why would you give money to someone who doesn't work for it? Why would you reward lack of virtue? Also, what's this about equality? Isn't it right that an upper class exists to rule over a lower class?" Etc.
But fortunately, between when he hopped out of the time machine and when he found this article, a rationalist found him and explained to him steelmanning and the principle of charity. "Ah, yes," he thinks. "Now I remember what the rationalist said. I was not being so charitable. I now realize that this position kind of makes sense, if you read between the lines. Giving more unemployment benefits would, now that I think about it, demonstrate the power of America to the people, and certainly Annona would approve. I don't know why whoever wrote this article didn't just come out and say that, though. Maybe they were confused".
Hopefully you can see what I'm getting at. When you regularly use the principle of charity and steelmanning, you run the risk of:
1. Sticking rigidly to a certain worldview/paradigm/established belief set, even as you find yourself willing to consider more and more concrete propositions. The Roman would have done better to really read what the modern progressive's logic was, think about it, and try to see where he was coming from, rather than automatically filtering it through his own worldview. If he consistently filters arguments this way, he will never find himself considering alternative ways of seeing the world that might be better.
2. Falsely developing the sense that your worldview/paradigm/established belief set is more popular than it actually is. Pretty much no one today holds the same values as an ancient Roman did, but if the Roman goes around being charitable all the time, he will probably see his own beliefs reflected back at him a fair amount.
3. Taking arguments more seriously than you should. I feel like I constantly see people in rationalist communities say things like "this argument by A sort of makes sense, you just need to frame it in objective, consequentialist terms like blah blah blah blah blah," and then follow with what looks to me like a completely original thought I've never seen before. But why didn't A just frame her argument in objective, consequentialist terms? Should we assume that what she wrote was a telephone-game approximation of what was originally a highly logical consequentialist argument? If so, where can I find that argument? And if not, why are we assuming that A is a crypto-consequentialist when she probably isn't? And if we're sure that objective, consequentialist logic is The Way To Go, shouldn't we be very skeptical of arguments that seem to be based in some other reasoning system entirely?
4. Just having a poor model of people's beliefs in general, which could lead to problems.
Hopefully this made sense, and I'm sorry if this is something that's been pointed out before.
This is the final post in my productivity sequence.
The first post described what I achieved. The next three posts describe how. This post describes why, explaining the sources of my passion and the circumstances that convinced a young Nate to try to save the world. Within, you will find no suggestions, no techniques to emulate, no new ideas to ponder. This is a rationalist coming-of-age story. With luck, you may find it inspiring. Regardless, I hope you can learn from my mistakes.
Never fear, I'll be back to business soon — there's lots of studying to do. But before then, there's a story to tell, a memorial to what I left behind.
I was raised Catholic. On my eighth birthday, having received my first communion about a year prior, I casually asked my priest how to reaffirm my faith and do something for the Lord. The memory is fuzzy, but I think I donated a chunk of allowance money and made a public confession at the following mass.
A bunch of the grownups made a big deal out of it, as grownups are wont to do. "Faith of a child", and all that. This confused me, especially when I realized that what I had done was rare. I wasn't trying to get pats on the head; I was appealing to the Lord of the Heavens and the Earth. Were we all on the same page, here? This was the creator. He was infinitely virtuous, and he had told us what to do.
And yet, everyone was content to recite hymns once a week and donate for the reconstruction of the church. What about the rest of the world, the sick, the dying? Where were the proselytizers, the missionary opportunities? Why was everyone just sitting around?
On that day, I became acquainted with civilizational inadequacy. I realized you could hand a room full of people the literal word of God, and they'd still struggle to pay attention for an hour every weekend.
This didn't shake my faith, mind you. It didn't even occur to me that the grownups might not actually believe their tales. No, what I learned that day was that there are a lot of people who hold beliefs they aren't willing to act upon.
Eventually, my faith faded. The distrust remained.
Thanks to everyone who took the 2013 Less Wrong Census/Survey. Extra thanks to Ozy, who helped me out with the data processing and statistics work, and to everyone who suggested questions.
This year's results are below. Some of them may make more sense in the context of the original survey questions, which can be seen here. Please do not try to take the survey, as it is over and your results will not be counted.
A decade ago, I decided to save the world. I was fourteen, and the world certainly wasn't going to save itself.
I fumbled around for nine years; it's surprising how long one can fumble around. I somehow managed to miss the whole idea of existential risk and the whole concept of an intelligence explosion. I had plenty of other ideas in my head, and while I spent a lot of time honing them, I wasn't particularly looking for new ones.
A year ago, I finally read the LessWrong sequences. My road here was roundabout, almost comical. It took me a while to come to terms with the implications of what I'd read.
Five months ago, after resolving a few internal crises, I started donating to MIRI and studying math.
Three weeks ago, I attended the December MIRI workshop on logic, probability, and reflection. I was invited to visit for the first two days and stay longer if things went well. They did: I was able to make some meaningful contributions.
On Saturday I was invited to become a MIRI research associate.
It's been an exciting year, to say the least.
(ETA: Note that being a research associate gives me access to a number of MIRI resources, but is not a full time position. I will be doing FAI research, but it will be done outside of work. I will be retaining my day job and continuing to donate.)
To commemorate the occasion — and because a few people have expressed interest in my efforts — I'll be writing a series of posts about my experience, about what I did and how I did it. This is the first post in the series.
Summary: Below, we outline the case for CFAR.
CFAR is in the middle of our annual matching fundraiser right now. If you've been thinking of donating to CFAR, now is the best opportunity you'll have for probably at least half a year. Donations up to $150,000 will be matched until January 31st, and Matt Wage, who is matching the last $50,000 of donations, has vowed not to donate unless matched.
Our workshops are cash-flow positive, and subsidize our basic operations (you are not subsidizing workshop attendees). But we can't yet run workshops often enough to fully cover our core operations. We also need to do more formal experiments, and we want to create free and low-cost curricula with far broader reach than the current workshops. Donations are needed to keep the lights on at CFAR, fund free programs like the Summer Program on Applied Rationality and Cognition, and let us do new and interesting things in 2014 (see below, at length).
- There is a substantial flaw or missing element to my model that someone will point out.
- Many readers who are bad at small talk because they don't see the point will get better at it as a result of acquiring that understanding.
We have very broad intellectual interests, cutting across topics such as rationality, economics, pure math, psychology, humanitarian issues and classical music. I have a PhD in pure math, have been an active participant on Less Wrong, worked at GiveWell for a year, and have done research for the Machine Intelligence Research Institute (MIRI) on how effectively we can plan for future decades and on how well policy-makers will handle AGI. Vipul has a PhD in pure math, and started Open Borders, a website devoted to discussing immigration liberalization.
We both have experience working with intellectually curious young people. I worked for three summers at MathPath (a summer camp for middle school students who are interested in math), taught at Thomas Jefferson High School for Science and Technology (an academic magnet high school), and currently teach for Art of Problem Solving (an online school for high-performing math students). Vipul has trained students for mathematical olympiads, and taught calculus and linear algebra at the University of Chicago for years.
We spent several months researching the educational resources that are available to high-performing students, college selection and college admissions, psychological findings on intellectual giftedness, and the experiences of past and current members of the population that we're serving, and we're ready to help. We're currently offering free personalized advising on these things by email, Skype, or phone. You can connect with us here. If you're interested, we look forward to hearing from you.