The Center for Applied Rationality's perspective on rationality is quite similar to Less Wrong's. In particular, we share many of Less Wrong's differences from what's sometimes called "traditional" rationality, such as Less Wrong's inclusion of Bayesian probability theory and the science on heuristics and biases.
But after spending the last year and a half with CFAR as we've developed, tested, and attempted to teach hundreds of different versions of rationality techniques, I've noticed that my picture of what rationality looks like has shifted somewhat from what I perceive to be the most common picture of rationality on Less Wrong. Here are three ways I think CFAR has come to see the landscape of rationality differently than Less Wrong typically does – not disagreements per se, but differences in focus or approach. (Disclaimer: I'm not speaking for the rest of CFAR here; these are my own impressions.)
1. We think less in terms of epistemic versus instrumental rationality.
Formally, the methods of normative epistemic versus instrumental rationality are distinct: Bayesian inference and expected utility maximization. But methods like "use Bayes' Theorem" or "maximize expected utility" are usually too abstract and high-level to be helpful for a human being trying to take manageable steps towards improving her rationality. And when you zoom in from that high-level description of rationality down to the more concrete level of "What five-second mental habits should I be training?" the distinction between epistemic and instrumental rationality becomes less helpful.
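To make the formal distinction concrete, here is what those two ideals look like written down; a minimal Python sketch of the standard textbook forms, not anything specific to CFAR's curriculum:

```python
def bayes_posterior(prior, likelihood, evidence_prob):
    """Epistemic ideal: P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / evidence_prob

def best_action(actions, expected_utility):
    """Instrumental ideal: choose the action that maximizes expected utility."""
    return max(actions, key=expected_utility)
```

Stated this way, the two methods share no machinery; the point below is that the five-second mental habits underlying both turn out to be largely the same.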
Here's an analogy: epistemic rationality is like physics, where the goal is to figure out what's true about the world, and instrumental rationality is like engineering, where the goal is to accomplish something you want as efficiently and effectively as possible. You need physics to do engineering; or I suppose you could say that doing engineering is doing physics, but with a practical goal. However, there's plenty of physics that's done for its own sake, and doesn't have obvious practical applications, at least not yet. (String theory, for example.) Similarly, you need a fair amount of epistemic rationality in order to be instrumentally rational, though there are parts of epistemic rationality that many of us practice for their own sake, and not as a means to an end. (For example, I appreciate clarifying my thinking about free will even though I don't expect it to change any of my behavior.)
In this analogy, many skills we focus on at CFAR are akin to essential math, like linear algebra or differential equations, which compose the fabric of both physics and engineering. It would be foolish to expect someone who wasn't comfortable with math to successfully calculate a planet's trajectory or design a bridge. And it would be similarly foolish to expect you to successfully update like a Bayesian or maximize your utility if you lacked certain underlying skills. Like, for instance:
- Noticing your emotional reactions, and being able to shift them if it would be useful.
- Doing thought experiments.
- Noticing and overcoming learned helplessness.
- Visualizing in concrete detail.
- Preventing yourself from flinching away from a thought.
- Rewarding yourself for mental habits you want to reinforce.
These and other building blocks of rationality are essential both for reaching truer beliefs, and for getting what you value; they don't fall cleanly into either an "epistemic" or an "instrumental" category. Which is why, when I consider what pieces of rationality CFAR should be developing, I've been thinking less in terms of "How can we be more epistemically rational?" or "How can we be more instrumentally rational?" and instead using queries like, "How can we be more metacognitive?"
2. We think more in terms of a modular mind.
The human mind isn't one coordinated, unified agent, but rather a collection of different processes that often aren't working in sync, or even aware of what each other is up to. Less Wrong certainly knows this; see, for example, discussions of anticipations versus professions, aliefs, and metawanting. But in general we gloss over that fact, because it's so much simpler and more natural to talk about "what I believe" or "what I want," even if technically there is no single "I" doing the believing or wanting. And for many purposes that kind of approximation is fine.
But a rationality-for-humans usually can't rely on that shorthand. Any attempt to change what "I" believe, or optimize for what "I" want, forces a confrontation of the fact that there are multiple, contradictory things that could reasonably be called "beliefs," or "wants," coexisting in the same mind. So a large part of applied rationality turns out to be about noticing those contradictions and trying to achieve coherence, in some fashion, before you can even begin to update on evidence or plan an action.
Many of the techniques we're developing at CFAR fall roughly into the template of coordinating between your two systems of cognition: implicit-reasoning System 1 and explicit-reasoning System 2. For example, knowing when each system is more likely to be reliable. Or knowing how to get System 2 to convince System 1 of something ("We're not going to die if we go talk to that stranger"). Or knowing what kinds of questions System 2 should ask of System 1 to find out why it's uneasy about the conclusion at which System 2 has arrived.
This is all, of course, with the disclaimer that the anthropomorphizing of the systems of cognition, and imagining them talking to each other, is merely a useful metaphor. Even the classification of human cognition into Systems 1 and 2 is probably not strictly true, but it's true enough to be useful. And other metaphors prove useful as well – for example, some difficulties with what feels like akrasia become more tractable when you model your future selves as different entities, as we do in the current version of our "Delegating to yourself" class.
3. We're more focused on emotions.
There's relatively little discussion of emotions on Less Wrong, but they occupy a central place in CFAR's curriculum and organizational culture.
It used to frustrate me when people would say something that revealed they held a Straw Vulcan-esque belief that "rationalist = emotionless robot". But now when I encounter that misconception, it just makes me want to smile, because I'm thinking to myself: "If you had any idea how much time we spend at CFAR talking about our feelings…"
Being able to put yourself into particular emotional states seems to make a lot of pieces of rationality easier. For example, for most of us, it's instrumentally rational to explore a wider set of possible actions – different ways of studying, holding conversations, trying to be happy, and so on – beyond whatever our defaults happen to be. And for most of us, inertia and aversions get in the way of that exploration. But getting yourself into "playful" mode (one of the hypothesized primary emotional circuits common across mammals) can make it easier to branch out into a wider swath of Possible-Action Space. Similarly, being able to call up a feeling of curiosity or of "seeking" (another candidate for a primary emotional circuit) can help you conquer motivated cognition and learned blankness.
And simply being able to notice your emotional state is rarer and more valuable than most people realize. For example, if you're in fight-or-flight mode, you're going to feel more compelled to reject arguments that feel like a challenge to your identity. Being attuned to the signs of sympathetic nervous system activation – that you're tensing up, or that your heart rate is increasing – means you get cues to double-check your reasoning, or to coax yourself into another emotional state.
We also use emotions as sources of data. You can learn to tap into feelings of surprise or confusion to get a sense of how probable you implicitly expect some event to be. Or practice simulating hypotheticals ("What if I knew that my novel would never sell well?") and observing your resultant emotions, to get a clearer picture of your utility function.
And emotions-as-data can be a valuable check on your System 2's conclusions. One of our standard classes is "Goal Factoring," which entails finding some alternate set of actions through which you can purchase the goods you want more cheaply. So you might reason, "I'm doing martial arts for the exercise and self-defense benefits... but I could purchase both of those things for less time investment by jogging to work and carrying Mace." If you listened to your emotional reaction to that proposal, however, you might notice you still feel sad about giving up martial arts even if you were getting the same amount of exercise and self-defense benefits somehow else.
Which probably means you've got other reasons for doing martial arts that you haven't yet explicitly acknowledged -- for example, maybe you just think it's cool. If so, that's important, and deserves a place in your decisionmaking. Listening for those emotional cues that your explicit reasoning has missed something is a crucial step, and to the extent that aspiring rationalists sometimes forget it, I suppose that's a Steel-Manned Straw Vulcan (Steel Vulcan?) that actually is worth worrying about.
I'll name one more trait that unites, rather than divides, CFAR and Less Wrong. We both diverge from "traditional" rationality in that we're concerned with determining which general methods systematically perform well, rather than defending some set of methods as "rational" on a priori criteria alone. So CFAR's picture of what rationality looks like, and how to become more rational, will and should change over the coming years as we learn more about the effects of our rationality training efforts.
In grade school, I read a series of books titled Sideways Stories from Wayside School by Louis Sachar, whom you may know as the author of the novel Holes, which was made into a movie in 2003. The series included two books of math problems, Sideways Arithmetic from Wayside School and More Sideways Arithmetic from Wayside School, the latter of which included the following problem (paraphrased):
The students in Mrs. Jewls's class have been given the privilege of voting on the height of the school's new flagpole. She has each of them write down what they think would be the best height for the flagpole. The votes are distributed as follows:
- 1 student votes for 6 feet.
- 1 student votes for 10 feet.
- 7 students vote for 25 feet.
- 1 student votes for 30 feet.
- 2 students vote for 50 feet.
- 2 students vote for 60 feet.
- 1 student votes for 65 feet.
- 3 students vote for 75 feet.
- 1 student votes for 80 feet, 6 inches.
- 4 students vote for 85 feet.
- 1 student votes for 91 feet.
- 5 students vote for 100 feet.
At first, Mrs. Jewls declares 25 feet the winning answer, but one of the students who voted for 100 feet convinces her there should be a runoff between 25 feet and 100 feet. In the runoff, each student votes for the height closest to their original answer. But after that round of voting, one of the students who voted for 85 feet wants their turn, so 85 feet goes up against the winner of the previous round of voting, and the students vote the same way, with each student voting for the height closest to their original answer. Then the same thing happens again with the 50 foot option. And so on, with each number, again and again, "very much like a game of tether ball."
Question: if this process continues until it settles on an answer that can't be beaten by any other answer, how tall will the new flagpole be?
Answer (rot13'd): fvkgl-svir srrg, orpnhfr gung'f gur zrqvna inyhr bs gur bevtvany frg bs ibgrf. Naq abj lbh xabj gur fgbel bs zl svefg rapbhagre jvgu gur zrqvna ibgre gurberz.
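The runoff procedure is just a sequence of pairwise majority votes, so it's easy to check the result by brute force. Here's a minimal Python sketch of Mrs. Jewls's process; running it confirms the rot13'd answer above without decoding it:

```python
from statistics import median

votes = ([6, 10] + [25] * 7 + [30] + [50] * 2 + [60] * 2 + [65]
         + [75] * 3 + [80.5] + [85] * 4 + [91] + [100] * 5)

def beats(a, b):
    """True if a strict majority prefers height a to height b.
    Each student votes for whichever option is closest to their
    original answer; equidistant students abstain."""
    return sum(abs(v - a) < abs(v - b) for v in votes) > len(votes) / 2

champion = 25  # Mrs. Jewls's initial winner
changed = True
while changed:  # keep holding runoffs until nothing beats the champion
    changed = False
    for challenger in set(votes):
        if beats(challenger, champion):
            champion = challenger
            changed = True

print(champion)                   # the unbeatable height...
assert champion == median(votes)  # ...which is the median of the votes
```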
Why am I telling you this? There's a minor reason and a major reason. The minor reason is that this shows it is possible to explain little-known academic concepts, at least certain ones, in a way that grade schoolers will understand. It's a data point that fits nicely with what Eliezer has written about how to explain things. The major reason, though, is that a month ago I finished my systematic read-through of the sequences, and while I generally agree that they're awesome (perhaps more so than most people; I didn't see the problem with the metaethics sequence), I thought the mini-discussion of political parties and voting was, on reflection, weak and indicative of a broader nerd failure mode.
TLDR (courtesy of lavalamp):
- Politicians probably conform to the median voter's views.
- Most voters are not the median, so most people usually dislike the winning politicians.
- But people dislike the politicians for different reasons.
- Nerds should avoid giving advice that boils down to "behave optimally". Instead, analyze the reasons for the current failure to behave optimally and give more targeted advice.
It's that time of year again.
If you are reading this post, and have not been sent here by some sort of conspiracy trying to throw off the survey results, then you are the target population for the Less Wrong Census/Survey. Please take it. Doesn't matter if you don't post much. Doesn't matter if you're a lurker. Take the survey.
This year's census contains a "main survey" that should take about ten or fifteen minutes, as well as a bunch of "extra credit questions". You may do the extra credit questions if you want. You may skip all the extra credit questions if you want. They're pretty long and not all of them are very interesting. But it is very important that you not put off doing the survey or not do the survey at all because you're intimidated by the extra credit questions.
It also contains a chance at winning a MONETARY REWARD at the bottom. You do not need to fill in all the extra credit questions to get the MONETARY REWARD, just make an honest stab at as much of the survey as you can.
Please make things easier for my computer and by extension me by reading all the instructions and by answering any text questions in the simplest and most obvious possible way. For example, if it asks you "What language do you speak?" please answer "English" instead of "I speak English" or "It's English" or "English since I live in Canada" or "English (US)" or anything else. This will help me sort responses quickly and easily. Likewise, if a question asks for a number, please answer with a number such as "4", rather than "four".
Last year there was some concern that the survey period was too short, or too uncertain. This year the survey will remain open until 23:59 PST December 31st 2013, so as long as you make time to take it sometime this year, you should be fine. Many people put it off last year and then forgot about it, so why not take it right now while you are reading this post?
Okay! Enough preliminaries! Time to take the...
Thanks to everyone who suggested questions and ideas for the 2013 Less Wrong Census/Survey. I regret I was unable to take all of your suggestions into account, because of some limitations in Google Docs, concern about survey length, and contradictions/duplications among suggestions. I think I got most of them in, and others can wait until next year.
By ancient tradition, if you take the survey you may comment saying you have done so here, and people will upvote you and you will get karma.
I have been autodidacting quite a bit lately. You may have seen my reviews of books on the MIRI course list. I've been going for about ten weeks now. This post contains my notes about the experience thus far.
Much of this may seem obvious, and would have seemed obvious if somebody had told me in advance. But nobody told me in advance. As such, this is a collection of things that were somewhat surprising at the time.
Part of the reason I'm posting this is because I don't know a lot of autodidacts, and I'm not sure how normal any of my experiences are. (Though on average, I'd guess they're about average.) As always, keep in mind that I am only one person and that your mileage may vary.
When I began my quest for more knowledge, I figured that in this modern era, a well-written textbook and an account on math.stackexchange would be enough to get me through anything. And I was right… sort of.
But not really.
The problem is, most of the time that I get stuck, I get stuck on something incredibly stupid. I've either misread something somewhere or misremembered a concept from earlier in the book. Usually, someone looking over my shoulder could correct me in ten seconds with three words.
"Dude. Disjunction. Disjunction."
These are the things that eat my days.
In principle, places like stackexchange can get me unstuck, but they're an awkward tool for the job. First of all, my stupid mistakes are heavily contextualized. A full context dump is necessary before I can even ask my question, and this takes time. Furthermore, I feel dumb asking stupid questions on stackexchange-type sites. My questions are usually things that I can figure out with a close re-read (except, I'm not sure which part needs a re-read). I usually opt for a close re-read of everything rather than asking for help. This is even more time consuming.
The infuriating thing is that answering these questions usually doesn't require someone who already knows the answers: it just requires someone who didn't make exactly the same mistakes as me. I lose hours on little mistakes that could have been fixed within seconds if I was doing this with someone else.
That's why my number one piece of advice for other people attempting to learn on their own is do it with a friend. They don't need to be more knowledgeable than you to answer most of the questions that come up. They just need to make different misunderstandings, and you'll be able to correct each other as you go along.
The thing I miss most about college is tight feedback loops while learning. When autodidacting, the feedback loop can be long.
I still haven't managed to follow my own advice here. I'm writing this advice in part because it should motivate me to actually pair up. Unfortunately, there is nobody in my immediate circle who has the time or patience to read along with me, but there are a number of resources I have not yet explored (the LessWrong study hall, for example, or soliciting help from actual mathematicians). It's on my list of things to do.
Read, reread, rereread
Reading Model Theory was one of the hardest things I've done. Not necessarily because the content was hard, but because it was the first time I actually learned something that was way outside my comfort zone.
The short version is that Basic Category Theory and Naïve Set Theory left me somewhat overconfident, and that I should have read a formal logic textbook before diving in. I had basic familiarity with logic, but no practice. Turns out practice is important.
Anyway, it's not like Model Theory was impossible just because I skipped my logic exercises. It was just hard. There are a number of little misconceptions you have when you're familiar with something but you've never applied it, and I found myself having to clean those out just to understand what Model Theory was trying to say to me.
In retrospect, this was an efficient way to strengthen my understanding of mathematical logic and learn Model Theory at the same time. (I've moved on to a logic textbook, and it's been a cakewalk.) That said, I wouldn't wish the experience on others.
In the process, I learned how to learn things that are way outside my comfort zone. In the past, all the stuff I've learned has been either easy, or an extension of things that I was already interested in and experienced with. Reading Model Theory was the first time in my life where I read a chapter of a textbook and it made absolutely no sense. In fact, it took about three passes per chapter before each one made sense.
- The first pass was barely sufficient to understand all the words and symbols. I constantly had to go research a topic. I followed proofs one step at a time, able to verify the validity of each step but not really understand what was going on. I came out the other end believing the results, but not knowing them.
- Another pass was required to figure out what the book was actually trying to say to me. Once all the words made sense and I was comfortable with their usage, the second pass allowed me to see what the theorems and proofs were actually saying. This was nice, but it still wasn't sufficient: I understood the theorems, but they seemed like a random walk through theorem-space. I couldn't yet understand why anyone would say those particular things on purpose.
- The third pass was necessary to understand the greater theory. I've never been particularly good at memorizing things, and it's not sufficient for me to believe and memorize a theorem. If it's going to stick, I have to understand why it's important. I have to understand why this theorem in particular is being stated, rather than another. I have to understand the problem that's being solved. A third pass was necessary to figure out the context in which the text made sense.
After a third pass of any given chapter, the next chapter didn't seem quite so random. When the upcoming content started feeling like a natural progression instead of a random walk, I knew I was making progress.
I note this because this is the first time that I had to read a math text more than once to understand what was going on. I'm not talking about individual sentences or paragraphs, I'm talking about finishing a chapter, feeling like "wat", and then starting the whole chapter over. Twice.
I'm not sure if I'm being naïve (for never having needed to do this before) or slow (for having to do this for Model Theory), but I did not anticipate requiring three passes. Mostly, I didn't anticipate gaining as much as I did from a re-read; I would have guessed that something opaque on the first pass would remain opaque on a second pass.
This, I'm pretty sure, was naïvety.
So take note: if you stumble upon something that feels very hard, it might be more useful than anticipated to re-read it.
Cognitive exchange rates
When reading Model Theory, I was only able to convert 30-50% of my allotted "study time" into actual study.
This is somewhat surprising, as I had no such troubles with Basic Category Theory or Naïve Set Theory.
(I often have the opposite problem when writing code; this is probably due to the different reward structure.)
I was somewhat frustrated with my inability to study as much as I would have liked. My usual time-into-studying conversion rate is much higher (I'd guess 80%ish, though I haven't been measuring).
I'm not sure what factor made it harder for me to study model theory. I don't think it was the difficulty directly, as I often tend to work harder in the face of a challenge. I'd guess that it was either the slower rate of rewards (caused by a slower pace of learning) or actual cognitive exhaustion.
In the vein of cognitive exhaustion, there were a few times while reading Model Theory where I seem to have become cognitively exhausted before becoming physically exhausted. This was a first for me. I'm not referring to those times when you've done a lot of mental work and you shy away from doing anything difficult; that's happened to me plenty. Rather, in this case, I felt fully awake and ready to keep reading. And I did keep reading. It just… didn't work. I'd have trouble following simple proofs. I'd fail at parsing sentences that were quite clear after resting.
I'm still not sure what to make of this, and I don't have sufficient data to draw conclusions. However, it seems like there are mental states where I feel awake and able to continue, but my mind is just not capable of doing the heavy lifting.
Again, the fact that I'm only just realizing this now is probably naïvety, but it's something to remember before getting frustrated with yourself.
Explain it to someone
As I've said before, one of the best ways to learn something is to do the problem sets. For Model Theory, though, there were times when I finished reading through a chapter and was not capable of doing the problems.
Re-reading helped, as mentioned above. Another thing that helped was explaining the concepts.
I explained model theory pretty extensively to a text file on my computer. I sketched the proofs in my own words and stated their significance. I explained the syntax being used. I tried to motivate each idea. (The notes are still lying around somewhere; I haven't posted them because they're pretty much a derivative work at this point.)
I found that this went a long way towards helping me track down places where I'd thought I learned something, but actually hadn't. If you're having trouble, go explain the concept to somebody (or to a text file). This can bridge the gap between "I read it" and "I can do the problems" quite well. For me, this technique often took problems from "unapproachable" to "easy" in one fell swoop.
Don't book yourself solid
I'm pretty good at avoiding stress. I have the (apparently rare) ability to drop all work-related concerns at the door when I leave. I don't even know how to get stressed by bad luck, especially if I made good choices given the information I had at the time. I get tense in stressful situations with time constraints, as anyone does, but I'm adept at avoiding the permastress that I've seen plague friends and family — unless I've booked myself solid.
I've had a packed schedule these past few weeks. I try to move the needle on at least two projects a day (more on weekends). Even if it's entirely reasonable to fit all these things into my schedule, I have not yet found a way to avoid the stress.
Even when I know that, if I push myself, I can read this much and write that much and code this feature all in one day, I haven't found a good way to push myself without pressure-stress.
I'm still hoping that I'll learn how to move quickly without stress as I learn my capabilities, but I'm not sure I've been adequately accounting for the cost of stress.
It's worth remembering that doing less than you're capable of on purpose might be a good strategy for maximizing long-term output.
There you go. Those are my notes gathered from trying to learn lots of things very quickly (and trying to learn one hard thing in particular). Comments are encouraged; I am by no means an expert.
When I was a teenager, I picked up my mom's copy of Dale Carnegie's How to Win Friends and Influence People. One of the chapters that most made an impression on me was titled "You Can't Win an Argument," in which Carnegie writes:
Nine times out of ten, an argument ends with each of the contestants more firmly convinced than ever that he is absolutely right.
You can’t win an argument. You can’t because if you lose it, you lose it; and if you win it, you lose it. Why? Well, suppose you triumph over the other man and shoot his argument full of holes and prove that he is non compos mentis. Then what? You will feel fine. But what about him? You have made him feel inferior. You have hurt his pride. He will resent your triumph. And -
"A man convinced against his will
"Is of the same opinion still."
In the next chapter, Carnegie quotes Benjamin Franklin saying how he had made it a rule never to contradict anyone. Carnegie approves: he thinks you should never argue with or contradict anyone, because you won't convince them (even if you "hurl at them all the logic of a Plato or an Immanuel Kant"), and you'll just make them mad at you.
It may seem strange to hear this advice cited on a rationalist blog, because the atheo-skeptico-rational-sphere violates this advice on a routine basis. In fact I've never tried to follow Carnegie's advice—and yet, I don't think the rationale behind it is completely stupid. Carnegie gets human psychology right, and I fondly remember reading his book as being when I first really got clued in about human irrationality.
At the recent CFAR Workshop in NY, someone mentioned that they were uncomfortable with pauses in conversation, and that got me thinking about different conversational styles.
Growing up with friends who were disproportionately male and disproportionately nerdy, I learned that it was a normal thing to interrupt people. If someone said something you had to respond to, you’d just start responding. Didn’t matter if it “interrupted” further words – if they thought you needed to hear those words before responding, they’d interrupt right back.
Occasionally some weird person would be offended when I interrupted, but I figured this was some bizarre fancypants rule from before people had places to go and people to see. Or just something for people with especially thin skins or delicate temperaments, looking for offense and aggression in every action.
Then I went to St. John’s College – the talking school (among other things). In Seminar (and sometimes in Tutorials) there was a totally different conversational norm. People were always expected to wait until whoever was talking was done. People would apologize not just for interrupting someone who was already talking, but for accidentally saying something when someone else looked like they were about to speak. This seemed totally crazy. Some people would just blab on unchecked, and others didn’t get a chance to talk at all. Some people would ignore the norm and talk over others, and nobody interrupted them back to shoot them down.
But then a few interesting things happened:
1) The tutors were able to moderate the discussions, gently. They wouldn’t actually scold anyone for interrupting, but they would say something like, “That’s interesting, but I think Jane was still talking,” subtly pointing out a violation of the norm.
2) People started saying less at a time.
#1 is pretty obvious – with no enforcement of the social norm, a no-interruptions norm collapses pretty quickly. But #2 is actually really interesting. If talking at all is an implied claim that what you’re saying is the most important thing that can be said, then polite people keep it short.
With 15-20 people in a seminar, this also meant that people rarely tried to force the conversation in a certain direction. When you’re done talking, the conversation is out of your hands. This can be frustrating at first, but with time, you learn to trust not your fellow conversationalists individually, but the conversation itself, to go where it needs to. If you haven’t said enough, then you trust that someone will ask you a question, and you’ll say more.
When people are interrupting each other – when they’re constantly tugging the conversation back and forth between their preferred directions – then the conversation itself is just a battle of wills. But when people just put in one thing at a time, and trust their fellows to only say things that relate to the thing that came right before – at least, until there’s a very long pause – then you start to see genuine collaboration.
And when a lull in the conversation is treated as an opportunity to think about the last thing said, rather than an opportunity to jump in with the thing you were holding onto from 15 minutes ago because you couldn’t just interrupt and say it – then you also open yourself up to being genuinely surprised, to seeing the conversation go somewhere that no one in the room would have predicted, to introduce ideas that no one brought with them when they sat down at the table.
By the time I graduated, I’d internalized this norm, and the rest of the world seemed rude to me for a few months. Not just because of the interrupting – but more because I’d say one thing, politely pause, and then people would assume I was done and start explaining why I was wrong – without asking any questions! Eventually, I realized that I’d been perfectly comfortable with these sorts of interactions before college. I just needed to code-switch! Some people are more comfortable with a culture of interrupting when you want to, and accepting interruptions. Others are more comfortable with a culture of waiting their turn, and courteously saying only one thing at a time, not trying to cram in a whole bunch of arguments for their thesis.
Now, I’ve praised the virtues of wait culture because I think it’s undervalued, but there’s plenty to say for interrupt culture as well. For one, it’s more robust in “unwalled” circumstances. If there’s no one around to enforce wait culture norms, then a few jerks can dominate the discussion, silencing everyone else. But someone who doesn’t follow “interrupt” norms only silences themselves.
Second, it’s faster and easier to calibrate how much someone else feels the need to talk, when they’re willing to interrupt you. It takes willpower to stop talking when you’re not sure you were perfectly clear, and to trust others to pick up the slack. It’s much easier to keep going until they stop you.
So if you’re only used to one style, see if you can try out the other somewhere. Or at least pay attention and see whether you’re talking to someone who follows the other norm. And don’t assume that you know which norm is the “right” one; try it the “wrong” way and maybe you’ll learn something.
Cross-posted at my personal blog.
Some highlights from The Power of Habit: Why We Do What We Do in Life And Business by Charles Duhigg, a book which seems like an invaluable resource for pretty much everyone who wants to improve their lives. The below summarizes the first three chapters of the book, as well as the appendix, for I found those to be the most valuable and generally applicable parts. These chapters discuss individual habits, while the rest of the book discusses the habits of companies and societies. The later chapters also contain plenty of interesting content (some excerpts: [1 2 3]), and help explain the nature of e.g. some institutional failures.
Chapter One: The Habit Loop - How Habits Work
When a rat first navigates a foreign environment, such as a maze, its brain is full of activity as it works to process the new environment and to learn all the environmental cues. As the environment becomes more familiar, the rat's brain becomes less and less active, until even brain structures related to memory quiet down a week later. Navigating the maze no longer requires higher processing: it has become an automatic habit.
The process of converting a complicated sequence of actions into an automatic routine is known as "chunking", and human brains carry out a similar process. These routines vary in complexity, from putting toothpaste on your toothbrush before putting it in your mouth, to getting dressed or preparing breakfast, to very complicated processes such as backing one's car out of the driveway. All of these actions initially required considerable effort to learn, but eventually they became so automatic as to be carried out without conscious attention. As soon as we identify the right cue, such as pulling out the car keys, our brain activates the stored habit and lets our conscious minds focus on something else. In order to conserve effort, the brain will attempt to turn almost any routine into a habit.
However, it can be dangerous to deactivate our brains at the wrong time, for there may be something unanticipated in the environment that will turn a previously-safe routine into something life-threatening. To help avoid such situations, our brains evaluate prospective habits using a three-stage habit loop: a cue that tells the brain which stored routine to run, the routine itself, and a reward that tells the brain whether this particular loop is worth remembering.
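As a toy illustration of those three stages (the loop is Duhigg's; the example habit and field names here are my own):

```python
from dataclasses import dataclass

@dataclass
class HabitLoop:
    cue: str      # trigger that tells the brain which stored routine to run
    routine: str  # the behavior itself, executed without conscious attention
    reward: str   # payoff that tells the brain the loop is worth remembering

backing_out = HabitLoop(cue="pulling out the car keys",
                        routine="back the car out of the driveway",
                        reward="on the road, attention free for other things")
```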
I recently ran across Nick Bostrom’s idea of subjecting your strongest beliefs to a hypothetical apostasy in which you try to muster the strongest arguments you can against them. As you might have figured out, I believe strongly in effective altruism—the idea of applying evidence and reason to finding the best ways to improve the world. As such, I thought it would be productive to write a hypothetical apostasy on the effective altruism movement.
(EDIT: As per the comments of Vaniver, Carl Shulman, and others, this didn't quite come out as a hypothetical apostasy. I originally wrote it with that in mind, but decided that a focus on more plausible, more moderate criticisms would be more productive.)
- How to read this post
- Philosophical difficulties
- Poor cause choices
- Efficient markets for giving
- Inconsistent attitude towards rigor
- Poor psychological understanding
- Historical analogues
- Community problems
- Movement building issues
- Are these problems solvable?
How to read this post
(EDIT: the following two paragraphs were written before I softened the tone of the piece. They're less relevant to the more moderate version that I actually published.)
Hopefully this is clear, but as a disclaimer: this piece is written in a fairly critical tone. This was part of an attempt to get “in character”. This tone does not indicate my current mental state with regard to the effective altruism movement. I agree, to varying extents, with some of the critiques I present here, but I’m not about to give up on effective altruism or stop cooperating with the EA movement. The apostasy is purely hypothetical.
Also, because of the nature of a hypothetical apostasy, I’d guess that for effective altruist readers, the critical tone of this piece may be especially likely to trigger defensive rationalization. Please read through with this in mind. (A good way to counteract this effect might be, for instance, to imagine that you’re not an effective altruist, but your friend is, and it’s them reading through it: how should they update their beliefs?)
(End less relevant paragraphs.)
Finally, if you’ve never heard of effective altruism before, I don’t recommend making this piece your first impression of it! You’re going to get a very skewed view because I don’t bother to mention all the things that are awesome about the EA movement.
Effective altruism is, to my knowledge, the first time that a substantially useful set of ethics and frameworks to analyze one’s effect on the world has gained a broad enough appeal to resemble a social movement. (I’d say these principles are something like altruism, maximization, egalitarianism, and consequentialism; together they imply many improvements over the social default for trying to do good in the world—earning to give as opposed to doing direct charity work, working in the developing world rather than locally, using evidence and feedback to analyze effectiveness, etc.) Unfortunately, as a movement effective altruism is failing to use these principles to acquire correct nontrivial beliefs about how to improve the world.
By way of clarification, consider a distinction between two senses of the word “trying” I used above. Let’s call them “actually trying” and “pretending to try”. Pretending to try to improve the world is something like responding to social pressure to improve the world by querying your brain for a thing which improves the world, taking the first search result and rolling with it. For example, for a while I thought that I would try to improve the world by developing computerized methods of checking informally-written proofs, thus allowing more scalable teaching of higher math, democratizing education, etc. Coincidentally, computer programming and higher math happened to be the two things that I was best at. This is pretending to try. Actually trying is looking at the things that improve the world, figuring out which one maximizes utility, and then doing that thing. For instance, I now run an effective altruist student organization at Harvard because I realized that even though I’m a comparatively bad leader and don’t enjoy it very much, it’s still very high-impact if I work hard enough at it. This isn’t to say that I’m actually trying yet, but I’ve gotten closer.
Using this distinction between pretending and actually trying, I would summarize a lot of effective altruism as “pretending to actually try”. As a social group, effective altruists have successfully noticed the pretending/actually-trying distinction. But they seem to have stopped there, assuming that knowing the difference between fake trying and actually trying translates into ability to actually try. Empirically, it most certainly doesn’t. A lot of effective altruists still end up satisficing—finding actions that are on their face acceptable under core EA standards and then picking those which seem appealing because of other essentially random factors. This is more likely to converge on good actions than what society does by default, because the principles are better than society’s default principles. Nevertheless, it fails to make much progress over what is directly obvious from the core EA principles. As a result, although “doing effective altruism” feels like truth-seeking, it often ends up being just a more credible way to pretend to try.
This is based on a concept we developed at the Vancouver Rationalists meetup.
Different experiences level a person up at different rates. You could work some boring job all your life and be 60 and not be much more awesome than your average teenager. On the other hand, some people have such varied and extensive life experience that by 30 they are as awesome as a 1000 year old vampire.
This reminds me that it's possible to conduct your life with more or less efficiency, sometimes by orders of magnitude. Further, while we don't have actual life extension, it's content we care about, not run time. If you can change your habits such that you get 3 times as much done, that's like tripling your effective lifespan.
So how might one get a 100x speedup and become like a 1000 year old vampire in 10 years? This is absurdly ambitious, but we can try:
Do Hard Things
Some experiences catapult you forward in personal development. You can probably systematically collect these to build formidability as fast as possible.
Paul Graham says that many of the founders he sees (as head of YC) become much more awesome very quickly as need forces them to. This seems plausible, and it seems backed up by other sources as well. Basically "learn to swim by jumping in the deep end"; people have a tendency to take the easy way that results in less development when given the chance, so removing the chance to slack off can be beneficial.
That has definitely been my personal experience as well. At work, the head engineer got brain cancer and I got de-facto promoted to head of two of the projects, which I then leveled up to be able to do. It felt pretty scary at first, but now I'm bored and wishing something further would challenge me. (Addendum: not bored right now at all; crazy crunch time for the other team, which I am helping.) It seems really hard to just do better without such forcing; as far as I can tell I could work much harder than now, but willpower basically doesn't exist, so I don't.
On that note, a friend of mine got big results from joining the Army and getting tear gassed in a trench while wet, cold, exhausted, sleep deprived, and hungry, which pushed him through stuff he wouldn't have thought he could deal with. Apparently it sort of re-calibrated his feelings about how well he should be doing and how hard things are, such that he is now a millionaire and awesome.
So the mechanism behind a lot of this seems to be recalibrating what seems hard or scary or beyond your normal sphere. I used to be afraid of phone calls and doing weird stuff like climbing trees in front of strangers, but not so much anymore; it feels like I just forget that they were scary. In the case of the phone there were a few times where I didn't have time to be scared, I needed to just get things done. In the case of climbing trees, I did it on my own enough for it to become normalized so that it didn't even come up that people would see me, because it didn't seem weird.
So tying that back in, there are experiences that you can put yourself into to force that normalization and acclimatization to hard stuff. For example, some people do this thing called "Rejection Therapy" or "Comfort Zone Expansion", basically going out and doing embarrassing or scary things deliberately to recalibrate your intuitions and teach your brain that they are not so scary.
On the failure end, self-improvement projects tend to fail when they require constant application of willpower. It's just a fact that you will fall off the wagon on those things. So you have to make it impossible to fall off the wagon. You have to make it scarier to fall off the wagon than it is to level up and just do it. This is the idea behind Beeminder, which takes your money if you don't do what your last-week self said you would.
I guess the thesis behind all this is that these level-ups are permanent, in that they make you more like a 1000 year old vampire, and you don't just go back to being your boring old mortal self. If this is true, the implication that you should seek out hard stuff seems pretty interesting and important.
Broadness of Experience
Think of a 1000 year old vampire; they would have done everything. Fought in battles, led armies, built great works, been in love, been everywhere, observed most aspects of the human experience, and generally seen it all.
Things you can do have sharply diminishing returns; the first few great movies you watch deliver most of the benefit of movie-watching, and likewise with video games, 4chan, most jobs, and most experiences in general. Thus it's really important to switch around the things you do a lot, so that you stay in that sharp, initially-growing part of the learning curve. You can get 90% of the vampire's experience with 10% of his time investment if you focus on the most enlightening parts of each experience.
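To see how "90% of the experience for 10% of the time" can fall out of an ordinary learning curve, here's a toy model; the saturating-exponential shape and the specific constants are my assumptions, purely for illustration:

```python
import math

def benefit(t, tau=1.0):
    """Toy saturating learning curve: benefit(t) = 1 - exp(-t / tau)."""
    return 1 - math.exp(-t / tau)

total_time = 23  # chosen so that 10% of the time yields ~90% of the benefit
print(benefit(0.1 * total_time) / benefit(total_time))  # ~0.90
```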
So besides doing hard things that level you up, you can get big gains by doing many things and switching as soon as you get bored (which is hopefully calibrated to how challenged you are).
You may remember that early in the Arab Spring revolution in Libya, an American student took the summer off from college to fight in the revolution. I bet he learned a lot. If you could do enough things like that, you'd be well on your way to matching the vampire.
This actually goes hand in hand with doing hard things; when you're not feeling challenged (you're on the flat part of that experience curve), it's probably best to throw yourself face first into some new project, both because it's new, and because it's hard.
Switching often has the additional benefit of normalizing strategic changes and practicing "what should I be doing"-type thoughts, which can't hurt if you intend to actually do useful stuff with your life.
There are probably many cases where full on switching is not best. For example, you don't become an expert in X by switching out of X as soon as you know the basics. It might be that you want to switch often on side-things but go deep on X. Alternatively, you probably want to do some kind of switch every now and then in X, maybe look at things from a different perspective, tackle a different problem, or something like that. This is the Deliberate Practice theory of expertise.
So don't forget the shape of that experience curve. As soon as you start to feel that leveling off, find a way to make it fresh again.
Do Things Quickly
Another big angle on this idea is that every hour is an opportunity, and you want to make the best of them. This seems totally obvious but I definitely "get it" a lot more having thought about it in terms of becoming a 1000 year old vampire.
A big example is procrastination. I have a lot of things that have been hanging around on my todo list for a long time, basically oppressing me by their presence. I can't relax and look to new things to do while there's still that one stupid thing on my todo list. The key insight is that if you process the stuff on your todo list now instead of slacking now and doing it later, you get it out of the way and then you can do something else later, and thereby become a 1000 year old vampire faster.
So a friend and I have internalized this a bit more and started really noticing those opportunity costs, and actually started knocking things off faster. I'm sure there's more where that came from; we are nowhere near optimal in Doing It Now, so it's probably good to meditate on this more.
As a concrete example, I'm writing tonight because I realized that I need to just get all my writing ideas out of the way to make room for more awesomeness.
The flipside of this idea is that a lot of things are complete wastes of time, in the sense that they just burn up lifespan and don't get you anything, or even weaken you.
Bad habits like reading crap on the Internet, watching TV, watching porn, playing video games, sleeping in, and so on are obvious losses. It's really hard to internalize that, but this 1000-year-old-vampire concept has been helpful for me by making the magnitude of the cost more salient. Do you want to wake up when you're 30 and realize you wasted your youth on meaningless crap, or do you want to get off your ass right now, write that thing you've been meaning to write, and be a fscking vampire in 10 years?
It's not just bad habits, though; a lot of it is your broader position in life that wastes time or doesn't. For example, repetitive wage work that doesn't challenge you is really just trading a huge chunk of your life for not even much money. Obviously sometimes you have to, but you have to realize that trading away half your life is a pretty raw deal that is to be avoided. You don't even really get anything for commuting and housework. Maybe I really should quit my job soon...
I have 168 hours a week, of which only 110 are feasible to use (sleep), and by the time we include all the chores, wage-work, bad habits, and procrastination, I probably only live 30 hours a week. That's bullshit; three quarters of my life pissed away. I could live four times as much if I could cut out that stuff.
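The arithmetic behind that complaint, using the post's own numbers:

```python
hours_per_week = 168  # total
usable = 110  # what's left after sleep
lived  = 30   # what's left after chores, wage work, bad habits, procrastination

print(1 - lived / usable)  # ~0.73: roughly "three quarters of my life pissed away"
print(usable / lived)      # ~3.7: the "live four times as much" multiplier
```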
So this is just the concept of time opportunity costs dressed up to be more salient. Basic economics concepts seem really quite valuable in this way.
Do it now so you can do something else later. Avoid crap work.
Social Environment and Stimulation
I notice that I'm most alive and do my best intellectual work when talking to other people who are smart and interested in having deep technical conversations. Other things like certain patterns of time pressure create this effect where I work many times harder and more effectively than otherwise. A great example is technical exams; I can blast out answers to hundreds of technical questions at quite a rate.
It seems like a good idea to induce this state where you are more alive (is it the "flow" state?) if you want to live more life. It also seems totally possible to do so more often by hanging out with the right people and exposing yourself to the right working conditions and whatnot.
One thing that will come up is that it's quite draining, in that I sometimes feel exhausted and can't get much done after a day of more intense work. Is this a real thing? Probably. Still, I'm nowhere near the limit even given the need to rest, in general.
I ought to do some research to learn more about this. If it's connected to "flow", there's been a lot of research, AFAIK.
I also ought to just hurry up and move to California where there is a proper intellectual community that will stimulate me much better than the meager group of brains I could scrape together in Vancouver.
The other benefit of a good intellectual community is that they can incentivize doing cooler things. When all your friends are starting companies or otherwise doing great work, sitting around on the couch feels like a really bad idea.
So if we want to live more life, finding more ways to enter that stimulated flow state seems like a prudent thing to do, whether that means just making way for it in your work habits, putting yourself in more challenging social and intellectual environments, or whatever.
Adding It Up
So how fast can we go overall if we do all of this?
By seeking many new experiences to keep learning, I think we can plausibly get 10x speedup over what you might do by default. Obviously this can be more or less, based on circumstances and things I'm not thinking of.
On top of that, it seems like I could do 4x as much by maintaining a habit of doing it now and avoiding crap work. How to do this, I don't know, but it's possible.
I don't know how to estimate the actual gains from a stimulating environment. It seems like it could be really really high, or just another incremental gain in efficiency, depending how it goes down. Let's say that on top of the other things, we can realistically push ourselves 2x or 3x harder by social and environmental effects.
Doing hard things seems huge, but also quite related to the doing new things angle that we already accounted for. So explicitly remembering to do hard things on top of that? Maybe 5x? This again will vary a lot based on what opportunities you are able to find, and unknown factors, but 5x seems safe enough given mortal levels of ingenuity and willpower.
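Taking the post's estimates at face value, the "adding up" is really multiplying; I've used 2.5x as a midpoint for the "2x or 3x" environment estimate, which is my own assumption:

```python
new_experiences = 10   # seeking novelty instead of repetition
do_it_now       = 4    # cutting procrastination and crap work
environment     = 2.5  # stimulating people and conditions (midpoint of 2-3x)
hard_things     = 5    # deliberately seeking out hard, scary experiences

print(new_experiences * do_it_now * environment * hard_things)  # 500.0
```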
So all together, someone who:
- Often thinks about where they are on the experience curve for everything they do, and takes action on that when appropriate,
- Maintains a habit of doing stuff now and visualizing those opportunity costs,
- Puts themselves in a stimulating environment like the bay area intellectual community and surrounds themselves with stimulating people and events,
- Seeks out the hardest character-building experiences like getting tear gassed in a trench or building a company from scratch,

can plausibly get a 500x speedup and live 1000 normal years in 2. That seems pretty wild, but none of these things are particularly out there, and people like Elon Musk or Eliezer Yudkowsky do seem to do around that magnitude more than the average joe.
Perhaps they don't multiply quite that conveniently, or there's some other gotcha, but the target seems reachable, and these things will help. On the other hand, they almost certainly self-reinforce; a 1000 year old vampire would have mastered the art of living life at ever higher efficiencies.
This does seem to be congruent with all this stuff being power-law distributed, which of course makes it difficult to summarize by a single number like 500.
The final question, of course, is what real speedup you or I can expect to gain from writing or reading this. Getting more than a 2x or 3x speedup from a low-level insight or a blog post seems like a stretch of the imagination, never mind 500x. But still, power laws happen. There's probably a massive payoff to taking this idea seriously.
Recently, I completed my first systematic read-through of the sequences. One of the biggest effects this had on me was considerably warming my attitude towards Bayesianism. Not long ago, if you'd asked me my opinion of Bayesianism, I'd probably have said something like, "Bayes' theorem is all well and good when you know what numbers to plug in, but all too often you don't."
Now I realize that that objection is based on a misunderstanding of Bayesianism, or at least Bayesianism-as-advocated-by-Eliezer-Yudkowsky. "When (Not) To Use Probabilities" is all about this issue, but a cleaner expression of Eliezer's true view may be this quote from "Beautiful Probability":
No, you can't always do the exact Bayesian calculation for a problem. Sometimes you must seek an approximation; often, indeed. This doesn't mean that probability theory has ceased to apply, any more than your inability to calculate the aerodynamics of a 747 on an atom-by-atom basis implies that the 747 is not made out of atoms. Whatever approximation you use, it works to the extent that it approximates the ideal Bayesian calculation - and fails to the extent that it departs.
The practical upshot of seeing Bayesianism as an ideal to be approximated, I think, is this: you should avoid engaging in any reasoning that's demonstrably nonsensical in Bayesian terms. Furthermore, Bayesian reasoning can be fruitfully mined for heuristics that are useful in the real world. That's an idea that actually has real-world applications for human beings, hence the title of this post, "Bayesianism for Humans."
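As one concrete instance of such a heuristic, here's the classic base-rate check, with hypothetical numbers chosen only for illustration; a reasoner who neglects the prior here is being "demonstrably nonsensical in Bayesian terms":

```python
p_h       = 0.01  # prior: P(hypothesis) -- a 1% base rate
p_e_h     = 0.90  # likelihood: P(evidence | hypothesis)
p_e_not_h = 0.09  # false-positive rate: P(evidence | not hypothesis)

p_e = p_e_h * p_h + p_e_not_h * (1 - p_h)  # total probability of the evidence
posterior = p_e_h * p_h / p_e              # Bayes' theorem
print(round(posterior, 3))  # ~0.092: strong evidence, yet still unlikely
```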