The following posts may be useful background material: Sorting Out Sticky Brains; Mental Crystallography; Generalizing From One Example
I took the word "luminosity" from "Knowledge and its Limits" by Timothy Williamson, although I'm using it in a different sense than he did. (He referred to "being in a position to know" rather than actually knowing, and in his definition, he doesn't quite restrict himself to mental states and events.) The original ordinary-language sense of "luminous" means "emitting light, especially self-generated light; easily comprehended; clear", which should put the titles into context.
Luminosity, as I'll use the term, is self-awareness. A luminous mental state is one that you have and know that you have. It could be an emotion, a belief or alief, a disposition, a quale, a memory - anything that might happen or be stored in your brain. What's going on in your head? What you come up with when you ponder that question - assuming, nontrivially, that you are accurate - is what's luminous to you. Perhaps surprisingly, it's hard for a lot of people to tell. Even if they can identify the occurrence of individual mental events, they have tremendous difficulty modeling their cognition over time, explaining why it unfolds as it does, or observing ways in which it's changed. With sufficient luminosity, you can inspect your own experiences, opinions, and stored thoughts. You can watch them interact, and discern patterns in how they do that. This lets you predict what you'll think - and in turn, what you'll do - in the future under various possible circumstances.
[Epistemic status | Contains generalization based on like three data points.]
In grad school, I took a philosophy of science class that was based around looking for examples of bad reasoning in the scientific literature. The kinds of objections to published scientific studies we talked about were not stupid ones. The professor had a background in statistics, and as far as I could tell knew her stuff in that area (though she dismissed Bayesianism in favor of orthodox statistics). And no, unlike some of the professors in the department, she wasn't an anti-evolutionist or anything like that.
Instead, she was convinced that cellphones cause cancer, in spite of the fact that there's scant evidence for the claim and no plausible physical mechanism for how that could happen. She held a number of other borderline-fringe beliefs that I won't get into here, but that was the big screaming red flag.*
Over the course of the semester, I got a pretty good idea of what was going on. She had an agenda—it happened to be an environmentalist, populist, pro-"natural"-things agenda, but that's incidental. The problem was that when she saw a scientific study that seemed at odds with her agenda, she went looking for flaws. And often she could find them! Real flaws, not ones she was imagining! But people who've read the rationalization sequence will see a problem here...
In my last post, I quoted Robin Hanson on the tendency of some physicists to be unduly dismissive of other fields. But based on the above case and a couple of others like it, I've come to suspect statistics may be even worse than physics in that way: fluency in statistics sometimes causes a supercharged sophistication effect.
For example, some anthropogenic global warming skeptics make a big deal of alleged statistical errors in global warming research, but as I wrote in my post Trusting Expert Consensus:
Michael Mann et al's so-called "hockey stick" graph has come under a lot of fire from skeptics, but (a) many other reconstructions have reached the same conclusion and (b) a panel formed by the National Research Council concluded that, while there were some problems with Mann et al's statistical analysis, these problems did not affect the conclusion. Furthermore, even if we didn't have the pre-1800 reconstructions, I understand that given what we know about CO2's heat-trapping properties, and given the increase in atmospheric CO2 levels due to burning fossil fuels, it would be surprising if humans hadn't caused significant warming.
Most recently, I got into a Twitter argument with someone who claimed that "IQ is demonstrably statistically meaningless" and that this was widely accepted among statisticians. Not only did this set off my "academic clique!" alarm bells, but I'd just come off doing a spurt of reading about intelligence, including the excellent Intelligence: A Very Short Introduction. The claim that IQ is meaningless was wildly contrary to what I understood was the consensus among people who study intelligence for a living.
In response to my surprise, I got an article that contained lengthy and impressive-looking statistical arguments... but completely ignored a couple of key points from the intelligence literature I'd read: first, that there's a strong correlation between IQ and real-world performance, and second, that correlations between the components of intelligence we know how to test for turn out to be really strong. If IQ is actually made up of several independent factors, we haven't been able to find them. Maybe some people in intelligence research really did make the mistakes alleged, but there was more to intelligence research than the statistician who wrote the article let on.
It would be fair to shout a warning about correspondence bias before inferring anything from these cases. But consider two facts:
- Essentially all scientific fields rely heavily on statistics.
- There's a lot more to mastering a scientific discipline than learning statistics, which limits how well most scientists will ever master statistics.
The first fact may make it tempting to think that if you know a lot of statistics, you're in a privileged position to judge the validity of any scientific claim you come across. But the second fact means that if you've specialized in statistics, you'll probably be better at it than most scientists, even good scientists. So if you go scrutinizing their papers, there's a good chance you'll find clear mistakes in their stats, and an even better chance you'll find arguable ones.
Bayesians will realize that, since there's a good chance of that happening even when the conclusion is correct and well-supported by the evidence, finding mistakes in the statistics is only weak evidence that the conclusion is wrong. Call it the statistician's fallacy: thinking that finding a mistake in the statistics is sufficient grounds to dismiss a finding.
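To see why in Bayesian terms, here's a minimal sketch with made-up numbers; the probabilities and prior odds below are purely illustrative assumptions on my part, not figures from any actual study:

```python
# Illustrative numbers only: how much should "I found a statistical flaw"
# move you against a conclusion the experts already agree on?

p_flaw_given_right = 0.4   # decent chance a careful critic finds some flaw even when the conclusion is right
p_flaw_given_wrong = 0.8   # flaws are somewhat more likely when the conclusion is actually wrong
prior_odds_right = 9.0     # expert consensus: 9:1 odds the conclusion is right

likelihood_ratio = p_flaw_given_wrong / p_flaw_given_right   # 2.0 against the conclusion
posterior_odds_right = prior_odds_right / likelihood_ratio   # 9 / 2 = 4.5

print(posterior_odds_right)  # 4.5:1 -- the odds still favor the conclusion
```

On numbers like these, finding a flaw does shift the odds, but nowhere near enough to overturn the conclusion; that's the sense in which it's only weak evidence.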
Oh, if you're dealing with a novel finding that experts in the field aren't sure what to make of yet, and the statistics turn out to be wrong, then that may be enough. You may have better things to do than investigate further. But when a solid majority of the experts agree on a conclusion, and you see flaws in their statistics, I think the default assumption should be that they still know the issue better than you and very likely the sum total of the available evidence does support the conclusion. Even if the specific statistical arguments you've seen from them are wrong.
*Note: I've done some Googling to try to find rebuttals to this link, and most of what I found confirms it. I did find some people talking about multi-photon effects and heating, but couldn't find defenses of these suggestions that rise beyond people saying, "well there's a chance."
The "Prisoner's Dilemma" refers to a game theory problem developed in the 1950's. Two prisoners are taken and interrogated separately. If either of them confesses and betrays the other person - "defecting" - they'll receive a reduced sentence, and their partner will get a greater sentence. However, if both defect, then they'll both receive higher sentences than if neither of them confessed.
This brings the prisoner to a strange problem. The best solution individually is to defect. But if both take the individually best solution, then they'll be worst off overall. This has wide ranging implications for international relations, negotiation, politics, and many other fields.
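For concreteness, here's a minimal sketch of one standard payoff structure; the specific sentence lengths are illustrative numbers of my own choosing, not part of the original formulation:

```python
# Illustrative payoff matrix (years in prison; lower is better for that prisoner).
# Key: (my move, partner's move) -> (my sentence, partner's sentence).
payoffs = {
    ("cooperate", "cooperate"): (1, 1),  # both stay silent
    ("cooperate", "defect"):    (3, 0),  # I stay silent, partner betrays me
    ("defect",    "cooperate"): (0, 3),  # I betray, partner stays silent
    ("defect",    "defect"):    (2, 2),  # we betray each other
}

# Whatever the partner does, defecting gives me a shorter sentence than cooperating...
for partner_move in ("cooperate", "defect"):
    my_defect = payoffs[("defect", partner_move)][0]
    my_cooperate = payoffs[("cooperate", partner_move)][0]
    print(partner_move, my_defect < my_cooperate)  # prints True both times

# ...yet mutual defection (2, 2) leaves both of us worse off than mutual cooperation (1, 1).
```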
Members of LessWrong are incredibly smart people who tend to like game theory, and who debate and explore and try to understand problems like this. But does knowing game theory actually make you more effective in real life?
I think the answer is yes, with a caveat - you need the basic social skills to implement your game theory solution. The worst-case scenario in an interrogation would be to "defect by accident" - meaning that you'd stupidly blurt something out because you didn't think it through before speaking. This might result in you and your partner both receiving higher sentences... a very bad situation. Game theory doesn't come into play until the basic skill conditions are met, so that you can actually execute whatever plan you come up with.
The Purpose of This Post: I think many smart people "defect" by accident. I don't mean in serious situations like a police investigation. I mean in casual, everyday situations, where they tweak and upset people around them by accident, due to a lack of reflection on desired outcomes.
Rationalists should win. Defecting by accident frequently results in losing. Let's examine this phenomenon, and ideally work to improve it.
Contents Of This Post
- I'll define "defecting by accident."
- I'll explain a common outcome of defecting by accident.
- I'll give some recent, mild examples of accidental defections.
- I'll give examples of how to turn accidental defections into cooperation.
- I'll give some examples of how this can make you more successful at your goals.
- I'll list some books I recommend if you decide to learn more on the topic.
When I was a teenager, I picked up my mom's copy of Dale Carnegie's How to Win Friends and Influence People. One of the chapters that most made an impression on me was titled "You Can't Win an Argument," in which Carnegie writes:
Nine times out of ten, an argument ends with each of the contestants more firmly convinced than ever that he is absolutely right.
You can’t win an argument. You can’t because if you lose it, you lose it; and if you win it, you lose it. Why? Well, suppose you triumph over the other man and shoot his argument full of holes and prove that he is non compos mentis. Then what? You will feel fine. But what about him? You have made him feel inferior. You have hurt his pride. He will resent your triumph. And -
"A man convinced against his will
"Is of the same opinion still."
In the next chapter, Carnegie quotes Benjamin Franklin saying how he had made it a rule never to contradict anyone. Carnegie approves: he thinks you should never argue with or contradict anyone, because you won't convince them (even if you "hurl at them all the logic of a Plato or an Immanuel Kant"), and you'll just make them mad at you.
It may seem strange to hear this advice cited on a rationalist blog, because the atheo-skeptico-rational-sphere violates this advice on a routine basis. In fact, I've never tried to follow Carnegie's advice—and yet, I don't think the rationale behind it is completely stupid. Carnegie gets human psychology right, and I fondly remember reading his book as the time I first really got clued in about human irrationality.
I'm not sure this is something that can be consciously done, but in this post I want to prime you to consider whether something you do is really, totally, completely wacky and absurd.
We have trained ourselves a lot to notice when we are wrong. We have trained ourselves even more to notice when we are confused and to tell word confusion from substance confusion.
But here is the tale of what happened to me today, and I don't think it qualifies as any of those:
I had a serious motivational problem yesterday, and got absolutely nothing done. So today I thought I should do things in a different manner, so as to decrease the probability of two bad days in a row. One of the most effective things for me is going into the LW Study Hall (the password is in the group's description when you click this link). It's a very nice place to work, and I recommend everyone check it out and do one or two pomodoros every now and then.
And I did. I gave myself ten minutes observing others working, and I noticed something remarkable: the property of the LW chat that causes me to be motivated is "presence of long-haired people". Yes. The presence of people with long hair. For weeks I had been trying to work out why it was effective sometimes and not others. The most obvious initial alternative was that when there was a woman, I would feel more driven. I assumed that was the case. But I started getting false negatives and false positives. Today I finally came to terms with the fact: I am motivated by the presence of people whose hair goes to their shoulders. Women or men.
Now why did I not notice this before? It seems to me that it was such a far-fetched hypothesis that I simply had no prior for it. In vain hopes of being rational, I would read about how we fear the twinge of starting, how to beat procrastination, and how to get things done, and valid as those were and are, they would never have given me a complete picture of the unbelievable things my brain thinks behind my back.
Maybe there is something similar taking place in your mind. Even if there isn't, just update with me on the fact that this is true at least for someone, and how there may be millions of other tiny absurd facts controlling people's actions way beyond the scope of imagination of any economist or psychologist.
I now have one more piece of understanding about what it is like to be me, about how to tame and steer my future behavior, and especially one more thing to tell people in awkward-silence moments to break the ice and face the absurdity of reality.
For obvious reasons, if you have long hair, I'd like to make an even stronger case for you to try to work and do pomos at the LW study hall. It's not only yourself that you'll be helping!
Since I moved into the Boston rationalist house, I've found myself having an overwhelming amount of conversation compared to my previous baseline. The conversations at Citadel tend to be fairly intellectual and interesting, but there is a lot of topic drift and tendency for entertainment over depth, which seems to be a fairly common pitfall. How can we optimize conversations and direct them towards areas of usefulness and insight?
There have been some previous discussions on this topic on LW, e.g. on useful ways to avoid low-value conversations or steer out of them. I would like to focus on the complementary skill of stimulating high-value directions in a conversation.
First of all, what makes a conversation high-value? There are several possible metrics:
- people learning from each other’s expertise and experience
- people getting to know each other better
- exchange of advice and feedback
- generating ideas and insights
All of these involve increasing the total amount of information available to the participants, either through revealing information that is already there, or through creating new information. This is more likely to happen in a topic area where someone has strong opinions or expertise, or, on the other hand, an area that someone finds challenging where they stand to learn a lot.
One effective way to steer a conversation is through asking purposeful questions. The questions should have sufficient depth to lead to interesting answers, but not be vague or put the other person on the spot. In that sense, a question like “What have you been thinking about lately?” is better than “What do you care about?” or “What are your terminal goals?”. It is better if the question leaves a line of retreat and doesn’t make the person feel low status if they don’t have an answer.
The types of questions that are productive and comfortable are generally different for group and one-on-one conversations. Two-person conversations are more conducive to openness, so one would be able to ask personal questions like
- what memes have affected you strongly in the past or shaped your beliefs?
- what has been important to you lately?
- what has been difficult for you lately?
- what eccentric things have you done?
Some questions are likely to lead to interesting topics in an N-person conversation for any N:
- what have you learned recently?
- what surprised you about experience X?
- what have you been reading?
- who are your role models?
- I have been confused about X, does anyone have advice?
It is generally harder to steer a group conversation in productive directions than a two-person conversation, but the payoff is higher as well, since more people’s time is at stake. Since a single person has less influence in a group conversation, it’s important to use it well. Sometimes the most useful thing to do in a group conversation is to split it into smaller conversations. Asking someone about a subject that only they are likely to be interested in might be considered impolite to the others, but often leads to better separate conversations for everyone involved.
Questions do have limitations as a conversation tactic, and can sometimes result in awkward silence or a string of brief uninformative replies. If this happens, it’s handy to be prepared to answer your own question, which might inspire others to answer it as well. It is generally a good idea to have something that you’d like to talk about, perhaps something you've been working on or a concept that puzzles you, that you can bring up independently of whether and how people respond to your questions. Thinking in advance of topics to discuss with specific people is especially useful, e.g. relating to their past experiences or skill areas.
Do people have advice or good examples of directing conversations? Recalling the best conversations you've ever had, what made them happen?
I have been autodidacting quite a bit lately. You may have seen my reviews of books on the MIRI course list. I've been going for about ten weeks now. This post contains my notes about the experience thus far.
Much of this may seem obvious, and would have seemed obvious if somebody had told me in advance. But nobody told me in advance. As such, this is a collection of things that were somewhat surprising at the time.
Part of the reason I'm posting this is because I don't know a lot of autodidacts, and I'm not sure how normal any of my experiences are. (Though on average, I'd guess they're about average.) As always, keep in mind that I am only one person and that your mileage may vary.
When I began my quest for more knowledge, I figured that in this modern era, a well-written textbook and an account on math.stackexchange would be enough to get me through anything. And I was right… sort of.
But not really.
The problem is, most of the time that I get stuck, I get stuck on something incredibly stupid. I've either misread something somewhere or misremembered a concept from earlier in the book. Usually, someone looking over my shoulder could correct me in ten seconds with three words.
"Dude. Disjunction. Disjunction."
These are the things that eat my days.
In principle, places like stackexchange can get me unstuck, but they're an awkward tool for the job. First of all, my stupid mistakes are heavily contextualized. A full context dump is necessary before I can even ask my question, and this takes time. Furthermore, I feel dumb asking stupid questions on stackexchange-type sites. My questions are usually things that I can figure out with a close re-read (except, I'm not sure which part needs a re-read). I usually opt for a close re-read of everything rather than asking for help. This is even more time consuming.
The infuriating thing is that answering these questions usually doesn't require someone who already knows the answers: it just requires someone who didn't make exactly the same mistakes as me. I lose hours on little mistakes that could have been fixed within seconds if I was doing this with someone else.
That's why my number one piece of advice for other people attempting to learn on their own is do it with a friend. They don't need to be more knowledgeable than you to answer most of the questions that come up. They just need to make different misunderstandings, and you'll be able to correct each other as you go along.
The thing I miss most about college is tight feedback loops while learning. When autodidacting, the feedback loop can be long.
I still haven't managed to follow my own advice here. I'm writing this advice in part because it should motivate me to actually pair up. Unfortunately, there is nobody in my immediate circle who has the time or patience to read along with me, but there are a number of resources I have not yet explored (the LessWrong study hall, for example, or soliciting to actual mathematicians). It's on my list of things to do.
Read, reread, rereread
Reading Model Theory was one of the hardest things I've done. Not necessarily because the content was hard, but because it was the first time I actually learned something that was way outside my comfort zone.
The short version is that Basic Category Theory and Naïve Set Theory left me somewhat overconfident, and that I should have read a formal logic textbook before diving in. I had basic familiarity with logic, but no practice. Turns out practice is important.
Anyway, it's not like Model Theory was impossible just because I skipped my logic exercises. It was just hard. There are a number of little misconceptions you have when you're familiar with something but you've never applied it, and I found myself having to clean those out just to understand what Model Theory was trying to say to me.
In retrospect, this was an efficient way to strengthen my understanding of mathematical logic and learn Model Theory at the same time. (I've moved on to a logic textbook, and it's been a cakewalk.) That said, I wouldn't wish the experience on others.
In the process, I learned how to learn things that are way outside my comfort zone. In the past, all the stuff I've learned has been either easy, or an extension of things that I was already interested in and experienced with. Reading Model Theory was the first time in my life where I read a chapter of a textbook and it made absolutely no sense. In fact, it took about three passes per chapter before they made sense.
- The first pass was barely sufficient to understand all the words and symbols. I constantly had to go research a topic. I followed proofs one step at a time, able to verify the validity of each step but not really understand what was going on. I came out the other end believing the results, but not knowing them.
- Another pass was required to figure out what the book was actually trying to say to me. Once all the words made sense and I was comfortable with their usage, the second pass allowed me to see what the theorems and proofs were actually saying. This was nice, but it still wasn't sufficient: I understood the theorems, but they seemed like a random walk through theorem-space. I couldn't yet understand why anyone would say those particular things on purpose.
- The third pass was necessary to understand the greater theory. I've never been particularly good at memorizing things, and it's not sufficient for me to believe and memorize a theorem. If it's going to stick, I have to understand why it's important. I have to understand why this theorem in particular is being stated, rather than another. I have to understand the problem that's being solved. A third pass was necessary to figure out the context in which the text made sense.
After a third pass of any given chapter, the next chapter didn't seem quite so random. When the upcoming content started feeling like a natural progression instead of a random walk, I knew I was making progress.
I note this because this is the first time that I had to read a math text more than once to understand what was going on. I'm not talking about individual sentences or paragraphs, I'm talking about finishing a chapter, feeling like "wat", and then starting the whole chapter over. Twice.
I'm not sure if I'm being naïve (for never having needed to do this before) or slow (for having to do this for Model Theory), but I did not anticipate requiring three passes. Mostly, I didn't anticipate gaining as much as I did from a re-read; I would have guessed that something opaque on the first pass would remain opaque on a second pass.
This, I'm pretty sure, was naïvety.
So take note: if you stumble upon something that feels very hard, it might be more useful than anticipated to re-read it.
Cognitive exchange rates
When reading Model Theory, I was only able to convert 30-50% of my allotted "study time" into actual study.
This is somewhat surprising, as I had no such troubles with Basic Category Theory or Naïve Set Theory.
(I often have the opposite problem when writing code; this is probably due to the different reward structure.)
I was somewhat frustrated with my inability to study as much as I would have liked. My usual time-into-studying conversion rate is much higher (I'd guess 80%ish, though I haven't been measuring).
I'm not sure what factor made it harder for me to study model theory. I don't think it was the difficulty directly, as I often tend to work harder in the face of a challenge. I'd guess that it was either the slower rate of rewards (caused by a slower pace of learning) or actual cognitive exhaustion.
In the vein of cognitive exhaustion, there were a few times while reading Model Theory where I seem to have become cognitively exhausted before becoming physically exhausted. This was a first for me. I'm not referring to those times when you've done a lot of mental work and you shy away from doing anything difficult; that's happened to me plenty. Rather, in this case, I felt fully awake and ready to keep reading. And I did keep reading. It just… didn't work. I'd have trouble following simple proofs. I'd fail at parsing sentences that were quite clear after resting.
I'm still not sure what to make of this, and I don't have sufficient data to draw conclusions. However, it seems like there are mental states where I feel awake and able to continue, but my mind is just not capable of doing the heavy lifting.
Again, the fact that I'm only just realizing this now is probably naïvety, but it's something to remember before getting frustrated with yourself.
Explain it to someone
As I've said before, one of the best ways to learn something is to do the problem sets. For Model Theory, though, there were times when I finished reading through a chapter and was not capable of doing the problems.
Re-reading helped, as mentioned above. Another thing that helped was explaining the concepts.
I explained model theory pretty extensively to a text file on my computer. I sketched the proofs in my own words and stated their significance. I explained the syntax being used. I tried to motivate each idea. (The notes are still lying around somewhere; I haven't posted them because they're pretty much a derivative work at this point.)
I found that this went a long way towards helping me track down places where I'd thought I learned something, but actually hadn't. If you're having trouble, go explain the concept to somebody (or to a text file). This can bridge the gap between "I read it" and "I can do the problems" quite well. For me, this technique often took problems from "unapproachable" to "easy" in one fell swoop.
Don't book yourself solid
I'm pretty good at avoiding stress. I have the (apparently rare) ability to drop all work-related concerns at the door when I leave. I don't even know how to get stressed by bad luck, especially if I made good choices given the information I had at the time. I get the normal amount of tension in stressful situations with time constraints, but I'm adept at avoiding the permastress that I've seen plague friends and family — unless I've booked myself solid.
I've had a packed schedule these past few weeks. I try to move the needle on at least two projects a day (more on weekends). Even if it's entirely reasonable to fit all these things into my schedule, I have not yet found a way to avoid the stress.
Even when I know that, if I push myself, I can read this much and write that much and code this feature all in one day, I haven't found a good way to push myself without pressure-stress.
I'm still hoping that I'll learn how to move quickly without stress as I learn my capabilities, but I'm not sure I've been adequately accounting for the cost of stress.
It's worth remembering that doing less than you're capable of on purpose might be a good strategy for maximizing long-term output.
There you go. Those are my notes gathered from trying to learn lots of things very quickly (and trying to learn one hard thing in particular). Comments are encouraged; I am by no means an expert.
In response to the question
"Does anyone happen to know of reliable ways for increasing one's supply of executive function, by the way? I seem to run out of it very quickly in general."
I posted that my military experience seems effectively designed to increase executive function. Some examples of this from myself and metastable are
- Uniforms: not having to think about your wardrobe, ever, saves a lot of time, mental effort, and money. Steve Jobs and President Obama are also known for using uniforms specifically for this purpose.
- PT: daily, routinized exercise, done in a way that very few people are deciding what comes next.
- Maximum use of daylight hours.
- Med Group and Force Support: minimized high-risk projects outside of the workplace (paternalistic health care, insurance, and in many cases, housing and continuing education).
After a moment's thought it occurred to me that there are some double-edged swords in Military Rationality as well, some of which lead to classic jokes like 'Military Intelligence is an oxymoron.'
- Regulations: a select few 'experts' create policies which everyone else is required to follow at all times. Unfortunately these experts are never (never ever) encouraged to consider knock-on effects. Ugh.
Anybody else have insights on the military they want to share here? I feel a couple of good posts on increasing executive function might come out of a discussion on the rationalities and irrationalities of the armed forces.
The Center for Applied Rationality's perspective on rationality is quite similar to Less Wrong's. In particular, we share many of Less Wrong's differences from what's sometimes called "traditional" rationality, such as Less Wrong's inclusion of Bayesian probability theory and the science on heuristics and biases.
But after spending the last year and a half with CFAR as we've developed, tested, and attempted to teach hundreds of different versions of rationality techniques, I've noticed that my picture of what rationality looks like has shifted somewhat from what I perceive to be the most common picture of rationality on Less Wrong. Here are three ways I think CFAR has come to see the landscape of rationality differently than Less Wrong typically does – not disagreements per se, but differences in focus or approach. (Disclaimer: I'm not speaking for the rest of CFAR here; these are my own impressions.)
1. We think less in terms of epistemic versus instrumental rationality.
Formally, the methods of normative epistemic versus instrumental rationality are distinct: Bayesian inference and expected utility maximization. But methods like "use Bayes' Theorem" or "maximize expected utility" are usually too abstract and high-level to be helpful for a human being trying to take manageable steps towards improving her rationality. And when you zoom in from that high-level description of rationality down to the more concrete level of "What five-second mental habits should I be training?" the distinction between epistemic and instrumental rationality becomes less helpful.
Here's an analogy: epistemic rationality is like physics, where the goal is to figure out what's true about the world, and instrumental rationality is like engineering, where the goal is to accomplish something you want as efficiently and effectively as possible. You need physics to do engineering; or I suppose you could say that doing engineering is doing physics, but with a practical goal. However, there's plenty of physics that's done for its own sake, and doesn't have obvious practical applications, at least not yet. (String theory, for example.) Similarly, you need a fair amount of epistemic rationality in order to be instrumentally rational, though there are parts of epistemic rationality that many of us practice for their own sake, and not as a means to an end. (For example, I appreciate clarifying my thinking about free will even though I don't expect it to change any of my behavior.)
In this analogy, many skills we focus on at CFAR are akin to essential math, like linear algebra or differential equations, which compose the fabric of both physics and engineering. It would be foolish to expect someone who wasn't comfortable with math to successfully calculate a planet's trajectory or design a bridge. And it would be similarly foolish to expect you to successfully update like a Bayesian or maximize your utility if you lacked certain underlying skills. Like, for instance: Noticing your emotional reactions, and being able to shift them if it would be useful. Doing thought experiments. Noticing and overcoming learned helplessness. Visualizing in concrete detail. Preventing yourself from flinching away from a thought. Rewarding yourself for mental habits you want to reinforce.
These and other building blocks of rationality are essential both for reaching truer beliefs, and for getting what you value; they don't fall cleanly into either an "epistemic" or an "instrumental" category. Which is why, when I consider what pieces of rationality CFAR should be developing, I've been thinking less in terms of "How can we be more epistemically rational?" or "How can we be more instrumentally rational?" and instead using queries like, "How can we be more metacognitive?"
2. We think more in terms of a modular mind.
The human mind isn't one coordinated, unified agent, but rather a collection of different processes that often aren't working in sync, or even aware of what each other is up to. Less Wrong certainly knows this; see, for example, discussions of anticipations versus professions, aliefs, and metawanting. But in general we gloss over that fact, because it's so much simpler and more natural to talk about "what I believe" or "what I want," even if technically there is no single "I" doing the believing or wanting. And for many purposes that kind of approximation is fine.
But a rationality-for-humans usually can't rely on that shorthand. Any attempt to change what "I" believe, or optimize for what "I" want, forces a confrontation of the fact that there are multiple, contradictory things that could reasonably be called "beliefs," or "wants," coexisting in the same mind. So a large part of applied rationality turns out to be about noticing those contradictions and trying to achieve coherence, in some fashion, before you can even begin to update on evidence or plan an action.
Many of the techniques we're developing at CFAR fall roughly into the template of coordinating between your two systems of cognition: implicit-reasoning System 1 and explicit-reasoning System 2. For example, knowing when each system is more likely to be reliable. Or knowing how to get System 2 to convince System 1 of something ("We're not going to die if we go talk to that stranger"). Or knowing what kinds of questions System 2 should ask of System 1 to find out why it's uneasy about the conclusion at which System 2 has arrived.
This is all, of course, with the disclaimer that the anthropomorphizing of the systems of cognition, and imagining them talking to each other, is merely a useful metaphor. Even the classification of human cognition into Systems 1 and 2 is probably not strictly true, but it's true enough to be useful. And other metaphors prove useful as well – for example, some difficulties with what feels like akrasia become more tractable when you model your future selves as different entities, as we do in the current version of our "Delegating to yourself" class.
3. We're more focused on emotions.
There's relatively little discussion of emotions on Less Wrong, but they occupy a central place in CFAR's curriculum and organizational culture.
It used to frustrate me when people would say something that revealed they held a Straw Vulcan-esque belief that "rationalist = emotionless robot". But now when I encounter that misconception, it just makes me want to smile, because I'm thinking to myself: "If you had any idea how much time we spend at CFAR talking about our feelings…"
Being able to put yourself into particular emotional states seems to make a lot of pieces of rationality easier. For example, for most of us, it's instrumentally rational to explore a wider set of possible actions – different ways of studying, holding conversations, trying to be happy, and so on – beyond whatever our defaults happen to be. And for most of us, inertia and aversions get in the way of that exploration. But getting yourself into "playful" mode (one of the hypothesized primary emotional circuits common across mammals) can make it easier to branch out into a wider swath of Possible-Action Space. Similarly, being able to call up a feeling of curiosity or of "seeking" (another candidate for a primary emotional circuit) can help you conquer motivated cognition and learned blankness.
And simply being able to notice your emotional state is rarer and more valuable than most people realize. For example, if you're in fight-or-flight mode, you're going to feel more compelled to reject arguments that feel like a challenge to your identity. Being attuned to the signs of sympathetic nervous system activation – that you're tensing up, or that your heart rate is increasing – means you get cues to double-check your reasoning, or to coax yourself into another emotional state.
We also use emotions as sources of data. You can learn to tap into feelings of surprise or confusion to get a sense of how probable you implicitly expect some event to be. Or practice simulating hypotheticals ("What if I knew that my novel would never sell well?") and observing your resultant emotions, to get a clearer picture of your utility function.
And emotions-as-data can be a valuable check on your System 2's conclusions. One of our standard classes is "Goal Factoring," which entails finding some alternate set of actions through which you can purchase the goods you want more cheaply. So you might reason, "I'm doing martial arts for the exercise and self-defense benefits... but I could purchase both of those things for less time investment by jogging to work and carrying Mace." If you listened to your emotional reaction to that proposal, however, you might notice you still feel sad about giving up martial arts even if you were getting the same amount of exercise and self-defense benefits somehow else.
Which probably means you've got other reasons for doing martial arts that you haven't yet explicitly acknowledged -- for example, maybe you just think it's cool. If so, that's important, and deserves a place in your decisionmaking. Listening for those emotional cues that your explicit reasoning has missed something is a crucial step, and to the extent that aspiring rationalists sometimes forget it, I suppose that's a Steel-Manned Straw Vulcan (Steel Vulcan?) that actually is worth worrying about.
I'll name one more trait that unites, rather than divides, CFAR and Less Wrong. We both diverge from "traditional" rationality in that we're concerned with determining which general methods systematically perform well, rather than defending some set of methods as "rational" on a priori criteria alone. So CFAR's picture of what rationality looks like, and how to become more rational, will and should change over the coming years as we learn more about the effects of our rationality training efforts.