So you might reason, "I'm doing martial arts for the exercise and self-defense benefits... but I could purchase both of those things for less time investment by jogging to work and carrying Mace." If you listened to your emotional reaction to that proposal, however, you might notice you still feel sad about giving up martial arts even if you were getting the same amount of exercise and self-defense benefits somehow else.
Which probably means you've got other reasons for doing martial arts that you haven't yet explicitly acknowledged -- for example, maybe you just think it's cool. If so, that's important, and deserves a place in your decisionmaking. Listening for those emotional cues that your explicit reasoning has missed something is a crucial step.
This is a great example of how human value is complicated. Optimizing for stated or obvious values can miss unstated or subtler values. Before we can figure out how to get what we want, we have to know what we want. I'm glad CFAR is taking this into account.
I've been wondering whether utilitarians should be more explicit about what they're screening off. For example, trying to maximize QALYs might mean doing less to support your own social network.
There's relatively little discussion of emotions on Less Wrong, but they occupy a central place in CFAR's curriculum and organizational culture.
[...]
"If you had any idea how much time we spend at CFAR talking about our feelings…"
This greatly raises my opinion of CFAR.
I'm quite happy to see this post. While I'm not in agreement with all points, I think that it's very useful for people on LessWrong to be aware of the extent to which CFAR is and isn't divergent from the "LW canon."
I believe that emotions play a big part in thinking clearly, and understanding our emotions would be a helpful step. Would you mind saying more about the time you spend focused on emotions? Are you paying attention to concrete current or past emotions (e.g. "this is how I'm feeling now", or "this is how I felt when he said X"), or having more theoretical discussions (e.g. "when someone is in fight-or-flight mode, they're more likely to Y than when they're feeling curiosity")?
You also mentioned exercises about exploiting emotional states; would you say more about what CFAR has learned about mindfully getting oneself into particular emotional states?
Nice article, Julia!
You need physics to do engineering; or I suppose you could say that doing engineering is doing physics, but with a practical goal.
I would say it is a misconception that engineers are applied scientists. It varies from person to person, but in general, an engineer is expected to apply aesthetic, ethical, legal and economic reasoning (among other types) to any given problem. Indeed many engineers do much more of those than they do applied science. My job is about 15% applied science, by time & importance.
Of course, engineering relying on science at all is a very recent thing. Throughout most of human history, engineering knowledge consisted mostly of family trade secrets developed through trial and error, which didn't depend at all on the theories spun out by the intellectuals of the time. For instance, Egypt's pyramid builders had basically no use for theoretical science, as far as we know.
"And simply being able to notice your emotional state is rarer and more valuable than most people realize. For example, if you're in fight-or-flight mode, you're going to feel more compelled to reject arguments that feel like a challenge to your identity."
Can I have some specific examples that might help illustrate this point?
A hypothetical based on an amalgamation of my own experiences during a co-op:
You work as a programmer at a company that writes websites with the programming languages VBScript and VB.Net. You have learned enough about those languages to do your job, but you think the Ruby language is much more efficient, and you write your personal programming projects in Ruby. You occasionally go to meetings in your city for Ruby programmers, where people discuss new Ruby-related technologies and techniques.
You are nearing the deadline for the new feature you were assigned to write. You had promised you would get the web page looking good in all browsers by today’s followup meeting about that feature. Fifteen minutes before the meeting, you realize that you forgot to test in Internet Explorer 8. You open it in IE8 and find that the web page looks all messed up. You spend fifteen rushed minutes frantically looking up the problem you see and trying out code fixes, and you manage to fix the problem just before the meeting.
It’s just you, the technical lead, and the project manager at the meeting. You explain that you’ve finished your feature, and the project manager nods, congratulates you, and makes note of that in his project tracker. Then he tells you what he wants you to work on next: an XML Reformatter. The XML documents used internally in one of the company’s products are poorly formatted and organized, with incorrect indentation and XML elements in random order. He suggests that you talk to the technical lead about how to get started, and leaves the meeting.
This project sounds like something that will be run only once – a one-time project. You have worked with XML in Ruby before, and are excited at the idea of being able to use your Ruby expertise in this project. You suggest to the technical lead that you write this program in Ruby.
“Hmm… no, I don’t think we should use Ruby for this project. We’re going to be using this program for a long time – running it periodically on our XML files. And all of our other programmers know VB.Net. We should write it in VB.Net, because I am pretty sure that another programmer is going to have to make a change to your program at some point.”
If you’re not thinking straight, at this point, you might complain, “I could write this program so much faster in Ruby. We should use Ruby anyway.” Yet that does not address the technical lead’s point, and ignores the fact that one of your assumptions has been revealed to be wrong.
If you are aware enough of your emotions to notice that you’re still on adrenaline from your last-minute fix, you might instead think, I don’t like the sound of missing this chance to use Ruby, but I might not be thinking straight. I’ll just accept that reasoning for now, and go back and talk to the technical lead in his office later if I think of a good argument against that point.
This is a contrived example. It is based on my experiences, but I exaggerate the situation and “your” behavior. The fact that I had to make so many changes to the real situation to produce a somewhat believable example suggests that the specific tip you quoted isn’t applicable very often – in my life, at least.
Instrumental and epistemic rationality were always kind of handwavey, IMO. For example, if you want to achieve your goals, it often helps to have money. So if I deposit $10,000 in your bank account, does that make you more instrumentally rational?
You could define instrumental rationality as "mental skills that help people better achieve their goals". Then I could argue that learning graphic design makes you more instrumentally rational, because it's a mental skill and if you learn it, you'll be able to make money from anywhere using your computer, which is often useful for achieving your goals.
You could define epistemic rationality as "mental skills that help you know what's true". Then I could argue that learning about chess makes you more epistemically rational, because you can better know the truth of statements about who's going to win chess games that are in progress.
I like the idea of thinking of rationality in terms of mental skills that are very general in the sense that they can be used by many different people in many different situations, kind of like how Paul Graham defines "philosophy". "Mental skills that are useful to many people in many situations" seems like it should have received more study as a topic by now... I guess maybe people have developed memetic antibodies towards anything that sounds too good to be true in that way? (In this case, the relevant antibodies would have been developed thanks to the self-help industry?)
I agree there's been some inconsistency in usage over the years. In fact, I think What Do We Mean By Rationality? and Rationality are simply wrong, which is surprising since they're two of the most popular and widely-relied-on pages on LessWrong.
Rationality doesn't ensure that you'll win, or have true beliefs; and having true beliefs doesn't ensure that you're rational; and winning doesn't ensure that you're rational. Yes, winning and having true beliefs is the point of rationality; and rational agents should win (and avoid falsehood) on average, in the long haul. But I don't think it's pedantic, if you're going to write whole articles explaining these terms, to do a bit more to firewall the optimal from the rational and recognize that rationality must be systematic and agent-internal.
Instrumental and epistemic rationality were always kind of handwavey, IMO. For example, if you want to achieve your goals, it often helps to have money. So if I deposit $10,000 in your bank account, does that make you more instrumentally rational?
Instrumental rationality isn't the same thing as winning. It's not even the same thing as 'instantiating cognitive algorithms that make you win'. Rather, it's 'instantiating cognitive algorithms that tend to make one win'. So being unlucky doesn't mean you were irrational.
Luke's way of putting this is to say that 'the rational decision isn't always the right decision'. Though that depends on whether by 'right' you mean 'defensible' or 'useful'. So I'd rather just say that rationalists can get unlucky.
You could define instrumental rationality as "mental skills that help people better achieve their goals". Then I could argue that learning graphic design makes you more instrumentally rational, because it's a mental skill and if you learn it, you'll be able to make money from anywhere using your computer, which is often useful for achieving your goals.
I'm happy to say that being good at graphic design is instrumentally rational, for people who are likely to use that skill and have the storage space to fit more abilities. The main reason we wouldn't speak of it that way is that it's not one of the abilities that's instrumentally rational for every human, and it's awkward to have to index instrumentality to specific goals or groups.
Becoming good at graphic design is another story. That can require an investment large enough to make it instrumentally irrational, again depending on the agent and its environment.
You could define epistemic rationality as "mental skills that help you know what's true". Then I could argue that learning about chess makes you more epistemically rational, because you can better know the truth of statements about who's going to win chess games that are in progress.
I don't see any reason not to bite that bullet. This is why epistemic rationality can become trivial when it's divorced from instrumental rationality.
Yes, if it's both predictable and changeable. Though I'm not sure why we'd call something that meets both those conditions 'luck'.
Are you familiar with Richard Wiseman, who has found that "luck" (as the phrase is used by people in everyday life to refer to people and events) appears to be both predictable and changeable?
That's an interesting result! It doesn't surprise me that people frequently confuse which complex outcomes they can and can't control, though. Do you think I'm wrong about the intension of "luck"? Or do you think most people are just wrong about its extension?
I think the definition of 'luck' as 'complex outcomes I have only minor control over' is useful, as well as the definition of 'luck' as 'the resolution of uncertain outcomes.' For both of them, I think there's meat to the sentence "rationalists should not be predictably unlucky": in the first, it means rationalists should exert a level of effort justified by the system they're dealing with, and not be dissuaded by statistically insignificant feedback; in the second, it means rationalists should be calibrated (and so P_10 or worse events happen to them 10% of the time, i.e. rationalists are not surprised that they lose money at the casino).
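A minimal sketch of what checking that second reading could look like, in Python and with invented forecast data (none of these numbers come from the thread): bucket your stated probabilities and compare each bucket's stated probability with the observed frequency.

```python
from collections import defaultdict

# Invented forecast log: (stated probability, did the event happen?).
# Purely illustrative numbers, not data from anyone's actual predictions.
forecasts = [
    (0.1, False), (0.1, False), (0.1, True), (0.1, False), (0.1, False),
    (0.5, True), (0.5, False), (0.5, True), (0.5, False),
    (0.9, True), (0.9, True), (0.9, True), (0.9, False),
]

# Group by stated probability, then compare stated vs. observed frequency.
buckets = defaultdict(list)
for p, happened in forecasts:
    buckets[p].append(happened)

for p, outcomes in sorted(buckets.items()):
    observed = sum(outcomes) / len(outcomes)
    print(f"stated {p:.0%}  observed {observed:.0%}  (n={len(outcomes)})")

# A calibrated forecaster's stated and observed columns roughly agree:
# things they call "10%" happen about 10% of the time, and so on.
```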
Ahh, thanks! This helps me better understand what Eliezer was getting at. I was having trouble thinking my way into other concepts of 'luck' that might avoid triviality.
"Predictable" and "changeable" have limits, but people generally don't know where those limits are. What looks like bad luck to one person might look like the probable consequences of taking stupid chances to another.
Or what looks like a good strategy for making an improvement to one person might look like knocking one's head against a wall to another.
The point you and Eliezer (and possibly Vaniver) seem to be making is that "perfectly rational agents are allowed to get unlucky" isn't a useful meme, either because we tend to misjudge which things are out of our control or because it's just not useful to pay any attention to those things.
Is that a fair summary? And, if so, can you think of a better way to express the point I was making earlier about conceptually distinguishing rational conduct from conduct that happens to be optimal?
ETA: Would "rationality doesn't require omnipotence" suit you better?
Theoretically speaking (rare though it would be in practice), there are circumstances where that might happen: a rationalist simply refuses, on moral grounds, to use methods that would grant him an epistemic advantage.
It seems to me that some of LW's attempts to avoid "a priori" reasoning have tripped up right at their initial premises, by assuming as premises propositions of the form "The probability of possible-fact X is y%." (LW's annual survey repeatedly insists that readers make this mistake, too.)
I may have a guess about whether X is true; I may even be willing to give or accept odds on one or both sides of the question; but that is not the same thing as being able to assign a probability. For that you need conditions (such as where X is the outcome of a die roll or coin toss) where there's a basis for assigning the number. Otherwise the right answer to most questions of "How likely is X?" (where we don't know for certain whether X is true) will be some vague expression ("It could be true, but I doubt it") or simply "I don't know."
Refusing to assign numerical probabilities because you don't have a rigorous way to derive them is like refusing to choose whether or not to buy things because you don't have a rigorous way to decide how much they're worth to you.
Explicitly assigning a probability isn't always (perhaps isn't usually) worth the trouble it takes, and rushing to assign numerical probabilities can certainly lead you astray -- but that doesn't mean it can't be done or that it shouldn't be done (carefully!) in cases where making a good decision matters most.
When you haven't taken the trouble to decide a numerical probability, then indeed vague expressions are all you've got, but unless you have a big repertoire of carefully graded vague expressions (which would, in fact, not be so very different from assigning probabilities) you'll find that sometimes there are two propositions for both of which you'd say "it could be true, but I doubt it" -- but you definitely find one more credible than the other. If you can make that distinction mentally, why shouldn't you make it verbally?
If it were a case like you describe (two competing products in a store), I would have to guess, and thus would have to try to think of some "upstream" questions and guess those, too. Not impossible, but unlikely to unearth worthwhile information. For questions as remote as P(aliens), I don't see a reason to bother.
Have you seen David Friedman's discussion of rational voter ignorance in The Machinery of Freedom?
I thought the difference was what set of beliefs the method was attracted to: for epistemic, it's whatever is "really true", no ifs or buts; for instrumental, it's whatever in actuality leads to the best outcome. Cases where they differ include believing the right thing for the wrong reasons or being overconfident in something true, game-theoretical situations like blackmail and signaling, and situations where mental states are leaky, like the placebo effect or expectation-controlled dementors.
Given this interpretation, I decided on the policy of a mixed strategy where most people are mainly instrumentally rational, some are purely epistemically rational, and the former obey the latter unquestioningly in crisis situations.
That last paragraph is really interesting. I don't know your reasoning behind it, but I'd perhaps suggest that this correlation may be a result of instrumentally rational people working mostly from cached conclusions from society, which were developed somewhat behind the scenes by trial and error, memes being passed around, etc., whereas epistemically rational people can adapt more quickly, because they can think right away rather than wait for the memetic environment to catch up, which simply won't happen in crisis situations (the cached-conclusions system for memetic environments doesn't work that fast).
Maybe you have no idea what I'm talking about though. I can't tell whether this could bridge inferential distance. Either way though, what's your reasoning behind that statement? What does it mean that most people are working mostly in instrumental, whereas some are pure epistemic, and why do the former obey the latter in crisis situations?
I assumed that was obvious or I'd have elaborated. Basically, in the situations where they differ, the epistemic approach yields the better decision, while the instrumental one brings some other benefit. Decisions can be delegated, including the decisions of many to just a few, so only a few people need to take the instrumental hit of strict epistemic conduct, while everyone still gets most of the benefits of decisions based on good epistemic rationality. In return for their sacrifice, the epistemics get status.
This is not a "how things are" or "how everyone should do" thing, just one strategy a coordinated group of rationalists could use.
In my other message I said wealth doesn't automatically make you more rational, because rationality is "systematic and agent-internal". I don't want to dismiss the problem you raised, though, because it gets us into deep waters pretty fast. So here's a different response.
If I reliably use my money in a way that helps me achieve my ends, regardless of how much money I have, then giving me more money can make me more instrumentally rational, in the sense that I consistently win more often. Certainly it's beyond dispute that being in such a situation has instrumental value, bracketing 'rationality'. The reason we don't normally think of this as an increase in instrumental rationality is that when we're evaluating your likelihood of winning, the contributing factors we call 'instrumental rationality' are the set of win-relevant cognitive algorithms. Having money isn't a cognitive algorithm, so it doesn't qualify.
Why isn't having money a cognitive algorithm? 'Because it's not in your skull' isn't a very satisfying answer. It's not necessary: A species that exchanges wealth by exchanging memorized passcodes might make no use of objects outside of vocal utterances and memes. And it's probably not sufficient: If I start making better economic decisions by relying more heavily on a calculator, it's plausible that part of my increased instrumental rationality is distributed outside my skull, since part of it depends on the proper functioning of the calculator. Future inventions may do a lot more to blur the lines between cognitive enhancements inside and outside my brain.
So the more relevant point may be that receiving a payment is an isolated event, not a repeatable process. If you found a way to receive a steady paycheck, and reliably used that paycheck to get what you wanted more often, then I'd have a much harder time saying that you (= the you-money system) haven't improved the instrumental rationality of your cognitive algorithms. It would be like trying to argue that your gene-activated biochemistry is agent-internal, but the aspects of your biochemistry that depend on your daily nootropic cocktail are agent-external. I despair of drawing clear lines on the issue.
Money isn't a cognitive algorithm because it doesn't actually help you decide what to do. You don't generally use your money to make decisions. Having more money does put you in a better position where the available options are more favourable, but that's not really the same thing.
Of course, if you spend that money on nootropics (or a calculator, I suppose), you might be said to have used money to improve your instrumental rationality!
So if I deposit $10,000 in your bank account, does that make you more instrumentally rational?
It can, if I use the money to pay someone more instrumentally-rational than me to come and make my decisions for me for a time.
I don't think they are hand-wavy. I maintain that they are extremely well-defined terms, at least when you are speaking of idealized agents. Here are some counter-points:
So if I deposit $10,000 in your bank account, does that make you more instrumentally rational?
No. Instrumental rationality is about choosing the optimal action, not having nice things happen to you. Take away the element of choice, and there is no instrumental rationality. I've got to cause you to drop the money in my account for it to count as instrumental rationality.
Then I could argue that learning about chess makes you more epistemically rational, because you can better know the truth of statements about who's going to win chess games that are in progress.
No, because "learning about chess" is an action. Choosing where to look for evidence is an action. You'd be instrumentally (ir)rational to (not) seek out information about chess, depending on goals and circumstance.
Epistemic rationality is what you do with evidence after acquiring it, not the process of acquiring evidence. It describes your effectiveness at learning the rules of chess given that you have the relevant info. It doesn't describe your choice to go out and acquire chess learning info. If you were strapped to a chair and made to watch chess (or casually observed it) and failed to make rational guesses concerning the underlying rules, then you failed at epistemic rationality.
No, because "learning about chess" is an action.
Same for learning about Bayes' rule.
Learning about Bayes' rule improves one's epistemic rationality; I'm arguing that learning about chess does the same.
I guess this is the point where humans and theoretical rational agents diverge. Rational agents don't learn rationality - it's just assumed that they come pre-wired with all the correct mathematics and philosophy required to make optimal choices for all possible games.
But on the human side, I still don't think that's really a valid comparison. Being able to use Bayes' rule improves rationality in the general case. It falls under the heading of "philosophy, epistemology, mathematics".
Chess just gives you knowledge about a specific system. It falls under the heading of "science, inference, evidence".
There's a qualitative difference between the realm of philosophy and mathematics and the realm of reality and observation.
If we go by a definition based on actions, rather than skills, I think this problem goes away:
Let's define an action as instrumentally rational if it brings you closer to your goal. Let's define an action as epistemically rational if it brings your mental model of reality closer to reality itself.
Those are the definitions which I generally use and find useful, and I think they successfully sidestep your problems.
The question then remains how one defines rational skills. However, answering that question is less of an issue once you know which actions are instrumentally/epistemically rational. If you are considering learning a skill, you can ask whether the action of learning that skill falls under the categories mentioned above.
Let's define an action as instrumentally rational if it brings you closer to your goal.
Suppose my goal is to get rich. Suppose, on a whim, I walk into a casino and put a large amount of money on number 12 in a single game of roulette. Suppose number 12 comes up. Was that rational?
The same objection applies to your definition of epistemically rational actions.
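For concreteness, the roulette case can be judged before the spin: a single-number bet in European roulette (37 pockets, a 35-to-1 payout) has negative expected value, which is the usual reason for saying the lucky win doesn't make the bet instrumentally rational. A quick sketch of the arithmetic:

```python
# A 1-unit bet on a single number in European roulette: 37 equally likely
# pockets, a win pays 35 to 1, a loss forfeits the stake.
p_win = 1 / 37
net_if_win = 35
net_if_loss = -1

expected_value = p_win * net_if_win + (1 - p_win) * net_if_loss
print(round(expected_value, 4))  # -0.027: about a 2.7% expected loss per bet,
                                 # no matter which number happens to come up.
```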
Any attempt to change what "I" believe, or optimize for what "I" want, forces a confrontation of the fact that there are multiple, contradictory things that could reasonably be called "beliefs," or "wants," coexisting in the same mind.
I have found the same and in response, have learned to habitually refer to "myself" as "ourselves" and "I" as "We", in internal dialogues. This feels very helpful, but we don't have clear-cut ideas of how to test whether it actually is.
My own response has been to train myself into treating "I want/believe/think/expect/etc X" and "I don't want/believe/think/expect/etc not-X" as different propositions which I can't derive from one another.
Can I summarise that as saying that CFAR takes account of what we are, while LW generally does not?
Well, I'd say that LW does take account of who we are. They just haven't had the impetus to do so quite as thoroughly as CFAR has. As a result there are aspects of applied rationality, or "rationality for humans" as I sometimes call it, that CFAR has developed and LW hasn't.
I am currently reading the sequence "How to Actually Change Your Mind", and while I understand most of the concepts and things discussed, I often miss clear, easy-to-remember instructions and exercises for actually becoming better at what was just explained and discussed. Without those, it won't sink in and integrate into daily life as easily. (Though it helps if you re-read HPMOR and recognize many of the experiments and concepts when HP explains things.)
And if I understood the article correctly, it kind of says exactly that when it speaks of instrumental rationality vs epistemic rationality.
This made me curious about CFAR.
Thanks for the article :)
The terminology is a bit new to me, but it seems to me epistemic and instrumental rationality are necessarily identical.
If epistemic rationality is implementation of any of a set of reliable procedures for making true statements about reality, and instrumental rationality is use of any of a set of reliable procedures for achieving goals, then the latter is contained in the former, since reliably achieving goals entails possession of some kind of high-fidelity model of reality.
Furthermore, what kind of rationality does not pursue goals? If I have no interest in chess, and the ability to play chess will have no impact on any of my present or future goals, then it would seem to be irrational of me to learn to play chess.
Loosely speaking, epistemic and instrumental rationality are prescriptions for the two sides of the is/ought gap. While 'ought statements' generally need to make reference to 'is statements', they cannot be entirely reduced to them.
If epistemic rationality is implementation of any of a set of reliable procedures for making true statements about reality, and instrumental rationality is use of any of a set of reliable procedures for achieving goals, then the latter is contained in the former, since reliably achieving goals entails possession of some kind of high-fidelity model of reality.
One possible goal is to have false beliefs about reality; another is to have no impact on reality. (For humans in particular, there are unquestionably some facts that are both true and harmful (i.e. instrumentally irrational) to learn.)
Furthermore, what kind of rationality does not pursue goals?
Epistemic rationality.
(I assume that you mean 'isn't about pursuing goals.' Otherwise, epistemic rationality might pursue the goal of matching the map to the territory.)
Thanks for bringing that article to my attention.
You explain how you learned skills of instrumental rationality from debating, but in doing so, you also learned reliable answers to questions of fact about the universe: how to win debates. When I'm learning electrostatics I learn that charges come with different polarities. If I later learn about gravity, and that gravitationally everything attracts, this doesn't make the electrostatics wrong! Similarly your debating skills were not wrong, just not the same skills you needed for writing research papers.
Regarding Kelly 2003, I'd argue that learning movie spoilers is only desirable, by definition, if it contributes to one's goals. If it is not desirable, then I contend that it isn't rational, in any way.
Regarding Bostrom 2011, you say he demonstrates that, "a more accurate model of the world can be hazardous to various instrumental objectives." I absolutely agree. But if we have reliable reasons to expect that some knowledge would be dangerous, then it is not rational to seek this knowledge.
Thus, I'm inclined to reject your conclusion that epistemic and instrumental rationality can come into conflict, and to reject the proposition that they are different.
(I note that whoever wrote the wiki entry on rationality was quite careful, writing
Epistemic rationality is that part of rationality which involves achieving accurate beliefs about the world.
The use of "involves" instead of e.g. "consists entirely of" is crucial, as the latter would not normally describe a part of rationality.)
When I'm learning electrostatics I learn that charges come with different polarities. If I later learn about gravity, and that gravitationally everything attracts, this doesn't make the electrostatics wrong! Similarly your debating skills were not wrong, just not the same skills you needed for writing research papers.
In a vacuum, this is certainly true and in fact I agree with all of your points. But I believe that human cognitive biases make this sort of compartmentalization between mental skillsets more difficult than one might otherwise expect. As the old saying goes, "To a man with a hammer, everything looks like a nail."
It would be fair to say that I believe tradeoffs between epistemic and instrumental rationality exist only thanks to quirks in human reasoning -- however, I also believe that we need to take those quirks into account.
The Center for Applied Rationality's perspective on rationality is quite similar to Less Wrong's. In particular, we share many of Less Wrong's differences from what's sometimes called "traditional" rationality, such as Less Wrong's inclusion of Bayesian probability theory and the science on heuristics and biases.
But after spending the last year and a half with CFAR as we've developed, tested, and attempted to teach hundreds of different versions of rationality techniques, I've noticed that my picture of what rationality looks like has shifted somewhat from what I perceive to be the most common picture of rationality on Less Wrong. Here are three ways I think CFAR has come to see the landscape of rationality differently than Less Wrong typically does – not disagreements per se, but differences in focus or approach. (Disclaimer: I'm not speaking for the rest of CFAR here; these are my own impressions.)
1. We think less in terms of epistemic versus instrumental rationality.
Formally, the methods of normative epistemic versus instrumental rationality are distinct: Bayesian inference and expected utility maximization. But methods like "use Bayes' Theorem" or "maximize expected utility" are usually too abstract and high-level to be helpful for a human being trying to take manageable steps towards improving her rationality. And when you zoom in from that high-level description of rationality down to the more concrete level of "What five-second mental habits should I be training?" the distinction between epistemic and instrumental rationality becomes less helpful.
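(To make those two formal methods concrete, here is a deliberately toy sketch with invented numbers: a single Bayes update feeding an expected-utility choice. The "umbrella" decision is hypothetical, not anything from CFAR's classes.)

```python
# Epistemic step: one Bayes update (all numbers invented for illustration).
# P(rain | clouds) = P(clouds | rain) * P(rain) / P(clouds)
prior_rain = 0.3
p_clouds_given_rain = 0.9
p_clouds = 0.5
posterior_rain = p_clouds_given_rain * prior_rain / p_clouds   # 0.54

# Instrumental step: choose the action with the highest expected utility,
# given that posterior belief and a made-up utility table.
utility = {
    ("umbrella", "rain"): 0,      ("umbrella", "dry"): -1,
    ("no umbrella", "rain"): -10, ("no umbrella", "dry"): 1,
}

def expected_utility(action):
    return (posterior_rain * utility[(action, "rain")]
            + (1 - posterior_rain) * utility[(action, "dry")])

best_action = max(["umbrella", "no umbrella"], key=expected_utility)
print(posterior_rain, best_action)   # 0.54 umbrella
```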
Here's an analogy: epistemic rationality is like physics, where the goal is to figure out what's true about the world, and instrumental rationality is like engineering, where the goal is to accomplish something you want as efficiently and effectively as possible. You need physics to do engineering; or I suppose you could say that doing engineering is doing physics, but with a practical goal. However, there's plenty of physics that's done for its own sake, and doesn't have obvious practical applications, at least not yet. (String theory, for example.) Similarly, you need a fair amount of epistemic rationality in order to be instrumentally rational, though there are parts of epistemic rationality that many of us practice for their own sake, and not as a means to an end. (For example, I appreciate clarifying my thinking about free will even though I don't expect it to change any of my behavior.)
In this analogy, many skills we focus on at CFAR are akin to essential math, like linear algebra or differential equations, which compose the fabric of both physics and engineering. It would be foolish to expect someone who wasn't comfortable with math to successfully calculate a planet's trajectory or design a bridge. And it would be similarly foolish to expect you to successfully update like a Bayesian or maximize your utility if you lacked certain underlying skills. Like, for instance: Noticing your emotional reactions, and being able to shift them if it would be useful. Doing thought experiments. Noticing and overcoming learned helplessness. Visualizing in concrete detail. Preventing yourself from flinching away from a thought. Rewarding yourself for mental habits you want to reinforce.
These and other building blocks of rationality are essential both for reaching truer beliefs, and for getting what you value; they don't fall cleanly into either an "epistemic" or an "instrumental" category. Which is why, when I consider what pieces of rationality CFAR should be developing, I've been thinking less in terms of "How can we be more epistemically rational?" or "How can we be more instrumentally rational?" and instead using queries like, "How can we be more metacognitive?"
2. We think more in terms of a modular mind.
The human mind isn't one coordinated, unified agent, but rather a collection of different processes that often aren't working in sync, or even aware of what the others are up to. Less Wrong certainly knows this; see, for example, discussions of anticipations versus professions, aliefs, and metawanting. But in general we gloss over that fact, because it's so much simpler and more natural to talk about "what I believe" or "what I want," even if technically there is no single "I" doing the believing or wanting. And for many purposes that kind of approximation is fine.
But a rationality-for-humans usually can't rely on that shorthand. Any attempt to change what "I" believe, or optimize for what "I" want, forces a confrontation of the fact that there are multiple, contradictory things that could reasonably be called "beliefs," or "wants," coexisting in the same mind. So a large part of applied rationality turns out to be about noticing those contradictions and trying to achieve coherence, in some fashion, before you can even begin to update on evidence or plan an action.
Many of the techniques we're developing at CFAR fall roughly into the template of coordinating between your two systems of cognition: implicit-reasoning System 1 and explicit-reasoning System 2. For example, knowing when each system is more likely to be reliable. Or knowing how to get System 2 to convince System 1 of something ("We're not going to die if we go talk to that stranger"). Or knowing what kinds of questions System 2 should ask of System 1 to find out why it's uneasy about the conclusion at which System 2 has arrived.
This is all, of course, with the disclaimer that the anthropomorphizing of the systems of cognition, and imagining them talking to each other, is merely a useful metaphor. Even the classification of human cognition into Systems 1 and 2 is probably not strictly true, but it's true enough to be useful. And other metaphors prove useful as well – for example, some difficulties with what feels like akrasia become more tractable when you model your future selves as different entities, as we do in the current version of our "Delegating to yourself" class.
3. We're more focused on emotions.
There's relatively little discussion of emotions on Less Wrong, but they occupy a central place in CFAR's curriculum and organizational culture.
It used to frustrate me when people would say something that revealed they held a Straw Vulcan-esque belief that "rationalist = emotionless robot". But now when I encounter that misconception, it just makes me want to smile, because I'm thinking to myself: "If you had any idea how much time we spend at CFAR talking about our feelings…"
Being able to put yourself into particular emotional states seems to make a lot of pieces of rationality easier. For example, for most of us, it's instrumentally rational to explore a wider set of possible actions – different ways of studying, holding conversations, trying to be happy, and so on – beyond whatever our defaults happen to be. And for most of us, inertia and aversions get in the way of that exploration. But getting yourself into "playful" mode (one of the hypothesized primary emotional circuits common across mammals) can make it easier to branch out into a wider swath of Possible-Action Space. Similarly, being able to call up a feeling of curiosity or of "seeking" (another candidate for a primary emotional circuit) can help you conquer motivated cognition and learned blankness.
And simply being able to notice your emotional state is rarer and more valuable than most people realize. For example, if you're in fight-or-flight mode, you're going to feel more compelled to reject arguments that feel like a challenge to your identity. Being attuned to the signs of sympathetic nervous system activation – that you're tensing up, or that your heart rate is increasing – means you get cues to double-check your reasoning, or to coax yourself into another emotional state.
We also use emotions as sources of data. You can learn to tap into feelings of surprise or confusion to get a sense of how probable you implicitly expect some event to be. Or practice simulating hypotheticals ("What if I knew that my novel would never sell well?") and observing your resultant emotions, to get a clearer picture of your utility function.
And emotions-as-data can be a valuable check on your System 2's conclusions. One of our standard classes is "Goal Factoring," which entails finding some alternate set of actions through which you can purchase the goods you want more cheaply. So you might reason, "I'm doing martial arts for the exercise and self-defense benefits... but I could purchase both of those things for less time investment by jogging to work and carrying Mace." If you listened to your emotional reaction to that proposal, however, you might notice you still feel sad about giving up martial arts even if you were getting the same amount of exercise and self-defense benefits somehow else.
Which probably means you've got other reasons for doing martial arts that you haven't yet explicitly acknowledged -- for example, maybe you just think it's cool. If so, that's important, and deserves a place in your decisionmaking. Listening for those emotional cues that your explicit reasoning has missed something is a crucial step, and to the extent that aspiring rationalists sometimes forget it, I suppose that's a Steel-Manned Straw Vulcan (Steel Vulcan?) that actually is worth worrying about.
Conclusion
I'll name one more trait that unites, rather than divides, CFAR and Less Wrong. We both diverge from "traditional" rationality in that we're concerned with determining which general methods systematically perform well, rather than defending some set of methods as "rational" on a priori criteria alone. So CFAR's picture of what rationality looks like, and how to become more rational, will and should change over the coming years as we learn more about the effects of our rationality training efforts.