Comment author: LM7805 23 September 2013 08:18:22PM 11 points

Another meme that arguably reached the Bay Area via the 1960s/1970s counterculture, but predates it, is "intentional community". This influences startup culture and hacker culture (specifically hackerspaces), and to some (lesser?) extent seasteaders, the back-to-the-land movement, and rationalists as well.

Comment author: Julia_Galef 24 September 2013 07:12:20AM 2 points

Great one, thanks!

Comment author: roland 23 September 2013 07:54:58PM 10 points

Psychedelics? Nootropics? I guess they're also a big part of what connects lots of those subcultures.

Comment author: Julia_Galef 24 September 2013 06:58:55AM 3 points

Agreed. I might add them to a future version of this map.

This time around I held off mainly because I was confounded by how to add them; drugs really do pervade so many of these groups, in different variants: psychedelics are strong among the counterculture and New Age culture, nootropics are more popular among rationalists and biohackers/Quantified Self, and both are popular among transhumanists. (See this H+ article for a discussion of psychedelic transhumanists.)

A map of Bay Area memespace

43 Julia_Galef 23 September 2013 05:34PM

The main reason we picked the Bay Area as a home for the Center for Applied Rationality was simply because that's where our initial fiscal sponsor, MIRI, was located. Yet as I’ve gotten to know this region better in the year and a half since then, I’ve been struck by how good the fit has turned out to be. The Bay Area is unusually dense with idea-driven subcultures that mix and cross-pollinate in fascinating ways, many of which are already enriching rationalist culture.

This map is my attempt at illustrating that landscape of subcultures, and at situating the rationalist community within it. I’ve limited myself to the last 50 years or so, and to subcultures defined by ideology (as opposed to, say, ethnicity). I’ve also depicted some of the major memes that have influenced, and been influenced by, those subcultures:

(Click to enlarge)

Note that although many of these memes are widely influential, I only drew an arrow connecting a meme to a group if the meme was one of the defining features of the group. (For example, yoga may be popular among many entrepreneurs, but that meme -> subculture relationship isn't strong enough to make my map.)

Below, I expand on the map with a quick tour through the landscape of Bay Area memes and subcultures. Instead of trying to cover everything in detail, I’ve focused on nine aspects of that memespace that help put the rationalist community in context:

continue reading »

Three ways CFAR has changed my view of rationality

102 Julia_Galef 10 September 2013 06:24PM

The Center for Applied Rationality's perspective on rationality is quite similar to Less Wrong's. In particular, we share many of Less Wrong's differences from what's sometimes called "traditional" rationality, such as Less Wrong's inclusion of Bayesian probability theory and the science on heuristics and biases.

But after spending the last year and a half with CFAR as we've developed, tested, and attempted to teach hundreds of different versions of rationality techniques, I've noticed that my picture of what rationality looks like has shifted somewhat from what I perceive to be the most common picture of rationality on Less Wrong. Here are three ways I think CFAR has come to see the landscape of rationality differently than Less Wrong typically does – not disagreements per se, but differences in focus or approach. (Disclaimer: I'm not speaking for the rest of CFAR here; these are my own impressions.)

 

1. We think less in terms of epistemic versus instrumental rationality.

Formally, the methods of normative epistemic versus instrumental rationality are distinct: Bayesian inference and expected utility maximization. But methods like "use Bayes' Theorem" or "maximize expected utility" are usually too abstract and high-level to be helpful for a human being trying to take manageable steps towards improving her rationality. And when you zoom in from that high-level description of rationality down to the more concrete level of "What five-second mental habits should I be training?" the distinction between epistemic and instrumental rationality becomes less helpful.
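(To make that formal contrast concrete, here's a minimal Python sketch of the two methods -- my own illustration, not CFAR material, with made-up probabilities and utilities.)

    # Epistemic rationality's formal core: Bayesian updating.
    def bayes_update(prior, p_e_given_h, p_e_given_not_h):
        numerator = p_e_given_h * prior
        return numerator / (numerator + p_e_given_not_h * (1 - prior))

    # Instrumental rationality's formal core: expected utility maximization.
    # `actions` maps action names to lists of (probability, utility) outcomes.
    def best_action(actions):
        def eu(outcomes):
            return sum(p * u for p, u in outcomes)
        return max(actions, key=lambda name: eu(actions[name]))

    posterior = bayes_update(prior=0.1, p_e_given_h=0.9, p_e_given_not_h=0.2)
    print(posterior)  # ~0.33

    print(best_action({
        "act":       [(posterior, 10.0), (1 - posterior, -2.0)],
        "don't act": [(1.0, 0.0)],
    }))  # "act", since its expected utility (~2.0) beats 0

Notice how little of the work of actually being rational those few lines capture; the hard part is everything around them.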

Here's an analogy: epistemic rationality is like physics, where the goal is to figure out what's true about the world, and instrumental rationality is like engineering, where the goal is to accomplish something you want as efficiently and effectively as possible. You need physics to do engineering; or I suppose you could say that doing engineering is doing physics, but with a practical goal. However, there's plenty of physics that's done for its own sake, and doesn't have obvious practical applications, at least not yet. (String theory, for example.) Similarly, you need a fair amount of epistemic rationality in order to be instrumentally rational, though there are parts of epistemic rationality that many of us practice for their own sake, and not as a means to an end. (For example, I appreciate clarifying my thinking about free will even though I don't expect it to change any of my behavior.)

In this analogy, many skills we focus on at CFAR are akin to essential math, like linear algebra or differential equations, which compose the fabric of both physics and engineering. It would be foolish to expect someone who wasn't comfortable with math to successfully calculate a planet's trajectory or design a bridge. And it would be similarly foolish to expect you to successfully update like a Bayesian or maximize your utility if you lacked certain underlying skills. Like, for instance:

  • Noticing your emotional reactions, and being able to shift them if it would be useful.
  • Doing thought experiments.
  • Noticing and overcoming learned helplessness.
  • Visualizing in concrete detail.
  • Preventing yourself from flinching away from a thought.
  • Rewarding yourself for mental habits you want to reinforce.

These and other building blocks of rationality are essential both for reaching truer beliefs, and for getting what you value; they don't fall cleanly into either an "epistemic" or an "instrumental" category. Which is why, when I consider what pieces of rationality CFAR should be developing, I've been thinking less in terms of "How can we be more epistemically rational?" or "How can we be more instrumentally rational?" and instead using queries like, "How can we be more metacognitive?"

 

2. We think more in terms of a modular mind.

The human mind isn't one coordinated, unified agent, but rather a collection of different processes that often aren't working in sync, or even aware of what the others are up to. Less Wrong certainly knows this; see, for example, discussions of anticipations versus professions, aliefs, and metawanting. But in general we gloss over that fact, because it's so much simpler and more natural to talk about "what I believe" or "what I want," even if technically there is no single "I" doing the believing or wanting. And for many purposes that kind of approximation is fine.

But a rationality-for-humans usually can't rely on that shorthand. Any attempt to change what "I" believe, or optimize for what "I" want, forces a confrontation of the fact that there are multiple, contradictory things that could reasonably be called "beliefs," or "wants," coexisting in the same mind. So a large part of applied rationality turns out to be about noticing those contradictions and trying to achieve coherence, in some fashion, before you can even begin to update on evidence or plan an action.

Many of the techniques we're developing at CFAR fall roughly into the template of coordinating between your two systems of cognition: implicit-reasoning System 1 and explicit-reasoning System 2. For example, knowing when each system is more likely to be reliable. Or knowing how to get System 2 to convince System 1 of something ("We're not going to die if we go talk to that stranger"). Or knowing what kinds of questions System 2 should ask of System 1 to find out why it's uneasy about the conclusion at which System 2 has arrived.

This is all, of course, with the disclaimer that the anthropomorphizing of the systems of cognition, and imagining them talking to each other, is merely a useful metaphor. Even the classification of human cognition into Systems 1 and 2 is probably not strictly true, but it's true enough to be useful. And other metaphors prove useful as well – for example, some difficulties with what feels like akrasia become more tractable when you model your future selves as different entities, as we do in the current version of our "Delegating to yourself" class.

 

3. We're more focused on emotions.

There's relatively little discussion of emotions on Less Wrong, but they occupy a central place in CFAR's curriculum and organizational culture.

It used to frustrate me when people would say something that revealed they held a Straw Vulcan-esque belief that "rationalist = emotionless robot". But now when I encounter that misconception, it just makes me want to smile, because I'm thinking to myself: "If you had any idea how much time we spend at CFAR talking about our feelings…"

Being able to put yourself into particular emotional states seems to make a lot of pieces of rationality easier. For example, for most of us, it's instrumentally rational to explore a wider set of possible actions – different ways of studying, holding conversations, trying to be happy, and so on – beyond whatever our defaults happen to be. And for most of us, inertia and aversions get in the way of that exploration. But getting yourself into "playful" mode (one of the hypothesized primary emotional circuits common across mammals) can make it easier to branch out into a wider swath of Possible-Action Space. Similarly, being able to call up a feeling of curiosity or of "seeking" (another candidate for a primary emotional circuit) can help you conquer motivated cognition and learned blankness.  

And simply being able to notice your emotional state is rarer and more valuable than most people realize. For example, if you're in fight-or-flight mode, you're going to feel more compelled to reject arguments that feel like a challenge to your identity. Being attuned to the signs of sympathetic nervous system activation – that you're tensing up, or that your heart rate is increasing – means you get cues to double-check your reasoning, or to coax yourself into another emotional state.

We also use emotions as sources of data. You can learn to tap into feelings of surprise or confusion to get a sense of how probable you implicitly expect some event to be. Or practice simulating hypotheticals ("What if I knew that my novel would never sell well?") and observing your resultant emotions, to get a clearer picture of your utility function. 

And emotions-as-data can be a valuable check on your System 2's conclusions. One of our standard classes is "Goal Factoring," which entails finding some alternate set of actions through which you can purchase the goods you want more cheaply. So you might reason, "I'm doing martial arts for the exercise and self-defense benefits... but I could purchase both of those things for less time investment by jogging to work and carrying Mace." If you listened to your emotional reaction to that proposal, however, you might notice you still feel sad about giving up martial arts even if you were getting the same amount of exercise and self-defense benefits some other way.

Which probably means you've got other reasons for doing martial arts that you haven't yet explicitly acknowledged -- for example, maybe you just think it's cool. If so, that's important, and deserves a place in your decisionmaking. Listening for those emotional cues that your explicit reasoning has missed something is a crucial step, and to the extent that aspiring rationalists sometimes forget it, I suppose that's a Steel-Manned Straw Vulcan (Steel Vulcan?) that actually is worth worrying about.

Conclusion

I'll name one more trait that unites, rather than divides, CFAR and Less Wrong. We both diverge from "traditional" rationality in that we're concerned with determining which general methods systematically perform well, rather than defending some set of methods as "rational" on a priori criteria alone. So CFAR's picture of what rationality looks like, and how to become more rational, will and should change over the coming years as we learn more about the effects of our rationality training efforts. 

Comment author: RichardKennaway 10 September 2013 06:56:29AM 2 points

Can I summarise that as saying that CFAR takes account of what we are, while LW generally does not?

Comment author: Julia_Galef 10 September 2013 07:47:00AM 9 points

Well, I'd say that LW does take account of who we are. They just haven't had the impetus to do so quite as thoroughly as CFAR has. As a result there are aspects of applied rationality, or "rationality for humans" as I sometimes call it, that CFAR has developed and LW hasn't.

CFAR workshop, June 15th, Salt Lake City UT

17 Julia_Galef 06 June 2013 04:47PM

CFAR is experimenting with a mobile workshop so we can bring our material to people who can't make it to Berkeley. Next week, we're running a one-and-a-half-day workshop in Salt Lake City, Utah!

 

Workshop Details

On Saturday June 15th, CFAR will be running a workshop in the Salt Lake City area. We’ll be presenting selected material from our four-day workshop and giving you the chance to consult with our instructors on how you can put these skills to work.

You’ll arrive for class at 10am on Saturday, and you and eleven other participants will spend the day learning highlights from our applied rationality curriculum: how to recognize and defuse a fight-or-flight response when it doesn't do you any good (you can’t outrun data you don’t like!), how to make sure your desire to complete a long-term project (say, writing a book) trickles down to motivate all the picayune steps along the way (doing a read-through to pick off unnecessary adjectives), and how to make the most of your intuitive judgments. Classes wrap up at 7pm, and then we’ll all go out for dinner, where you’ll have a chance to decompress and digest the day (along with your meal).

After dinner, if you’ve registered for the optional half-day, you’ll sleep over on site with the CFAR staff and play some fun, brain-teasing games. The evening is a time for unstructured conversation and collaboration. What are your pet projects and ambitions? Get feedback from classmates and instructors and start figuring out ways to make the most of your newfound skills.

The next morning, you’ll choose which of the previous day’s skills you really want to practice intensively. Catch any misunderstandings or sticking points while you’re still around to troubleshoot them with a CFAR instructor. At our four-day workshops, many participants report that our final-day review sessions are the point where they were finally able to internalize the material and start to use it instinctively.

After a half-day of review and reinforcement, we send you back out into the world, better prepared to make the most of your brain.

 

Application Details

The cost of the workshop will be $90 for the first day of instruction + $50 if you plan to stick around for the overnight and the second day of practice.

Registration is first come, first served. Space is limited to 12. To sign up, fill out this two-minute form.

New applied rationality workshops (April, May, and July)

27 Julia_Galef 09 April 2013 02:58AM

In the early days of the Center for Applied Rationality, Anna Salamon and I had a disagreement about whether we were ready to run our first applied rationality workshops in six weeks. My inside view said "No way"; her inside view said "Should be fine"; my outside view noted that Anna had more relevant experience than I did, and therefore cowed my inside view into grudgingly shutting up.

It turned out well. Granted, the first couple of workshops were a bit chaotic (hey, sleeping in a dogpile on the living room floor builds character, amiright May minicampers?). But it's clear in retrospect that we got a lot more value out of diving in than we would have from the extra time spent planning.

The "try stuff fast" habit is responsible for a lot of the techniques in our curriculum; we test out classes on each other and on volunteers, observe "Oh hey, this helps other people too" or "Oh hey, no one else thinks this is useful, turns out I'm just weird," and tweak our curriculum accordingly.

And because we cannot help going recursively meta, we've built a lot of material into our curriculum to make people better at trying things that could make them better at pursuing their goals. Quick, off-the-cuff value of information (VOI) calculations help you decide when it's worth spending the time, money, or risk to try something new (a sketch of that arithmetic follows this paragraph). Againstness helps you notice and alleviate the stress responses that can keep you from trying something, once you've noticed that you should. Comfort zone expansion is basically a "try a bunch of new things" drill.
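(For concreteness, here's what one of those off-the-cuff VOI calculations might look like as a Python sketch -- my own illustration of the arithmetic, not an official CFAR worksheet, and the numbers are invented.)

    # Expected value of a cheap experiment, minus the cost of running it.
    def voi(p_better, gain_if_better, cost_of_trying):
        # p_better: your probability that the new thing beats your default
        # gain_if_better: value you'd capture if it does
        # cost_of_trying: one-time cost of the experiment itself
        return p_better * gain_if_better - cost_of_trying

    # Example: spend one afternoon (~$50 of your time) testing a new
    # note-taking system that, with probability 0.2, would save you ~$600
    # of time over the next year.
    print(voi(p_better=0.2, gain_if_better=600, cost_of_trying=50))  # 70 > 0: worth trying

A positive result says the experiment is worth running in expectation; the real skill is generating those three numbers quickly, not the arithmetic itself.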

For more details on our curriculum, check out a sample schedule. I also made a simplified map of some of our classes, so you can see how I think of them fitting into the bigger picture of rationality (click to enlarge):

To the extent that I've improved my own rationality skills over the last year, I give a lot of credit to "try stuff fast." Like many Less Wrongers I have historically been more of a "thinking about things" person than a "trying stuff fast" person; given the choice of an afternoon spent debating ignorance priors or one spent figuring out how to improve my public speaking skills, I'd pick the former every time, even though the latter would be more useful to me.

I'm partially reformed now, thanks in part to the influence of Anna, whom you'll frequently overhear saying things like "I think I'll try teaching the class as if I were Val" or "We should try a different meeting format today, it's high VOI." So now I'm much more likely to notice, "Hey, in this situation I always do X (e.g., ask for feedback later, by email), so this time let me try X-prime (e.g., ask for feedback in person on the spot) -- the cost is low and it's plausible I'll learn that I like it better than my default."

In that spirit, I recommend coming to one of our upcoming workshops in April, May or July, where you will not only be introduced to all the stuff that we've tried and found promising so far, but will also be plugged into a growing network of several hundred other thoughtful and creative people who have developed their own habits you can borrow and try (we certainly do – past participants have been the origin of some of our best material). And being surrounded by other people with similar aspirations, during the workshop and in the alumni network afterwards, is the best way I know of to keep your motivation and your discipline strong.

At $3900, it's an investment, but a low-risk one, since we have a money-back guarantee. If you don't feel like what you got out of it was worth it, we'll refund your money without hesitation or complaint.

Here are the basics:

You can apply here for any of our next three applied rationality workshops:

  • Friday, April 26 - Monday, April 29
  • Friday, May 17 - Monday, May 20
  • Saturday, July 20 - Tuesday, July 23

Each workshop will consist of an immersive four days at a retreat near San Francisco, training you in the art of actually using rationality. That means figuring out what your goals are, and what you can be doing to pursue them more effectively; noticing when you're acting out of habit or impulse; cultivating curiosity about the world and how it works; and learning to use both your intuitive (System 1) and analytical (System 2) thinking systems to their fullest.

We're soliciting applications not just from Less Wrongers, but from other entrepreneurs, students, teachers, scientists, engineers, activists -- anyone who is analytical, friendly, and motivated to make their own careers, personal lives, and/or societies better.  

For more information on our content, check out our workshop webpage, our checklist of rationality habits, or a detailed sample schedule.

We're constantly tinkering with our curriculum (as mentioned earlier), and collecting follow-up data on what works well. So while you should be aware that our material hasn't yet been subjected to rigorous long-term studies, our alumni do tend to report that they've gotten a lot of value out of their experience. Here are a few write-ups from Less Wrongers about their CFAR workshop experience and any changes they've made as a result: toner, palladias, Qiaochu_Yuan, thejash, BrandonReinhart, ciphergoth, and a bunch of other people.

The total cost is $3900, and that includes:

  • Three days of classes -- Six hours of class a day, with small class sizes (4-6 people) so you get a lot of personal attention from the instructors. We rearrange those small groups several times throughout the workshop to give you a chance to get to know everyone.
  • One day of practice – Optional but recommended, so instructors can help you make and troubleshoot a plan to use the material going forward. (If you choose to skip this day, the total cost is $3400.)
  • Six weeks of personal follow-ups – Talk to our staff in one-on-one follow-ups to help you get the most value out of what you've learned.
  • Staying on site – We rent out lovely retreat centers (lodging and food included in the cost of the workshop) so you can get to know the instructors and other participants in the evenings, during meals, and on breaks. Evenings include everything from unconferences to parties to impromptu Rubik's Cube lessons.
  • An alumni network -- You'll be included in all future CFAR alumni events, parties, online forums, and so on. We'll make every effort to connect you to alumni from other workshops with whom we think you'll hit it off or have opportunities for collaboration. 

Scholarships and financial aid are available -- including for many who thought they wouldn't qualify.  So if you're interested in attending, definitely apply, and mention you'd like to be considered for this. We'll set up a call to discuss.

And please don't hesitate to email me (Julia at appliedrationality dot org). CFAR staff will also be in this comment thread to field questions, and some of the alumni who frequent Less Wrong may be there as well. 

Apply here (the form takes less than 10 minutes, so you should do it now rather than planning on getting to it later!).

Comment author: shminux 08 April 2013 05:44:07PM 4 points

At $3900, it's an investment, but a low-risk one, since we have a money-back guarantee. If you don't feel like what you got out of it was worth it, we'll refund your money without hesitation or complaint.

This is still not low-risk. I would hesitate to ask for a refund even if an event like this was below my expectations, as long as it's not a total flop or a con, which it surely isn't. Low-risk (for the participant) would be dividing the camp into billable events with a price tag on each, and refunding a portion of the price of each event based on the post-event evaluations. This is probably unworkable in practice, but at least it would not be misleading. On the other hand, "full refund no questions asked" is a useful marketing strategy, if a bit dark-artsy.

Comment author: Julia_Galef 08 April 2013 06:23:47PM 12 points

If it makes you feel less hesitant, we've given refunds twice: once to a person at a workshop last year who said he'd expected polish and suits, and once to another participant who said he enjoyed it but wasn't sure it was going to help enough with his current life situation to be worth it.

Comment author: dspeyer 08 April 2013 03:41:13PM 0 points

I also made a simplified map of some of our classes, so you can see how I think of them fitting into the bigger picture of rationality (click to enlarge):

I've clicked everything I can think of and it's not enlarging.

Comment author: Julia_Galef 08 April 2013 03:44:17PM 1 point

Fixed now, sorry!

Comment author: JMiller 08 April 2013 03:26:22PM 1 point

Hi, the "apply here" link is not working for me.

Thanks!

Comment author: Julia_Galef 08 April 2013 03:35:10PM 0 points

Fixed! Thanks, I apparently didn't understand how links worked in this system.
