
Introducing the Instrumental Rationality Sequence

29 lifelonglearner 26 April 2017 09:53PM

What is this project?

I am going to be writing a new sequence of articles on instrumental rationality. The end goal is to have a compiled ebook of all the essays, so the articles themselves are intended to be chapters in the finalized book. There will also be pictures.


I intend for the majority of the articles to be backed by somewhat rigorous research, similar in quality to Planning 101 (with perhaps slightly fewer citations). Broadly speaking, the plan is to introduce a topic, summarize the research on it, give some models and mechanisms, and finish off with some techniques to leverage the models.


The rest of the sequence will be interspersed with general essays on dealing with these concepts, similar to In Defense of the Obvious. Lastly, there will be a few experimental essays on my attempt to synthesize existing models into useful-but-likely-wrong models of my own, like Attractor Theory.


I will likely also recycle / cannibalize some of my older writings for this new project, but I obviously won’t post the repeated material here again as new stuff.


 


 

What topics will I cover?

Here is a broad overview of the three main topics I hope to go over:


(Ordering is not set.)


Overconfidence in Planning: I’ll be stealing stuff from Planning 101 and rewriting it a bit for clarity, so not much will change. I’ll likely add more on the actual models of how overconfidence creeps into our plans.


Motivation: I’ll try to go over procrastination, akrasia, and behavioral economics (hyperbolic discounting, decision instability, precommitment, etc.)


Habituation: This will try to cover what habits are, conditioning, incentives, and ways to take the above areas and habituate them, i.e. actually putting instrumental rationality techniques into practice.


Other areas I may want to cover:

Assorted Object-Level Things: The Boring Advice Repository has a whole bunch of assorted ways to improve life that I think might be useful to reiterate in some fashion.


Aversions and Ugh Fields: I don’t know too much about these things from a domain knowledge perspective, but it’s my impression that being able to debug these sorts of internal sticky situations is a very powerful skill. If I were to write this section, I’d try to focus on Focusing and some assorted S1/S2 communication things. And maybe also epistemics.


Ultimately, the point here isn’t to offer polished rationality techniques people can immediately apply, but rather to give people an overview of the relevant fields with enough techniques that they get the hang of what it means to start crafting rationality techniques of their own.


 


 

Why am I doing this?

Niche Role: On LessWrong, there currently doesn’t appear to be a good in-depth series on instrumental rationality. Rationality: From AI to Zombies seems very strong for giving people a worldview that enables things like deeper analysis, but it leans very much into the epistemic side of things.


It’s my opinion that, aside from perhaps Nate Soares’s series on Replacing Guilt (which I would be somewhat hesitant to recommend to everyone), there is no in-depth repository/sequence that ties together these ideas of motivation, planning, procrastination, etc.


Granted, there have been many excellent posts here on several areas, but they've been fairly directed. Luke's stuff on beating procrastination, for example, is fantastic. I'm aiming for a broader overview that hits the current models and research on different things.


I think this means that creating this sequence could add a lot of value, especially to people trying to create their own techniques.


Open-Sourcing Rationality: It’s clear that work is being done on furthering rationality by groups like Leverage and CFAR. However, for various reasons, the work they do is not always available to the public. I’d like to give people who are interested but unable to directly work with these organizations something they can use to jump-start their own investigations.


I’d like this to become a similar Schelling point that we can direct people to when they want to get started.


I don’t mean to imply that what I’ll produce will be of the same caliber, but I do think it makes sense to have some sort of pipeline to get rationalists up to speed with the areas that (in my mind) tie into figuring out instrumental rationality. When I first began looking into this field, the relevant information was scattered across many places.


I’d like to create something cohesive that people can point newcomers to when they want to get started with instrumental rationality, something that gives them a high-level overview of the many tools at their disposal.


Revitalizing LessWrong: It’s my impression that independent essays on instrumental rationality have slowed over the years. (As I mentioned above, this doesn’t mean nothing has happened; CFAR’s been hard at work iterating on their own techniques, for example.) As LW 2.0 is being talked about, this seems like an opportune time to provide some new content and help reorient LW toward once again being a discussion hub for rationality.


 


 

Where does LW fit in?

Crowd-sourcing Content: I fully expect that many other people will have fantastic ideas that they want to contribute, and I welcome that. Given some basics like formatting and a roughly consistent writing style throughout, I think it’d be great if other potential writers saw this post as an invitation to start thinking about things they’d like to write or research about instrumental rationality.


Feedback: I’ll be doing all this writing in a public Google Doc, with posts that feature chapters once they’re done, so hopefully there’s ample room to improve and take in constructive criticism. Feedback on LW is often high-quality, and I expect it to definitely improve what I write.


Other Help: I probably can’t comb through every single research paper out there, so if you see relevant information I missed or want to help with the research process, let me know! Likewise, if you think there are other cool ways you can contribute, feel free to either send me a PM or leave a comment below.


 


 

Why am I the best person to do this?

I’m probably not the best person to be doing this project, obviously.


But, as a student, I have a lot of time on my hands, and time appears to be a major limiting reactant in this whole process.


Additionally, I’ve been somewhat involved with CFAR, so I have some mental models about their flavor of instrumental rationality; I hope this translates into meaning I'm writing about stuff that isn't just a direct rehash of their workshop content.


Lastly, I’m very excited about this project, so you can expect me to put in about 10,000 words (~40 pages) before I take some minor breaks to reset. My short-term goals (for the next month) are to take notes and find research on habits specifically, and to outline more of the sequence.

 

Plan-Bot: A Simple Planning Tool

4 lifelonglearner 31 March 2017 09:45PM

[I recently made a post in the OT about this, but I figured it might be good as a top-level post for additional attention.]

After writing Planning 101, I realized that there was no automated tool online for Murphyjitsu, the CFAR technique of problem-proofing plans. (I explain Murphyjitsu in more detail about halfway down the Planning 101 post.)

I was also trying to learn some web-dev at the same time, so I decided to code up this little tool, Plan-Bot, that walks you through a series of planning prompts and displays your answers to the questions. 

In short, you type in what you want to do, it asks you what the steps are, and when you're done, it asks you to evaluate potential ways things can go wrong.
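(The real Plan-Bot is a web app, so the following isn't its actual code. It's just a minimal command-line sketch of the same goal → steps → failure-modes prompt flow, to show how little machinery the idea needs.)

```python
# Minimal command-line sketch of the Plan-Bot prompt flow.
# NOT the actual Plan-Bot source (which is a web app); this just
# illustrates the goal -> steps -> failure-modes loop.

def plan_bot():
    goal = input("What do you want to do? ")

    steps = []
    while True:
        step = input("What's the next step? (blank to finish) ")
        if not step:
            break
        steps.append(step)

    # The Murphyjitsu-style prompt: imagine the plan failed, then ask why.
    failure = input("Imagine this plan failed. What plausibly went wrong? ")

    print("\nYour plan:", goal)
    for i, step in enumerate(steps, 1):
        print(f"  {i}. {step}")
    print("Potential failure mode:", failure)

if __name__ == "__main__":
    plan_bot()
```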

I set it as my homepage, and I've been getting some use out of it. Hopefully it ends up being helpful for other people as well.

You can try it out here.

And here it is on GitHub.

I'm still trying to learn web-dev, so feel free to give suggestions for improvements, and I'll try to incorporate them. 

CFAR Workshop Review: February 2017

5 lifelonglearner 28 February 2017 03:15AM

[A somewhat extensive review of a recent CFAR workshop, with recommendations at the end for those interested in attending one.]

I recently mentored at a CFAR workshop, and this is a review of the actual experience. In broad strokes, this review will cover the physical experience (atmosphere, living, eating, etc.), classes (which ones were good, which ones weren’t), and recommendations (regrets, suggestions, ways to optimize your experience). I’m not officially affiliated with CFAR, and this review represents my own thoughts only.

A little about me: my name is Owen, and I’m here in the Bay Area. This was actually my first real workshop, but I’ve had a fair amount of exposure to CFAR materials from EuroSPARC, private conversations, and LessWrong. So do keep in mind that I’m someone who came into the workshop with a rationalist’s eye.

I’m also happy to answer any questions people might have about the workshop. (Via PM or in the comments below.)


Physical Experience:

Sleeping / Food / Living:

(This section is venue-dependent, so keep that in mind.)

Despite the hefty $3,000-plus price tag, the workshop accommodations aren’t exactly plush. You get a bed, and that’s about it. At my workshop, there were always free bathrooms, so that part wasn’t a problem.

There was always enough food at meals, and my impression was that dietary restrictions were handled well. For example, one staff member went out and bought someone lunch when one meal didn’t work. Other than that, there are ample snacks between meals, usually a mix of chips, fruit, and chocolate, plus hot tea and a surprisingly wide variety of drinks.

Atmosphere / Social:

(The participants I worked with were perhaps not representative of the general “CFAR participant”, so also take caution here.)

People generally seemed excited and engaged. Given that everyone hopefully voluntarily decided to show up, this was perhaps to be expected. Anyway, there’s a really low amount of friction when it comes to joining and exiting conversations. By that, I mean it felt very easy, socially speaking, to just randomly join a conversation. Staff and participants all seemed quite approachable for chatting.

I don’t have the actual participant stats, but my impression is that a good amount of people came from quantitative (math/CS) backgrounds, so there were discussions on more technical things, too. It also seemed like a majority of people were familiar with rationality or EA prior to coming to the workshop.

There were a few people for whom the material didn’t seem to “resonate” well, but the majority of people seemed to be “with the program”.

Class Schedule:

(The schedule and classes are also in a state of flux, so bear that in mind too.)

Classes start around 9:30 am and end around 9:00 pm, with 20-minute breaks between every hour of classes. Lunch is about 90 minutes, while dinner is around 60 minutes.

Most of the actual classes were a little under 60 minutes, except for the flash classes, which were only about 20 minutes. Some classes had extended periods for practicing the techniques.

You’re put into a group of around 8 people that you attend classes with, and the groups switch every day. A few classes run in rotation, so you might take them in a different order than other groups do.

 

Classes Whose Content I Enjoyed:

As I was already familiar with most of the below material, this reflects more a general sense of classes which I think are useful, rather than ones which were taught exceptionally well at the workshop.

TAPs: Kaj Sotala already has a great write-up of TAPs here, and I think that they’re a helpful way of building small-scale habits. I also think the “click-whirr” mindset TAPs are built off can be a helpful way to model minds. The most helpful TAP for me is the Quick Focusing TAP I mention about a quarter down the page here.

Pair Debugging: Pair Debugging is about having someone else help you work through a problem. I think this is explored to some extent in places like psychiatry (actually, I’m unsure about this) as well as close friendships, but I like how CFAR turned this into a more explicit social norm / general thing to do. When I do this, I often notice a lot of interesting inconsistencies, like when I give someone good-sounding advice—except that I myself don’t follow it.  

The Strategic Level: The Strategic Level is where you, after having made a mistake, ask yourself, “What sort of general principles would I have had to notice in order to avoid making mistakes of this class in the future?” This is opposed to merely saying “Well, that mistake was bad” (first-level thinking) or “I won’t make that mistake again” (second-level thinking). There were also some ideas about how the CFAR techniques can recurse upon themselves in interesting ways, like how you can use Murphyjitsu (middle of the page) on your ability to use Murphyjitsu. This was a flash class, and I would have liked it if we could have spent more time on these ideas.

Tutoring Wheel: Less a class and more a pedagogical activity, Tutoring Wheel was where everyone picked a specific rationality class to teach and then rotated, teaching others and being taught. I thought this was a really strong way to help people understand the techniques during the workshop.

Focusing / Internal Double Crux / Mundanification: All three of these classes address different things, but in my mind I thought they were similar in the sense of looking into yourself. Focusing is Gendlin’s self-directed therapy technique, where people try to look into themselves to get a “felt shift”. Internal Double Crux is about resolving internal disagreements, often between S1 and S2 (but not necessarily). Mundanification is about facing the truth, even when you flinch from it, via Litany of Gendlin-type things. This general class of techniques that deals with resolving internal feelings of “ugh” I find to be incredibly helpful, and may very well be the highest value thing I got out of the class curriculum.

 

Classes Whose Teaching/Content I Did Not Enjoy:

These were classes that I felt were not useful and/or not explained well. This differs from the above, because I let the actual teaching part color my opinions.

Taste / Shaping: I thought an earlier iteration of this class was clearer (when it was called Inner Dashboard). Here, I wasn’t exactly sure what the practical purpose of the class was, let alone what general thing it was pointing at. To the best of my knowledge, Taste is about how we have subtle “yuck” and “yum” senses towards things, and about how there can be a way to reframe negative affects more positively, like how “difficult” and “challenging” can be two sides of the same coin. Shaping is about…something. I’m really unclear about this one.

Pedagogical Content Knowledge (PCK): PCK is, I think, about how the process of teaching a skill differs from the process of learning it. And you need a good understanding of how a beginner is learning something, what that experience feels like, in order to teach it well. I get that part, but this class seemed removed from the other classes, and the activity we did (asking other people how they did math in their head) didn’t seem useful.

Flash Class Structure: I didn’t like the 20-minute “flash classes”. I felt they were too quick to really give people ideas that stuck in their heads. In general, I’m in support of fewer classes and extended time to really practice the techniques, and I think having few to no flash classes would be good.

 

Suggestions for Future Classes: 

This is my personal opinion only. CFAR has iterated their classes over lots of workshops, so it’s safe to assume that they have reasons for choosing what they teach. Nevertheless, I’m going to be bold and suggest some improvements which I think could make things better.

Opening Session: CFAR starts off every workshop with a class called Opening Session that tries to get everyone in the right mindset for learning, with a few core principles. Because of limited time, they can’t include everything, but there were a few lessons I thought might have helped as the participants went forward:

In Defense of the Obvious: There’s a sense where a lot of what CFAR says might not be revolutionary, but it’s useful. I don’t blame them; much of what they do is draw boundaries around fairly universal mental notions and call attention to them. I think they could spend more time highlighting how obvious advice can still be practical.

Mental Habits are Procedural: Rationality techniques feel like things you know, but they’re really about things you do. Focusing on this distinction could be very useful to make sure people see that actually practicing the skills is very important.

Record / Take Notes: I find it really hard to remember concrete takeaways if I don’t write them down. During the workshop, it seemed like maybe only about half of the people were taking notes. In general, I think it’s at least good to remind people to journal their insights at the end of the day, if they’re not taking notes at every class.

Turbocharging + Overlearning: Turbocharging is a theory in learning put forth by Valentine Smith which, briefly speaking, says that you get better at what you practice. Similarly, Overlearning is about using a skill excessively over a short period to get it ingrained. It feels like the two skills are based off similar ideas, but their connection to one another wasn’t emphasized. Also, they were several days apart; I think they could be taught closer together.

General Increased Cohesion: Similarly, I think that additional discussion on how these techniques relate to one another, be it through concept maps or some theorizing, might be good for giving people a more unified rationality toolkit.

 

Mental Updates / Concrete Takeaways:

This ended up being really long. If you’re interested, see my 5-part series on the topic here.

 

Suggestions / Recommendations:

This is a series of things that I would have liked to do (looking back) at the workshop, but that I didn’t manage to do at the time. If you’re considering going, this list may prove useful to you when you go. (You may want to consider bookmarking this.)

Write Things Down: Have a good idea? Write it down. Hear something cool? Write it down. Writing things down (or typing, voice recording, etc.) is all really important so you can remember it later! Really, make sure to record your insights!

Build Scaffolding: Whenever you have an opportunity to shape your future trajectory, take it. Whether this means sending yourself emails, setting up reminders, or just taking a 30 minute chunk to really practice a certain technique, I think it’s useful to capitalize on the unique workshop environment to, not just learn new things, but also just do things you otherwise probably “wouldn’t have had the time for”.

Record Things to Remember Them: Here’s a poster I made that has a bunch of suggestions:

[Poster: “Do ALL The Things!”]

 

Don’t Be Afraid to Ask for Help: Everyone at the workshop, on some level, has self-growth as a goal. As such, it’s a really good idea to ask people for help. If you don’t understand something, feel weird for some reason, or have anything going on, don’t be afraid to use the people around you to the fullest (if they’re available, of course).

Conclusion:

Of course, perhaps the biggest question is “Is the workshop worth the hefty price?”

Assuming you’re coming from a tech-based position (apologies to everyone else; I’m just doing a quick ballpark with what seems to be the most common background among CFAR participants), the average hourly wage is something like $40. At ~$4,000, the workshop would need to save you about 100 hours to break even.

If you want rigorous quantitative data, you may want to check out CFAR’s own study on their participants. I don’t think I have a good way of quantifying the sorts of personal benefits myself, so everything below is pretty qualitative.

Things that I do think CFAR provides:

1) A unique training / learning environment for certain types of rationality skills that would probably be hard to learn elsewhere. Several of these techniques, including TAPs, Resolve Cycles, and Focusing, have become fairly ingrained in my daily life, and I believe they’ve increased my quality of life.

Learning rationality is the main point of the workshop, so the majority of the value probably comes out of learning these techniques. Also, though, CFAR gives you the space and time to start thinking about a lot of things you might have otherwise put off forever. (Granted, this can be achieved by other means, like just blocking out time every week for review, but I thought this counterfactual benefit was still probably good to mention.)

2) Connections to other like-minded people. As a Schelling point for rationality, a CFAR workshop is a place where you’ll meet people who share values and goals similar to yours. If you’re looking to make new friends or meet others, this is another benefit. (Although it does seem costly and inefficient if that’s your main aim.)

3) Upgraded mindset: As I wrote about here, I think that learning CFAR-type rationality can really level up the way you look at your brain, which seems to have some potential flow-through effects. The post explains it better, but in short, if you have not-so-good mental models, then CFAR could be a really good choice for upgrading your picture of how your mind works.

There are probably other things, but those are the main ones. I hope this helps inform your decision. CFAR is currently hosting a major sprint of workshops, so this would be a good time to sign up for one, if you've been considering attending.

Concrete Takeaways Post-CFAR

11 lifelonglearner 24 February 2017 06:31PM

Concrete Takeaways:

[So I recently volunteered at a CFAR workshop. This is part five of a five-part series on how I changed my mind. It's split into 3 sections: TAPs, Heuristics, and Concepts. They get progressively more abstract. It's also quite long at around 3,000 words, so feel free to just skip around and see what looks interesting.]

 

[I didn't post Part 3 and Part 4 on LW, as they're more speculative and arguably less interesting, but I've linked to them on my blog if anyone's interested.]

 

This is a collection of TAPs, heuristics, and concepts that I’ve been thinking about recently. Many of them were inspired by my time at the CFAR workshop, but there’s not really an underlying theme behind it all. It’s just a collection of ideas that are either practical or interesting.

 


TAPs:

TAPs, or Trigger Action Plans, are a CFAR technique used to build habits. The basic idea is you pair a strong, concrete sensory “trigger” (e.g. “when I hear my alarm go off”) with a “plan”—the thing you want to do (e.g. “I will put on my running shoes”).


If you’re good at noticing internal states, TAPs can also use your feelings or other internal things as a trigger, but it’s best to try this with something concrete first to get the sense of it.
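Since a TAP really is just a trigger paired with a plan, the structure fits in a few lines of code. This is only an illustrative sketch (the triggers and plans below are examples, not CFAR material):

```python
# A TAP as a (trigger, plan) pair: when the trigger fires, run the plan.
# These example pairs are illustrative only.
taps = [
    ("I hear my alarm go off",             "put on my running shoes"),
    ("I have no mental picture",           "ask for an example"),
    ("I notice I'm starting to get angry", "take a deep breath; speak softer"),
]

def on_trigger(event: str) -> None:
    """Look up the event among the installed triggers and 'run' its plan."""
    for trigger, plan in taps:
        if trigger == event:
            print(f"Trigger: {trigger} -> Plan: {plan}")

on_trigger("I hear my alarm go off")  # -> put on my running shoes
```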


Some of the more helpful TAPs I’ve recently been thinking about are below:


Ask for Examples TAP:

[Notice you have no mental picture of what the other person is saying. → Ask for examples.]


Examples are good. Examples are god. I really, really like them.


In conversations about abstract topics, it can be easy to understand the meaning of the words that someone said, yet still miss the mental intuition of what they’re pointing at. Asking for an example clarifies what they mean and helps you understand things better.


The trigger for this TAP is noticing that what someone said gave you no mental picture.


I may be extrapolating too far from too little data here, but it seems like people do try to “follow along” with things in their head when listening. And if this mental narrative, simulation, or whatever internal thing you’re doing comes up blank when someone’s speaking, then this may be a sign that what they said was unclear.


Once you notice this, you ask for an example of what gave you no mental picture. Ideally, the other person can then respond with a more concrete statement or clarification.


Quick Focusing TAP:

[Notice you feel an aversion towards something → Be curious and try to source the aversion.]


Aversion Factoring, Internal Double Crux, and Focusing are all techniques CFAR teaches to help deal with internal feelings of badness.


While there are definite nuances between all three techniques, I’ve sort of abstracted from the general core of “figuring out why you feel bad” to create an in-the-moment TAP I can use to help debug myself.


The trigger is noticing a mental flinch or an ugh field, where I instinctively shy away from looking too hard.


After I notice the feeling, my first step is to cultivate a sense of curiosity. There’s no sense of needing to solve it; I’m just interested in why I’m feeling this way.


Once I’ve directed my attention to the mental pain, I try to source the discomfort. Using some backtracking and checking multiple threads (e.g. “is it because I feel scared?”) allows me to figure out why. This whole process takes maybe half a minute.


When I’ve figured out the reason why, a sort of shift happens, similar to the felt shift in focusing. In a similar way, I’m trying to “ground” the nebulous, uncertain discomfort, forcing it to take shape.


I’d recommend trying some Focusing before trying this TAP, as it’s basically an expedited version of it, hence the name.


Rule of Reflexivity TAP:

[Notice you’re judging someone → Recall an instance where you did something similar / construct a plausible internal narrative]

[Notice you’re making an excuse → Recall times where others used this excuse and update on how you react in the future.]


This is a TAP that was born out of my observation that our excuses seem way more self-consistent when we’re the ones saying them. (Oh, why hello there, Fundamental Attribution Error!) The point of practicing the Rule of Reflexivity is to build empathy.


The Rule of Reflexivity goes both ways. In the first case, you want to notice if you’re judging someone. This might feel like ascribing a value judgment to something they did, e.g. “This person is stupid and made a bad move.”


The response is to recall times where either you did something similar or (if you think you’re perfect) think of a plausible set of events that might have caused them to act in this way. Remember that most people don’t think they’re acting stupidly; they’re just doing what seems like a good idea from their perspective.


In the second case, you want to notice when you’re trying to justify your own actions. If the excuses you yourself make sound suspiciously like things you’ve heard others say before, you may want to be less quick to immediately dismiss those excuses when you hear them in the future.


Keep Calm TAP:

[Notice you’re starting to get angry → Take a deep breath → Speak softer and slower]


Okay, so this TAP is probably not easy to do because you’re working against a biological response. But I’ve found it useful in several instances where otherwise I would have gotten into a deeper argument.


The trigger, of course, is noticing that you’re angry. For me, this feels like an increased tightness in my chest and a desire to raise my voice. I may feel like a cherished belief of mine is being attacked.


Once I notice these signs, I remember that I have this TAP which is about staying calm. I think something like, “Ah yes, I’m getting angry now. But I previously already made the decision that it’d be a better idea to not yell.”


After that, I take a deep breath, and I try to open up my stance. Then I remember to speak in a slower and quieter tone than previously. I find this TAP especially helpful in arguments—ahem, collaborative searches for the truth—where things get a little too heated on both sides.

 


Heuristics:

Heuristics are algorithm-like things you can do to help get better results. I think that it’d be possible to turn many of the heuristics below into TAPs, but there’s a sense of deliberately thinking things out that separates these from just the “mindless” actions above.


As more formal procedures, these heuristics do require you to remember to Take Time to do them well. However, I think the sorts of benefits you get from them make it worth the slight investment in time.

 


Modified Murphyjitsu: The Time Travel Reframe:

(If you haven’t read up on Murphyjitsu yet, it’d probably be good to do that first.)


Murphyjitsu is based off the idea of a premortem, where you imagine that your project failed and you’re looking back. I’ve always found this to be a weird temporal framing, and I realized there’s a potentially easier way to describe things:


Say you’re sitting at your desk, getting ready to write a report on intertemporal travel. You’re confident you can finish before the hour is over. What could go wrong? Closing Facebook, you begin to start typing.


Suddenly, you hear a loud CRACK! A burst of light floods your room as a figure pops into existence, dark and silhouetted by the brightness behind it. The light recedes, and the figure crumples to the ground. Floating in the air is a whirring gizmo, filled with turning gears. Strangely enough, your attention is drawn from the gizmo to the person on the ground:


The figure has a familiar sort of shape. You approach, tentatively, and find the spitting image of yourself! The person stirs and speaks.


“I’m you from one week into the future,” your future self croaks. Your future self tries to get up, but sinks down again.


“Oh,” you say.


“I came from the future to tell you…” your temporal clone says in a scratchy voice.


“To tell me what?” you ask. Already, you can see the whispers of a scenario forming in your head…


Future Your slowly says, “To tell you… that the report on intertemporal travel that you were going to write… won’t go as planned at all. Your best-case estimate failed.”


“Oh no!” you say.


Somehow, though, you aren’t surprised…


At this point, what plausible reasons for your failure come to mind?


I hypothesize that the time-travel reframe I provide here for Murphyjitsu engages similar parts of your brain as a premortem, but is 100% more exciting to use. In all seriousness, I think this is a reframe that is easier to grasp compared to the twisted “imagine you’re in the future looking back into the past, which by the way happens to be you in the present” framing normal Murphyjitsu uses.


The actual (non-dramatized) wording of the heuristic, by the way, is, “Imagine that Future You from one week into the future comes back telling you that the plan you are about to embark on will fail: Why?”


Low on Time? Power On!

Often, when I find myself low on time, I feel less compelled to try. This seems sort of like an instance of failing with abandon, where I think something like, “Oh well, I can’t possibly get anything done in the remaining time between event X and event Y”.


And then I find myself doing quite little as a response.


As a result, I’ve decided to internalize the idea that being low on time doesn’t mean I can’t make meaningful progress on my problems.


This is a very Resolve-esque technique. The idea is that even if I have only 5 minutes, that’s enough to get things done. There are lots of useful things I can pack into small time chunks, like thinking, brainstorming, or doing some Quick Focusing.


I’m hoping to combat the sense of apathy / listlessness that creeps in when time draws to a close.


Supercharge Motivation by Propagating Emotional Bonds:

[Disclaimer: I suspect that this isn’t an optimal motivation strategy, and I’m sure there are people who will object to having bonds based on others rather than themselves. That’s okay. I think this technique is effective, I use it, and I’d like to share it. But if you don’t think it’s right for you, feel free to just move along to the next thing.]


CFAR used to teach a skill called Propagating Urges. It’s now been largely subsumed by Internal Double Crux, but I still find Propagating Urges to be a powerful concept.


In short, Propagating Urges hypothesizes that motivation problems arise because the implicit parts of ourselves don’t see how the boring things we do (e.g. filing taxes) causally relate to things we care about (e.g. not going to jail). The actual technique involves walking through the causal chain in your mind, with some visceral imagery at every step of the way, to get the implicit part of yourself on board.


I’ve taken the same general principle, but I’ve focused it entirely on the relationships I have with other people. If all the parts of me realize that doing something would greatly hurt those I care about, this becomes a stronger motivation than most external incentives.


For example, I walked through an elaborate internal simulation where I wanted to stop doing a Thing. I imagined someone I cared deeply for finding out about my Thing-habit and being absolutely deeply disappointed. I focused on the sheer emotional weight that such disappointment would cause (facial expressions, what they’d feel inside, the whole deal).


I now have a deep injunction against doing the Thing, and all the parts of me are in agreement because we agree that such a Thing would hurt other people and that’s obviously bad.


The basic steps for Propagating Emotional Bonds look like this:

  • Figure out what thing you want to do more of or stop doing.

  • Imagine what someone you care about would think or say.

  • Really focus on how visceral that feeling would be.

  • Rehearse the chain of reasoning (“If I do this, then X will feel bad, and I don’t want X to feel bad, so I won’t do it”) a few times.


Take Time in Social Contexts:

Often, in social situations, when people ask me questions, I feel an underlying pressure to answer quickly. It feels like if I don’t answer in the next ten seconds, something’s wrong with me. (School may have contributed to this). I don’t exactly know why, but it just feels like it’s expected.


I also think that being forced to hurry isn’t good for thinking well. As a result, something helpful I’ve found, when someone asks something like “Is that all? Anything else?”, is to Take Time.


My response is something like, “Okay, wait, let me actually take a few minutes.” At which point, I, uh, actually take a few minutes to think things through. After saying this, it feels like it’s now socially permissible for me to take some time thinking.


This has proven useful in several contexts where, had I not Taken Time, I would have forgotten to bring up important things or missed key failure modes.


Ground Mental Notions in Reality, not in Platonics:

One of the proposed reasons that people suck at planning is that we don’t actually think about the details behind our plans. We end up thinking about them in vague black-box-style concepts that hide all the scary unknown unknowns. What we’re left with is just the concept of our task, rather than a deep understanding of what our task entails.


In fact, this seems fairly similar to the “prototype model” that occurs in scope insensitivity.


I find this is especially problematic for tasks which look nothing like their concepts. For example, my mental representation of “doing math” conjures images of great mathematicians, intricate connections, and fantastic concepts like uncountable sets.


Of course, actually doing math looks more like writing stuff on paper, slogging through textbooks, and banging your head on the table.


My brain doesn’t differentiate well between doing a task and the affect associated with the task. Thus I think it can be useful to try to notice when our brains are doing this sort of black-boxing and instead “unpack” the concepts.


This means getting better correspondences between our mental conceptions of tasks and the tasks themselves, so that we can hopefully actually choose better.


3 Conversation Tips:

I often forget what it means to be having a good conversation with someone. I think I miss opportunities to learn from others when talking with them. This is my handy 3-step list of Conversation Tips to get more value out of conversations:


1) "Steal their Magic": Figure out what other people are really good at, and then get inspired by their awesomeness and think of ways you can become more like that. Learn from what other people are doing well.


2) "Find the LCD"/"Intellectually Escalate": Figure out where your intelligence matches theirs, and learn something new. Focus on Actually Trying to bridge those inferential distances. In conversations, this means focusing on the limits of either what you know or what the other person knows.


3) "Convince or Be Convinced”: (This is a John Salvatier idea, and it also follows from the above.) Focus on maximizing your persuasive ability to convince them of something. Or be convinced of something. Either way, focus on updating beliefs, be it your own or the other party’s.


Be The Noodly Appendages of the Superintelligence You Wish To See in the World:

CFAR co-founder Anna Salamon has this awesome reframe similar to IAT which asks, “Say a superintelligence exists and is trying to take over the world. However, you are its only agent. What do you do?”


I’ll admit I haven’t used this one, but it’s super cool and not something I’d thought of, so I’m including it here.

 


Concepts:

Concepts are just things in the world I’ve identified and drawn some boundaries around. They are farthest from the pipeline that goes from ideas to TAPs, as concepts are just ideas. Still, I do think these concepts “bottom out” at some point into practicality, and I think playing around with them could yield interesting results.


Paperspace =/= Mindspace:

I tend to write things down because I want to remember them. Recently, though, I’ve noticed that rather than acting as an extension of my brain, the things I write down seem to get treated as no longer in my own head. As in, if I write something down, it’s not necessarily easier for me to recall it later.


It’s as if by “offloading” the thoughts onto paper, I’ve cleared them out of my brain. This seems suboptimal, because a big reason I write things down is to cement them more deeply within my head.


I can still access the thoughts if I’m asking myself questions like, “What did I write down yesterday?” but only if I’m specifically sorting for things I write down.


The point is, I want stuff I write down on paper to be, not where I store things, but merely a sign of what’s stored inside my brain.


Outreach: Focus on Your Target’s Target:

One interesting idea I got from the CFAR workshop was that of thinking about yourself as a radioactive vampire. Um, I mean, thinking about yourself as a memetic vector for rationality (the vampire thing was an actual metaphor they used, though).


The interesting thing they mentioned was to think, not about who you’re directly influencing, but who your targets themselves influence.


This means that not only do you have to care about the fidelity of your transmission, but you need to think of ways to ensure that your target also does a passable job of passing it on to their friends.


I’ve always thought about outreach / memetics in terms of the people I directly influence, so looking at two degrees of separation is a pretty cool thing I hadn’t thought about in the past.


I guess that if I took this advice to heart, I’d probably have to change the way that I explain things. For example, I might want to try giving more salient examples that can be easily passed on or focusing on getting the intuitions behind the ideas across.


Build in Blank Time:

Professor Barbara Oakley distinguishes between focused and diffuse modes of thinking. Her claim is that time spent in a thoughtless activity allows your brain to continue working on problems without conscious input. This is the basis of diffuse mode.


In my experience, I’ve found that I get interesting ideas or remember important ideas when I’m doing laundry or something else similarly mindless.


I’ve found this to be helpful enough that I’m considering building in “Blank Time” in my schedules.


My intuitions here are something like, “My brain is a thought-generator, and it’s particularly active if I can pay attention to it. But I need to be doing something that doesn’t require much of my executive function to even pay attention to my brain. So maybe having more Blank Time would be good if I want to get more ideas.”


There’s also the additional point that meta-level thinking can’t be done if you’re always in the moment, stuck in a task. This means that, cool ideas aside, if I just want to reorient or survey my current state, Blank Time can be helpful.


The 99/1 Rule: Few of Your Thoughts are Insights:

The 99/1 Rule says that the vast majority of your thoughts every day are pretty boring and that only about one percent of them are insightful.


This was generally true for my life…and then I went to the CFAR workshop and this rule sort of stopped being appropriate. (Other exceptions to this rule were EuroSPARC [now ESPR] and EAG)


Note:

I bulldozed through a bunch of ideas here, some of which could have probably garnered a longer post. I’ll probably explore some of these ideas later on, but if you want to talk more about any one of them, feel free to leave a comment / PM me.

 

Levers, Emotions, and Lazy Evaluators:

5 lifelonglearner 20 February 2017 11:00PM

Levers, Emotions, and Lazy Evaluators: Post-CFAR 2

[This is a trio of topics following from the first post that all use the idea of ontologies in the mental sense as a bouncing off point. I examine why naming concepts can be helpful, listening to your emotions, and humans as lazy evaluators. I think this post may also be of interest to people here. Posts 3 and 4 are less so, so I'll probably skip those, unless someone expresses interest. Lastly, the below expressed views are my own and don’t reflect CFAR’s in any way.]


Levers:

When I was at the CFAR workshop, someone mentioned that something like 90% of the curriculum was just making up fancy new names for things they already sort of did. This got some laughs, but I think it’s worth exploring why even just naming things can be powerful.


Our minds do lots of things; they carry many thoughts, and we can recall many memories. Some of these phenomena may be more helpful for our goals, and we may want to name them.


When we name a phenomenon, like Focusing, we’re essentially drawing a boundary around the thing and calling attention to it. We’ve made it conceptually discrete. This transformation, in turn, allows us to more concretely identify which things among the sea of our mental activity correspond to Focusing.


Focusing can then become a concept that floats in our understanding of things our minds can do. We’ve taken a mental action and packaged it into a “thing”. This can be especially helpful if we’ve identified a phenomenon that consists of several steps which usually aren’t found together.


By drawing certain patterns around a thing with a name, we can hopefully help others recognize them and perhaps do the same for other mental motions, which seems to be one more way that we find new rationality techniques.


This then means that we’ve created a new action that is explicitly available to our ontology. This notion of “actions I can take” is what I think forms the idea of levers in our mind. When CFAR teaches a rationality technique, the technique itself seems to be pointing at a sequence of things that happen in our brain. Last post, I mentioned that I think CFAR techniques upgrade people’s mindsets by changing their sense of what is possible.


I think that levers are a core part of this because they give us the feeling of, “Oh wow! That thing I sometimes do has a name! Now I can refer to it and think about it in a much nicer way. I can call it ‘focusing’, rather than ‘that thing I sometimes do when I try to figure out why I’m feeling sad that involves looking into myself’.”


For example, once you understand that a large part of habituation is simply "if-then" loops (ala TAPs, aka Trigger Action Plans), you’ve now not only understood what it means to learn something as a habit, but you’ve internalized the very concept of habituation itself. You’ve gone one meta-level up, and you can now reason about this abstract mental process in a far more explicit way.


Names have power in the same way that abstraction barriers have power in a programming language—they change how you think about the phenomenon itself, and this in turn can affect your behavior.
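To make the programming analogy concrete, here's a toy sketch of naming-as-abstraction-barrier; every function name below is an illustrative placeholder, not actual CFAR material:

```python
# Before naming, a mental motion is just a loose sequence of steps.
# Naming it puts an abstraction barrier around those steps: callers
# now think in terms of "focusing" rather than its internals.
# All function names here are illustrative placeholders.

def notice_felt_sense(): ...
def direct_attention_inward(): ...
def wait_for_felt_shift(): ...

def focusing():
    """The named bundle: several steps packaged into one 'thing'."""
    notice_felt_sense()
    direct_attention_inward()
    wait_for_felt_shift()

# With the name in place, the motion can be referred to, taught,
# and composed with other techniques as a single lever.
focusing()
```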

 

Emotions:

CFAR teaches a class called “Understanding Shoulds”, which is about seeing your “shoulds”, the parts of yourself that feel like obligations, as data about things you might care about. This is a little different from Nate Soares’s Replacing Guilt series, which tries to move past guilt-based motivation.


In further conversations with staff, I’ve seen the even deeper view that all emotions should be considered information.


The basic premise seems to be based off the understanding that different parts of us may need different things to function. Our conscious understanding of our own needs may sometimes be limited. Thus, our implicit emotions (and other S1 processes) can serve as a way to inform ourselves about what we’re missing.


In this way, all emotions seem like channels where information can be passed on from implicit parts of you to the forefront of “meta-you”. This idea of “emotions as a data trove” is yet another ontology that produces different rationality techniques, as it’s operating, once again, on a mental model built out of a different type of abstraction.


Many of the skills based on this ontology focus on communication between different pieces of the self.


I’m very sympathetic to this viewpoint, as it forms the basis of the Internal Double Crux (IDC) technique, one of my favorite CFAR skills. In short, IDC assumes that akrasia-esque problems are caused by a disagreement between different parts of you, some of which might be in the implicit parts of your brain.


By “disagreement”, I mean that some part of you endorses an action for some well-meaning reasons, but some other part of you is against the action and also has justifications. To resolve the problem, IDC has us “dialogue” between the conflicting parts of ourselves, treating both sides as valid. If done right, without “rigging” the dialogue to bias one side, IDC can be a powerful way to source internal motivation for our tasks.


While I do seem to do some communication between my emotions, I haven’t fully integrated them as internal advisors in the IFS sense. I’m not ready to adopt a worldview that might potentially hand over executive control to all the parts of me. Meta-me still deems some of my implicit desires as “foolish”, like the part of me that craves video games, for example. In order to avoid slippery slopes, I have a blanket precommitment on certain things in life.


For the meantime, I’m fine sticking with these precommitments. The modern world is filled with superstimuli, from milkshakes to insight porn (and the normal kind) to mobile games, that can hijack our well-meaning reward systems.


Lastly, I believe that without certain mental prerequisites, some ontologies can be actively harmful. Nate’s Replacing Guilt series can leave people without additional motivation for their actions; guilt can be a useful motivator. Similarly, nihilism is another example of an ontology that can be crippling unless paired with ideas like humanism.

 

Lazy Evaluators:

In In Defense of the Obvious, I gave a practical argument as to why obvious advice is very good. I brought this point up several times during the workshop, and people seemed to like it.


While that essay focused on listening to obvious advice, there appears to be a similar effect where merely asking someone, “Did you do all the obvious things?” will often uncover helpful solutions they have yet to try.

 

My current hypothesis for this (apart from “humans are programs that wrote themselves on computers made of meat”, which is a great workshop quote) is that people tend to be lazy evaluators. In programming, lazy evaluation is a strategy of putting off computing the value of an expression until the answer is actually needed.
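For readers who haven't met the term, here's a minimal sketch of eager versus lazy evaluation (the function name is hypothetical):

```python
def brainstorm_alternatives():
    print("...actually generating alternatives (expensive!)...")
    return ["option A", "option B", "option C"]

# Eager evaluation: the work happens right away, used or not.
eager = brainstorm_alternatives()

# Lazy evaluation: wrap the computation in a thunk; nothing runs yet.
lazy = lambda: brainstorm_alternatives()

# ...time passes, and nothing forces the question...

# Only when an answer is explicitly demanded does the work happen:
answers = lazy()
```

The analogy: a question like “Did you do all the obvious things?” acts as the demand that finally forces the evaluation.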


It seems like something similar happens in people’s heads, where we simply don’t ask ourselves questions like “What are multiple ways I could accomplish this?” or “Do I actually want to do this thing?” until we need to…except that most of the time, we never need to—life putters on, whether or not we’re winning at it.


I think this is part of what makes “pair debugging”, a CFAR activity where a group of people try to help one person with their “bugs”, effective. When we have someone else taking an outside view asking us these questions, it may even be the first time we see these questions ourselves.


Therefore, it looks like a helpful skill is to constantly ask ourselves questions and cultivate a sense of curiosity about how things are. Anna Salamon refers to this skill as “boggling”. I think boggling can help with both counteracting lazy evaluation and actually doing obvious actions.


Looking at why obvious advice is obvious, like “What the heck does ‘obvious’ even mean?” can help break the immediate dismissive veneer our brain puts on obvious information.


EX: “If I want to learn more about coding, it probably makes sense to ask some coder friends what good resources are.”


“Nah, that’s so obvious; I should instead just stick to this abstruse book that basically no one’s heard of—wait, I just rejected something that felt obvious.”


“Huh…I wonder why that thought felt obvious…what does it even mean for something to be dubbed ‘obvious’?”


“Well…obvious thoughts seem to have a generally ‘self-evident’ tag on them. If they aren’t outright tautological or circularly defined, then there’s a sense where the obvious things seems to be the shortest paths to the goal. Like, I could fold my clothes or I could build a Rube Goldberg machine to fold my clothes. But the first option seems so much more ‘obvious’…”


“Aside from that, there also seems to be a sense where if I search my brain for ‘obvious’ things, I’m using a ‘faster’ mode of thinking (ala System 1). Also, aside from favoring simpler solutions, it seems to be influenced by social norms (what do people ‘typically’ do). And my ‘obvious action generator’ seems to also be built off my understanding of the world; like, I’m thinking about things in terms of causal chains that actually exist in the world. As in, when I’m thinking about ‘obvious’ ways to get a job, for instance, I’m thinking about actions I could take in the real world that might plausibly actually get me there…”


“Whoa…that means that obvious advice is so much more than some sort of self-evident tag. There’s a huge amount of information that’s being compressed when I look at it from the surface…‘Obvious’ really means something like ‘that which my brain quickly dismisses because it is simple, complies with social norms, and/or runs off my internal model of how the universe works.’”


The goal is to reduce the sort of “acclimation” that happens with obvious advice by peering deeper into it. Ideally, if you’re boggling at your own actions, you can force yourself to evaluate earlier. Otherwise, it can hopefully at least make obvious advice more appealing.


I’ll end with a quote of mine from the workshop:


“You still yet fail to grasp the weight of the Obvious.”


Ontologies are Operating Systems

4 lifelonglearner 18 February 2017 05:00AM

Ontologies are Operating Systems: Post-CFAR 1

[I recently came back from volunteering at a CFAR workshop. I found the whole experience to be 100% enjoyable, and I’ll be doing an actual workshop review soon. I also learned some new things and updated my mind. This is the first in a four-part series on new thoughts that I’ve gotten as a result of the workshop. If LW seems to like this one, I'll post the rest too.]


I’ve been thinking more about how we even reason about our own thinking, our “ontology of mind”, and about our internal mental model of how our brain works.

 

(Roughly speaking, “ontology” means the framework you view reality through, and I’ll be using it here to refer specifically to how we view our minds.)


Before I continue, it might be helpful to ask yourself some of the below questions:

  • What is my brain like, perhaps in the form of a metaphor?

  • How do I model my thoughts?

  • What things can and can’t my brain do?

  • What does it feel like when I am thinking?

  • Do my thoughts often influence my actions?


<reminder to actually think a little before continuing>


I don’t know about you, but for me, my thoughts often feel like they float into my head. There’s a general sense of effortlessly having things stream in. If I’m especially aware (i.e. metacognitive), I can then reflect on my thoughts. But for the most part, I’m filled with thoughts about the task I’m doing.


Though I don’t often go meta, I’m aware of the fact that I’m able to. In specific situations, knowing this helps me debug my thinking processes. For example, say my internal dialogue looks like this:


“Okay, so I’ve sent the forms to Steve, and now I’ve just got to do—oh wait what about my physics test—ARGH PAIN NO—now I’ve just got to do the write-up for—wait, I just thought about physics and felt some pain. Huh… I wonder why…Move past the pain, what’s bugging me about physics? It looks like I don’t want to do it because… because I don’t think it’ll be useful?”


Because my ontology of how my thoughts operate includes the understanding that metacognition is possible, this is a “lever” I can pull on in my own mind.


I suspect that people who don’t engage in thinking about their thinking (via recursion, talking to themselves, or other things to this effect) may have a less developed internal picture of how their minds work. Things inside their head might seem to just pop in, with less explanation.


I posit that having a less fleshed-out model of your brain affects your perception of what your brain can and can’t do.


We can imagine a hypothetical person who is self-aware and generally a fine human, except that their internal picture of their mind feels very much like a black box. They might have a sense of fatalism about some things in their mind or just feel a little confused about how their thoughts originate.


Then they come to a CFAR workshop.


What I think a lot of the CFAR rationality techniques give these people is an upgraded internal picture of their mind with many additional levers. By “lever”, I mean a thing we can do in our brain, like metacognition or Focusing (I’ll write more about levers next post). The upgraded internal picture of their mind draws attention to these levers and empowers people to have greater awareness and control in their heads by “pulling” on them.


But it’s not exactly these new levers that are the point. CFAR has mentioned that the point of teaching rationality techniques is to not only give people shiny new tools, but also improve their mindset. I agree with this view—there does seem to be something like an “optimizing mindset” that embodies rationality.


I posit that CFAR’s rationality techniques upgrade people’s ontologies of mind by changing their sense of what is possible. This, I think, is the core of an improved mindset—an increased corrigibility of mind.

 

Consider: Our hypothetical human goes to a rationality workshop and leaves with a lot of skills, but the general lesson is bigger than that. They’ve just seen that their thoughts can be accessed and even changed! It’s as if a huge blind spot in their thinking has been removed, and they’re now looking at entirely new classes of actions they can take!


When we talk about levers and internal models of our thinking, it’s important to remember that we’re really just talking about analogies or metaphors that exist in the mind. We don’t actually have access to our direct brain activity, so we need to make do with intermediaries that exist as concepts, which are made up of concepts, which are made up of concepts, etc etc.


Your ontology, the way that you think about how your thoughts work, is really just an abstract framework that makes it easier for “meta-you” (the part of your brain that seems like “you”) to more easily interface with your real brain.

 

Kind of like an operating system.


In other words, we can’t directly deal with all those neurons; our ontology, which contains thoughts, memories, internal advisors, and everything else is a conceptual interface that allows us to better manipulate information stored in our brain.


However, the operating system you acquire by interacting with CFAR-esque rationality techniques isn’t the only type of upgraded ontology you can acquire. There exist other models which may be just as valid. Different ontologies may draw boundaries around other mental things and empower your mind in different ways.


Leverage Research, for example, seems to be building its view of rationality from a perspective deeply grounded in introspection. I don’t know too much about them, but in a few conversations, they’ve acknowledged that their view of the mind is based much more on beliefs and internal views of things. This suggests they’d have a different sense of what is and isn’t possible.


My own personal view of rationality often models humans as, for the most part, a collection of TAPs (Trigger Action Plans, basically glorified if-then loops). This ontology leads me to think about shaping the environment, precommitment, priming/conditioning, and other ways to modify my habit structure. Within this framework of “humans as TAPs”, I search for ways to improve; a toy sketch of this framing follows below.
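
To make the “glorified if-then loops” framing concrete, here is a minimal sketch in Python (the triggers and actions are hypothetical examples of my own, not canonical CFAR material):

```python
# Toy model: a person as a lookup table from noticed triggers to
# pre-committed actions. The entries are illustrative placeholders.

taps = {
    "sit down at desk": "open today's task list",
    "notice urge to check phone": "put phone in a drawer",
    "walk through front door": "hang keys on the hook",
}

def respond(trigger: str) -> str:
    # If a TAP is installed for this trigger, fire it; otherwise fall
    # back to whatever habit the environment cues by default.
    return taps.get(trigger, "default habitual response")

print(respond("sit down at desk"))   # -> open today's task list
print(respond("see dessert menu"))   # -> default habitual response
```

Under this framing, “improving” mostly means editing the table: installing new trigger-action pairs, or reshaping the environment so the right triggers fire more often.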


This is in contrast with another view I hold of myself, as an “agenty” human with free will in a meaningful sense. Under this ontology, I focus on metacognition and executive function. Of course, this assertion of my ability to choose and pick my actions seems to be at odds with my first view of myself as a habit-stuffed zombie.


It seems plausible, then, that rationality techniques which often seem at odds with one another, like the above examples, arise because they operate on fundamentally different assumptions about how to interface with the human mind.


In some ways, it seems like I’m saying that every ontology of mind is correct. But what about mindsets that model the brain as a giant hamburger? That seems obviously wrong. My response here is to appeal to practicality. In reality, all these mental models are wrong, but some of them are useful. No ontology accurately depicts what’s happening in our brains, but the helpful ones allow us to think better and make better choices.

 

The biggest takeaway for me after realizing all this was that even my mental framework, the foundation from which I built up my understanding of instrumental rationality, is itself based on certain assumptions of my ontology. And these assumptions, though perhaps reasonable, are still just a helpful abstraction that makes it easier for me to deal with my brain.

 

[Link] Case Studies Highlighting CFAR’s Impact on Existential Risk

4 Unnamed 10 January 2017 06:51PM

[Link] Ozy's Thoughts on CFAR's Mission Statement

2 Raemon 14 December 2016 04:25PM

[Link] CFAR's new mission statement (on our website)

7 AnnaSalamon 10 December 2016 08:37AM

The barriers to the task

-7 Elo 18 August 2016 07:22AM

Original post: http://bearlamp.com.au/the-barriers-to-the-task/


For about two months now I have been putting in effort to run in the mornings.  To make this happen, I had to take away all the barriers to me wanting to do that.  There were plenty of them, and I failed to leave my house plenty of times.  Some examples are:

Making sure I don't need the right clothes - I leave my house shirtless and barefoot, and grab my key on the way out.

Pre-commitment to run - I take my shirt off when getting into bed the night before, so I don't even have to consider the action in the morning when I roll out of bed.

Being busy in the morning - I no longer plan any appointments before 11am.  Depending on the sunrise (I don't use alarms), I wake up in the morning, spend some time reading things, then roll out of bed to go to the toilet and leave my house.  In Sydney we just passed the depths of winter and it's beginning to get light earlier and earlier in the morning.  Which is easy now; but was harder when getting up at 7 meant getting up in the dark.  

There were days when I would wake up at 8am, stay in bed until 9am, then realise that if I left for a run (which takes around an hour - 10am), came back to have a shower (which takes 20 mins - 10:20), then left to travel to my first meeting (which can take 30 mins - 10:50), anything going wrong would make me late to an 11am appointment. But also - if I have a 10am meeting I have to skip my run to get there on time.

Going to bed at a reasonable hour - I am still getting used to deciding not to work myself ragged. I decided to accept that sleep is important, and to trust my body to sleep as long as it needs. Keeping healthy sleep habits sometimes also means I successfully get bonus time. But also - if I go to sleep after midnight I might not get up until later, which means I compromise my running time by shoving it into other habits.

Deciding where to run - google maps, look for local parks, plan a route with the least roads and least traffic.  I did this once and then it was done.  It was also exciting to measure the route and be able to run further and further each day/week/month.


What's in your way?

If you are not doing something that you think is good and right (or healthy, or otherwise desirable), there are likely things in your way.  If you just found out about an action that is good, well and right, and there is nothing stopping you from doing it: great.  You are lucky this time - Just.Do.It.

If you are one of the rest of us, who know that:

  • daily exercise is good for you
  • The right amount of sleep is good for you
  • Eating certain foods is better than eating others
  • certain social habits are better than others
  • certain hobbies are more fulfilling (to our needs or goals) than others

And you have known this for a while but still find yourself not taking the actions you want.  It's time to start asking what is in your way.  You might find it on someone else's list, but that is like looking for a needle in a haystack.

You are much better off doing this (a System 2 exercise; a toy script version follows the list below):

  1. take 15 minutes with pencil and paper.
  2. At the top write, "I want to ______________".
  3. If you know that's true you might not need this step - if you are not sure, write out why it might or might not be true.
  4. Write down the barriers that are in the way of you doing the thing.  Ask:
    • "can I do this right now?" (this might not always be an action you can take while sitting around thinking about it - e.g. eating different foods)
    • "why can't I just do this at every opportunity that arises?"
    • "how do I increase the frequency of opportunities?"
  5. Write out the things you are doing instead of that thing.
    These things are the barriers in your way as well.
  6. For each point - consider what you are going to do about it.
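
For those who prefer a screen, here is a toy script version of the exercise above (my own construction, purely illustrative; the prompts paraphrase the six steps):

```python
# Toy command-line version of the barrier-listing exercise above.
# Purely illustrative; the original is a 15-minute pencil-and-paper process.

def collect(prompt):
    # Gather free-form lines until the user enters a blank line.
    print(prompt + " (blank line to finish)")
    items = []
    while True:
        line = input("> ").strip()
        if not line:
            return items
        items.append(line)

goal = input("I want to ______________: ")
input("Why might '" + goal + "' be (or not be) something you truly want? ")

barriers = collect("List the barriers in your way")
barriers += collect("List what you do instead (these are barriers too)")

for barrier in barriers:
    plan = input("What are you going to do about '" + barrier + "'? ")
    print("  " + barrier + " -> " + plan)
```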

Questions:

  • What actions have you tried to take on?
  • What barriers have you encountered in doing so?
  • How did you solve that barrier?
  • What are you struggling with taking on in the future?

Meta: this borrows from the Immunity to Change process, which can best be read about in the book "Right Weight, Right Mind".  It also borrows from CFAR-style techniques like resolve cycles (also known as focused grit), Hamming questions, and Murphyjitsu.

Meta: this took one hour to write.

Cross posted to lesswrong: http://lesswrong.com/lw/nuq

Review and Thoughts on Current Version of CFAR Workshop

11 Gleb_Tsipursky 06 June 2016 01:44PM

Outline: I will discuss my background and how I prepared for the workshop, and how I would have prepared differently if I could do it again; I will then discuss my experience at the CFAR workshop, and what I would have done differently; then my take-aways from the workshop, and what I am doing to integrate CFAR strategies into my life; finally, I will give my assessment of its benefits and what other folks who attend the workshop might expect to get.


 

Acknowledgments: Thanks to fellow CFAR alumni and CFAR staff for feedback on earlier versions of this post


 

Introduction

 

Many aspiring rationalists have heard about the Center for Applied Rationality, an organization devoted to teaching applied rationality skills to help people improve their thinking, feeling, and behavior patterns. This nonprofit does so primarily through its intense workshops, and is funded by donations and revenue from its workshops. It fulfills its social mission through conducting rationality research and through giving discounted or free workshops to those people its staff judge as likely to help make the world a better place, mainly those associated with various Effective Altruist cause areas, especially existential risk.

 

To be fully transparent: even before attending the workshop, I already had a strong belief that CFAR is a great organization and have been a monthly donor to CFAR for years. So keep that in mind as you read my description of my experience (you can become a donor here).


Preparation

 

First, some background about myself, so you know where I’m coming from in attending the workshop. I’m a professor specializing in the intersection of history, psychology, behavioral economics, sociology, and cognitive neuroscience. I discovered the rationality movement several years ago through a combination of my research and attending a LessWrong meetup in Columbus, OH, and so come from a background of both academic and LW-style rationality. Since discovering the movement, I have become an activist in the movement as the President of Intentional Insights, a nonprofit devoted to popularizing rationality and effective altruism (see here for our EA work). So I came to the workshop with some training and knowledge of rationality, including some CFAR techniques.

 

To help myself prepare for the workshop, I reviewed existing posts about CFAR materials, with an eye toward being careful not to assume that the actual techniques match their descriptions in the posts.

 

I also delayed a number of tasks for after the workshop, tying up loose ends. In retrospect, I wish I had not left myself ongoing tasks to do during the workshop. As part of my leadership of InIn, I coordinate about 50ish volunteers, and I wish I had placed those responsibilities on someone else during the workshop.

 

Before the workshop, I worked intensely on finishing up some projects. In retrospect, it would have been better to get some rest and come to the workshop as fresh as possible.

 

There were some communication snafus with logistics details before the workshop. It all worked out in the end, but in retrospect I would have hammered out the logistics in advance, to avoid anxiety about how to get there.


Experience

 

The classes were well put together, had interesting examples, and provided useful techniques. FYI, my experience was that reading about these techniques in advance was not harmful, but the techniques taught in the CFAR classes were quite a bit better than the existing posts about them, so don’t assume you can get the same benefits from reading posts as from attending the workshop. While I was aware of the techniques, the classes definitely presented optimized versions of them - maybe because of the “broken telephone” effect, or maybe because CFAR optimized them across previous workshops; I’m not sure. I was glad to learn that CFAR considers the workshop they gave us in May satisfactory enough to scale up their workshops, while still improving the content over time.

 

Just as useful as the classes were the conversations held in between and after the official classes ended. Talking about them with fellow aspiring rationalists and seeing how they were thinking about applying these to their lives was helpful for sparking ideas about how to apply them to my life. The latter half of the CFAR workshop was especially great, as it focused on pairing off people and helping others figure out how to apply CFAR techniques to themselves and how to address various problems in their lives. It was especially helpful to have conversations with CFAR staff and trained volunteers, of whom there were plenty - probably about 20 volunteers/staff for the 50ish workshop attendees.

 

Another super-helpful aspect of the conversations was networking and community building. Now, this may have been more useful to some participants than others, so YMMV. As an activist in the movement, I talked to many folks at the CFAR workshop about promoting EA and rationality to a broad audience. I was happy to introduce some people to EA, with my most positive conversation there being encouraging someone to switch his x-risk efforts from nuclear disarmament to AI safety research as a means of addressing long/medium-term risk, and promoting rationality as a means of addressing short/medium-term risk. Others who were already familiar with EA were interested in ways of promoting it broadly, while some aspiring rationalists expressed enthusiasm over becoming rationality communicators.

 

Looking back at my experience, I wish I had been more aware of the benefits of these conversations. I went to sleep early the first couple of nights; I would instead have taken supplements to stay awake and have more conversations.


Take-Aways and Integration

 

The aspects of the workshop that I think will help me most are what CFAR staff called “5-second” strategies - brief tactics and techniques that can be executed in 5 seconds or less to address various problems. The stuff we learned at the workshop that I was already familiar with requires some time to learn and practice, such as Trigger Action Plans, Goal Factoring, Murphyjitsu, and Pre-Hindsight, often with pen and paper as part of the work. However, with sufficient practice, one can develop brief techniques that mimic aspects of the more thorough ones, and apply them quickly to in-the-moment decision-making.

 

Now, this doesn’t mean that the longer techniques are not helpful. They are very important, but they are things I was already generally familiar with, and already practice. The 5-second versions were more of a revelation for me, and I anticipate will be more helpful for me as I did not know about them previously.

 

Now, CFAR does a very nice job of helping people integrate the techniques into daily life, as a common failure mode of CFAR attendees is going home and not practicing the techniques. So they hold 6 Google Hangouts with CFAR staff and all attendees who want to participate, offer 4 one-on-one sessions with CFAR-trained volunteers or staff, and also pair you with one attendee for post-workshop conversations. I plan to take advantage of all of these, although my pairing did not work out.

 

For integration of CFAR techniques into my life, I found the CFAR strategy of “Overlearning” especially helpful. Overlearning refers to applying a single technique intensely for a while, to all aspects of one’s activities, so that it gets internalized thoroughly. I will first focus on overlearning Trigger Action Plans, following the advice of CFAR.

 

I also plan to teach CFAR techniques in my local rationality dojo, as teaching is a great way to learn, naturally.

 

Finally, I plan to integrate some CFAR techniques into Intentional Insights content, at least the simpler techniques that are a good fit for the broad audience with which InIn is communicating.


Benefits

 

I have a strong probabilistic belief that having attended the workshop will improve my capacity to be a person who achieves my goals for doing good in the world. I anticipate I will be able to figure out better whether the projects I am taking on are the best uses of my time and energy. I will be more capable of avoiding procrastination and other forms of akrasia. I believe I will be more capable of making better plans, and acting on them well. I will also be more in touch with my emotions and intuitions, and be able to trust them more, as I will have more alignment among different components of my mind.

 

Another benefit is meeting the many other people at CFAR who have similar mindsets. Here in Columbus, we have a flourishing rationality community, but it’s still relatively small. Getting to know 70ish people, attendees and staff/volunteers, passionate about rationality was a blast. It was especially great to see people who were involved in creating new rationality strategies, something that I am engaged in myself in addition to popularizing rationality - it’s really heartening to envision how the rationality movement is growing.

 

These benefits should resonate strongly with aspiring rationalists, but they are really important for EA participants as well. I think one of the best things that EA movement members can do is study rationality, and it’s something we promote to the EA movement as part of InIn’s work. What we offer is articles and videos, but coming to a CFAR workshop is a much more intense and cohesive way of getting these benefits. Imagine all the good you can do for the world if you are better at planning, organizing, and enacting EA-related tasks. Rationality is what has helped me and other InIn participants make the impact that we have been able to make, and a number of EA movement members who have rationality training report similar benefits. Remember, as an EA participant, you can likely get a scholarship covering part or all of the regular $3,900 price of the workshop, as I did myself, and you are likely to be able to save more lives over time as a result of attending the workshop, even if you have to pay some costs upfront.

 

Hope these thoughts prove helpful to you all, and please contact me at gleb@intentionalinsights.org if you want to chat with me about my experience.

 

How to be skeptical

-3 [deleted] 26 December 2015 06:33AM


The Center For Applied Rationality (CFAR) checklist is a heuristic for assessing the admissibility of one's own testimony. 

What of the challenge of evaluating the testimony of others?

Slapping the label of a bias on a situation?

Arguing at the object level by provision of evidence to the contrary?

This risks a Gish Gallop. For those who prefer to pick their battles, I have put my time into this post as a structural intervention into the information ecosystem.

We need not reinvent the wheel, for legal theorists have researched this issue for years, while practitioners and courts have identified heuristics useful to lay people interested in this field.

Precedent 

The Daubert standard provides a rule of evidence regarding the admissibility of expert witnesses' testimony during United States federal legal proceedings. Pursuant to this standard, a party may raise a Daubert motion, which is a special case of motion in limine raised before or during trial to exclude the presentation of unqualified evidence to the jury. The Daubert trilogy refers to the three United States Supreme Court cases that articulated the Daubert standard:

-https://en.wikipedia.org/wiki/Daubert_standard

Further reading on the case is available here on Google Scholar

Practice

How can this be applied in practice? 

What is the first principle of skepticism? It's effectively a synonym: 'question'.

What question? This isn't the 5 W's of primary school, after all.

I have summarized critical questions from a reading here to get the ball rolling:

Issues to consider when contesting and evaluating expert opinion evidence

 

A. Relevance (on the voir dire)

I accept that you are highly qualified and have extensive experience, but how do we know that your level of performance regarding . . . [the task at hand — eg, voice comparison] is actually better than that of a lay person (or the jury)?

What independent evidence... [such as published studies of your technique and its accuracy] can you direct us to that would allow us to answer this question?

What independent evidence confirms that your technique works?

Do you participate in a blind proficiency testing program?

Given that you undertake blind proficiency exercises, are these exercises also given to lay persons to determine if there are significant differences in results, such that your asserted expertise can be supported?

B. Validation 

Do you accept that techniques should be validated?

Can you direct us to specific studies that have validated the technique that you used?

What precisely did these studies assess (and is the technique being used in the same way in this case)?

Have you ever had your ability formally tested in conditions where the correct answer was known? (ie, not a previous investigation or trial)

Might different analysts using your technique produce different answers?

Has there been any variation in the result on any of the validation or proficiency tests you know of or participated in?

Can you direct us to the written standard or protocol used in your analysis?

Was it followed?

C. Limitations and errors

Could you explain the limitations of this technique?

Can you tell us about the error rate or potential sources of error associated with this technique?

Can you point to specific studies that provide an error rate or an estimation of an error rate for your technique?

How did you select what to examine?

Were there any differences observed when making your comparison . . . [eg, between two fingerprints], but which you ultimately discounted? On what basis were these discounted?

Could there be differences between the samples that you are unable to observe?

Might someone using the same technique come to a different conclusion?

Might someone using a different technique come to a different conclusion?

Did any of your colleagues disagree with you?

Did any express concerns about the quality of the sample, the results, or your interpretation?

Would some analysts be unwilling to analyse this sample (or produce such a confident opinion)?

...

D. Personal proficiency

...

Have you ever had your own ability... [doing the specific task/using the technique] tested in conditions where the correct answer was known?

If not, how can we be confident that you are proficient?

If so, can you provide independent empirical evidence of your performance?


E. Expressions of opinion

...

Can you explain how you selected the terminology used to express your opinion? Is it based on a scale or some calculation?

If so, how was the expression selected?

Would others analyzing the same material produce similar conclusions, and a similar strength of opinion? How do you know?

Is the use of this terminology derived from validation studies?

Did you report all of your results?

You would accept that forensic science results should generally be expressed in non-absolute terms?



More

For further reading, I recommend the seminal text on cross-examination, the 1903 The Art of Cross-Examination.

The Full Text is available free here on Project Gutenberg.

Other countries use different standards, such as the Opinion Rule in Australia.


Forecasting and recursive Inhibition within a decision cycle

1 [deleted] 20 December 2015 05:37AM

When we anticipate the future, we have the opportunity to inhibit behaviours which we anticipate will lead to counterfactual outcomes. Those of us with sufficiently low latencies in our decision cycles may recursively anticipate the consequences of counterfactuating (a neologism) interventions, and recursively intervene against our own interventions.

This may be difficult for some. Try modelling that decision cycle as a nano-scale approximation of time travel. One relevant paradox from popular culture is the farther-future paradox described in the TV cartoon Family Guy.

Watch this clip: https://www.youtube.com/watch?v=4btAggXRB_Q

Relating the satire back to our abstraction of the decision cycle, one may ponder:

What is a satisfactory stopping rule for the far anticipation of self-referential consequence?

That is:

(1) what are the inherent harmful implications of inhibiting actions in and of themselves: stress?

(2) what are their inherent merits: self-determination?

and (3) what are the favourable and unfavourable consequences at x points into the future, given y number of points of self-reference at points z, a, b and c?
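
As a toy illustration (my own construction, not part of the original question), one simple candidate stopping rule is a fixed depth bound on the self-referential lookahead: consider inhibiting your action, then inhibiting that inhibition, and so on, but only k levels deep. A minimal sketch, assuming a hypothetical value() function that scores anticipated outcomes:

```python
# Toy model of recursive inhibition with a depth-limited stopping rule.
# value() is a stand-in score for an option's anticipated outcome; in a
# real decision cycle this judgment is intuitive, not numeric.

def value(option: str) -> float:
    return {"act": 0.4, "inhibit": 0.6}.get(option, 0.0)

def decide(option: str, depth: int) -> str:
    if depth == 0:
        # Stopping rule: stop self-referencing and commit.
        return option
    alternative = "inhibit" if option == "act" else "act"
    if value(alternative) > value(option):
        # Anticipating a better outcome, intervene - then recursively
        # consider intervening against that very intervention.
        return decide(alternative, depth - 1)
    return option

print(decide("act", depth=3))   # with these toy numbers: "inhibit"
```

A fixed depth bound is crude, but it makes the trade-off in questions (1)-(3) explicit: each extra level of self-reference buys more self-determination at the cost of more inhibition stress.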

I see no ready solution to this problem in terms of human rationality, and no corresponding solution in artificial intelligence, where it would also apply. It seems relevant to MIRI (since CFAR doesn't seem to work on open problems in the same way).

I would like to also take this opportunity to open this as an experimental thread for the community to generate a list of "open problems" in human rationality that are otherwise scattered across the community blog and wiki.

[Link] 10 Tips from CFAR: My Business Insider article

19 James_Miller 10 December 2015 02:09AM

Speculative rationality skills and appropriable research or anecdote

3 [deleted] 21 July 2015 04:02AM

Is rationality training in its infancy? I'd like to think so, given the paucity of novel, usable information produced by rationalists since the Sequence days. I like to model the rationalist body of knowledge as a superset of pertinent fields such as decision analysis, educational psychology and clinical psychology. This reductionist model enables rationalists to examine the validity of rationalist constructs while standing on the shoulders of giants.

CFAR's obscurantism (and subsequent price gouging) capitalises on our fear of missing out (https://en.wikipedia.org/wiki/Fear_of_missing_out). They brand established techniques like mindfulness as "againstness", or reference class forecasting as "hopping", as if these were of their own genesis, spiting academic tradition and cultivating an insular community. In short, LessWrongers predictably flout cooperative principles (https://en.wikipedia.org/wiki/Cooperative_principle).

This thread is to encourage you to speculate on potential rationality techniques, underdetermined by existing research, which might be a useful area for rationalist individuals and organisations to explore. I feel this may be a better use of rationality training organisations' time than gatekeeping information.

To get this thread started, I've posted a speculative rationality skill I've been working on. I'd appreciate any comments about it or experiences with it. However, this thread is about working towards the generation of rationality skills more broadly.

Min/max goal factoring and belief mapping exercise

-1 [deleted] 23 June 2015 05:30AM

Edit 3: Removed description of previous edits and added the following:

This thread used to contain the description of a rationality exercise.

I have removed it and plan to rewrite it better.

I will repost it here, or delete this thread and repost in the discussion.

Thank you.

CFAR-run MIRI Summer Fellows program: July 7-26

22 AnnaSalamon 28 April 2015 07:04PM

CFAR will be running a three week summer program this July for MIRI, designed to increase participants' ability to do technical research into the superintelligence alignment problem.

The intent of the program is to boost participants as far as possible in four skills:

  1. The CFAR “applied rationality” skillset, including both what is taught at our intro workshops, and more advanced material from our alumni workshops;
  2. “Epistemic rationality as applied to the foundations of AI, and other philosophically tricky problems” -- i.e., the skillset taught in the core LW Sequences.  (E.g.: reductionism; how to reason in contexts as confusing as anthropics without getting lost in words.)
  3. The long-term impacts of AI, and strategies for intervening (e.g., the content discussed in Nick Bostrom’s book Superintelligence).
  4. The basics of AI safety-relevant technical research.  (Decision theory, anthropics, and similar; with folks trying their hand at doing actual research, and reflecting also on the cognitive habits involved.)

The program will be offered free to invited participants, and partial or full scholarships for travel expenses will be offered to those with exceptional financial need.

If you're interested (or possibly-interested), sign up for an admissions interview ASAP at this link (takes 2 minutes): http://rationality.org/miri-summer-fellows-2015/

Also, please forward this post, or the page itself, to anyone you think should come; the skills and talent that humanity brings to bear on the superintelligence alignment problem may determine our skill at navigating it, and sharing this opportunity with good potential contributors may be a high-leverage way to increase that talent.

Against the internal locus of control

6 Thrasymachus 03 April 2015 05:48PM

What do you think about these pairs of statements?

  1. People's misfortunes result from the mistakes they make
  2. Many of the unhappy things in people's lives are partly due to bad luck
  1. In the long run, people get the respect they deserve in this world.
  2. Unfortunately, an individual's worth often passes unrecognized no matter how hard he tries.
  1. Becoming a success is a matter of hard work; luck has little or nothing to do with it.
  2. Getting a good job mainly depends on being in the right place at the right time.

They have a similar theme: the first statement suggests that an outcome (misfortune, respect, or a good job) for a person is the result of their own action or volition. The second assigns the outcome to some external factor like bad luck.(1)

People who tend to think their own attitudes or efforts can control what happens to them are said to have an internal locus of control, those who don't, an external locus of control. (Call them 'internals' and 'externals' for short).

Internals seem to do better at life, pace obvious confounding: maybe instead of internals doing better by virtue of their internal locus of control, being successful inclines you to attribute success to internal factors and so become more internal, and vice versa if you fail.(2) If you don't think the relationship is wholly confounded, then there is some prudential benefit to becoming more internal.

Yet internal versus external is not just a matter of taste, but a factual claim about the world. Do people, in general, get what their actions deserve, or is it generally thanks to matters outside their control?

Why the external view is right

Here are some reasons in favour of an external view:(3)

  1. Global income inequality is marked (e.g. someone in the bottom 10% of the US population by income is still richer than two thirds of the world's population - more here). The main predictor of your income is country of birth, which is thought to explain around 60% of the variance: not only more important than any other factor, but more important than all other factors put together.
  2. Of course, the 'remaining' 40% might not be solely internal factors either. Another external factor we could put in would be parental class. Include that, and the two factors explain 80% of variance in income.
  3. Even conditional on being born in the right country (and to the right class), success may still not be a matter of personal volition. One robust predictor of success (grades in school, job performance, income, and so on) is IQ. The precise determinants of IQ remain controversial; it is known to be highly heritable, and the 'non-genetic' factors of IQ proposed (early childhood environment, intra-uterine environment, etc.) are similarly outside one's locus of control.

On cursory examination, the contours of how our lives turn out are set by factors outside our control, merely by where we are born and who our parents are. Even after this, we know various predictors, similarly outside (or mostly outside) of our control, that exert their effects on how our lives turn out: IQ is one, but we could throw in personality traits, mental health, height, attractiveness, etc.

So the answer to 'What determined how I turned out, compared to everyone else on the planet?' surely has to be primarily about external factors, with our internal drive or will relegated a long way down the list. Even if we look at narrower questions, like "What has made me turn out the way I am, versus all the other people who were likewise born in rich countries in comfortable circumstances?", it is still unclear whether the locus of control resides within our will: perhaps a combination of our IQ, height, gender, race, risk of mental illness and so on will still do the bulk of the explanatory work.(4)

Bringing the true and the prudentially rational together again

If it is the case that folks with an internal locus of control succeed more, yet the external view is generally closer to the truth of the matter, this is unfortunate. What is true and what is prudentially rational seem to be diverging, such that it might be in your interests not to know about the evidence in support of an external locus of control, as deluding yourself with an internal locus of control view would lead to your greater success.

Yet it is generally better not to believe falsehoods. Further, the internal view may have some costs. One possibility is fueling a just world fallacy: if one thinks that outcomes are generally internally controlled, then a corollary is when bad things happen to someone or they fail at something, it was primarily their fault rather than them being a victim of circumstance.

So what next? Perhaps the right view is to say that: although most important things are outside our control, not everything is. Insofar as we do the best with what things we can control, we make our lives go better. And the scope of internal factors - albeit conditional on being a rich westerner etc. - may be quite large: it might determine whether you get through medical school, publish a paper, or put in enough work to do justice to your talents. All are worth doing.

Acknowledgements

Inspired by Amanda MacAskill's remarks, and in partial response to Peter McIntyre. Neither is responsible for what I've written, and the former's agreement or the latter's disagreement with this post shouldn't be assumed.

 

1) Some ground-clearing: free will can begin to loom large here - after all, maybe my actions are just a result of my brain's particular physical state, and my brain's particular physical state at t depends on its state at t-1, and so on and so forth all the way to the big bang. If so, there is no 'internal willer' for my internal locus of control to reside in.

However, even if that is so, we can parse things in a compatibilist way: 'internal' factors are those which my choices can affect; external factors are those which my choices cannot affect. "Time spent training" is an internal factor as to how fast I can run, as (borrowing Hume), if I wanted to spend more time training, I could spend more time training, and vice versa. In contrast, "Hemiparesis secondary to birth injury" is an external factor, as I had no control over whether it happened to me, and no means of reversing it now. So the first set of answers imply support for the results of our choices being more important; whilst the second set assign more weight to things 'outside our control'.

2) In fairness, there's a pretty good story as to why there should be 'forward action': in the cases where outcome is a mix of 'luck' factors (which are a given to anyone), and 'volitional ones' (which are malleable), people inclined to think the internal ones matter a lot will work hard at them, and so will do better when this is mixed in with the external determinants.

3) This ignores edge cases where we can clearly see the external factors dominate - e.g. getting childhood leukaemia, getting struck by lightning etc. - I guess sensible proponents of an internal locus of control would say that there will be cases like this, but for most people, in most cases, their destiny is in their hands. Hence I focus on population level factors.

4) Ironically, one may wonder to what extent having an internal versus external view is itself an external factor.

An alarming fact about the anti-aging community

30 diegocaleiro 16 February 2015 05:49PM

Past and Present

Ten years ago teenager me was hopeful. And stupid.

The world neglected aging as a disease, and Aubrey had barely started spreading memes, to the point that it was worth it for him to let me work remotely to help with the Methuselah Foundation. They had not even received that initial $1,000,000 donation from an anonymous donor. The Methuselah Prize was running at less than $400,000, if I remember well. Still, I was a believer.

Now we live in the age of Larry Page's Calico, with $100,000,000 trying to tackle the problem, besides many other amazing initiatives: from the research paid for by the Life Extension Foundation and Bill Faloon, to scholars in top universities like Steve Garan and Kenneth Hayworth fixing things from our models of aging to plastination techniques. Yet, I am much more skeptical now.

Individual risk

I am skeptical because I could not find a single individual who has already used a simple technique that could certainly save you many years of healthy life. I could not even find a single individual who looked into it and decided it wasn't worth it, or was too pricey, or something of that sort.

That technique is freezing some of your cells now.

Freezing cells is not a far-future hope; it is something that already exists, and has been possible for decades. The reason you would want to freeze them, in case you haven't thought of it, is that they are getting older every day, so the ones you have now are the youngest ones you'll ever be able to use.

Using these cells to create new organs is not some speculative benefit that depends on medicine and technology continuing to progress according to the law of accelerating returns for another 10 or 30 years. We already know how to make organs out of your cells. Right now. Some organs live longer, some shorter, but it can be done - it is being done, for instance with bladders.

Hope versus Reason

Now, you'd think that if there were an almost non-invasive technique, already shown to work in humans, that can preserve many years of your life and involves only a few trivial inconveniences - compared to changing diet or exercising, for instance - the whole longevist/immortalist crowd would be lining up for it and keeping backup tissue samples all over the place.

Well, I've asked them. I've asked some of the adamant researchers, and I've asked the superwealthy; I've asked the cryonicists and supplement gorgers; I've asked those who work on this 8 hours a day, every day, and I've asked those who pay others to do so. I asked it mostly for selfish reasons. I saw the TED talks by Juan Enriquez and Anthony Atala and thought: hey look, a clearly beneficial expected increase in life length, yay! Let me call someone who found this out before me - anyone, I'm probably the last one, silly me - and fix this.

I've asked them all, and I have nothing to show for it.

My takeaway lesson is: whatever it is that other people are doing to solve their own impending death, they are far from doing it rationally, and maybe most of the money and psychology involved in this whole business is about buying hope, not about staring into the void and finding out the best ways of dodging it. Maybe people are not in fact going to go all-in if the opportunity comes.

How to fix this?

Let me disclose first that I have no idea how to fix this problem. I don't mean the problem of getting all longevists to freeze their cells; I mean the problem of getting them to take information from the world of science and biomedicine and apply it to themselves. To become users of the technology they boast about. To behave rationally in a CFAR or even homo economicus sense.

I was hoping for a grandiose idea in this last paragraph, but it didn't come. I'll go with a quote from an emotional song we sang during last year's Secular Solstice celebration:

Do you realize? that everyone, you know, someday will die...

And instead of sending all your goodbyes

Let them know you realize that life goes fast

It's hard to make the good things last

CFAR fundraiser far from filled; 4 days remaining

42 AnnaSalamon 27 January 2015 07:26AM

We're 4 days from the end of our matching fundraiser, and still only about 1/3rd of the way to our target (and to the point where pledged funds would cease being matched).

If you'd like to support the growth of rationality in the world, do please consider donating, or asking me about any questions/etc. you may have.  I'd love to talk.  I suspect funds donated to CFAR between now and Jan 31 are quite high-impact.

As a random bonus, I promise that if we meet the $120k matching challenge, I'll post at least two posts with some never-before-shared (on here) rationality techniques that we've been playing with around CFAR.

Harper's Magazine article on LW/MIRI/CFAR and Ethereum

44 gwern 12 December 2014 08:34PM

Cover title: “Power and paranoia in Silicon Valley”; article title: “Come with us if you want to live: Among the apocalyptic libertarians of Silicon Valley” (mirrors: 1, 2, 3), by Sam Frank; Harper’s Magazine, January 2015, pg26-36 (~8500 words). The beginning/ending are focused on Ethereum and Vitalik Buterin, so I'll excerpt the LW/MIRI/CFAR-focused middle:

…Blake Masters-the name was too perfect-had, obviously, dedicated himself to the command of self and universe. He did CrossFit and ate Bulletproof, a tech-world variant of the paleo diet. On his Tumblr’s About page, since rewritten, the anti-belief belief systems multiplied, hyperlinked to Wikipedia pages or to the confoundingly scholastic website Less Wrong: “Libertarian (and not convinced there’s irreconcilable fissure between deontological and consequentialist camps). Aspiring rationalist/Bayesian. Secularist/agnostic/ ignostic . . . Hayekian. As important as what we know is what we don’t. Admittedly eccentric.” Then: “Really, really excited to be in Silicon Valley right now, working on fascinating stuff with an amazing team.” I was startled that all these negative ideologies could be condensed so easily into a positive worldview. …I saw the utopianism latent in capitalism-that, as Bernard Mandeville had it three centuries ago, it is a system that manufactures public benefit from private vice. I started CrossFit and began tinkering with my diet. I browsed venal tech-trade publications, and tried and failed to read Less Wrong, which was written as if for aliens.

…I left the auditorium of Alice Tully Hall. Bleary beside the silver coffee urn in the nearly empty lobby, I was buttonholed by a man whose name tag read MICHAEL VASSAR, METAMED research. He wore a black-and-white paisley shirt and a jacket that was slightly too big for him. “What did you think of that talk?” he asked, without introducing himself. “Disorganized, wasn’t it?” A theory of everything followed. Heroes like Elon and Peter (did I have to ask? Musk and Thiel). The relative abilities of physicists and biologists, their standard deviations calculated out loud. How exactly Vassar would save the world. His left eyelid twitched, his full face winced with effort as he told me about his “personal war against the universe.” My brain hurt. I backed away and headed home. But Vassar had spoken like no one I had ever met, and after Kurzweil’s keynote the next morning, I sought him out. He continued as if uninterrupted. Among the acolytes of eternal life, Vassar was an eschatologist. “There are all of these different countdowns going on,” he said. “There’s the countdown to the broad postmodern memeplex undermining our civilization and causing everything to break down, there’s the countdown to the broad modernist memeplex destroying our environment or killing everyone in a nuclear war, and there’s the countdown to the modernist civilization learning to critique itself fully and creating an artificial intelligence that it can’t control. There are so many different - on different time-scales - ways in which the self-modifying intelligent processes that we are embedded in undermine themselves. I’m trying to figure out ways of disentangling all of that. . . .I’m not sure that what I’m trying to do is as hard as founding the Roman Empire or the Catholic Church or something. But it’s harder than people’s normal big-picture ambitions, like making a billion dollars.” Vassar was thirty-four, one year older than I was. He had gone to college at seventeen, and had worked as an actuary, as a teacher, in nanotech, and in the Peace Corps. He’d founded a music-licensing start-up called Sir Groovy. Early in 2012, he had stepped down as president of the Singularity Institute for Artificial Intelligence, now called the Machine Intelligence Research Institute (MIRI), which was created by an autodidact named Eliezer Yudkowsky, who also started Less Wrong. Vassar had left to found MetaMed, a personalized-medicine company, with Jaan Tallinn of Skype and Kazaa, $500,000 from Peter Thiel, and a staff that included young rationalists who had cut their teeth arguing on Yudkowsky’s website. The idea behind MetaMed was to apply rationality to medicine-“rationality” here defined as the ability to properly research, weight, and synthesize the flawed medical information that exists in the world. Prices ranged from $25,000 for a literature review to a few hundred thousand for a personalized study. “We can save lots and lots and lots of lives,” Vassar said (if mostly moneyed ones at first). “But it’s the signal-it’s the ‘Hey! Reason works!’-that matters. . . . It’s not really about medicine.” Our whole society was sick - root, branch, and memeplex - and rationality was the only cure. …I asked Vassar about his friend Yudkowsky. “He has worse aesthetics than I do,” he replied, “and is actually incomprehensibly smart.” We agreed to stay in touch.

One month later, I boarded a plane to San Francisco. I had spent the interim taking a second look at Less Wrong, trying to parse its lore and jargon: “scope insensitivity,” “ugh field,” “affective death spiral,” “typical mind fallacy,” “counterfactual mugging,” “Roko’s basilisk.” When I arrived at the MIRI offices in Berkeley, young men were sprawled on beanbags, surrounded by whiteboards half black with equations. I had come costumed in a Fermat’s Last Theorem T-shirt, a summary of the proof on the front and a bibliography on the back, printed for the number-theory camp I had attended at fifteen. Yudkowsky arrived late. He led me to an empty office where we sat down in mismatched chairs. He wore glasses, had a short, dark beard, and his heavy body seemed slightly alien to him. I asked what he was working on. “Should I assume that your shirt is an accurate reflection of your abilities,” he asked, “and start blabbing math at you?” Eight minutes of probability and game theory followed. Cogitating before me, he kept grimacing as if not quite in control of his face. “In the very long run, obviously, you want to solve all the problems associated with having a stable, self-improving, beneficial-slash-benevolent AI, and then you want to build one.” What happens if an artificial intelligence begins improving itself, changing its own source code, until it rapidly becomes - foom! is Yudkowsky’s preferred expression - orders of magnitude more intelligent than we are? A canonical thought experiment devised by Oxford philosopher Nick Bostrom in 2003 suggests that even a mundane, industrial sort of AI might kill us. Bostrom posited a “superintelligence whose top goal is the manufacturing of paper-clips.” For this AI, known fondly on Less Wrong as Clippy, self-improvement might entail rearranging the atoms in our bodies, and then in the universe - and so we, and everything else, end up as office supplies. Nothing so misanthropic as Skynet is required, only indifference to humanity. What is urgently needed, then, claims Yudkowsky, is an AI that shares our values and goals. This, in turn, requires a cadre of highly rational mathematicians, philosophers, and programmers to solve the problem of “friendly” AI - and, incidentally, the problem of a universal human ethics - before an indifferent, unfriendly AI escapes into the wild.

Among those who study artificial intelligence, there’s no consensus on either point: that an intelligence explosion is possible (rather than, for instance, a proliferation of weaker, more limited forms of AI) or that a heroic team of rationalists is the best defense in the event. That MIRI has as much support as it does (in 2012, the institute’s annual revenue broke $1 million for the first time) is a testament to Yudkowsky’s rhetorical ability as much as to any technical skill. Over the course of a decade, his writing, along with that of Bostrom and a handful of others, has impressed the dangers of unfriendly AI on a growing number of people in the tech world and beyond. In August, after reading Superintelligence, Bostrom’s new book, Elon Musk tweeted, “Hope we’re not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable.” In 2000, when Yudkowsky was twenty, he founded the Singularity Institute with the support of a few people he’d met at the Foresight Institute, a Palo Alto nanotech think tank. He had already written papers on “The Plan to Singularity” and “Coding a Transhuman AI,” and posted an autobiography on his website, since removed, called “Eliezer, the Person.” It recounted a breakdown of will when he was eleven and a half: “I can’t do anything. That’s the phrase I used then.” He dropped out before high school and taught himself a mess of evolutionary psychology and cognitive science. He began to “neuro-hack” himself, systematizing his introspection to evade his cognitive quirks. Yudkowsky believed he could hasten the singularity by twenty years, creating a superhuman intelligence and saving humankind in the process. He met Thiel at a Foresight Institute dinner in 2005 and invited him to speak at the first annual Singularity Summit. The institute’s paid staff grew. In 2006, Yudkowsky began writing a hydra-headed series of blog posts: science-fictionish parables, thought experiments, and explainers encompassing cognitive biases, self-improvement, and many-worlds quantum mechanics that funneled lay readers into his theory of friendly AI. Rationality workshops and Meetups began soon after. In 2009, the blog posts became what he called Sequences on a new website: Less Wrong. The next year, Yudkowsky began publishing Harry Potter and the Methods of Rationality at fanfiction.net. The Harry Potter category is the site’s most popular, with almost 700,000 stories; of these, HPMoR is the most reviewed and the second-most favorited. The last comment that the programmer and activist Aaron Swartz left on Reddit before his suicide in 2013 was on /r/hpmor. In Yudkowsky’s telling, Harry is not only a magician but also a scientist, and he needs just one school year to accomplish what takes canon-Harry seven. HPMoR is serialized in arcs, like a TV show, and runs to a few thousand pages when printed; the book is still unfinished. Yudkowsky and I were talking about literature, and Swartz, when a college student wandered in. Would Eliezer sign his copy of HPMoR? “But you have to, like, write something,” he said. “You have to write, ‘I am who I am.’ So, ‘I am who I am’ and then sign it.” “Alrighty,” Yudkowsky said, signed, continued. “Have you actually read Methods of Rationality at all?” he asked me. “I take it not.” (I’d been found out.) “I don’t know what sort of a deadline you’re on, but you might consider taking a look at that.” (I had taken a look, and hated the little I’d managed.) 
“It has a legendary nerd-sniping effect on some people, so be warned. That is, it causes you to read it for sixty hours straight.”

The nerd-sniping effect is real enough. Of the 1,636 people who responded to a 2013 survey of Less Wrong’s readers, one quarter had found the site thanks to HPMoR, and many more had read the book. Their average age was 27.4, their average IQ 138.2. Men made up 88.8% of respondents; 78.7% were straight, 1.5% transgender, 54.7 % American, 89.3% atheist or agnostic. The catastrophes they thought most likely to wipe out at least 90% of humanity before the year 2100 were, in descending order, pandemic (bioengineered), environmental collapse, unfriendly AI, nuclear war, pandemic (natural), economic/political collapse, asteroid, nanotech/gray goo. Forty-two people, 2.6 %, called themselves futarchists, after an idea from Robin Hanson, an economist and Yudkowsky’s former coblogger, for reengineering democracy into a set of prediction markets in which speculators can bet on the best policies. Forty people called themselves reactionaries, a grab bag of former libertarians, ethno-nationalists, Social Darwinists, scientific racists, patriarchists, pickup artists, and atavistic “traditionalists,” who Internet-argue about antidemocratic futures, plumping variously for fascism or monarchism or corporatism or rule by an all-powerful, gold-seeking alien named Fnargl who will free the markets and stabilize everything else. At the bottom of each year’s list are suggestive statistical irrelevancies: “every optimizing system’s a dictator and i’m not sure which one i want in charge,” “Autocracy (important: myself as autocrat),” “Bayesian (aspiring) Rationalist. Technocratic. Human-centric Extropian Coherent Extrapolated Volition.” “Bayesian” refers to Bayes’s Theorem, a mathematical formula that describes uncertainty in probabilistic terms, telling you how much to update your beliefs when given new information. This is a formalization and calibration of the way we operate naturally, but “Bayesian” has a special status in the rationalist community because it’s the least imperfect way to think. “Extropy,” the antonym of “entropy,” is a decades-old doctrine of continuous human improvement, and “coherent extrapolated volition” is one of Yudkowsky’s pet concepts for friendly artificial intelligence. Rather than our having to solve moral philosophy in order to arrive at a complete human goal structure, C.E.V. would computationally simulate eons of moral progress, like some kind of Whiggish Pangloss machine. As Yudkowsky wrote in 2004, “In poetic terms, our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together.” Yet can even a single human’s volition cohere or compute in this way, let alone humanity’s? We stood up to leave the room. Yudkowsky stopped me and said I might want to turn my recorder on again; he had a final thought. “We’re part of the continuation of the Enlightenment, the Old Enlightenment. This is the New Enlightenment,” he said. “Old project’s finished. We actually have science now, now we have the next part of the Enlightenment project.”

In 2013, the Singularity Institute changed its name to the Machine Intelligence Research Institute. Whereas MIRI aims to ensure human-friendly artificial intelligence, an associated program, the Center for Applied Rationality, helps humans optimize their own minds, in accordance with Bayes’s Theorem. The day after I met Yudkowsky, I returned to Berkeley for one of CFAR’s long-weekend workshops. The color scheme at the Rose Garden Inn was red and green, and everything was brocaded. The attendees were mostly in their twenties: mathematicians, software engineers, quants, a scientist studying soot, employees of Google and Facebook, an eighteen-year-old Thiel Fellow who’d been paid $100,000 to leave Boston College and start a company, professional atheists, a Mormon turned atheist, an atheist turned Catholic, an Objectivist who was photographed at the premiere of Atlas Shrugged II: The Strike. There were about three men for every woman. At the Friday-night meet and greet, I talked with Benja, a German who was studying math and behavioral biology at the University of Bristol, whom I had spotted at MIRI the day before. He was in his early thirties and quite tall, with bad posture and a ponytail past his shoulders. He wore socks with sandals, and worried a paper cup as we talked. Benja had felt death was terrible since he was a small child, and wanted his aging parents to sign up for cryonics, if he could figure out how to pay for it on a grad-student stipend. He was unsure about the risks from unfriendly AI - “There is a part of my brain,” he said, “that sort of goes, like, ‘This is crazy talk; that’s not going to happen’” - but the probabilities had persuaded him. He said there was only about a 30% chance that we could make it another century without an intelligence explosion. He was at CFAR to stop procrastinating. Julia Galef, CFAR’s president and cofounder, began a session on Saturday morning with the first of many brain-as-computer metaphors. We are “running rationality on human hardware,” she said, not supercomputers, so the goal was to become incrementally more self-reflective and Bayesian: not perfectly rational agents, but “agent-y.” The workshop’s classes lasted six or so hours a day; activities and conversations went well into the night. We got a condensed treatment of contemporary neuroscience that focused on hacking our brains’ various systems and modules, and attended sessions on habit training, urge propagation, and delegating to future selves. We heard a lot about Daniel Kahneman, the Nobel Prize-winning psychologist whose work on cognitive heuristics and biases demonstrated many of the ways we are irrational. Geoff Anders, the founder of Leverage Research, a “meta-level nonprofit” funded by Thiel, taught a class on goal factoring, a process of introspection that, after many tens of hours, maps out every one of your goals down to root-level motivations-the unchangeable “intrinsic goods,” around which you can rebuild your life. Goal factoring is an application of Connection Theory, Anders’s model of human psychology, which he developed as a Rutgers philosophy student disserting on Descartes, and Connection Theory is just the start of a universal renovation. Leverage Research has a master plan that, in the most recent public version, consists of nearly 300 steps. 
It begins from first principles and scales up from there: “Initiate a philosophical investigation of philosophical method”; “Discover a sufficiently good philosophical method”; have 2,000-plus “actively and stably benevolent people successfully seek enough power to be able to stably guide the world”; “People achieve their ultimate goals as far as possible without harming others”; “We have an optimal world”; “Done.”

On Saturday night, Anders left the Rose Garden Inn early to supervise a polyphasic-sleep experiment that some Leverage staff members were conducting on themselves. It was a schedule called the Everyman 3, which compresses sleep into three twenty-minute REM naps each day and three hours at night for slow-wave. Anders was already polyphasic himself. Operating by the lights of his own best practices, goal-factored, coherent, and connected, he was able to work 105 hours a week on world optimization.

For the rest of us, for me, these were distant aspirations. We were nerdy and unperfected. There was intense discussion at every free moment, and a genuine interest in new ideas, if especially in testable, verifiable ones. There was joy in meeting peers after years of isolation. CFAR was also insular, overhygienic, and witheringly focused on productivity. Almost everyone found politics to be tribal and viscerally upsetting. Discussions quickly turned back to philosophy and math.

By Monday afternoon, things were wrapping up. Andrew Critch, a CFAR cofounder, gave a final speech in the lounge: “Remember how you got started on this path. Think about what was the time for you when you first asked yourself, ‘How do I work?’ and ‘How do I want to work?’ and ‘What can I do about that?’ . . . Think about how many people throughout history could have had that moment and not been able to do anything about it because they didn’t know the stuff we do now. I find this very upsetting to think about. It could have been really hard. A lot harder.” He was crying. “I kind of want to be grateful that we’re now, and we can share this knowledge and stand on the shoulders of giants like Daniel Kahneman . . . I just want to be grateful for that. . . . And because of those giants, the kinds of conversations we can have here now, with, like, psychology and, like, algorithms in the same paragraph, to me it feels like a new frontier. . . . Be explorers; take advantage of this vast new landscape that’s been opened up to us in this time and this place; and bear the torch of applied rationality like brave explorers. And then, like, keep in touch by email.”

The workshop attendees put giant Post-its on the walls expressing the lessons they hoped to take with them. A blue one read RATIONALITY IS SYSTEMATIZED WINNING. Above it, in pink: THERE ARE OTHER PEOPLE WHO THINK LIKE ME. I AM NOT ALONE.

That night, there was a party. Alumni were invited. Networking was encouraged. Post-its proliferated; one, by the beer cooler, read SLIGHTLY ADDICTIVE. SLIGHTLY MIND-ALTERING. Another, a few feet to the right, over a double stack of bound copies of Harry Potter and the Methods of Rationality: VERY ADDICTIVE. VERY MIND-ALTERING.

I talked to one of my roommates, a Google scientist who worked on neural nets. The CFAR workshop was just a whim to him, a tourist weekend. “They’re the nicest people you’d ever meet,” he said, but then he qualified the compliment. “Look around. If they were effective, rational people, would they be here? Something a little weird, no?”

I walked outside for air. Michael Vassar, in a clinging red sweater, was talking to an actuary from Florida. They discussed timeless decision theory (approximately: intelligent agents should make decisions on the basis of the futures, or possible worlds, that they predict their decisions will create) and the simulation argument (essentially: we’re living in one), which Vassar traced to Schopenhauer. He recited lines from Kipling’s “If-” in no particular order and advised the actuary on how to change his life: Become a pro poker player with the $100k he had in the bank, then hit the Magic: The Gathering pro circuit; make more money; develop more rationality skills; launch the first Costco in Northern Europe.

I asked Vassar what was happening at MetaMed. He told me that he was raising money, and was in discussions with a big HMO. He wanted to show up Peter Thiel for not investing more than $500,000. “I’m basically hoping that I can run the largest convertible-debt offering in the history of finance, and I think it’s kind of reasonable,” he said. “I like Peter. I just would like him to notice that he made a mistake . . . I imagine a hundred million or a billion will cause him to notice . . . I’d like to have a pi-billion-dollar valuation.”

I wondered whether Vassar was drunk. He was about to drive one of his coworkers, a young woman named Alyssa, home, and he asked whether I would join them. I sat silently in the back of his musty BMW as they talked about potential investors and hires. Vassar almost ran a red light. After Alyssa got out, I rode shotgun, and we headed back to the hotel.

It was getting late. I asked him about the rationalist community. Were they really going to save the world? From what? “Imagine there is a set of skills,” he said. “There is a myth that they are possessed by the whole population, and there is a cynical myth that they’re possessed by 10% of the population. They’ve actually been wiped out in all but about one person in three thousand.” It is important, Vassar said, that his people, “the fragments of the world,” lead the way during “the fairly predictable, fairly total cultural transition that will predictably take place between 2020 and 2035 or so.”

We pulled up outside the Rose Garden Inn. He continued: “You have these weird phenomena like Occupy where people are protesting with no goals, no theory of how the world is, around which they can structure a protest. Basically this incredibly, weirdly, thoroughly disempowered group of people will have to inherit the power of the world anyway, because sooner or later everyone older is going to be too old and too technologically obsolete and too bankrupt. The old institutions may largely break down or they may be handed over, but either way they can’t just freeze. These people are going to be in charge, and it would be helpful if they, as they come into their own, crystallize an identity that contains certain cultural strengths like argument and reason.”

I didn’t argue with him, except to press, gently, on his particular form of elitism. His rationalism seemed so limited to me, so incomplete. “It is unfortunate,” he said, “that we are in a situation where our cultural heritage is possessed only by people who are extremely unappealing to most of the population.” That hadn’t been what I’d meant. I had meant rationalism as itself a failure of the imagination. “The current ecosystem is so totally fucked up,” Vassar said. “But if you have conversations here” - he gestured at the hotel - “people change their mind and learn and update and change their behaviors in response to the things they say and learn. That never happens anywhere else.”

In a hallway of the Rose Garden Inn, a former high-frequency trader started arguing with Vassar and Anna Salamon, CFAR’s executive director, about whether people optimize for hedons or utilons or neither, about mountain climbers and other high-end masochists, about whether world happiness is currently net positive or negative, increasing or decreasing. Vassar was eating and drinking everything within reach. My recording ends with someone saying, “I just heard ‘hedons’ and then was going to ask whether anyone wants to get high,” and Vassar replying, “Ah, that’s a good point.” Other voices: “When in California . . .” “We are in California, yes.”

…Back on the East Coast, summer turned into fall, and I took another shot at reading Yudkowsky’s Harry Potter fanfic. It’s not what I would call a novel, exactly; rather, it’s an unending, self-satisfied parable about rationality and transhumanism, with jokes.

…I flew back to San Francisco, and my friend Courtney and I drove to a cul-de-sac in Atherton, at the end of which sat the promised mansion. It had been repurposed as cohousing for children who were trying to build the future: start-up founders, singularitarians, a teenage venture capitalist. The woman who coined the term “open source” was there, along with a Less Wronger and Thiel Capital employee who had renamed himself Eden. The Day of the Idealist was a day for self-actualization and networking, like the CFAR workshop without the rigor. We were to set “mega goals” and pick a “core good” to build on in the coming year. Everyone was a capitalist; everyone was postpolitical. I squabbled with a young man in a Tesla jacket about anti-Google activism. No one has a right to housing, he said; programmers are the people who matter; the protesters’ antagonistic tactics had totally discredited them.

…Thiel and Vassar and Yudkowsky, for all their far-out rhetoric, take it on faith that corporate capitalism, unchecked just a little longer, will bring about this era of widespread abundance. Progress, Thiel thinks, is threatened mostly by the political power of what he calls the “unthinking demos.”


Pointer thanks to /u/Vulture.

My experience of the recent CFAR workshop

29 Kaj_Sotala 27 November 2014 04:17PM

Originally posted at my blog.

---

I just got home from a four-day rationality workshop in England that was organized by the Center For Applied Rationality (CFAR). It covered a lot of content, but if I had to choose a single theme that united most of it, it was listening to your emotions.

That might sound like a weird focus for a rationality workshop, but cognitive science has shown that the intuitive and emotional part of the mind (”System 1”) is both in charge of most of our behavior and carries out a great deal of valuable information-processing of its own (it’s great at pattern-matching, for example). Much of the workshop material was aimed at helping people reach a greater harmony between their System 1 and their verbal, logical System 2. Many of people’s motivational troubles come from the goals of their two systems being somehow at odds with each other, so we were taught to get the two systems into a better dialogue, harmonizing their desires and making it easier for information to cross from one system to the other and back.

To give a more concrete example, there was the technique of goal factoring. You take a behavior that you often do but aren’t sure why, or which you feel might be wasted time. Suppose that you spend a lot of time answering e-mails that aren’t actually very important. You start by asking yourself: what’s good about this activity that makes me do it? Then you try to listen to your feelings in response to that question, and write down what you perceive. Maybe you conclude that it makes you feel productive, and it gives you a break from tasks that require more energy to do.

Next you look at the things that you came up with, and consider whether there’s a better way to accomplish them. There are two possible outcomes here. You may conclude that the behavior is an important and valuable one after all, meaning that you can now be more motivated to do it. Alternatively, you may find that there would be better ways of accomplishing all the goals that the behavior was aiming for. Maybe taking a walk would make for a better break, and answering more urgent e-mails would provide more value. If you were previously using two hours per day on the unimportant e-mails, you could now achieve more in terms of both relaxation and actual productivity by spending an hour on a walk and an hour on the important e-mails.

At this point, you consider your new plan, and again ask yourself: does this feel right? Is this motivating? Are there any slight pangs of regret about giving up my old behavior? If you still don’t want to shift your behavior, chances are that you still have some motive for doing this thing that you have missed, and the feelings of productivity and relaxation aren’t quite enough to cover it. In that case, go back to the step of listing motives.

Or, if you feel happy and content about the new direction that you’ve chosen, victory!

Notice how this technique is all about moving information from one system to another. System 2 notices that you’re doing something but it isn’t sure why that is, so it asks System 1 for the reasons. System 1 answers, ”here’s what I’m trying to do for us, what do you think?” Then System 2 does what it’s best at, taking an analytic approach and possibly coming up with better ways of achieving the different motives. Then it gives that alternative approach back to System 1 and asks, would this work? Would this give us everything that we want? If System 1 says no, System 2 gets back to work, and the dialogue continues until both are happy.
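To make the loop structure concrete, here is a minimal sketch of that dialogue in Python. It is only an illustration: the function and its prompts are hypothetical, the introspective steps are stubbed out as interactive questions, and nothing here is CFAR’s actual procedure.

    # Hypothetical sketch of the goal-factoring dialogue loop.
    def goal_factor(behavior: str) -> None:
        motives = []
        while True:
            motive = input(f"What's good about '{behavior}'? (blank to stop) ")
            if not motive:
                break
            motives.append(motive)  # System 1 reports a motive

        while True:
            print("Motives so far:", motives)
            plan = input("Propose a plan that serves all of these motives: ")
            # System 1 check: does the plan feel right, with no pangs of regret?
            if input(f"Does '{plan}' feel right? (y/n) ") == "y":
                print("Victory!")
                return
            # A lingering objection usually means a motive is still missing.
            motives.append(input("What motive did the plan miss? "))

    goal_factor("answering unimportant e-mails")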

Again, I emphasize the collaborative aspect between the two systems. They’re allies working for common goals, not enemies. Too many people tend towards one of two extremes: either thinking that their emotions are stupid and something to suppress, or completely disdaining the use of logical analysis. Both extremes miss out on the strengths of the system that is neglected, and make it unlikely for the person to get everything that they want.

As I was heading back from the workshop, I considered doing something that I noticed feeling uncomfortable about. Previous meditation experience had already made me more likely to just attend to the discomfort rather than trying to push it away, but inspired by the workshop, I went a bit further. I took the discomfort, considered what my System 1 might be trying to warn me about, and concluded that it might be better to err on the side of caution this time around. Finally – and this wasn’t a thing from the workshop, it was something I invited on the spot – I summoned a feeling of gratitude and thanked my System 1 for having been alert and giving me the information. That might have been a little overblown, since neither system should actually be sentient by itself, but it still felt like a good mindset to cultivate.

Although it was never mentioned in the workshop, what comes to mind is the concept of wu-wei from Chinese philosophy, a state of ”effortless doing” where all of your desires are perfectly aligned and everything comes naturally. In the ideal form, you never need to force yourself to do something you don’t want to do, or to expend willpower on an unpleasant task. Either you want to do something and do it, or you don’t want to do it and don’t.

A large number of the workshop’s classes – goal factoring, aversion factoring and calibration, urge propagation, comfort zone expansion, inner simulation, making hard decisions, Hamming questions, againstness – were aimed at more or less this. Find out what System 1 wants, find out what System 2 wants, dialogue, aim for a harmonious state between the two. Then there were a smaller number of other classes that might be summarized as being about problem-solving in general.

The classes about the different techniques were interspersed with ”debugging sessions” of various kinds. In the beginning of the workshop, we listed different bugs in our lives – anything about our lives that we weren’t happy with, with the suggested example bugs being things like ”every time I talk to so-and-so I end up in an argument”, ”I think that I ‘should’ do something but don’t really want to”, and ”I’m working on my dissertation and everything is going fine – but when people ask me why I’m doing a PhD, I have a hard time remembering why I wanted to”. After we’d had a class or a few, we’d apply the techniques we’d learned to solving those bugs, either individually, in pairs, or small groups with a staff member or volunteer TA assisting us. Then a few more classes on techniques and more debugging, classes and debugging, and so on.

The debugging sessions were interesting. Often when you ask someone for help on something, they will answer with direct object-level suggestions – if your problem is that you’re underweight and you would like to gain some weight, try this or that. Here, the staff and TAs would eventually get to the object-level advice as well, but first they would ask – why don’t you want to be underweight? Okay, you say that you’re not completely sure but based on the other things that you said, here’s a stupid and quite certainly wrong theory of what your underlying reasons for it might be; how does that theory feel? Okay, you said that it’s mostly on the right track, so now tell me what’s wrong with it? If you feel that gaining weight would make you more attractive, do you feel that this is the most effective way of achieving that?

Only after you and the facilitator had reached some kind of consensus on why you thought that something was a bug, and made sure that solving the problem you were discussing was actually the best way to address those reasons, would it be time for the more direct advice.

At first, I had felt that I didn’t have very many bugs to address, and that I had mostly gotten reasonable advice for them that I might try. But then the workshop continued, and there were more debugging sessions, and I had to keep coming up with bugs. And then, under the gentle poking of others, I started finding the underlying, deep-seated problems, and some things that had been motivating my actions for the last several months without me always fully realizing it. At the end, when I looked at my initial list of bugs that I’d come up with in the beginning, most of the first items on the list looked hopelessly shallow compared to the later ones.

Often in life you feel that your problems are silly, and that you are affected by small stupid things that ”shouldn’t” be a problem. There was none of that at the workshop: it was tacitly acknowledged that being unreasonably hindered by ”stupid” problems is just something that brains tend to do.  Valentine, one of the staff members, gave a powerful speech about ”alienated birthrights” – things that all human beings should be capable of engaging in and enjoying, but which have been taken from people because they have internalized beliefs and identities that say things like ”I cannot do that” or ”I am bad at that”. Things like singing, dancing, athletics, mathematics, romantic relationships, actually understanding the world, heroism, tackling challenging problems. To use his analogy, we might not be good at these things at first, and may have to grow into them and master them the way that a toddler grows to master her body. And like a toddler who’s taking her early steps, we may flail around and look silly when we first start doing them, but these are capacities that – barring any actual disabilities – are a part of our birthright as human beings, which anyone can ultimately learn to master.

Then there were the people, and the general atmosphere of the workshop. People were intelligent, open, and motivated to work on their problems, help each other, and grow as human beings. After a long, cognitively and emotionally exhausting day at the workshop, people would then shift to entertainment ranging from wrestling to telling funny stories of their lives to Magic: the Gathering. (The game of ”bunny” was an actual scheduled event on the official agenda.) And just plain talk with each other, in a supportive, non-judgemental atmosphere. It was the people and the atmosphere that made me the most reluctant to leave, and I miss them already.

Would I recommend CFAR’s workshops to others? Although my above description may sound rather gushingly positive, my answer still needs to be a qualified ”mmmaybe”. The full price tag is quite hefty, though financial aid is available and I personally got a very substantial scholarship, with the agreement that I would pay it at a later time when I could actually afford it.

Still, the biggest question is, will the changes from the workshop stick? I feel like I have gained a valuable new perspective on emotions, a number of useful techniques, made new friends, strengthened my belief that I can do the things that I really set my mind on, and refined the ways by which I think of the world and any problems that I might have – but aside from the new friends, all of that will be worthless if it fades away in a week. If it does, I would have to judge even my steeply discounted price as ”not worth it”. That said, the workshops do have a money-back guarantee if you’re unhappy with the results, so if it really feels like it wasn’t worth it, I can simply choose to not pay. And if all the new things do end up sticking, it might still turn out that it would have been worth paying even the full, non-discounted price.

CFAR does have a few ways by which they try to make the things stick. There will be Skype follow-ups with their staff for talking about how things have been going since the workshop. There is a mailing list for workshop alumni, and occasional events, though the physical events are very US-centric (and in particular, San Francisco Bay Area-centric).

The techniques that we were taught are still all more or less experimental, and are being constantly refined and revised according to people’s experiences. I have already been thinking of a new skill that I had been playing with for a while before the workshop, and which has a bit of that ”CFAR feel” – I will aim to have it written up soon and sent to the others, and maybe it will eventually make its way to the curriculum of a future workshop. That should help keep me engaged as well.

We shall see. Until then, as they say in CFAR – to victory!

The January 2013 CFAR workshop: one-year retrospective

34 Qiaochu_Yuan 18 February 2014 06:41PM

About a year ago, I attended my first CFAR workshop and wrote a post about it here. I mentioned in that post that it was too soon for me to tell if the workshop would have a large positive impact on my life. In the comments to that post, I was asked to follow up on that post in a year to better evaluate that impact. So here we are!

Very short summary: overall I think the workshop had a large and persistent positive impact on my life. 

Important caveat

However, anyone using this post to evaluate the value of going to a CFAR workshop themselves should be aware that I'm local to Berkeley and have had many opportunities to stay connected to CFAR and the rationalist community. More specifically, in addition to the January workshop, I also

  • visited the March workshop (and possibly others),
  • attended various social events held by members of the community,
  • taught at the July workshop, and
  • taught at SPARC.

All of these experiences helped me digest and reinforce the workshop material (which was also improving over time); a typical workshop participant might not have these advantages.

Answering a question

pewpewlasergun wanted me to answer the following question:

I'd like to know how many techniques you were taught at the meetup you still use regularly. Also which has had the largest effect on your life.

The short answer is: in some sense very few, but a lot of the value I got out of attending the workshop didn't come from specific techniques. 

In more detail: to be honest, many of the specific techniques are kind of a chore to use (at least as of January 2013). I experimented with a good number of them in the months after the workshop, and most of them haven't stuck (but that isn't so bad; the cost of trying a technique and finding that it doesn't work for you is low, while the benefit of trying a technique and finding that it does work for you can be quite high!). One that has stuck is the idea of a next action, which I've found incredibly useful. Next actions are the things that to-do list items should be, say in the context of using Remember The Milk. Many to-do list items you might be tempted to write down are difficult to actually do because they're either too vague or too big and hence trigger ugh fields. For example, you might have an item like

  • Do my taxes

that you don't get around to until right before you have to because you have an ugh field around doing your taxes. This item is both too vague and too big: instead of writing this down, write down the next physical action you need to take to make progress on this item, which might be something more like

  • Find tax forms and put them on desk

which is both concrete and small. Thinking in terms of next actions has been a huge upgrade to my GTD system (as was Workflowy, which I also started using because of the workshop) and I do it constantly. 
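To make the distinction mechanical, here is a small Python heuristic for flagging items that are too vague or too big to be next actions. The vague-verb list and the time threshold are illustrative guesses, not part of GTD or the workshop material.

    # Hypothetical heuristic for spotting to-do items that aren't next actions.
    VAGUE_VERBS = {"do", "handle", "figure", "deal", "work"}

    def looks_like_next_action(item: str, minutes_estimate: int) -> bool:
        first_word = item.split()[0].lower()
        concrete = first_word not in VAGUE_VERBS  # starts with a physical action
        small = minutes_estimate <= 20            # finishable in one sitting
        return concrete and small

    print(looks_like_next_action("Do my taxes", 240))                        # False
    print(looks_like_next_action("Find tax forms and put them on desk", 5))  # True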

But as I mentioned, a lot of the value I got out of attending the workshop was not from specific techniques. Much of the value comes from spending time with the workshop instructors and participants, which had effects that I find hard to summarize, but I'll try to describe some of them below: 

Emotional attitudes

The workshop readjusted my emotional attitudes towards several things for the better, and at several meta levels. For example, a short conversation with a workshop alum completely readjusted my emotional attitude towards both nutrition and exercise, and I started paying more attention to what I ate and going to the gym (albeit sporadically) for the first time in my life not long afterwards. I lost about 15 pounds this way (mostly from the eating part, not the gym part, I think). 

At a higher meta level, I did a fair amount of experimenting with various lifestyle changes (cold showers, not shampooing) after the workshop and overall they had the effect of readjusting my emotional attitude towards change. I find it generally easier to change my behavior than I used to because I've had a lot of practice at it lately, and am more enthusiastic about the prospect of such changes. 

(Incidentally, I think emotional attitude adjustment is an underrated component of causing people to change their behavior, at least here on LW.)

Using all of my strength

The workshop is the first place I really understood, on a gut level, that I could use my brain to think about something other than math. It sounds silly when I phrase it like that, but at some point in the past I had incorporated into my identity that I was good at math but absentminded and silly about real-world matters, and I used it as an excuse not to fully engage intellectually with anything that wasn't math, especially anything practical. One way or another the workshop helped me realize this, and I stopped thinking this way. 

The result is that I constantly apply optimization power to situations I wouldn't have even tried to apply optimization power to before. For example, today I was trying to figure out why the water in my bathroom sink was draining so slowly. At first I thought it was because the strainer had become clogged with gunk, so I cleaned the strainer, but then I found out that even with the strainer removed the water was still draining slowly. In the past I might've given up here. Instead I looked around for something that would fit farther into the sink than my fingers and saw the handle of my plunger. I pumped the handle into the sink a few times and some extra gunk I hadn't known was there came out. The sink is fine now. (This might seem small to people who are more domestically talented than me, but trust me when I say I wasn't doing stuff like this before last year.)

Reflection and repair

Thanks to the workshop, my GTD system is now robust enough to consistently enable me to reflect on and repair my life (including my GTD system). For example, I'm quicker to attempt to deal with minor medical problems I have than I used to be. I also think more often about what I'm doing and whether I could be doing something better. In this regard I pay a lot of attention in particular to what habits I'm forming, although I don't use the specific techniques in the relevant CFAR unit.

For example, at some point I had recorded in RTM that I was frustrated by the sensation of hours going by without remembering how I had spent them (usually because I was mindlessly browsing the internet). In response, I started keeping a record of what I was doing every half hour and categorizing each half-hour according to a combination of how productively and how intentionally I spent it (in the first iteration it was just how productively I spent it, but I found that this was making me feel too guilty about relaxing). For example:

  • a half-hour intentionally spent reading a paper is marked green.
  • a half-hour half-spent writing up solutions to a problem set and half-spent on Facebook is marked yellow. 
  • a half-hour intentionally spent playing a video game is marked with no color.
  • a half-hour mindlessly browsing the internet when I had intended to do work is marked red. 

The act of doing this every half hour itself helps make me more mindful about how I spend my time, but having a record of how I spend my time has also helped me notice interesting things, like how less of my time is under my direct control than I had thought (but instead is taken up by classes, commuting, eating, etc.). It's also easier for me to get into a success spiral when I see a lot of green. 
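In code, that scheme might look something like the sketch below; the numeric cutoffs are assumptions for illustration, since the post only defines the four categories.

    # Hypothetical rendering of the half-hour color-coding scheme.
    # productivity is a 0-1 judgment; the cutoffs below are illustrative.
    def color(productivity: float, intentional: bool) -> str:
        if intentional and productivity >= 0.8:
            return "green"     # intentionally reading a paper
        if productivity >= 0.4:
            return "yellow"    # half problem set, half Facebook
        if intentional:
            return "no color"  # intentionally playing a video game
        return "red"           # mindless browsing when work was intended

    for slot, p, i in [("09:00", 1.0, True), ("21:30", 0.0, False)]:
        print(slot, color(p, i))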

Stimulation

Being around workshop instructors and participants is consistently intellectually stimulating. I don't have a tactful way of saying what I'm about to say next, but: two effects of this are that I think more interesting thoughts than I used to and also that I'm funnier than I used to be. (I realize that these are both hard to quantify.) 

etc.

I worry that I haven't given a complete picture here, but hopefully anything I've left out will be brought up in the comments one way or another. (Edit: this totally happened! Please read Anna Salamon's comment below.) 

Takeaway for prospective workshop attendees

I'm not actually sure what you should take away from all this if your goal is to figure out whether you should attend a workshop yourself. My thoughts are roughly this: I think attending a workshop is potentially high-value and therefore that even talking to CFAR about any questions you might have is potentially high-value, in addition to being relatively low-cost. If you think there's even a small chance you could get a lot of value out of attending a workshop I recommend that you at least take that one step. 

CFAR is looking for a videographer for next Wednesday

4 Academian 08 October 2013 05:16AM

Hi all, CFAR is looking for a videographer in the Bay Area to shoot and edit a 1-minute video introducing us.  Do you know anyone?

If so, please send an email to them and me (critch@rationality.org) that introduces us!  

We'll need to shoot the video on Wednesday, Oct 16, or possibly Thursday, Oct 17, and have it edited within about 24 hours.

Thanks for any help tracking someone down!

Sincerely,

--
Critch

Developmental Thinking Shout-out to CFAR

16 MarkL 03 May 2013 01:46AM

Preamble

Before I make my main point, I want to acknowledge that curriculum development is hard. It's even harder when you're trying to teach the unteachable. And it's even harder when you're in the process of bootstrapping. I am aware of the Kahneman inside/outside curriculum design story. And, I myself have taught 200+ hours of my own computer science curricula to middle-school students. So this "open letter" is not some sort of criticism of CFAR's curriculum; it's a "Hey, check out this cool stuff eventually when you have time" letter. I just wanted to put all this out there, to possibly influence the next five years of CFAR.

Curriculum development is hard.

So, anyway, I don't personally know any of the people involved in CFAR, but I do know you're all great. 

 

A case for developmental thinking

The point of this post is to make a case for CFAR to become "developmentally aware." Massive amounts of quality research have gone into describing the differences between 1) children, 2) adults, and 3) expert or developmentally advanced adults. I haven't (yet?) seen any evidence of awareness of this research in CFAR's materials. (I haven't attended a CFAR workshop, but I've flipped through some of the more recent stuff.)

Developmental thinking is a different approach than, e.g., cataloguing biases, promoting real-time awareness of them, and having a toolbox of de-biasing strategies and algorithms. Developmental literature gives clues to the precise cognitive operations that are painstakingly acquired over an entire lifetime, in a more fine-grained way than is possible when studying, say, already-expert performers or cognitive bias literature. I think developmental thinking goes deeper than "toolbox thinking" (straw!) and is an angle of approach for teaching the unteachable.

Below is an annotated bibliography of some of my personal touchstones in the development literature, books that are foundational or books that synthesize decades of research about the developmental aspects of entrepreneurial, executive, educational, and scientific thinking, as well as the developmental aspects of emotion and cognition. Note that this is a personal, idiosyncratic, non-exhaustive list.

And, to qualify, I have epistemological and ontological issues with plenty of the stuff below. But some of these authors are brilliant, and the rest are smart, meticulous, and values-driven. Lots of these authors deeply care about empirically identifying, targeting, accelerating, and stabilizing skills ahead of schedule or helping skills manifest when they wouldn't have otherwise appeared at all. Quibbles and double-takes aside, there is lots of signal, here, even if it's not seated in a modern framework (which would of course increase the value and accessibility of what's below).

There are clues or even neon signs, here, for isolating fine-grained, trainable stuff to be incorporated into curricula. Even if an intervention was designed for kids, a lot of adults still won't perform consistently prior to said intervention. And these researchers have spent thousands of collective hours thinking about how to structure assessments, interventions, and validations which may be extendable to more advanced scenarios.

So all the material below is not only useful for thinking about remedial or grade-school situations, and is not just for adding more tools to a cognitive toolbox, but could be useful for radically transforming a person's thinking style at a deep level.

Consider:

child : adult :: adult : ?

This has everything to do with the "Outside the Box" Box. Really. One author below has been collecting data for decades to attempt to describe individuals that may represent far less than one percent of the population.

 

0. Protocol analysis

Everyone knows that people are poor reporters of what goes on in their heads. But this is a straw. A tremendous amount of research has gone into understanding what conditions, tasks, types of cognitive routines, and types of cognitive objects foster reliable introspective reporting. Introspective reporting can be reliable and useful. Granddaddy Herbert Simon (who coined the term "bounded rationality") devotes an entire book to it. The preface (I think) is a great overview. I wanted to mention this first, because lots of the researchers below use verbal reports in their work.

http://www.amazon.com/Protocol-Analysis-Edition-Verbal-Reports/dp/0262550237/

 

1. Developmental aspects of scientific thinking

Deanna Kuhn and colleagues develop and test fine-grained interventions to promote transfer of various aspects of causal inquiry and reasoning in middle school students. In her words, she wants to "[develop] students' meta-level awareness and management of their intellectual processes." Kuhn believes that inquiry and argumentation skills, carefully defined and empirically backed, should be emphasized over specific content in public education. That sounds like vague and fluffy marketing-speak, but if you drill down to the specifics of what she's doing, her work is anything but. (That goes for all of these 50,000-foot summaries. These people are awesome.)

http://www.amazon.com/Education-Thinking-Deanna-Kuhn/dp/0674027450/

http://www.tc.columbia.edu/academics/index.htm?facid=dk100

http://www.educationforthinking.org/

 

David Klahr and colleagues emphasize how children and adults compare in coordinated searches of a hypothesis space and experiment space. He believes that scientific thinking is not different in kind than everyday thinking. Klahr gives an integrated account of all the current approaches to studying scientific thinking. Herbert Simon was Klahr's dissertation advisor.

http://www.amazon.com/Exploring-Science-Cognition-Development-Discovery/dp/0262611767

http://www.psy.cmu.edu/~klahr/

 

2. Developmental aspects of executive or instrumental thinking

Ok, I'll say it: Elliott Jaques was a psychoanalyst, among other things. And the guy makes weird analogies between thinking styles and truth tables. But his methods are rigorous. He has found possible discontinuities in how adults process information in order to achieve goals and how these differences relate to an individual's "time horizon," or maximum time length over which an individual can comfortably execute a goal. Additionally, he has explored how these factors predictably change over a lifespan.

http://www.amazon.com/Human-Capability-Individual-Potential-Application/dp/0962107077/

 

3. Developmental aspects of entrepreneurial thinking

Saras Sarasvathy and colleagues study the difference between novice entrepreneurs and expert entrepreneurs. Sarasvathy wants to know how people function under conditions of goal ambiguity ("We don't know the exact form of what we want"), environmental isotropy ("The levers to affect the world, in our concrete situation, are non-obvious"), and enaction ("When we act we change the world"). Herbert Simon was her advisor. Her thinking predates and goes beyond the lean startup movement.

http://www.amazon.com/Effectuation-Elements-Entrepreneurial-Expertise-Entrepreneurship/dp/1848445725/

"What effectuation is not" http://www.effectuation.org/sites/default/files/research_papers/not-effectuation.pdf

Related: http://lesswrong.com/r/discussion/lw/hcb/book_suggestion_diaminds_is_worth_reading/

4. General Cognitive Development

Jane Loevinger and colleagues' work has inspired scores of studies. Loevinger discovered potentially stepwise changes in "ego level" over a lifespan. Ego level is an archaic-sounding term that might be defined as one's ontological, epistemological, and metacognitive stance towards self and world. Loevinger's methods are rigorous, with good inter-rater reliability, Bayesian scoring rules incorporating base rates, and so forth.

http://www.amazon.com/Measuring-Ego-Development-Volume-Construction/dp/0875890598/

http://www.amazon.com/Measuring-Development-Scoring-Manual-Women/dp/0875890695/

Here is a woo-woo description of the ego levels, but note that these descriptions are based on decades of experience and have a repeatedly validated empirical core. The author of this document, Susanne Cook-Greuter, received her doctorate from Harvard by extending Loevinger's model, and it's well worth reading all the way through: 

http://www.cook-greuter.com/9%20levels%20of%20increasing%20embrace%20update%201%2007.pdf

Here is a recent look at the field:

http://www.amazon.com/The-Postconventional-Personality-Researching-Transpersonal/dp/1438434642/

By the way, having explicit cognitive goals predicts an increase in ego level, three years later, but not an increase in subjective well-being. (Only the highest ego levels are discontinuously associated with increased well-being.) Socio-emotional goals do predict an increase in subjective well-being, three years later. Great study:

Bauer, Jack J., and Dan P. McAdams. "Eudaimonic growth: Narrative growth goals predict increases in ego development and subjective well-being 3 years later." Developmental Psychology 46.4 (2010): 761.

 

5. Bridging symbolic and non-symbolic cognition

[Related: http://wiki.lesswrong.com/wiki/A_Human's_Guide_to_Words]

Eugene Gendlin and colleagues developed a "[...] theory of personality change [...] which involved a fundamental shift from looking at content [to] process [...]. From examining hundreds of transcripts and hours of taped psychotherapy interviews, Gendlin and Zimring formulated the Experiencing Level variable. [...]"

The "focusing" technique was designed as a trainable intervention to influence an individual's Experiencing Level.

Marion N. Hendricks reviews 89 studies, concluding that [I quote]:

  • Clients who process in a High Experiencing manner or focus do better in therapy according to client, therapist and objective outcome measures.
  • Clients and therapists judge sessions in which focusing takes place as more successful.
  • Successful short term therapy clients focus in every session.
  • Some clients focus immediately in therapy; others require training.
  • Clients who process in a Low Experiencing manner can be taught to focus and increase in Experiencing manner, either in therapy or in a separate training.
  • Therapist responses deepen or flatten client Experiencing. Therapists who focus effectively help their clients do so.
  • Successful training in focusing is best maintained by those clients who are the strongest focusers during training.

http://www.focusing.org/research_basis.html

http://www.amazon.com/Focusing-Eugene-T-Gendlin/dp/0553278339/

http://www.amazon.com/Focusing-Oriented-Psychotherapy-Manual-Experiential-Method/dp/157230376X/

http://www.amazon.com/Self-Therapy-Step-By-Step-Wholeness-Cutting-Edge-Psychotherapy/dp/0984392777/ [IFS is very similar to focusing]

http://www.amazon.com/Emotion-Focused-Therapy-Coaching-Clients-Feelings/dp/1557988811/ [more references, similar to focusing]

http://www.amazon.com/Experiencing-Creation-Meaning-Philosophical-Psychological/dp/0810114275/ [favorite book of all time, by the way]

 

6. Rigorous Instructional Design

Siegfried Engelmann (http://www.zigsite.com/) and colleagues are dedicated to dramatically accelerating cognitive skill acquisition in disadvantaged children. In addition to his peer-reviewed research, he specializes in unambiguously decomposing cognitive learning tasks and designing curricula. Engelmann's methods were validated as part of Project Follow Through, the "largest and most expensive experiment in education funded by the U.S. federal government that has ever been conducted," according to Wikipedia. Engelmann contends that the data show that Direct Instruction outperformed all other methods:

http://www.zigsite.com/prologue_NeedyKids_chapter_5.html

http://en.wikipedia.org/wiki/Project_Follow_Through

Here, he systematically eviscerates an example of educational material that doesn't meet his standards:

http://www.zigsite.com/RubricPro.htm

And this is his instructional design philosophy:

http://www.amazon.com/Theory-Instruction-Applications-Siegfried-Engelmann/dp/1880183803/

 

Conclusion

In conclusion, lots of scientists have cared for decades about describing the cognitive differences between children, adults, and expert or developmentally advanced adults. And lots of scientists care about making those differences happen ahead of schedule or happen when they wouldn't have otherwise happened at all. This is a valuable and complementary perspective to what seems to be CFAR's current approach. I hope CFAR will eventually consider digging into this line of thinking, though maybe they're already on top of it or up to something even better.

Book Suggestion: "Diaminds" is worth reading (CFAR-esque)

1 MarkL 03 May 2013 12:19AM

The reason for this submission is that I don't think anyone who visits this website would otherwise ever read the book described below. And that's a shame.

Simply stated, I think CFAR curriculum designers and people who like CFAR's approach should check out this book:

Diaminds: Decoding the Mental Habits of Successful Thinkers by Mihnea Moldoveanu

I claim that you will find illustrations of high-utility thinking styles and potentially useful exercises within. Yes, I am attempting to promote some random, highly questionable book to your attention.

You contemptuously object:

Stay with me.

Moldeveanu has a "secret identity" as a successful serial entrepreneur (first company sold for $21 million). And, he explicitly discusses the disadvantages of his book, his lack of experimental design, selection bias, explanation versus prediction, etc. The only grounds for his claim of having decoded the mental habits of successful thinkers is that he's done a lot of reading, thinking, and doing, and he has a bunch of interview transcripts of successful people. ("Interview transcripts?!")

You might have more objections:
  • If you dig around a little bit online you'll see that the second author writes highly rated popular business books.
  • If you read a little bit of the book, you'll hear a lot about Nassim Nicholas Taleb, black swans, poorly justified claims about how the mind uses branching tree searches, and other assorted suspicious physical, mathematical, and computational analogies for how the mind works.
  • He even asserts that "death is inevitable" (or something like that) in the introduction. *Gasp!*
Finally, you're thinking:
  • "There are 65 million titles out there. What are the chances that this particular crackpot book will be useful to me or CFAR?"
Stay with me.

Ok, still here? I think if you read this book you will continuously oscillate between swiftly-rising-annoyed-skepticism and hey-that's-uncommonly-smart-and-concisely-useful-and-I-could-try-that.

The exercises are not the sole value of the book, but here are some quickly assembled examples:

"Pick a past event that has been precisely recorded (for good example, a significant rise or fall in the price of the stock you know something about). Write down what you believe to be the best explanation for the event. How much would you bet on the explanation being valid, and why? Next, make a prediction based on your explanation (another movement in the stock's value within a certain time window). How much would you bet on the prediction being true, and why? Are the two sums equal? Why or why not?"

"Pick a difficult personal situation[....] In written sentences, describe the situation the way you typically would when talking about it with a friend or family member. Next, figure out -- and write down -- the basic causal structure of the narrative you've written up. [...E]xpand the range of causal chains you believe were at work. [...]"

"[... G]etting an associate to give you feedback, especially cutting, negative feedback, is not easy [...]. So arm her with a deck of file cards, on each of which is written one of the following in capital letters: WHY?, FOR WHAT PURPOSE?, BY WHAT MECHANISM?, SO WHAT?, I DISAGREE! I AGREE! [...]"

"Keep a record of your thinking process as you go through the steps of trying to solve [these problems]. [...] When you've finished, go through the transcript you've produced and 'encode it' using the coding language (mentalese) we have developed in this chapter. Your coding system should include the following simplified typology: The problem complexity class (easy/hard); The solution search process you used (deterministic/probabilistic); The type of solution your mind is searching for (global/local/adaptive); Your perceived distance from the answer to the problem at several different points in the problem-solving process. [...]"

Those were just some snippets that were easy to type up. Most of the exercises are meatier, and he doesn't just say "write down causal structure" without any context. There is buildup if not hand-holding. There's plenty of cognitive bias-flavored stuff, debiasing stuff, mental-model-switching stuff, OODA loop-type stuff, and much more.

Anyway, Moldoveanu tries to describe tools to change how people think. I think he succeeds, in concreteness and concision, at least, more than anything I've ever read on the subject, so far. I'm not saying this is a masterpiece; it's turgid and a little poisonous, like some PUA stuff. And it's uneven. And, I personally am not making any of the exercises a priority in my life, nor am I saying you should. But you might find helpful ideas in here for your personal experiments, and I think CFAR curriculum designers would probably benefit from reading this book.

You can burn through a first pass of the book in a long evening. It's short enough to do so. Chapter 1 (as opposed to the Preface, Praeludium, and Chapter 6) is probably the best thing to read for deciding whether to keep reading. But go back and read the Preface and Praeludium.

CFAR is hiring a logistics manager

12 AnnaSalamon 05 April 2013 10:32PM

CFAR is hiring an additional logistics manager.  Please click on our form for more information, or to fill out an application:

https://docs.google.com/forms/d/1ACTvM1oYsw1zzHMumrLzffCVVak3eA5A-5uJzyIYOKM/viewform

We hope to choose a candidate within the next week or so, so if you're interested, do apply ASAP.

 

Rationality Habits I Learned at the CFAR Workshop

37 elharo 10 March 2013 02:15PM

Recently Leah Libresco asked attendees at the January CFAR Workshop, "What habits have people installed after workshops?" and that got me thinking that now was a good time to write up and review what I learned (or learned and already forgot). I thought that might be of some interest to folks here, and this is what follows.

What I Learned and Implemented

The most immediately useful thing I learned was the Pomodoro Technique, as I've written about here before. In addition to that, there were a number of small items that I'm continuing to work on.

First, I've become quite fond of the question "Does future me have a comparative advantage?" Especially for small items, if the answer is "No" (and it's no far more often than it's yes) then just do it right now. The more trivial the task, the more useful it is. For instance, today I asked myself that while standing in the bedroom wondering whether to take 30 seconds to move my ExOfficio Bugproof socks from the dresser to the correct box in the closet. (Answer from a few minutes ago:  if I don't take my dog for a walk right now, he's going to pee all over the floor. Future me does have a comparative advantage of not having to clean up pee on the floor. The socks can wait.)

I've begun to notice my confusion and call it to conscious attention more often, though I suspect I learned this first from HPMoR and the sequences before the workshop. Example: when Leonard Susskind states that conservation of information is a fundamental principle of quantum mechanics, I notice that I am confused because A) I have never heard of any such fundamental law of physics as information conservation, and B) every definition of information I have ever heard indicates that information most certainly can be destroyed. So just what the heck is he talking about anyway? I am now making a conscious effort to research this topic rather than letting it slide by.

The workshop introduced me to the concepts of System 1 and System 2. System 1 is the faster, reactive, intuitive mind that uses heuristics and experience to react quickly. System 2 is the slower, analytical, logical, mathematical mind. I didn't immediately grok this or see how to apply it. However the workshop did convince me to read Daniel Kahneman's Thinking Fast and Slow, and I'm beginning to follow this. It could be useful going forward. I particularly like the examples given at the end of each chapter.

Similarly I completely did not understand the concepts of inside view vs. outside view at the workshop; and worse yet I don't think that I even realized that I didn't understand these. However now that I've read Thinking Fast and Slow, the lightbulb has gone on. Inside view is simply me deciding how likely I (or my team) am to accomplish something based on my judgement of the problem and our capabilities. Outside view is a statistical question about how people and teams like us have done when confronted with similar problems in the past. As long as there are similar teams and similar problems to compare with, the outside view is likely to be much more accurate.
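As a toy illustration of the difference (not from the workshop or from Kahneman; the 1.8x reference-class multiplier below is a made-up number):

    # Hypothetical outside-view adjustment from a reference class.
    inside_estimate_weeks = 2.0  # my judgement of the problem and our capabilities

    # Suppose projects by similar teams historically took 1.8x their initial
    # estimates (made-up reference-class statistic).
    reference_class_multiplier = 1.8

    outside_estimate_weeks = inside_estimate_weeks * reference_class_multiplier
    print(f"Inside view: {inside_estimate_weeks} weeks; "
          f"outside view: {outside_estimate_weeks} weeks")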

During conversation, Julia Galef and I came up with the idea of *********.  It turned out it already exists, and I'm planning to start attending these events locally soon. I've also joined my local LessWrong meetup group.

Stare into Ugh fields. Difficult conversations are an Ugh field for me. Recognizing this and bringing it to conscious attention has made it somewhat easier to manage these conversations. Example: when I went to the workshop I had been putting off contacting my dentist for months, not because of the usual reasons people don't like going to the dentist, but simply because I was uncomfortable telling her that the second (and third) opinion I had gotten on a dental issue disagreed with her about the proper course of treatment. Post-workshop, I finally called her (though it still took me two more weeks to do this. Clearly I have a lot of work left to do here.)

Consider whether the sources of my information may be correlated, and by how much (i.e., Evaluating Advice). For instance, if two dentists who share an office give me the same advice, even assuming no prior disposition to agree with each other simply out of friendship, how likely is it that they share the same background and information that dentists in a different office do not?
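A toy Bayes-factor calculation shows why this matters; every number below is an assumption chosen purely for illustration:

    # Hypothetical odds calculation for correlated vs. independent advice.
    prior_odds = 1.0     # 1:1 odds that the recommended treatment is right
    bayes_factor = 3.0   # one independent dentist's advice: 3:1 evidence

    # Two fully independent dentists: the Bayes factors multiply.
    independent = prior_odds * bayes_factor * bayes_factor  # 9:1

    # Two dentists sharing an office, training, and information: the second
    # opinion adds little beyond the first (assumed residual factor of 1.1).
    correlated = prior_odds * bayes_factor * 1.1            # 3.3:1

    for label, odds in [("independent", independent), ("correlated", correlated)]:
        print(f"{label}: {odds:.1f}:1 odds, P = {odds / (1 + odds):.2f}")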

COZE (Comfort Zone Expansion) exercises have pushed me to talk more to "strangers" and be intentionally more extroverted. On a recent trip to Latin America, I even made an effort to use what little Spanish I possess. I've had some small success, though this has led to no obvious major improvements in my life yet.

Thought experiments conducted at the workshop were very helpful in untangling some of my goals and plans. Going forward though this hasn't made a huge difference in my day-to-day life. That is, it hasn't led me to seek different paths than what I'm on right now.

What I Learned and Forgot

Going over my notes now, there was a lot of material, some of it potentially useful, that has fallen by the wayside and may be worth a second look. This includes:

  • Geoff Anders introduced us to yEd, a nice open source diagram editor. I still prefer StencilIt or Omnigraffle though. He also used it to show us a really neat way of graphing, well, something. Goals maybe? I remember it seemed really useful and significant at the time, but for the life of me I can't remember exactly what it was or what it was supposed to show us. I'll have to go back to my notes. This is why we write things down. (Update: I suspect this was about Goal Factoring.)
  • Anticipation vs. Profession (though from time to time I do find myself asking what odds I'd be willing to bet on certain beliefs)
  • The Planning Kata.

What I Learned But Didn't Implement

Value of Information calculations seem too meta and too wishy-washy to be of much use. They attempt to attach quantitative numbers to information that's far too imprecise to allow even order-of-magnitude accuracy. I'm better off just keeping things I need to consider in my GTD system, and periodically reviewing it.

Similarly, opportunities for Bayesian Strength of Evidence calculations just don't seem to come up in my day-to-day life. The question for me is more commonly "Given that the situation is what it is, what actions should I take to accomplish my goals?" The outside view is useful for this. Figuring out why the situation is what it is rarely seems to be especially helpful.

Turbocharging Training may be helpful but the evidence seems to me to be lacking. I'd like to see some strong proof that this works in particular areas; e.g. foreign languages, sports, or mathematics.  Furthermore, it's not clear that it's applicable to anything I'm working on learning at this time. It seems very System 1 focused, and not especially helpful with the sort of fundamentally System 2 tasks I take on.

I have begun to declare "Victory!" at the end of a meeting/discussion. it's a bit of fun, but has limited effect. Beyond that I don't seem to reward myself for noticing things, or as a means of installing habits.

What I Didn't Learn

Getting Things Done (GTD), Remember the Milk, BeeMinder, Anki, Cultivating Curiosity, Overcoming Procrastination, and Winning at Arguments.

GTD I didn't learn because I've used it for years now or at least the parts of it that really work for me (lists and calendars mostly, and to a lesser extent filing).

Remember the Milk because my employer's security policy prohibits us from using it, and too much of my life happens at my day job to make maintaining two separate systems worthwhile.

BeeMinder and Anki because I just don't have anything that seems like it could benefit from being stored in those systems right now. All of these might be more beneficial to someone in different circumstances.

Cultivating Curiosity because I am already a very naturally curious person, and have been for as long as I can remember. I don't need help with this. Indeed if anything I need to tamp down on this tendency and focus more on accomplishing things rather than merely learning them.

Similarly, Overcoming Procrastination didn't help a lot because I don't have a big procrastination problem, at least not compared to what I had when I was younger. Of course, I do say that in full knowledge that right this minute writing this article is a form of structured procrastination to avoid doing my taxes. :-)

Winning at Arguments, I am already very, very good at when I want to be, which is rare these days. It took me many years to realize that even though I "won" almost every argument I cared about, winning the argument wasn't usually all that useful. Winning an argument is the wrong goal to have for almost any purpose, and rarely leads to the outcomes I desire.

Unofficial ideas from fellow attendees:

Polyphasic sleep: I'm going to let the younger, more pioneering attendees experiment with this one. Even if it does work (which seems far from obvious), I don't see how one could integrate it into a conventional day job and family.

At breakfast one morning, a fellow attendee (Hunter?) suggested putting unsalted butter in my coffee to add more fat to my diet. It's not as crazy as it sounds; after all, butter is little more than concentrated cream, and I do like cream in my coffee. I tried this once and I still prefer cream, but I may give it another shot.

Finally, I've referred two workshop attendees to my employer as potential hires. If anyone else from the workshop is looking for a job, especially in tech, sales, or legal, drop me a line privately. For that matter if any Less Wronger is looking for a job, drop me a line privately. We have hundreds of open positions in major cities around the world. Quite a few LessWrongers already work there, and there's room for many more.

What the workshop didn't teach

There were a few techniques that were conspicuous by their absence. In particular, I think the CFAR/LessWrong and Agile/XP communities have a lot to teach each other. I was surprised that no one at the workshop seemed to have heard of Kanban or Scrum, much less practiced them. Burndown charts and point-based estimation are a really interesting variation on the outside view: you compare your team to your own team's past performance, rather than to other teams.
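
For those who haven't seen it, here's a rough sketch of the idea with invented numbers (mine, not any official Scrum artifact):

```python
# Velocity-based forecasting: project the remaining work from your own
# team's measured history instead of from inside-view guesses.
completed_points = [21, 17, 24, 19, 22]  # story points finished in past sprints
remaining_points = 130                   # current backlog estimate

velocity = sum(completed_points) / len(completed_points)  # 20.6
expected_sprints = remaining_points / velocity            # ~6.3

# Bracket the forecast with your best and worst recent sprints rather
# than reporting a single point estimate.
best_case = remaining_points / max(completed_points)   # ~5.4
worst_case = remaining_points / min(completed_points)  # ~7.6
print(f"~{expected_sprints:.1f} sprints (range {best_case:.1f}-{worst_case:.1f})")
```

The discipline of measuring your own past throughput is just the outside view, applied sprint by sprint.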

Pairing is also a useful technique beyond programming, as at least Eliezer (not present at the workshop) has discovered. Pairing is an incredibly effective way to overcome akrasia and procrastination.

In the reverse direction, I am considering what the craft of software development has to learn from CFAR-style rationality, more specifically epistemic rationality. I have begun to notice my confusion during conversations with users, product managers, and tech leads, and to call it to conscious attention. I less frequently let unclear specs and goals pass without comment; rather, I ask for examples and drill down into them until I feel my confusion has been conquered.

So far these techniques seem very useful in analysis and requirements gathering. I've found them less obviously useful (though certainly not harmful in any way) during coding, debugging, and testing. In these stages there's simply too much to be confused by to address it all, and whatever confusion is relevant to the task at hand rapidly calls itself to my attention. For instance, when a bug shows up in a production system, the very first and natural question to ask is "How the hell did the system do that?!" On the other hand, the planning kata may be very helpful in the early stages of system design, though I haven't yet had an opportunity to try that out.

Was it Worth $3900?

Overall, I found the workshop to be a worthwhile experience, if an expensive one, and I recommend it to you if you have the opportunity and resources to attend. There are a lot of practical techniques to be learned, and you only need one or two of them to pay off to cover the cost and time. Even if the primary value is simply introducing you to books and techniques you explore further after the workshop, such as Getting Things Done or Thinking, Fast and Slow, that may be enough. Most knowledge workers, myself included, operate far below the level of which we're capable, and expanding our effectiveness can pay for itself.

Before attending, it is worth asking yourself whether there's an opportunity to learn this material at lower cost. For instance, did I really need to spend $3900 and 4 days to learn about Pomodoro? Apparently so, since I'd heard about Pomodoro for years and paid no attention to it until January. On the other hand, a $20 book I read on the subway was fully sufficient for me to learn and implement Getting Things Done. You'll have to judge this one for yourself.

The Singularity Wars

52 JoshuaFox 14 February 2013 09:44AM

(This is an introduction, for those not immersed in the Singularity world, to the history of and relationships between SU, SIAI [SI, MIRI], SS, LW, CSER, FHI, and CFAR. It also has some opinions, which are strictly my own.)

The good news is that there were no Singularity Wars. 

The Bay Area had a Singularity University and a Singularity Institute, each going in a very different direction. You'd expect to see something like the People's Front of Judea and the Judean People's Front, burning each other's grain supplies as the Romans moved in.


Thoughts on the January CFAR workshop

37 Qiaochu_Yuan 31 January 2013 10:16AM

So, the Center for Applied Rationality just ran another workshop, which Anna kindly invited me to. Below I've written down some thoughts on it, both to organize those thoughts and because it seems other LWers might want to read them. I'll also invite other participants to write down their thoughts in the comments. Apologies if what follows isn't particularly well-organized. 

Feelings and other squishy things

The workshop was totally awesome. This is admittedly not strong evidence that it accomplished its goals (cf. Yvain's comment here), but being around people motivated to improve themselves and the world was totally awesome, and learning with and from them was also totally awesome, and that seems like a good thing. 

Also, the venue was fantastic. CFAR instructors reported that this workshop was more awesome than most, and while I don't want to discount improvements in CFAR's curriculum and its selection process for participants, I think the venue counted for a lot. It was uniformly beautiful and there were a lot of soft things to sit down on or take naps on, and I think that helped everybody be more comfortable with and relaxed around each other.

Main takeaways

Here are some general insights I took away from the workshop. Some of them I had already been aware of on some abstract intellectual level but hadn't fully processed and/or gotten drilled into my head and/or seen the implications of. 

  1. Epistemic rationality doesn't have to be about big things like scientific facts or the existence of God, but can be about much smaller things like the details of how your particular mind works. For example, it's quite valuable to understand what your actual motivations for doing things are. 
  2. Introspection is unreliable. Consequently, you don't have direct access to information like your actual motivations for doing things. However, it's possible to access this information through less direct means. For example, if you believe that your primary motivation for doing X is that it brings about Y, you can perform a thought experiment: imagine a world in which Y has already been brought about. In that world, would you still feel motivated to do X? If so, then there may be reasons other than Y that you do X. 
  3. The mind is embodied. If you consistently model your mind as separate from your body (I have in retrospect been doing this for a long time without explicitly realizing it), you're probably underestimating the powerful influence of your mind on your body and vice versa. For example, dominance of the sympathetic nervous system (which governs the fight-or-flight response) over the parasympathetic nervous system is unpleasant, unhealthy, and can prevent you from explicitly modeling other people. If you can notice and control it, you'll probably be happier, and if you get really good, you can develop aikido-related superpowers.
  4. You are a social animal. Just as your mind should be modeled as a part of your body, you should be modeled as a part of human society. For example, if you don't think you care about social approval, you are probably wrong, and thinking that will cause you to have incorrect beliefs about things like your actual motivations for doing things. 
  5. Emotions are data. Your emotional responses to stimuli give you information about what's going on in your mind that you can use. For example, if you learn that a certain stimulus reliably makes you angry and you don't want to be angry, you can remove that stimulus from your environment. (This point should be understood in combination with point 2 so that it doesn't sound trivial: you don't have direct access to information like what stimuli make you angry.) 
  6. Emotions are tools. You can trick your mind into having specific emotions, and you can trick your mind into having specific emotions in response to specific stimuli. This can be very useful; for example, tricking your mind into being more curious is a great way to motivate yourself to find stuff out, and tricking your mind into being happy in response to doing certain things is a great way to condition yourself to do certain things. Reward your inner pigeon.

Here are some specific actions I am going to take / have already taken because of what I learned at the workshop. 

  1. Write a lot more stuff down. What I can think about in my head is limited by the size of my working memory, but a piece of paper or a WorkFlowy document doesn't have this limitation.
  2. Start using a better GTD system. I was previously using RTM, but badly: I used it exclusively from my iPhone, where the due date of a new item defaults to "today," whereas when adding something from a browser it defaults to "never." Since I had never used the browser interface, I didn't even realize that "never" was an option. The result was due dates attached to RTM items that didn't actually have due dates, and a reluctance to add items that genuinely lacked due dates (e.g. "look at this interesting thing sometime"). That was bad because RTM wasn't collecting a lot of things, and I stopped trusting my own due dates.
  3. Start using Boomerang to send timed email reminders to future versions of myself. I think this might work better than using, say, calendar alerts because it should help me conceptualize past versions of myself as people I don't want to break commitments to. 

I'm also planning to take various actions that I'm not writing above but instead putting into my GTD system, such as practicing specific rationality techniques (the workshop included many useful worksheets for doing this) and investigating specific topics like speed-reading and meditation. 

The arc word (TVTropes warning) of this workshop was "agentiness." ("Agentiness" is more funtacular than "agency.") The CFAR curriculum as a whole could be summarized as teaching a collection of techniques to be more agenty. 

Miscellaneous

A distinguishing feature the people I met at the workshop seemed to have in common was the ability to go meta. This is not a skill that was explicitly mentioned or taught (although it was frequently implicit in the kind of jokes people told), but it strikes me as an important foundation for rationality: it seems hard to make progress unless the thought of using your brain to improve how you use your brain, and also to improve how you improve how you use your brain, is both understandable and appealing to you. This probably eliminates most people as candidates for rationality training unless it's paired with, or maybe preceded by, meta training, whatever that looks like.

One problem with the workshop was lack of sleep, which seemed to wear out both participants and instructors by the last day (classes started early in the day and conversations often continued late into the night because they were unusually fun / high-value). Offering everyone modafinil or something at the beginning of future workshops might help with this.

Overall

Overall, while it's too soon to tell how big an impact the workshop will have on my life, I anticipate a big impact, and I strongly recommend that aspiring rationalists attend future workshops. 

CFAR and SI MOOCs: a Great Opportunity

13 Wrongnesslessness 13 November 2012 10:30AM

Massive open online courses seem to be marching towards total world domination like some kind of educational singularity (at least in the case of Coursera). At the same time, there are still relatively few courses available, and each new added course is a small happening in the growing MOOC community.

Needless to say, this seems like a perfect opportunity for SI and CFAR to advance their goals via this new education medium. Some people seem to have already seen the potential and taken advantage of it:

One interesting trend that can be seen is companies offering MOOCs to increase the adoption of their tools/technologies. We have seen this with 10gen offering Mongo courses and to a lesser extent with Coursera’s ‘Functional Programming in Scala’ taught by Martin Odersky

(from the above link to the Class Central Blog)

 

So the question is, are there any online courses already planned by CFAR and/or SI? And if not, when will it happen?

 

Edit: This is not a "yes or no" question, albeit formulated as one. I've searched the archives and did not find any mention of MOOCs as a potentially crucial device for spreading our views. If any such courses are already being developed or at least planned, I'll be happy to move this post to the open thread, as some have requested, or delete it entirely. If not, please view this as a request for discussion and brainstorming.

P.S.: Sorry, I don't have the time to write a good article on this topic.

[Link] Article about rationality and CFAR

8 Despard 09 September 2012 05:06PM

http://issuu.com/nervemag/docs/issue-2?mode=window&pageNumber=18

A friend of mine runs Nerve, the new science magazine at the university where I work, and I offered to write about rationality for their second issue. The article is just out, with some quotes from some people you might recognise! Enjoy.

EDIT: the Wordpress version is now up, for those allergic to Flash.

http://nervemag.wordpress.com/2012/09/11/why-are-smart-people-so-stupid/

Take Part in CFAR Rationality Surveys

18 Unnamed 18 July 2012 11:57PM

Posted By: Dan Keys, CFAR Survey Coordinator

The Center for Applied Rationality is trying to develop better methods for measuring and studying the benefits of rationality.  We want to be able to test if this rationality stuff actually works.

One way that the Less Wrong community can help us with this process is by taking part in online surveys, which we can use for a variety of purposes including:

  • seeing what rationality techniques people actually use in their day-to-day lives 
  • developing & testing measures of how rational people are, and seeing if potential rationality measures correlate with the other variables that you'd expect them to 
  • comparing people who attend a minicamp with others in the LW community, so that we can learn what added value the minicamps provide beyond what you get elsewhere 
  • trying out some of the rationality techniques that we are trying to teach, so we can see how they work 

We have a couple of surveys ready to go now which cover some of these bullet points, and will be developing other surveys over the coming months.

If you're interested in taking part in online surveys for CFAR, please go here to fill out a brief form with your contact info; then we will contact you about participating in specific surveys.

If you have previously filled out a form like this one to participate in CFAR surveys, then we already have your information so you don't need to sign up again.

Questions/Issues can be posted in the comments here, PMed to me, or emailed to us at CFARsurveys@gmail.com.
