
5 Project Hufflepuff Suggestions for the Rationality Community

lifelonglearner 04 March 2017 02:23AM

<cross-posted on Facebook>


In the spirit of Project Hufflepuff, I’m listing out some ideas for things I would like to see in the rationality community, which seem like perhaps useful things to have. I dunno if all of these are actually good ideas, but it seems better to throw some things out there and iterate.

 

Ideas:


Idea 1) A more coherent summary of all the different ideas that are happening across all the rationalist blogs. I know LessWrong is trying to become more of a Schelling point, but I think a central forum is still suboptimal for what I want. I’d like something that just takes the best ideas everyone’s been brewing and centralizes them in one place so I can quickly browse them all and dive deep if something looks interesting.


Suggestions:

A) A bi-weekly (or some other period) newsletter where rationalists can summarize their best insights of the past weeks in 100 words or less, with links to their content.

B) An actual section of LessWrong that does the above, so people can comment / respond to the ideas.


Thoughts:

This seems straightforward and doable, conditional on commitment from 5-10 people in the community. If other people are also excited, I’m happy to reach out and get this thing started.



Idea 2) A general tool/app for being able to coordinate. I’d be happy to lend some fraction of my time/effort in order to help solve coordination problems. It’s likely other people feel the same way. I’d like a way to both pledge my commitment and stay updated on things that I might be able to plausibly Cooperate on.


Suggestions:

A) An app that is managed by someone, which sends out broadcasts for action every so often. I’m aware that similar things / platforms already exist, so maybe we could just leverage an existing one for this purpose.


Thoughts:

In the abstract, this seems good. Wondering what others think / what sorts of coordination problems this would be good for. The main value here is being confident in *actually* getting coordination from the X people who’ve signed up.


Idea 3) More rationality materials that aren’t blogs. The rationality community seems fairly saturated with blogs. Maybe we could do with more webcomics, videos, or something else?


Suggestions:

A) Brainstorm good content in other mediums, weigh the benefits / drawbacks, and see why we might want content in other forms.

B) Convince people who already work in those mediums to touch on rationalist ideas, sort of like what SMBC does.


Thoughts:

I’d be willing to start up either a webcomic or a video series, conditional on funding. Anyone interested in sponsoring? Happy to have a discussion below.

 



Idea 4) More systematic tools to master rationality techniques. To my knowledge, only a small handful of people have really tried to apply systematization to learning rationality, of whom Malcolm and Brienne are the most visible. I’d like to see some more attempts at Actually Trying to learn techniques.


Suggestions:

A) General meeting place to discuss the learning / practice.

B) Accountability partners + Skype check-ins.

C) List of examples of actually using the techniques + quantified self to get stats.


Thoughts:

I think finding more optimal ways to do this is very important. There is a big step between knowing how techniques work and actually finding ways to do them. I'd be excited to talk more about this Idea.


Idea 5) More online tools that facilitate rationality-things. A lot of rationality techniques seem like they could be operationalized to plausibly provide value.


Suggestions:

A) An online site for Double Cruxing, where people can search for someone to DC with, look at other ongoing DC’s, or propose topics to DC on.

B) Chatbots that integrate things like Murphyjitsu or ask debugging questions.


Thoughts:

I’m working on building a Murphyjitsu chatbot for building up my coding skill. The Double Crux site sounds really cool, and I’d be happy to do some visual mockups if that would help people’s internal picture of how that might work out. I am unsure of my ability to do the actual coding, though.

 

 

Conclusion:

Those are the ideas I currently have. Very excited to hear what other people think of them, and how we might be able to get the awesome ones into place. Also, feel free to comment on the FB post, too, if you want to signal boost.

CFAR Workshop Review: February 2017

lifelonglearner 28 February 2017 03:15AM

[A somewhat extensive review of a recent CFAR workshop, with recommendations at the end for those interested in attending one.]

I recently mentored at a CFAR workshop, and this is a review of the actual experience. In broad strokes, this review will cover the physical experience (atmosphere, living, eating, etc.), classes (which ones were good, which ones weren’t), and recommendations (regrets, suggestions, ways to optimize your experience). I’m not officially affiliated with CFAR, and this review represents my own thoughts only.

A little about me: my name is Owen, and I’m here in the Bay Area. This was actually my first real workshop, but I’ve had a fair amount of exposure to CFAR materials from EuroSPARC, private conversations, and LessWrong. So do keep in mind that I’m someone who came into the workshop with a rationalist’s eye.

I’m also happy to answer any questions people might have about the workshop. (Via PM or in the comments below.)


Physical Experience:

Sleeping / Food / Living:

(This section is venue-dependent, so keep that in mind.)

Despite the hefty $3,000-plus price tag, the workshop accommodations aren’t exactly plush. You get a bed, and that’s about it. In my workshop, there were always free bathrooms, so that part wasn’t a problem.

There was always enough food at meals, and my impression was that dietary restrictions were handled well. For example, one staff member went out and bought someone lunch when one meal didn’t work. Other than that, there are ample snacks between meals, usually a mix of chips, fruits, and chocolate. Also, hot tea and a surprisingly wide variety of drinks.

Atmosphere / Social:

(The participants I worked with were perhaps not representative of the general “CFAR participant”, so also take caution here.)

People generally seemed excited and engaged. Given that everyone hopefully voluntarily decided to show up, this was perhaps to be expected. Anyway, there’s a really low amount of friction when it comes to joining and exiting conversations. By that, I mean it felt very easy, socially speaking, to just randomly join a conversation. Staff and participants all seemed quite approachable for chatting.

I don’t have the actual participant stats, but my impression is that a good amount of people came from quantitative (math/CS) backgrounds, so there were discussions on more technical things, too. It also seemed like a majority of people were familiar with rationality or EA prior to coming to the workshop.

There were a few people for whom the material didn’t seem to “resonate” well, but the majority of people seemed to be “with the program”.

Class Schedule:

(The schedule and classes are also in a state of flux, so bear that in mind too.)

Classes start at around 9:30 am and end at about 9:00 pm. There are 20-minute breaks between every hour of classes. Lunch is about 90 minutes, while dinner is around 60 minutes.

Most of the actual classes were a little under 60 minutes, except for the flash classes, which were only about 20 minutes. Some classes had extended periods for practicing the techniques.

You’re put into a group of around 8 people that you attend classes with, and the groups switch every day. Several classes run in parallel, so different groups may go through them in a different order.

 

Classes Whose Content I Enjoyed:

As I was already familiar with most of the below material, this reflects more a general sense of classes which I think are useful, rather than ones which were taught exceptionally well at the workshop.

TAPs: Kaj Sotala already has a great write-up of TAPs here, and I think that they’re a helpful way of building small-scale habits. I also think the “click-whirr” mindset TAPs are built off can be a helpful way to model minds. The most helpful TAP for me is the Quick Focusing TAP I mention about a quarter down the page here.

Pair Debugging: Pair Debugging is about having someone else help you work through a problem. I think this is explored to some extent in places like psychiatry (actually, I’m unsure about this) as well as close friendships, but I like how CFAR turned this into a more explicit social norm / general thing to do. When I do this, I often notice a lot of interesting inconsistencies, like when I give someone good-sounding advice—except that I myself don’t follow it.  

The Strategic Level: The Strategic Level is where you, after having made a mistake, ask yourself, “What sort of general principles would I have needed to notice in order to not make a mistake of this class in the future?” This is opposed to merely saying “Well, that mistake was bad” (first level thinking) or “I won’t make that mistake again” (second level thinking). There were also some ideas about how the CFAR techniques can recurse upon themselves in interesting ways, like how you can use Murphyjitsu (middle of the page) on your ability to use Murphyjitsu. This was a flash class, and I would have liked it if we could have spent more time on these ideas.

Tutoring Wheel: Less a class and more a pedagogical activity, Tutoring Wheel was where everyone picked a specific rationality class to teach and then rotated, teaching others and being taught. I thought this was a really strong way to help people understand the techniques during the workshop.

Focusing / Internal Double Crux / Mundanification: All three of these classes address different things, but in my mind I thought they were similar in the sense of looking into yourself. Focusing is Gendlin’s self-directed therapy technique, where people try to look into themselves to get a “felt shift”. Internal Double Crux is about resolving internal disagreements, often between S1 and S2 (but not necessarily). Mundanification is about facing the truth, even when you flinch from it, via Litany of Gendlin-type things. This general class of techniques that deals with resolving internal feelings of “ugh” I find to be incredibly helpful, and may very well be the highest value thing I got out of the class curriculum.

 

Classes Whose Teaching/Content I Did Not Enjoy:

These were classes that I felt were not useful and/or not explained well. This differs from the above, because I let the actual teaching part color my opinions.

Taste / Shaping: I thought an earlier iteration of this class was clearer (when it was called Inner Dashboard). Here, I wasn’t exactly sure what the practical purpose of the class was, let alone what general thing it was pointing at. To the best of my knowledge, Taste is about how we have subtle “yuck” and “yum” senses towards things, and there can be a way to reframe negative affects in a more positive way, like how “difficult” and “challenging” can be two sides of the same coin. Shaping is about…something. I’m really unclear about this one.

Pedagogical Content Knowledge (PCK): PCK is, I think, about how the process of teaching a skill differs from the process of learning it. And you need a good understanding of how a beginner is learning something, what that experience feels like, in order to teach it well. I get that part, but this class seemed removed from the other classes, and the activity we did (asking other people how they did math in their head) didn’t seem useful.

Flash Class Structure: I didn’t like the 20 minute “flash classes”. I felt like they were too quick to really give people ideas that stuck in their head. In general, I am in support of fewer classes and extended time to really practice the techniques, and I think having little to no flash classes would be good.

 

Suggestions for Future Classes: 

This is my personal opinion only. CFAR has iterated their classes over lots of workshops, so it’s safe to assume that they have reasons for choosing what they teach. Nevertheless, I’m going to be bold and suggest some improvements which I think could make things better.

Opening Session: CFAR starts off every workshop with a class called Opening Session that tries to get everyone in the right mindset for learning, with a few core principles. Because of limited time, they can’t include everything, but there were a few lessons I thought might have helped as the participants went forward:

In Defense of the Obvious: There’s a sense where a lot of what CFAR says might not be revolutionary, but it’s useful. I don’t blame them; much of what they do is draw boundaries around fairly-universal mental notions and draw attention to them. I think they could spend more time highlighting how obvious advice can still be practical.

Mental Habits are Procedural: Rationality techniques feel like things you know, but it’s really about things you do. Focusing on this distinction could be very useful to make sure people see that actually practicing the skills is very important.

Record / Take Notes: I find it really hard to remember concrete takeaways if I don’t write them down. During the workshop, it seemed like maybe only about half of the people were taking notes. In general, I think it’s at least good to remind people to journal their insights at the end of the day, if they’re not taking notes at every class.

Turbocharging + Overlearning: Turbocharging is a theory in learning put forth by Valentine Smith which, briefly speaking, says that you get better at what you practice. Similarly, Overlearning is about using a skill excessively over a short period to get it ingrained. It feels like the two skills are based off similar ideas, but their connection to one another wasn’t emphasized. Also, they were several days apart; I think they could be taught closer together.

General Increased Cohesion: Similarly, I think that having additional discussion on how these techniques relate to one another, be it through concept maps or some theorizing, might be good to give people a more unified rationality toolkit.

 

Mental Updates / Concrete Takeaways:

This ended up being really long. If you’re interested, see my 5-part series on the topic here.

 

Suggestions / Recommendations:

This is a series of things that I would have liked to do (looking back) at the workshop, but that I didn’t manage to do at the time. If you’re considering going, this list may prove useful to you when you go. (You may want to consider bookmarking this.)

Write Things Down: Have a good idea? Write it down. Hear something cool? Write it down. Writing things down (or typing, voice recording, etc.) is all really important so you can remember it later! Really, make sure to record your insights!

Build Scaffolding: Whenever you have an opportunity to shape your future trajectory, take it. Whether this means sending yourself emails, setting up reminders, or just taking a 30 minute chunk to really practice a certain technique, I think it’s useful to capitalize on the unique workshop environment to, not just learn new things, but also just do things you otherwise probably “wouldn’t have had the time for”.

Record Things to Remember Them: Here’s a poster I made that has a bunch of suggestions:

[Image: a reminder poster captioned “Do ALL The Things!”]

 

Don’t Be Afraid to Ask for Help: Everyone at the workshop, on some level, has self-growth as a goal. As such, it’s a really good idea to ask people for help. If you don’t understand something, feel weird for some reason, or have anything going on, don’t be afraid to use the people around you to the fullest (if they’re available, of course).

Conclusion:

Of course, perhaps the biggest question is “Is the workshop worth the hefty price?”

Assuming you’re coming from a tech-based position (apologies to everyone else; I’m just doing a quick ballpark with what seems to be the most common background among CFAR participants), the average hourly wage is something like $40. At ~$4,000, the workshop would need to save you about 100 hours to break even.
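To make that arithmetic explicit (same rough figures as above, nothing more):

```python
workshop_cost = 4000   # approximate all-in workshop price, USD (figure from above)
hourly_wage = 40       # rough tech-sector hourly wage, USD (figure from above)

break_even_hours = workshop_cost / hourly_wage
print(break_even_hours)  # -> 100.0 hours the workshop would need to save you
```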

If you want rigorous quantitative data, you may want to check out CFAR’s own study on their participants. I don’t think I have a good way of quantifying the personal benefits myself, so everything below is pretty qualitative.

Things that I do think CFAR provides:

1) A unique training / learning environment for certain types of rationality skills that would probably be hard to learn elsewhere. Several of these techniques, including TAPs, Resolve Cycles, and Focusing have become fairly ingrained in my daily life, and I believe they’ve increased my quality of life.

Learning rationality is the main point of the workshop, so the majority of the value probably comes out of learning these techniques. Also, though, CFAR gives you the space and time to start thinking about a lot of things you might have otherwise put off forever. (Granted, this can be achieved by other means, like just blocking out time every week for review, but I thought this counterfactual benefit was still probably good to mention.)

2) Connections to other like-minded people. As a Schelling point for rationality, you’ll meet people who share similar values / goals as you at a CFAR workshop. If you’re looking to make new friends or meet others, this is another benefit. (Although it does seem costly and inefficient if that’s your main motivation for going.)

3) Upgraded mindset: As I wrote about here, I think that learning CFAR-type rationality can really level up the way you look at your brain, which seems to have some potential flow-through effects. The post explains it better, but in short, if you have not-so-good mental models, then CFAR could be a really good choice for improving your model of how your mind works.

There are probably other things, but those are the main ones. I hope this helps inform your decision. CFAR is currently hosting a major sprint of workshops, so this would be a good time to sign up for one, if you've been considering attending.

Concrete Takeaways Post-CFAR

lifelonglearner 24 February 2017 06:31PM

Concrete Takeaways:

[So I recently volunteered at a CFAR workshop. This is part five of a five-part series on how I changed my mind. It's split into 3 sections: TAPs, Heuristics, and Concepts. They get progressively more abstract. It's also quite long at around 3,000 words, so feel free to just skip around and see what looks interesting.]

 

(I didn't post Part 3 and Part 4 on LW, as they're more speculative and arguably less interesting, but I've linked to them on my blog if anyone's interested.)

 

This is a collection of TAPs, heuristics, and concepts that I’ve been thinking about recently. Many of them were inspired by my time at the CFAR workshop, but there’s not really an underlying theme behind it all. It’s just a collection of ideas that are either practical or interesting.

 


TAPs:

TAPs, or Trigger Action Planning, is a CFAR technique that is used to build habits. The basic idea is you pair a strong, concrete sensory “trigger” (e.g. “when I hear my alarm go off”) with a “plan”—the thing you want to do (e.g. “I will put on my running shoes”).


If you’re good at noticing internal states, TAPs can also use your feelings or other internal things as a trigger, but it’s best to try this with something concrete first to get the sense of it.
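To make the structure concrete, here's a toy sketch of a TAP as a trigger-action pair in code; the class, the example triggers, and the matching logic are all just my own illustration, not anything official:

```python
from dataclasses import dataclass
from typing import Callable

# Toy model: a TAP pairs a concrete trigger with a small, specific plan.
@dataclass
class TAP:
    trigger: Callable[[str], bool]   # does this observation match the trigger?
    action: str                      # the plan to carry out when it fires

# Hypothetical TAPs: the alarm example above and the Ask for Examples TAP below.
taps = [
    TAP(lambda obs: "alarm" in obs, "put on my running shoes"),
    TAP(lambda obs: "no mental picture" in obs, "ask for an example"),
]

def on_observation(observation: str) -> None:
    # "Click-whirr": check each trigger against what just happened and fire its plan.
    for tap in taps:
        if tap.trigger(observation):
            print(f"Trigger matched -> {tap.action}")

on_observation("I hear my alarm go off")  # -> put on my running shoes
```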


Some of the more helpful TAPs I’ve recently been thinking about are below:


Ask for Examples TAP:

[Notice you have no mental picture of what the other person is saying. → Ask for examples.]


Examples are good. Examples are god. I really, really like them.


In conversations about abstract topics, it can be easy to understand the meaning of the words that someone said, yet still miss the mental intuition of what they’re pointing at. Asking for an example clarifies what they mean and helps you understand things better.


The trigger for this TAP is noticing that what someone said gave you no mental picture.


I may be extrapolating too far from too little data here, but it seems like people do try to “follow along” with things in their head when listening. And if this mental narrative, simulation, or whatever internal thing you’re doing comes up blank when someone’s speaking, then this may be a sign that what they said was unclear.


Once you notice this, you ask for an example of what gave you no mental picture. Ideally, the other person can then respond with a more concrete statement or clarification.


Quick Focusing TAP:

[Notice you feel aversive towards something → Be curious and try to source the aversion.]


Aversion Factoring, Internal Double Crux, and Focusing are all techniques CFAR teaches to help deal with internal feelings of badness.


While there are definite nuances between all three techniques, I’ve sort of abstracted from the general core of “figuring out why you feel bad” to create an in-the-moment TAP I can use to help debug myself.


The trigger is noticing a mental flinch or an ugh field, where I instinctively shy away from looking too hard.


After I notice the feeling, my first step is to cultivate a sense of curiosity. There’s no sense of needing to solve it; I’m just interested in why I’m feeling this way.


Once I’ve directed my attention to the mental pain, I try to source the discomfort. Using some backtracking and checking multiple threads (e.g. “is it because I feel scared?”) allows me to figure out why. This whole process takes maybe half a minute.


When I’ve figured out the reason why, a sort of shift happens, similar to the felt shift in focusing. In a similar way, I’m trying to “ground” the nebulous, uncertain discomfort, forcing it to take shape.


I’d recommend trying some Focusing before trying this TAP, as it’s basically an expedited version of it, hence the name.


Rule of Reflexivity TAP:

[Notice you’re judging someone → Recall an instance where you did something similar / construct a plausible internal narrative]

[Notice you’re making an excuse → Recall times where others used this excuse and update on how you react in the future.]


This is a TAP that was born out of my observation that our excuses seem way more self-consistent when we’re the ones saying them. (Oh, why hello there, Fundamental Attribution Error!) The point of practicing the Rule of Reflexivity is to build empathy.


The Rule of Reflexivity goes both ways. In the first case, you want to notice if you’re judging someone. This might feel like ascribing a value judgment to something they did, e.g. “This person is stupid and made a bad move.”


The response is to recall times where either you did something similar or (if you think you’re perfect) think of a plausible set of events that might have caused them to act in this way. Remember that most people don’t think they’re acting stupidly; they’re just doing what seems like a good idea from their perspective.


In the second case, you want to notice when you’re trying to justify your own actions. If the excuses you yourself make suspiciously sound like things you’ve heard others say before, then you may want to be less quick to immediately dismiss them in the future.


Keep Calm TAP:

[Notice you’re starting to get angry → Take a deep breath → Speak softer and slower]


Okay, so this TAP is probably not easy to do because you’re working against a biological response. But I’ve found it useful in several instances where otherwise I would have gotten into a deeper argument.


The trigger, of course, is noticing that you’re angry. For me, this feels like an increased tightness in my chest and a desire to raise my voice. I may feel like a cherished belief of mine is being attacked.


Once I notice these signs, I remember that I have this TAP which is about staying calm. I think something like, “Ah yes, I’m getting angry now. But I previously already made the decision that it’d be a better idea to not yell.”


After that, I take a deep breath, and I try to open up my stance. Then I remember to speak in a slower and quieter tone than previously. I find this TAP especially helpful in arguments—ahem, collaborative searches for the truth—where things get a little too excited on both sides.  

 


Heuristics:

Heuristics are algorithm-like things you can do to help get better results. I think that it’d be possible to turn many of the heuristics below into TAPs, but there’s a sense of deliberately thinking things out that separates these from just the “mindless” actions above.


As more formal procedures, these heuristics do require you to remember to Take Time to do them well. However, I think that the sorts of benefits you get from make it worth the slight investment in time.

 


Modified Murphyjitsu: The Time Travel Reframe:

(If you haven’t read up on Murphyjitsu yet, it’d probably be good to do that first.)


Murphyjitsu is based off the idea of a premortem, where you imagine that your project failed and you’re looking back. I’ve always found this to be a weird temporal framing, and I realized there’s a potentially easier way to describe things:


Say you’re sitting at your desk, getting ready to write a report on intertemporal travel. You’re confident you can finish before the hour is over. What could go wrong? Closing Facebook, you start typing.


Suddenly, you hear a loud CRACK! A burst of light floods your room as a figure pops into existence, dark and silhouetted by the brightness behind it. The light recedes, and the figure crumples to the ground. Floating in the air is a whirring gizmo, filled with turning gears. Strangely enough, your attention is drawn from the gizmo to the person on the ground:


The figure has a familiar sort of shape. You approach, tentatively, and find the spitting image of yourself! The person stirs and speaks.


“I’m you from one week into the future,” your future self croaks. Your future self tries to get up, but sinks down again.


“Oh,” you say.


“I came from the future to tell you…” your temporal clone says in a scratchy voice.


“To tell me what?” you ask. Already, you can see the whispers of a scenario forming in your head…


Future Your slowly says, “To tell you… that the report on intertemporal travel that you were going to write… won’t go as planned at all. Your best-case estimate failed.”


“Oh no!” you say.


Somehow, though, you aren’t surprised…


At this point, what plausible reasons for your failure come to mind?


I hypothesize that the time-travel reframe I provide here for Murphyjitsu engages similar parts of your brain as a premortem, but is 100% more exciting to use. In all seriousness, I think this is a reframe that is easier to grasp compared to the twisted “imagine you’re in the future looking back into the past, which by the way happens to be you in the present” framing normal Murphyjitsu uses.


The actual (non-dramatized) wording of the heuristic, by the way, is, “Imagine that Future You from one week into the future comes back telling you that the plan you are about to embark on will fail: Why?”


Low on Time? Power On!

Often, when I find myself low on time, I feel less compelled to try. This seems sort of like an instance of failing with abandon, where I think something like, “Oh well, I can’t possibly get anything done in the remaining time between event X and event Y”.


And then I find myself doing quite little as a response.


As a result, I’ve decided to internalize the idea that being low on time doesn’t mean I can’t make meaningful progress on my problems.


This is a very Resolve-esque technique. The idea is that even if I have only 5 minutes, that’s enough to get things done. There are lots of useful things I can pack into small time chunks, like thinking, brainstorming, or doing some Quick Focusing.


I’m hoping to combat the sense of apathy / listlessness that creeps in when time draws to a close.


Supercharge Motivation by Propagating Emotional Bonds:

[Disclaimer: I suspect that this isn’t an optimal motivation strategy, and I’m sure there are people who will object to having bonds based on others rather than themselves. That’s okay. I think this technique is effective, I use it, and I’d like to share it. But if you don’t think it’s right for you, feel free to just move along to the next thing.]


CFAR used to teach a skill called Propagating Urges. It’s now been largely subsumed by Internal Double Crux, but I still find Propagating Urges to be a powerful concept.


In short, Propagating Urges hypothesizes that motivation problems are caused because the implicit parts of ourselves don’t see how the boring things we do (e.g. filing taxes) causally relate to things we care about (e.g. not going to jail). The actual technique involves walking through the causal chain in your mind and some visceral imagery every step of the way to get the implicit part of yourself on board.


I’ve taken the same general principle, but I’ve focused it entirely on the relationships I have with other people. If all the parts of me realize that doing something would greatly hurt those I care about, this becomes a stronger motivation than most external incentives.


For example, I walked through an elaborate internal simulation where I wanted to stop doing a Thing. I imagined someone I cared deeply for finding out about my Thing-habit and being absolutely deeply disappointed. I focused on the sheer emotional weight that such disappointment would cause (facial expressions, what they’d feel inside, the whole deal).


I now have a deep injunction against doing the Thing, and all the parts of me are in agreement because we agree that such a Thing would hurt other people and that’s obviously bad.


The basic steps for Propagating Emotional Bonds look like:

  • Figure out what thing you want to do more of or stop doing.

  • Imagine what someone you care about would think or say.

  • Really focus on how visceral that feeling would be.

  • Rehearse the chain of reasoning (“If I do this, then X will feel bad, and I don’t want X to feel bad, so I won’t do it”) a few times.


Take Time in Social Contexts:

Often, in social situations, when people ask me questions, I feel an underlying pressure to answer quickly. It feels like if I don’t answer in the next ten seconds, something’s wrong with me. (School may have contributed to this). I don’t exactly know why, but it just feels like it’s expected.


I also think that being forced to hurry isn’t good for thinking well. As a result, something helpful I’ve found, when someone asks something like “Is that all? Anything else?”, is to Take Time.


My response is something like, “Okay, wait, let me actually take a few minutes.” At which point, I, uh, actually take a few minutes to think things through. After saying this, it feels like it’s now socially permissible for me to take some time thinking.


This has proven useful in several contexts where, had I not Taken Time, I would have forgotten to bring up important things or missed key failure-modes.


Ground Mental Notions in Reality not by Platonics:

One of the proposed reasons that people suck at planning is that we don’t actually think about the details behind our plans. We end up thinking about them in vague black-box-style concepts that hide all the scary unknown unknowns. What we’re left with is just the concept of our task, rather than a deep understanding of what our task entails.


In fact, this seems fairly similar to the “prototype model” that occurs in scope insensitivity.


I find this is especially problematic for tasks which look nothing like their concepts. For example, my mental representation of “doing math” conjures images of great mathematicians, intricate connections, and fantastic concepts like uncountable sets.


Of course, actually doing math looks more like writing stuff on paper, slogging through textbooks, and banging your head on the table.


My brain doesn’t differentiate well between doing a task and the affect associated with the task. Thus I think it can be useful to try and notice when our brains are doing this sort of black-boxing and instead “unpack” the concepts.


This means getting better correspondences between our mental conceptions of tasks and the tasks themselves, so that we can hopefully actually choose better.


3 Conversation Tips:

I often forget what it means to be having a good conversation with someone. I think I miss opportunities to learn from others when talking with them. This is my handy 3-step list of Conversation Tips to get more value out of conversations:


1) "Steal their Magic": Figure out what other people are really good at, and then get inspired by their awesomeness and think of ways you can become more like that. Learn from what other people are doing well.


2) "Find the LCD"/"Intellectually Escalate": Figure out where your intelligence matches theirs, and learn something new. Focus on Actually Trying to bridge those inferential distances. In conversations, this means focusing on the limits of either what you know or what the other person knows.


3) "Convince or Be Convinced”: (This is a John Salvatier idea, and it also follows from the above.) Focus on maximizing your persuasive ability to convince them of something. Or be convinced of something. Either way, focus on updating beliefs, be it your own or the other party’s.


Be The Noodly Appendages of the Superintelligence You Wish To See in the World:

CFAR co-founder Anna Salamon has this awesome reframe similar to IAT which asks, “Say a superintelligence exists and is trying to take over the world. However, you are its only agent. What do you do?”


I’ll admit I haven’t used this one, but it’s super cool and not something I’d thought of, so I’m including it here.

 


Concepts:

Concepts are just things in the world I’ve identified and drawn some boundaries around. They are farthest from the pipeline that goes from ideas to TAPs, as concepts are just ideas. Still, I do think these concepts “bottom out” at some point into practicality, and I think playing around with them could yield interesting results.


Paperspace =/= Mindspace:

I tend to write things down because I want to remember them. Recently, though, I’ve noticed that rather than acting as an extension of my brain, the things I write down seem to get treated as no longer in my own head. As in, if I write something down, it’s not necessarily easier for me to recall it later.


It’s as if by “offloading” the thoughts onto paper, I’ve cleared them out of my brain. This seems suboptimal, because a big reason I write things down is to cement them more deeply within my head.


I can still access the thoughts if I’m asking myself questions like, “What did I write down yesterday?” but only if I’m specifically sorting for things I write down.


The point is, I want stuff I write down on paper to be, not where I store things, but merely a sign of what’s stored inside my brain.


Outreach: Focus on Your Target’s Target:

One interesting idea I got from the CFAR workshop was that of thinking about yourself as a radioactive vampire. Um, I mean, thinking about yourself as a memetic vector for rationality (the vampire thing was an actual metaphor they used, though).


The interesting thing they mentioned was to think, not about who you’re directly influencing, but who your targets themselves influence.


This means that not only do you have to care about the fidelity of your transmission, but you need to think of ways to ensure that your target also does a passable job of passing it on to their friends.


I’ve always thought about outreach / memetics in terms of the people I directly influence, so looking at two degrees of separation is a pretty cool thing I hadn’t thought about in the past.


I guess that if I took this advice to heart, I’d probably have to change the way that I explain things. For example, I might want to try giving more salient examples that can be easily passed on or focusing on getting the intuitions behind the ideas across.


Build in Blank Time:

Professor Barbara Oakley distinguishes between focused and diffuse modes of thinking. Her claim is that time spent in a thoughtless activity allows your brain to continue working on problems without conscious input. This is the basis of diffuse mode.


In my experience, I’ve found that I get interesting ideas or remember important ideas when I’m doing laundry or something else similarly mindless.


I’ve found this to be helpful enough that I’m considering building in “Blank Time” in my schedules.


My intuitions here are something like, “My brain is a thought-generator, and it’s particularly active if I can pay attention to it. But I need to be doing something that doesn’t require much of my executive function to even pay attention to my brain. So maybe having more Blank Time would be good if I want to get more ideas.”


There’s also the additional point that meta-level thinking can’t be done if you’re always in the moment, stuck in a task. This means that, cool ideas aside, if I just want to reorient or survey my current state, Blank Time can be helpful.


The 99/1 Rule: Few of Your Thoughts are Insights:

The 99/1 Rule says that the vast majority of your thoughts every day are pretty boring and that only about one percent of them are insightful.


This was generally true for my life…and then I went to the CFAR workshop and this rule sort of stopped being appropriate. (Other exceptions to this rule were EuroSPARC [now ESPR] and EAG)


Note:

I bulldozed through a bunch of ideas here, some of which could have probably garnered a longer post. I’ll probably explore some of these ideas later on, but if you want to talk more about any one of them, feel free to leave a comment / PM me.

 

Levers, Emotions, and Lazy Evaluators:

lifelonglearner 20 February 2017 11:00PM

Levers, Emotions, and Lazy Evaluators: Post-CFAR 2

[This is a trio of topics following from the first post that all use the idea of ontologies in the mental sense as a bouncing off point. I examine why naming concepts can be helpful, listening to your emotions, and humans as lazy evaluators. I think this post may also be of interest to people here. Posts 3 and 4 are less so, so I'll probably skip those, unless someone expresses interest. Lastly, the below expressed views are my own and don’t reflect CFAR’s in any way.]


Levers:

When I was at the CFAR workshop, someone mentioned that something like 90% of the curriculum was just making up fancy new names for things they already sort of did. This got some laughs, but I think it’s worth exploring why even just naming things can be powerful.


Our minds do lots of things; they carry many thoughts, and we can recall many memories. Some of these phenomena may be more helpful for our goals, and we may want to name them.


When we name a phenomenon, like focusing, we’re essentially drawing a boundary around the thing, highlighting attention on it. We’ve made it conceptually discrete. This transformation, in turn, allows us to more concretely identify which things among the sea of our mental activity correspond to Focusing.


Focusing can then become a concept that floats in our understanding of things our minds can do. We’ve taken a mental action and packaged it into a “thing”. This can be especially helpful if we’ve identified a phenomenon that consists of several steps which usually aren’t found together.


By drawing certain patterns around a thing with a name, we can hopefully help others recognize them and perhaps do the same for other mental motions, which seems to be one more way that we find new rationality techniques.


This then means that we’ve created a new action that is explicitly available to our ontology. This notion of “actions I can take” is what I think forms the idea of levers in our mind. When CFAR teaches a rationality technique, the technique itself seems to be pointing at a sequence of things that happen in our brain. Last post, I mentioned that I think CFAR techniques upgrade people’s mindsets by changing their sense of what is possible.


I think that levers are a core part of this because they give us the feeling of, “Oh wow! That thing I sometimes do has a name! Now I can refer to it and think about it in a much nicer way. I can call it ‘focusing’, rather than ‘that thing I sometimes do when I try to figure out why I’m feeling sad that involves looking into myself’.”


For example, once you understand that a large part of habituation is simply "if-then" loops (ala TAPs, aka Trigger Action Plans), you’ve now not only understood what it means to learn something as a habit, but you’ve internalized the very concept of habituation itself. You’ve gone one meta-level up, and you can now reason about this abstract mental process in a far more explicit way.


Names have power in the same way that abstraction barriers have power in a programming language—they change how you think about the phenomenon itself, and this in turn can affect your behavior.

 

Emotions:

CFAR teaches a class called “Understanding Shoulds”, which is about seeing your “shoulds”, the parts of yourself that feel like obligations, as data about things you might care about. This is a little different from Nate Soares’s Replacing Guilt series, which tries to move past guilt-based motivation.


In further conversations with staff, I’ve seen the even deeper view that all emotions should be considered information.


The basic premise seems to be based off the understanding that different parts of us may need different things to function. Our conscious understanding of our own needs may sometimes be limited. Thus, our implicit emotions (and other S1 processes) can serve as a way to inform ourselves about what we’re missing.


In this way, all emotions seem to be channels where information can be passed on from implicit parts of you to the forefront of “meta-you”. This idea of “emotions as a data trove” is yet another ontology that produces different rationality techniques, as it’s operating on, once again, a mental model that is built out of a different type of abstraction.


Many of the skills based on this ontology focus on communication between different pieces of the self.


I’m very sympathetic to this viewpoint, as it forms the basis of the Internal Double Crux (IDC) technique, one of my favorite CFAR skills. In short, IDC assumes that akrasia-esque problems are caused by a disagreement between different parts of you, some of which might be in the implicit parts of your brain.


By “disagreement”, I mean that some part of you endorses an action for some well-meaning reasons, but some other part of you is against the action and also has justifications. To resolve the problem, IDC has us “dialogue” between the conflicting parts of ourselves, treating both sides as valid. If done right, without “rigging” the dialogue to bias one side, IDC can be a powerful way to source internal motivation for our tasks.


While I do seem to do some communication between my emotions, I haven’t fully integrated them as internal advisors in the IFS sense. I’m not ready to adopt a worldview that might potentially hand over executive control to all the parts of me. Meta-me still deems some of my implicit desires as “foolish”, like the part of me that craves video games, for example. In order to avoid slippery slopes, I have a blanket precommitment on certain things in life.


For the meantime, I’m fine sticking with these precommitments. The modern world is filled with superstimuli, from milkshakes to insight porn (and the normal kind) to mobile games, that can hijack our well-meaning reward systems.


Lastly, I believe that without certain mental prerequisites, some ontologies can be actively harmful. Nate’s Replacing Guilt series can leave people without additional motivation for their actions; guilt can be a useful motivator. Similarly, Nihilism is another example of an ontology that can be crippling unless paired with ideas like humanism.

 

Lazy Evaluators:

In In Defense of the Obvious, I gave a practical argument as to why obvious advice was very good. I brought this point up several times during the workshop, and people seemed to like the point.


While that essay focused on listening to obvious advice, there appears to be a similar thing where merely asking someone, “Did you do all the obvious things?” will often uncover helpful solutions they have yet to do.

 

My current hypothesis for this (apart from “humans are programs that wrote themselves on computers made of meat”, which is a great workshop quote) is that people tend to be lazy evaluators. In programming, lazy evaluation is a strategy where an expression isn’t evaluated until its value is actually needed.
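As a small illustration of the programming side of this analogy (Python generators aren't full-blown lazy evaluation, but they show the deferral; all the names here are made up):

```python
def expensive_question(name):
    # Stand-in for a costly bit of reflection, e.g. "what are multiple ways to do this?"
    print(f"...actually thinking hard about: {name}")
    return f"an answer about {name}"

questions = ["multiple ways to accomplish this", "whether I actually want to do this"]

# Eager evaluation: every question gets thought about immediately.
eager_answers = [expensive_question(q) for q in questions]

# Lazy evaluation: a generator defers each question until something demands its value.
lazy_answers = (expensive_question(q) for q in questions)

# At this point the lazy version has printed nothing. The questions only get
# evaluated if later code iterates over `lazy_answers`, and if nothing ever
# does, they are never asked at all.
```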


It seems like something similar happens in people’s heads, where we simply don’t ask ourselves questions like “What are multiple ways I could accomplish this?” or “Do I actually want to do this thing?” until we need to…except that most of the time, we never need to. Life putters on, whether or not we’re winning at it.


I think this is part of what makes “pair debugging”, a CFAR activity where a group of people try to help one person with their “bugs”, effective. When we have someone else taking an outside view asking us these questions, it may even be the first time we see these questions ourselves.


Therefore, it looks like a helpful skill is to constantly ask ourselves questions and cultivate a sense of curiosity about how things are. Anna Salamon refers to this skill as “boggling”. I think boggling can help with both counteracting lazy evaluation and actually doing obvious actions.


Looking at why obvious advice is obvious, like “What the heck does ‘obvious’ even mean?” can help break the immediate dismissive veneer our brain puts on obvious information.


EX: “If I want to learn more about coding, it probably makes sense to ask some coder friends what good resources are.”


“Nah, that’s so obvious; I should instead just stick to this abstruse book that basically no one’s heard of—wait, I just rejected something that felt obvious.”


“Huh…I wonder why that thought felt obvious…what does it even mean for something to be dubbed ‘obvious’?”


“Well…obvious thoughts seem to have a generally ‘self-evident’ tag on them. If they aren’t outright tautological or circularly defined, then there’s a sense where the obvious things seems to be the shortest paths to the goal. Like, I could fold my clothes or I could build a Rube Goldberg machine to fold my clothes. But the first option seems so much more ‘obvious’…”


“Aside from that, there also seems to be a sense where if I search my brain for ‘obvious’ things, I’m using a ‘faster’ mode of thinking (ala System 1). Aside from favoring simpler solutions, it also seems to be influenced by social norms (what do people ‘typically’ do). And my ‘obvious action generator’ seems to also be built off my understanding of the world, like, I’m thinking about things in terms of causal chains that actually exist in the world. As in, when I’m thinking about ‘obvious’ ways to get a job, for instance, I’m thinking about actions I could take in the real world that might plausibly actually get me there…”


“Whoa…that means that obvious advice is so much more than some sort of self-evident tag. There’s a huge amount of information that’s being compressed when I look at it from the surface…’Obvious’ really means something like ‘that which my brain quickly dismisses because it is simple, complies with social norms, and/or runs off my internal model of how the universe works.”


The goal is to reduce the sort of “acclimation” that happens with obvious advice by peering deeper into it. Ideally, if you’re boggling at your own actions, you can force yourself to evaluate earlier. Otherwise, it can hopefully at least make obvious advice more appealing.


I’ll end with a quote of mine from the workshop:


“You still yet fail to grasp the weight of the Obvious.”


Planning 101: Debiasing and Research

lifelonglearner 03 February 2017 03:01PM

Planning 101: Techniques and Research

<Cross-posted from my blog>

[Epistemic status: Relatively strong. There are numerous studies showing that predictions often become miscalibrated. Overconfidence in itself appears fairly robust, appearing in different situations. The actual mechanism behind the planning fallacy is less certain, though there is evidence for the inside/outside view model. The debiasing techniques are supported, but more data on their effectiveness could be good.]

Humans are often quite overconfident, and perhaps for good reason. Back on the savanna and even some places today, bluffing can be an effective strategy for winning at life. Overconfidence can scare down enemies and avoid direct conflict.

When it comes to making plans, however, overconfidence can really screw us over. You can convince everyone (including yourself) that you’ll finish that report in three days, but it might still really take you a week. Overconfidence can’t intimidate advancing deadlines.

I’m talking, of course, about the planning fallacy, our tendency to make unrealistic predictions and plans that just don’t work out.

Being a true pessimist ain’t easy.

Students are a prime example of victims to the planning fallacy:

First, students were asked to predict when they were 99% sure they’d finish a project. When the researchers followed up with them later, though, only about 45%, less than half of the students, had actually finished by their own predicted times [Buehler, Griffin, Ross, 1995].

Even more striking, students working on their psychology honors theses were asked to predict when they’d finish, “assuming everything went as poor as it possibly could.” Yet, only about 30% of students finished by their own worst-case estimate [Buehler, Griffin, Ross, 1995].

Similar overconfidence was also found in Japanese and Canadian cultures, giving evidence that this is a human (and not US-culture-based) phenomenon. Students continued to make optimistic predictions, even when they knew the task had taken them longer last time [Buehler and Griffin, 2003, Buehler et al., 2003].

As a student myself, though, I don’t mean to just pick on us.

The planning fallacy affects projects across all sectors.

An overview of public transportation projects found that most of them were, on average, 20–45% above the estimated cost. In fact, research has shown that these poor predictions haven’t improved at all in the past 30 years [Flyvbjerg 2006].

And there’s no shortage of anecdotes, from the Scottish Parliament Building, which cost 10 times more than expected, to the Denver International Airport, which took over a year longer and cost several billion more.

When it comes to planning, we suffer from a major disparity between our expectations and reality. This article outlines the research behind why we screw up our predictions and gives three suggested techniques to suck less at planning.

 

The Mechanism:

So what’s going on in our heads when we make these predictions for planning?

On one level, we just don’t expect things to go wrong. Studies have found that we’re biased towards not looking at pessimistic scenarios [Newby-Clark et al., 2000]. We often just assume the best-case scenario when making plans.

Part of the reason may also be due to a memory bias. It seems that we might underestimate how long things take us, even in our memory [Roy, Christenfeld, and McKenzie 2005].

But by far the dominant theory in the field is the idea of an inside view and an outside view [Kahneman and Lovallo 1993]. The inside view is the information you have about your specific project (inside your head). The outside view is what someone else looking at your project (outside of the situation) might say.

Obviously you want to take the Outside View.

 

We seem to use inside view thinking when we make plans, and this leads to our optimistic predictions. Instead of thinking about all the things that might go wrong, we’re focused on how we can help our project go right.

Still, it’s the outside view that can give us better predictions. And it turns out we don’t even need to do any heavy-lifting in statistics to get better predictions. Just asking other people (from the outside) to predict your own performance, or even just walking through your task from a third-person point of view can improve your predictions [Buehler et al., 2010].

Basically, the difference in our predictions seems to depend on whether we’re looking at the problem in our heads (a first-person view) or outside our heads (a third-person view). Whether we’re the “actor” or the “observer” in our minds seems to be a key factor in our planning [Pronin and Ross 2006].


Debiasing Techniques:

I’ll be covering three ways to improve predictions: Murphyjitsu, Reference Class Forecasting (RCF), and Back-planning. In actuality, they’re all pretty much the same thing; all three techniques focus, on some level, on trying to get more of an outside view. So feel free to choose the one you think works best for you (or do all three).

For each technique, I’ll give an overview and cover the steps first and then end with the research that supports it. They might seem deceptively obvious, but do try to keep in mind that obvious advice can still be helpful!

(Remembering to breathe, for example, is obvious, but you should still do it anyway. If you don't want to suffocate.)

 

Murphyjitsu:

“Avoid Obvious Failures”


Almost as good as giving procrastination an ass-kicking.

The name Murphyjitsu comes from the infamous Murphy’s Law: “Anything that can go wrong, will go wrong.” The technique itself is from the Center for Applied Rationality (CFAR), and is designed for “bulletproofing your strategies and plans”.

Here are the basic steps:

  1. Figure out your goal. This is the thing you want to make plans to do.
  2. Write down which specific things you need to get done to make the thing happen. (Make a list.)
  3. Now imagine it’s one week (or month) later, and yet you somehow didn’t manage to get started on your goal. (The visualization part here is important.) Are you surprised?
  4. Why? (What went wrong that got in your way?)
  5. Now imagine you take steps to remove the obstacle from Step 4.
  6. Return to Step 3. Are you still surprised that you’d fail? If so, your plan is probably good enough. (Don’t fool yourself!)
  7. If failure still seems likely, go through Steps 3–6 a few more times until you “problem proof” your plan.
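The steps above are essentially a loop: simulate the failure, check for surprise, patch the plan, repeat. Purely as an illustration (this is my own toy sketch, not anything CFAR distributes), here's that loop written out:

```python
def murphyjitsu(goal: str) -> list[str]:
    """Walk the premortem loop from the steps above; returns the safeguards added."""
    safeguards = []
    while True:
        # Step 3: vividly imagine the future where the goal never happened.
        print(f"\nImagine it's one week later and '{goal}' never got started.")
        surprised = input("Are you surprised? (y/n) ").strip().lower().startswith("y")
        if surprised:
            return safeguards  # Step 6: surprise at failure means the plan is probably good enough
        obstacle = input("Step 4 - what went wrong? ")        # name the failure mode
        fix = input(f"Step 5 - what removes '{obstacle}'? ")  # patch the plan
        safeguards.append(fix)  # Step 7: fold the fix in, then simulate failure again

# Example (interactive): murphyjitsu("exercise every evening this week")
```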

Murphyjitsu is based off a strategy called a “premortem” or “prospective hindsight”, which basically means imagining the project has already failed and “looking backwards” to see what went wrong [Klein 2007].

It turns out that putting ourselves in the future and looking back can help identify more risks, or see where things can go wrong. Prospective hindsight has been shown to increase our predictive power so we can make adjustments to our plans — before they fail [Mitchell et al., 1989, Veinott et al., 2010].

This seems to work well, even if we’re only using our intuitions. While that might seem a little weird at first (“aren’t our intuitions pretty arbitrary?”), research has shown that our intuitions can be a good source of information in situations where experience is helpful [Klein 1999; Kahneman 2011]*.

While a premortem is usually done on an organizational level, Murphyjitsu works for individuals. Still, it’s a useful way to “failure-proof” your plans before you start them that taps into the same internal mechanisms.

Here’s what Murphyjitsu looks like in action:

“First, let’s say I decide to exercise every day. That’ll be my goal (Step 1). But I should also be more specific than that, so it’s easier to tell what “exercising” means. So I decide that I want to go running on odd days for 30 minutes and do strength training on even days for 20 minutes. And I want to do them in the evenings (Step 2).

Now, let’s imagine that it’s now one week later, and I didn’t go exercising at all! What went wrong? (Step 3) The first thing that comes to mind is that I forgot to remind myself, and it just slipped out of my mind (Step 4). Well, what if I set some phone / email reminders? Is that good enough? (Step 5)

Once again, let’s imagine it’s one week later and I made a reminder. But let’s say I still didn’t go exercising. How surprising is this? (Back to Step 3) Hmm, I can see myself getting sore and/or putting other priorities before it…(Step 4). So maybe I’ll also set aside the same time every day, so I can’t easily weasel out (Step 5).

How do I feel now? (Back to Step 3) Well, if once again I imagine it’s one week later and I once again failed, I’d be pretty surprised. My plan has two levels of fail-safes and I do want to exercise anyway. Looks like it’s good! (Done)”
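If it helps to see the loop structure of Steps 3–6 laid out explicitly, here is a minimal toy sketch in Python. It is not a CFAR tool, just an illustration; the prompts and the yes/no check are my own invention.

```python
def murphyjitsu():
    """Toy walkthrough of the Murphyjitsu loop: patch obstacles until failure would surprise you."""
    goal = input("Step 1 - What's your goal? ")
    steps = input("Step 2 - What needs to get done to make it happen? ")
    plan = [f"Goal: {goal}", f"Steps: {steps}"]
    while True:
        answer = input("Step 3 - Imagine it's a week later and you failed. Surprised? (y/n) ")
        if answer.strip().lower().startswith("y"):
            break  # Failure would be surprising, so the plan is probably good enough.
        obstacle = input("Step 4 - What went wrong? ")
        fix = input("Step 5 - What step would remove that obstacle? ")
        plan.append(f"If '{obstacle}' comes up: {fix}")
    print("\n".join(plan))

if __name__ == "__main__":
    murphyjitsu()
```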


Reference Class Forecasting:

“Get Accurate Estimates”


Predicting the future…using the past!

Reference class forecasting (RCF) is all about using the outside view. Our inside views tend to be very optimistic: We will see all the ways that things can go right, but none of the ways things can go wrong. By looking at past history — other people who have tried the same or similar thing as us — we can get a better idea of how long things will really take.

Here are the basic steps:

  1. Figure out what you want to do.
  2. Check your records to see how long it took you last time.
  3. That’s your new prediction.
  4. If you don’t have past information, look up about how long it takes, on average, to do your thing. (This usually looks like Googling “average time to do X”.)**
  5. That’s your new prediction!

Technically, the actual process for reference class forecasting works a little differently. It involves a statistical distribution and some additional calculations, but for most everyday purposes, the above algorithm should work well enough.

In both cases, we’re trying to take an outside view, which we know improves our estimates [Buehler et al., 1994].

When you Google the average time or look at your own data, you’re forming a “reference class”, a group of related actions that can give you info about how long similar projects tend to take. Hence, the name “reference class forecasting”.

Basically, RCF works by looking only at results. This means that we can avoid any potential biases that might have cropped up if we were to think it through. We’re shortcutting right to the data. The rest of it is basic statistics; most people are close to average. So if we have an idea of what the average looks like, we can expect to be pretty close to the average as well [Flyvbjerg 2006; Flyvbjerg 2008].

The main difference between the algorithm above and the standard one is that ours focuses on your own experiences, so the estimate you get tends to be more accurate than an average taken from an entire population.

For example, if it usually takes me about 3 hours to finish homework (I use Toggl to track my time), then I’ll predict that it will take me 3 hours today, too.

It’s obvious that RCF is incredibly simple. It literally just tells you that how long something will take you this time will be very close to how long it took you last time. But that doesn’t mean it’s ineffective! Often, the past is a good benchmark of future performance, and it’s far better than any naive prediction your brain might spit out.
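If you already keep time logs (as in the Toggl example above), the core of this simplified RCF algorithm fits in a few lines of Python. This is just an illustrative sketch with made-up numbers, not the full statistical procedure mentioned earlier:

```python
from statistics import mean

# Hypothetical time log: how long past homework sessions took, in hours.
past_homework_hours = [3.2, 2.8, 3.5, 3.0]

def reference_class_forecast(past_durations, fallback=None):
    """Predict the next duration from the average of past ones.

    With no history, fall back to a population average you looked up
    (e.g. by Googling "average time to do X").
    """
    return mean(past_durations) if past_durations else fallback

print(reference_class_forecast(past_homework_hours))  # ~3.1 hours
```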

RCF + Murphyjitsu Example:

For me, I’ve found a mixture of Reference Class Forecasting and Murphyjitsu to be helpful for reducing overconfidence in my plans.

When starting projects, I will often ask myself, “What were the reasons that I failed last time?” I then make a list of the first three or four “failure-modes” that I can recall. I now make plans to preemptively avoid those past errors.

(This can also be helpful in reverse — asking yourself, “How did I solve a similar difficult problem last time?” when facing a hard problem.)

Here’s an example:

“Say I’m writing a long post (like this one) and I want to know what might go wrong. I’ve done several of these sorts of primers before, so I have a “reference class” of data to draw from. So what were the major reasons I fell behind for those posts?

<Cue thinking>

Hmm, it looks like I would either forget about the project, get distracted, or lose motivation. Sometimes I’d want to do something else instead, or I wouldn’t be very focused.

Okay, great. Now what are some ways that I might be able to “patch” those problems?

Well, I can definitely start by making a priority list of my action items. So I know which things I want to finish first. I can also do short 5-minute planning sessions to make sure I’m actually writing. And I can do some more introspection to try and see what’s up with my motivation.”

 

Back-planning:

“Calibrate Your Intuitions with Reality”

Back-planning involves, as you might expect, planning from the end. Instead of thinking about where we start and how to move forward, we imagine we’re already at our goal and go backwards.

Time-travelling inside your internal universe.

Here are the steps:

  1. Figure out the task you want to get done.
  2. Imagine you’re at the end of your task.
  3. Now move backwards, step-by-step. What is the step right before you finish?
  4. Repeat Step 3 until you get to where you are now.
  5. Write down how long you think the task will now take you.
  6. You now have a detailed plan as well as a better prediction!

The experimental evidence for back-planning suggests that people who plan backwards predict longer times to start and finish their projects.

There are a few interesting hypotheses about why back-planning seems to improve predictions. The general gist of these theories is that back-planning is a weird, counterintuitive way to think about things, which means it disrupts a lot of mental processes that can lead to overconfidence [Wiese et al., 2016].

This means that back-planning can make it harder to fall into the groove of the easy “best-case” planning we default to. Instead, we need to actually look at where things might go wrong. Which is, of course, what we want.

In my own experience, I’ve found that going through a quick back-planning session can help my intuitions “warm up” to my prediction. As in, I’ll get an estimate from RCF, but it still feels “off”. Walking through the plan via back-planning can help all the parts of me understand that it really will probably take longer.

Here’s the back-planning example:

“Right now, I want to host a talk at my school. I know that’s the end goal (Step 1). So the end goal is me actually finishing the talk and taking questions (Step 2). What happens right before that? (Step 3). Well, people would need to actually be in the room. And I would have needed a room.

Is that all? (Step 3). Also, for people to show up, I would have needed publicity. Probably also something on social media. I’d need to publicize at least a week in advance, or else it won’t be common knowledge.

And what about the actual talk? I would have needed slides, maybe memorize my talk. Also, I’d need to figure out what my talk is actually going to be on.

Huh, thinking it through like this, I’d need something like 3 weeks to get it done. One week for the actual slides, one week for publicity (at least), and one week for everything else that might go wrong.

That feels more ‘right’ than my initial estimate of ‘I can do this by next week.’”
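As a minimal sketch of the same idea in code: collect the steps by working backwards from the goal, then reverse them to get a forward plan and a total estimate. The talk-hosting steps and week counts below are invented for illustration.

```python
# Toy back-planning sketch: start from the goal and keep asking
# "what needs to happen right before this?", then reverse the list.
backwards = [
    ("give the talk and take questions", 0.5),  # the end state
    ("publicize for at least a week", 1.0),
    ("book a room", 0.5),
    ("write the slides", 1.0),
    ("pick the topic", 0.5),
]

forward_plan = list(reversed(backwards))
total_weeks = sum(weeks for _, weeks in backwards)

for step, weeks in forward_plan:
    print(f"{step} (~{weeks} weeks)")
print(f"Estimated total: ~{total_weeks} weeks")
```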

 

Experimental Ideas:

Murphyjitsu, Reference Class Forecasting, and Back-planning are the three debiasing techniques that I’m fairly confident work well. This section is far more anecdotal. They’re ideas that I think are useful and interesting, but I don’t have much formal backing for them.

Decouple Predictions From Wishes:

In my own experience, I often find it hard to separate when I want to finish a task versus when I actually think I will finish a task. This is a simple distinction to keep in mind when making predictions, and I think it can help decrease optimism. The most important number, after all, is when I actually think I will finish—it’s what’ll most likely actually happen.

There’s some evidence suggesting that “wishful thinking” could actually be responsible for some poor estimates, but it’s far from definitive [Buehler et al., 1997; Krizan and Windschitl, 2009].

Incentivize Correct Predictions:

Lately, I’ve been using a 4-column chart for my work. I write down the task in Column 1 and how long I think it will take me in Column 2. Then I go and do the task. After I’m done, I write down how long it actually took me in Column 3. Column 4 is the absolute value of Column 2 minus Column 3, or my “calibration score”.

The idea is to minimize my score every day. It’s simple and it’s helped me get a better sense for how long things really take.
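Here’s a minimal sketch of that chart as code, with made-up tasks and times, in case you’d rather keep the log in a script or spreadsheet than on paper:

```python
from dataclasses import dataclass

@dataclass
class Entry:
    task: str                  # Column 1
    predicted_minutes: float   # Column 2
    actual_minutes: float      # Column 3

    @property
    def calibration_score(self) -> float:
        # Column 4: absolute error between prediction and reality.
        return abs(self.predicted_minutes - self.actual_minutes)

log = [
    Entry("Write blog post draft", predicted_minutes=90, actual_minutes=140),
    Entry("Reply to emails", predicted_minutes=20, actual_minutes=25),
]

daily_score = sum(e.calibration_score for e in log)
print(f"Total calibration score to minimize: {daily_score} minutes")
```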

Plan For Failure:

In my schedules, I specifically write in “distraction time”. If you aren’t doing this, you may want to consider it. Most of us (me included) have wandering attention, and I know I’ll lose at least some time to silly things every day.

Double Your Estimate:

I get it. The three debiasing techniques I outlined above can sometimes take too long. In a pinch, you can probably approximate good predictions by just doubling your naive prediction.

Most people tend to be less than 2X overconfident, but I think (pessimistically) sticking to doubling is probably still better than something like 1.5X.

 

Working in Groups:

Obviously, because groups are made of individuals, we’d expect them to be susceptible to the same overconfidence biases I covered earlier. Though some research has shown that groups are less susceptible to bias, more studies have shown that group predictions can be far more optimistic than individual predictions [Wright and Wells, 1985; Buehler et al., 2010]. “Groupthink” is a term used to describe the observed failings of decision making in groups [Janis, 1982].

Groupthink (and hopefully also overconfidence), can be countered by either assigning a “Devil’s Advocate” or engaging in “dialectical inquiry” [Lunenburg 2012]:

We give out more than cookies over here

A Devil’s Advocate is a person who is actively trying to find fault with the group’s plans, looking for holes in reasoning or other objections. It’s suggested that the role rotates, and it’s associated with other positives like improved communication skills.

A dialectical inquiry is where multiple teams try to create the best plan, and then present them. Discussion then happens, and then the group selects the best parts of each plan. It’s a little like building something awesome out of lots of pieces, like a giant robot.

This is absolutely how dialectical inquiry works in practice.

For both strategies, research has shown that they lead to “higher-quality recommendations and assumptions” (compared to not doing them), although it can also reduce group satisfaction and acceptance of the final decision [Schweiger et al. 1986].

(Pretty obvious though; who’d want to keep chatting with someone hell-bent on poking holes in your plan?)

 

Conclusion:

If you’re interested in learning (even) more about the planning fallacy, I’d highly recommend the paper The Planning Fallacy: Cognitive, Motivational, and Social Origins by Roger Buehler, Dale Griffin, and Johanna Peetz. Most of the material in this guide was taken from their paper. Do go check it out! It’s free!

Remember that everyone is overconfident (you and me included!), and that failing to plan is the norm. There are scary unknown unknowns out there that we just don’t know about!

Good luck and happy planning!

 

Footnotes:

* Just don’t go and start buying lottery tickets with your gut. We’re talking about fairly “normal” things like catching a ball, where your intuitions give you accurate predictions about where the ball will land. (Instead of, say, calculating the actual projectile motion equation in your head.)

** In a pinch, you can just use your memory, but studies have shown that our memory tends to be biased too. So as often as possible, try to use actual measurements and numbers from past experience.


Works Cited:

Buehler, Roger, Dale Griffin, and Johanna Peetz. "The Planning Fallacy: Cognitive, Motivational, and Social Origins." Advances in Experimental Social Psychology 43 (2010): 1-62. Social Science Research Network.

Buehler, Roger, Dale Griffin, and Michael Ross. "Exploring the Planning Fallacy: Why People Underestimate their Task Completion Times." Journal of Personality and Social Psychology 67.3 (1994): 366.

Buehler, Roger, Dale Griffin, and Heather MacDonald. "The Role of Motivated Reasoning in Optimistic Time Predictions." Personality and Social Psychology Bulletin 23.3 (1997): 238-247.

Buehler, Roger, Dale Griffin, and Michael Ross. "It’s About Time: Optimistic Predictions in Work and Love." European Review of Social Psychology 6 (1995): 1-32.

Buehler, Roger, et al. "Perspectives on Prediction: Does Third-Person Imagery Improve Task Completion Estimates?" Organizational Behavior and Human Decision Processes 117.1 (2012): 138-149.

Buehler, Roger, Dale Griffin, and Michael Ross. "Inside the Planning Fallacy: The Causes and Consequences of Optimistic Time Predictions." Heuristics and Biases: The Psychology of Intuitive Judgment (2002): 250-270.

Buehler, Roger, and Dale Griffin. "Planning, Personality, and Prediction: The Role of Future Focus in Optimistic Time Predictions." Organizational Behavior and Human Decision Processes 92 (2003): 80-90.

Flyvbjerg, Bent. "From Nobel Prize to Project Management: Getting Risks Right." Project Management Journal 37.3 (2006): 5-15. Social Science Research Network.

Flyvbjerg, Bent. "Curbing Optimism Bias and Strategic Misrepresentation in Planning: Reference Class Forecasting in Practice." European Planning Studies 16.1 (2008): 3-21.

Janis, Irving Lester. Groupthink: Psychological Studies of Policy Decisions and Fiascoes. 1982.

Johnson, Dominic DP, and James H. Fowler. "The Evolution of Overconfidence." Nature 477.7364 (2011): 317-320.

Kahneman, Daniel. Thinking, Fast and Slow. Macmillan, 2011.

Kahneman, Daniel, and Dan Lovallo. "Timid Choices and Bold Forecasts: A Cognitive Perspective on Risk Taking." Management Science 39.1 (1993): 17-31.

Klein, Gary. Sources of Power: How People Make Decisions. MIT Press, 1999.

Klein, Gary. "Performing a Project Premortem." Harvard Business Review 85.9 (2007): 18-19.

Krizan, Zlatan, and Paul D. Windschitl. "Wishful Thinking About the Future: Does Desire Impact Optimism?" Social and Personality Psychology Compass 3.3 (2009): 227-243.

Lunenburg, F. "Devil’s Advocacy and Dialectical Inquiry: Antidotes to Groupthink." International Journal of Scholarly Academic Intellectual Diversity 14 (2012): 1-9.

Mitchell, Deborah J., J. Edward Russo, and Nancy Pennington. "Back to the Future: Temporal Perspective in the Explanation of Events." Journal of Behavioral Decision Making 2.1 (1989): 25-38.

Newby-Clark, Ian R., et al. "People Focus on Optimistic Scenarios and Disregard Pessimistic Scenarios While Predicting Task Completion Times." Journal of Experimental Psychology: Applied 6.3 (2000): 171.

Pronin, Emily, and Lee Ross. "Temporal Differences in Trait Self-Ascription: When the Self Is Seen as an Other." Journal of Personality and Social Psychology 90.2 (2006): 197.

Roy, Michael M., Nicholas JS Christenfeld, and Craig RM McKenzie. "Underestimating the Duration of Future Events: Memory Incorrectly Used or Memory Bias?" Psychological Bulletin 131.5 (2005): 738.

Schweiger, David M., William R. Sandberg, and James W. Ragan. "Group Approaches for Improving Strategic Decision Making: A Comparative Analysis of Dialectical Inquiry, Devil's Advocacy, and Consensus." Academy of Management Journal 29.1 (1986): 51-71.

Veinott, Beth, Gary Klein, and Sterling Wiggins. "Evaluating the Effectiveness of the Premortem Technique on Plan Confidence." Proceedings of the 7th International ISCRAM Conference (May 2010).

Wiese, Jessica, Roger Buehler, and Dale Griffin. "Backward Planning: Effects of Planning Direction on Predictions of Task Completion Time." Judgment and Decision Making 11.2 (2016): 147.

Wright, Edward F., and Gary L. Wells. "Does Group Discussion Attenuate the Dispositional Bias?" Journal of Applied Social Psychology 15.6 (1985): 531-546.

[Link] A Weird American in Trump's Post-Truth America

0 Gleb_Tsipursky 26 January 2017 10:26PM

80,000 Hours: EA and Highly Political Causes

29 The_Jaded_One 26 January 2017 09:44PM

this post is now crossposted to the EA forum

80,000 hours is a well known Effective Altruism organisation which does "in-depth research alongside academics at Oxford into how graduates can make the biggest difference possible with their careers". 

They recently posted a guide to donating which aims, in their words, to (my emphasis)

use evidence and careful reasoning to work out how to best promote the wellbeing of all. To find the highest-impact charities this giving season ... We ... summed up the main recommendations by area below

Looking below, we find a section on the problem area of criminal justice (US-focused). An area where the aim is outlined as follows: (quoting from the Open Philanthropy "problem area" page)

investing in criminal justice policy and practice reforms to substantially reduce incarceration while maintaining public safety. 

Reducing incarceration whilst maintaining public safety seems like a reasonable EA cause, if we interpret "public safety" in a broad sense - that is, keep fewer people in prison whilst still getting almost all of the benefits of incarceration such as deterrent effects, prevention of crime, etc.

So what are the recommended charities? (my emphasis below)

1. Alliance for Safety and Justice 

"The Alliance for Safety and Justice is a US organization that aims to reduce incarceration and racial disparities in incarceration in states across the country, and replace mass incarceration with new safety priorities that prioritize prevention and protect low-income communities of color."  

They promote an article on their site called "black wounds matter", as well as how you can "Apply for VOCA Funding: A Toolkit for Organizations Working With Crime Survivors in Communities of Color and Other Underserved Communities"

2. Cosecha - (note that their url is www.lahuelga.com, which means "the strike" in Spanish) (my emphasis below)

"Cosecha is a group organizing undocumented immigrants in 50-60 cities around the country. Its goal is to build mass popular support for undocumented immigrants, in resistance to incarceration/detention, deportation, denigration of rights, and discrimination. The group has become especially active since the Presidential election, given the immediate threat of mass incarceration and deportation of millions of people."

Cosecha have a footprint in the news, for example this article:

They have the ultimate goal of launching massive civil resistance and non-cooperation to show this country it depends on us ...  if they wage a general strike of five to eight million workers for seven days, we think the economy of this country would not be able to sustain itself 

The article quotes Carlos Saavedra, who is directly mentioned by Open Philanthropy's Chloe Cockburn:

Carlos Saavedra, who leads Cosecha, stands out as an organizer who is devoted to testing and improving his methods, ... Cosecha can do a lot of good to prevent mass deportations and incarceration, I think his work is a good fit for likely readers of this post."

They mention other charities elsewhere on their site and in their writeup on the subject, such as the conservative Center for Criminal Justice Reform, but Cosecha and the Alliance for Safety and Justice are the ones that were chosen as "highest impact" and featured in the guide to donating.

 


 

Sometimes one has to be blunt: 80,000 hours is promoting the financial support of some extremely hot-button political causes, which may not be a good idea. Traditionalists/conservatives and those who are uninitiated to Social Justice ideology might look at The Alliance for Safety and Justice and Cosecha and label them as racists and criminals, and thereby be turned off by Effective Altruism, or even by the rationality movement as a whole. 

There are standard arguments, for example this one by Robin Hanson from 10 years ago, about why it is not smart or "effective" to get into these political tugs-of-war if one wants to make a genuine difference in the world.

One could also argue that the 80,000 hours charities go beyond the usual folly of political tugs-of-war. In addition to supporting extremely political causes, 80,000 hours could be accused of being somewhat intellectually dishonest about what the goal they are trying to further actually is. 

Consider The Alliance for Safety and Justice. 80,000 Hours state that the goal of their work in the criminal justice problem area is to "substantially reduce incarceration while maintaining public safety". This is an abstract goal that has very broad appeal and one that I am sure almost everyone agrees to. But then their more concrete policy in this area is to fund a charity that wants to "reduce racial disparities in incarceration" and "protect low-income communities of color". The latter is significantly different to the former - it isn't even close to being the same thing - and the difference is highly political. One could object that reducing racial disparities in incarceration is merely a means to the end of substantially reducing incarceration while maintaining public safety, since many people in prison in the US are "of color". However this line of argument is a very politicized one and it might be wrong, or at least I don't see strong support for it. "Selectively release people of color and make society safer - endorsed by effective altruists!" struggles against known facts about recidivism rates across races, as well as an objection about the implicit conflation of equality of outcome and equality of opportunity. (And I do not want this to be interpreted as a claim of moral superiority of one race over others - merely a necessary exercise in coming to terms with facts and debunking implicit assumptions.) Males are incarcerated much more than women, so what about reducing gender disparities in incarceration, whilst also maintaining public safety? Again, this is all highly political, laden with politicized implicit assumptions and language.  

Cosecha is worse! They are actively planning potentially illegal activities like helping illegal immigrants evade the law (though IANAL), as well as activities which potentially harm the majority of US citizens such as a seven day nationwide strike whose intent is to damage the economy. Their URL is "The Strike" in Spanish. 

Again, the abstract goal is extremely attractive to almost anyone, but the concrete implementation is highly divisive. If some conservative altruist signed up to financially or morally support the abstract goal of "substantially reducing incarceration while maintaining public safety" and EA organisations that are pursuing that goal without reading the details, and then at a later point they saw the details of Cosecha and The Alliance for Safety and Justice, they would rightly feel cheated. And to the objection that conservative altruists should read the description rather than just the heading - what are we doing writing headings so misleading that you'd feel cheated if you relied on them as summaries of the activity they are meant to summarize? 

 


 

One possibility would be for 80,000 hours to be much more upfront about what they are trying to achieve here - maybe they like left-wing social justice causes, and want to help like-minded people donate money to such causes and help the particular groups who are favored in those circles. There's almost a nod and a wink to this when Chloe Cockburn says (my paraphrase of Saavedra, and emphasis, below)

I think his [A man who wants to lead a general strike of five to eight million workers for seven days so that the economy of the USA would not be able to sustain itself, in order to help illegal immigrants] work is a good fit for likely readers of this post

Alternatively, they could try to reinvigorate the idea that their "criminal justice" problem area is politically neutral and beneficial to everyone; the Open Philanthropy issue writeup talks about "conservative interest in what has traditionally been a solely liberal cause" after all. I would advise considering dropping The Alliance for Safety and Justice and Cosecha if they intend to do this. There may not be politically neutral charities in this area, or there may not be enough high quality conservative charities to present a politically balanced set of recommendations. Setting up a growing donor advised fund or a prize for nonpartisan progress that genuinely intends to benefit everyone including conservatives, people opposed to illegal immigration and people who are not "of color" might be an option to consider.

We could examine 80,000 hours' choice to back these organisations from a more overall-utilitarian/overall-effectiveness point of view, rather than limiting the analysis to the specific problem area. These two charities don't pass the smell test for altruistic consequentialism, pulling sideways on ropes, finding hidden levers that others are ignoring, etc. Is the best thing you can do with your smart EA money helping a charity that wants to get stuck into the culture war about which skin color is most over-represented in prisons? What about a second charity that wants to help people illegally immigrate at a time when immigration is the most divisive political topic in the western world?

Furthermore, Cosecha's plans for a nationwide strike and potential civil disobedience/showdown with Trump & co could push an already volatile situation in the US into something extremely ugly. The vast majority of people in the world (present and future) are not the specific group that Cosecha aims to help, but the set of people who could be harmed by the uglier versions of a violent and calamitous showdown in the US is basically the whole world. That means that even if P(Cosecha persuades Trump to do a U-turn on illegals) is 10 or 100 times greater than P(Cosecha precipitates a violent crisis in the USA), they may still be net-negative from an expected utility point of view. EA doesn't usually fund causes whose outcome distribution is heavily left-skewed so this argument is a bit unusual to have to make, but there it is. 

Not only is Cosecha a cause that is (a) mind-killing and culture war-ish (b) very tangentially related to the actual problem area it is advertised under by 80,000 hours, but it might also (c) be an anti-charity that produces net disutility (in expectation) in the form of a higher probability of a US civil war with money that you donate to it. 

Back on the topic of criminal justice and incarceration: opposition to reform often comes from conservative voters and politicians, so it might seem unlikely to a careful thinker that extra money on the left-wing side is going to be highly effective. Some intellectual judo is required; make conservatives think that it was their idea all along. So promoting the Center for Criminal Justice Reform sounds like the kind of smart, against-the-grain idea that might be highly effective! Well done, Open Philanthropy! Also in favor of this org: they don't copiously mention which races or person-categories they think are most important in their articles about criminal justice reform, the only culture war item I could find on them is the word "conservative" (and given the intellectual judo argument above, this counts as a plus), and they're not planning a national strike or other action with a heavy tail risk. But that's the one that didn't make the cut for the 80,000 hours guide to donating!

The fact that they let Cosecha (and to a lesser extent The Alliance for Safety and Justice) through reduces my confidence in 80,000 hours and the EA movement as a whole. Who thought it would be a good idea to get EA into the culture war with these causes, and also thought that they were plausibly among the most effective things you can do with money? Are they taking effectiveness seriously? What does the political diversity of meetings at 80,000 hours look like? Were there no conservative altruists present in discussions surrounding The Alliance for Safety and Justice and Cosecha, and the promotion of them as "beneficial for everyone" and "effective"? 

Before we finish, I want to emphasize that this post is not intended to start an object-level discussion about which race, gender, political movement or sexual orientation is cooler, and I would encourage moderators to temp-ban people who try to have that kind of argument in the comments of this post.

I also want to emphasize that criticism of professional altruists is a necessary evil; in an ideal world the only thing I would ever want to say to people who dedicate their lives to helping others (Chloe Cockburn in particular, since I mentioned her name above)  is "thank you, you're amazing". Other than that, comments and criticism are welcome, especially anything pointing out any inaccuracies or misunderstandings in this post. Comments from anyone involved in 80,000 hours or Open Philanthropy are welcome. 

First impressions...

7 ArisC 24 January 2017 03:14PM

... of LW: a while ago, a former boss and friend of mine said that rationality is irrational because you never have sufficient computational power to evaluate everything rationally. I thought he was missing the point - but after two posts on LW, I am inclined to agree with him.

It's kind of funny - every post gets broken down into its tiniest constituents, and these get overanalysed and then people go on tangents only marginally relevant to the intent of the original article.

This would be fine if the original questions of the post were answered; but when I asked for metrics to evaluate a presidency, few people actually provided any - most started debating the validity of metrics, and one subthread went off to discuss the appropriateness of the term "gender equality".

I am new here, and I don't want to be overly critical of a culture I do not yet understand. But I just want to point out - rationality is a great tool to solve problems; if it becomes overly abstract, it kind of misses its point I think.

Instrumental Rationality: Overriding Defaults

2 lifelonglearner 20 January 2017 05:14AM

[I'd previously posted this essay as a link. From now on, I'll be cross-posting blog posts here instead of linking them, to keep the discussions LW central. This is the first in an in-progress sequence of articles that'll focus on identifying instrumental rationality techniques and cataloging my attempt to integrate them into my life with examples and insight from habit research.]

[Epistemic Status: Pretty sure. The stuff on habits being situation-response links seems fairly robust. I'll be writing something later with the actual research. I'm basically just retooling existing theory into an optimizational framework for improving life.]

 

I’m interested in how rationality can help us make better decisions.

              Many of these decisions seem to involve split-second choices where it’s hard to sit down and search a handbook for the relevant bits of information—you want to quickly react in the correct way, else the moment passes and you’ve lost. On a very general level, it seems to be about reacting in the right way once the situation provides a cue.

              Consider these situation-reaction pairs:

  • You are having an argument with someone. As you begin to notice the signs of yourself getting heated, you remember to calm down and talk civilly. Maybe also some deep breaths.
  • You are giving yourself a deadline or making a schedule for a task, and you write down the time you expect to finish. Quickly, though, you remember to actually check if it took you that long last time, and you adjust accordingly.
  • You feel yourself slipping towards doing something some part of you doesn’t want to do. Say you are reneging on a previous commitment. As you give in to temptation, you remember to pause and really let the two sides of yourself communicate.
  • You think about doing something, but you feel aversive / flinch-y to it. As you shy away from the mental pain, rather than just quickly thinking about something else, you also feel curious as to why you feel that way. You query your brain and try to pick apart the “ugh” feeling.

Two things seem key to the above scenarios:

One, each situation above involves taking an action that is different from our keyed-in defaults.

Two, the situation-reaction pair paradigm is pretty much CFAR’s Trigger Action Plan (TAP) model, paired with a multi-step plan.

Also, knowing about biases isn’t enough to make good decisions. Even memorizing a mantra like “Notice signs of aversion and query them!” probably isn’t going to be clear enough to be translated into something actionable. It sounds nice enough on the conceptual level, but when, in the moment, you remember such a mantra, you still need to figure out how to “notice signs of aversion and query them”.

What we want is a series of explicit steps that turn the abstract mantra into small, actionable steps. Then, we want to quickly deploy the steps at the first sign of the situation we’re looking out for, like a new cached response.

This looks like a problem that a combination of focused habit-building and a breakdown of the 5-second level can help solve.

In short, the goal looks to be to combine triggers with clear algorithms to quickly optimize in the moment. Reference class information from habit studies can also help give good estimates of how long the whole process will take to internalize (on average 66 days, according to Lally et al.).

But these Trigger Action Plan-type plans don’t seem to directly cover the willpower related problems with akrasia.

Sure, TAPs can help alert you to the presence of an internal problem, like in the above example where you notice aversion. And the actual internal conversation can probably be operationalized to some extent, like how CFAR has described the process of Double Crux.

But most of the Overriding Default Habit actions seem to be ones I’d be happy to do anytime—I just need a reminder—whereas akrasia-related problems are centrally related to me trying to debug my motivational system. For that reason, I think it helps to separate the two. Also, it makes the outside-seeming TAP algorithms complementary, rather than at odds, with the inside-seeming internal debugging techniques.

Loosely speaking, then, I think it still makes quite a bit of sense to divide the things rationality helps with into two categories:

  • Overriding Default Habits:

These are the situation-reaction pairs I’ve covered above. Here, you’re substituting a modified action instead of your “default action”. But the cue serves as mainly a reminder/trigger. It’s less about diagnosing internal disagreement.

  • Akrasia / Willpower Problems:

Here we’re talking about problems that might require you to precommit (although precommitment might not be all you need to do), perhaps because of decision instability. The “action-intention gap” caused by akrasia, where you (sort of) want to do something but don’t end up doing it, also goes in here.

Still, it’s easy to point to lots of other things that fall in the bounds of rationality that my approach doesn’t cover: epistemology, meta-levels, VNM rationality, and many other concepts are conspicuously absent. Part of this is because I’ve been focusing on instrumental rationality, while a lot of those ideas are more in the epistemic camp.

Ideas like meta-levels do seem to have some place in informing other ideas and skills. Even as declarative knowledge, they do chain together in a way that results in useful real world heuristics.  Meta-levels, for example, can help you keep track of the ultimate direction in a conversation. Then, it can help you table conversations that don’t seem immediately useful/relevant and not get sucked into the object-level discussion.

At some point, useful information about how the world works should actually help you make better decisions in the real world. For an especially pragmatic approach, it may be useful to ask yourself, each time you learn something new, “What do I see myself doing as a result of learning this information?”

There’s definitely more to mine from the related fields of learning theory, habits, and debiasing, but I think I’ll have more than enough skills to practice if I just focus on the immediately practical ones.

 

 

[Link] Dominic Cummings: how the Brexit referendum was won

16 The_Jaded_One 12 January 2017 09:26PM

[Link] Dominic Cummings: how the Brexit referendum was won

1 The_Jaded_One 12 January 2017 07:26PM

[Link] Rationality 101 videotaped presentation with link to slides in description (from our LessWrong meetup introductory event)

0 Gleb_Tsipursky 11 January 2017 07:07PM

Rationality Considered Harmful (In Politics)

9 The_Jaded_One 08 January 2017 10:36AM

Why you should be very careful about trying to openly seek truth in any political discussion


1. Rationality considered harmful for Scott Aaronson in the great gender debate

In 2015, complexity theorist and rationalist Scott Aaronson was foolhardy enough to step into the Gender Politics war on his blog with a comment stating that the extreme feminism he had bought into made him hate himself and seek ways to chemically castrate himself. The feminist blogosphere got hold of this and crucified him for it, and he has written a few followup blog posts about it. Recently I saw this comment by him on his blog:

As the comment 171 affair blew up last year, one of my female colleagues in quantum computing remarked to me that the real issue had nothing to do with gender politics; it was really just about the commitment to truth regardless of the social costs—a quality that many of the people attacking me (who were overwhelmingly from outside the hard sciences) had perhaps never encountered before in their lives. That remark cheered me more than anything else at the time

 

2. Rationality considered harmful for Sam Harris in the islamophobia war

I recently heard a very angry, exasperated 2 hour podcast by the new atheist and political commentator Sam Harris about how badly he has been straw-manned, misrepresented and trash talked by his intellectual rivals (who he collectively refers to as the "regressive left"). Sam Harris likes to tackle hard questions such as when torture is justified, which religions are more or less harmful than others, defence of freedom of speech, etc. Several times, Harris goes to the meta-level and sees clearly what is happening:

Rather than a searching and beautiful exercise in human reason to have conversations on these topics [ethics of torture, military intervention, Islam, etc], people are making it just politically so toxic, reputationally so toxic to even raise these issues that smart people, smarter than me, are smart enough not to go near these topics

Everyone on the left at the moment seems to be a mind reader.. no matter how much you try to take their foot out of your mouth, the mere effort itself is going to be counted against you - you're someone who's in denial, or you don't even understand how racist you are, etc

 

3. Rationality considered harmful when talking to your left-wing friends about genetic modification

In the SlateStarCodex comments, I posted a complaint that many left-wing people were responding very personally (and negatively) to my political views. 

One long term friend openly and pointedly asked whether we should still be friends over the subject of eugenics and genetic engineering, for example altering the human germ-line via genetic engineering to permanently cure a genetic disease. This friend responded to a rational argument about why some modifications of the human germ line may in fact be a good thing by saying that "(s)he was beginning to wonder whether we should still be friends". 

A large comment thread ensued, but the best comment I got was this one:

One of the useful things I have found when confused by something my brain does is to ask what it is *for*. For example: I get angry, the anger is counterproductive, but recognizing that doesn’t make it go away. What is anger *for*? Maybe it is to cause me to plausibly signal violence by making my body ready for violence or some such.

Similarly, when I ask myself what moral/political discourse among friends is *for* I get back something like “signal what sort of ally you would be/broadcast what sort of people you want to ally with.” This makes disagreements more sensible. They are trying to signal things about distribution of resources, I am trying to signal things about truth value, others are trying to signal things about what the tribe should hold sacred etc. Feeling strong emotions is just a way of signaling strong precommitments to these positions (i.e. I will follow the morality I am signaling now because I will be wracked by guilt if I do not. I am a reliable/predictable ally.) They aren’t mad at your positions. They are mad that you are signaling that you would defect when push came to shove about things they think are important.

Let me repeat that last one: moral/political discourse among friends is for “signalling what sort of ally you would be/broadcast what sort of people you want to ally with”. Moral/political discourse probably activates specially evolved brainware in human beings; that brainware has a purpose and it isn't truthseeking. Politics is not about policy.

 

4. Takeaways

This post is already getting too long so I deleted the section on lessons to be learned, but if there is interest I'll do a followup. Let me know what you think in the comments!

[Link] Applied Rationality Exercises

2 SquirrelInHell 07 January 2017 06:13PM

Actually Practicing Rationality and the 5-Second Level

5 lifelonglearner 06 January 2017 06:50AM

[I first posted this as a link to my blog post, but I'm reposting as a focused article here that trims some fat of the original post, which was less accessible]


I think a lot about heuristics and biases, and I admit that many of my ideas on rationality and debiasing get lost in the sea of my own thoughts.  They’re accessible, if I’m specifically thinking about rationality-esque things, but often invisible otherwise.  

That seems highly sub-optimal, considering that the whole point of having usable mental models isn’t to write fancy posts about them, but to, you know, actually use them.

To that end, I’ve been thinking about finding some sort of systematic way to integrate all of these ideas into my actual life.  

(If you’re curious, here’s the actual picture of what my internal “concept-verse” (w/ associated LW and CFAR memes) looks like)

 

[Image: MLU Mind Map v1.png (open the image in a new tab for all the details)]

So I have all of these ideas, all of which look really great on paper and in thought experiments.  Some of them even have some sort of experimental backing.  Given this, how do I put them together into a kind of coherent notion?

Equivalently, what does it look like if I successfully implement these mental models?  What sorts of changes might I expect to see?  Then, knowing the end product, what kind of process can get me there?

One way of looking at it would be to say that if I implemented these techniques well, then I’d be better able to tackle my goals and get things done.  Maybe my productivity would go up.  That sort of makes sense.  But this tells us nothing about how I’d actually go about using such skills.  

We want to know how to implement these skills and then actually utilize them.

Yudkowsky gives a highly useful abstraction when he talks about the five-second level.  He gives some great tips on breaking down mental techniques into their component mental motions.  It’s a step-by-step approach that really goes into the details of what it feels like to undergo one of the LessWrong epistemological techniques.  We’d like our mental techniques to be actual heuristics that we can use in the moment, so having an in-depth breakdown makes sense.

Here’s my attempt at a 5-second-level breakdown for Going Meta, or "popping" out of one's head to stay mindful of the moment:

  1. Notice the feeling that you are being mentally “dragged” towards continuing an action.
    1. (It can feel like an urge, or your mind automatically making a plan to do something.  Notice your brain simulating you taking an action without much conscious input.)
  2. Remember that you have a 5-second-level series of steps to do something about it.
  3. Feel aversive towards continuing the loop.  Mentally shudder at the part of you that tries to continue.
  4. Close your eyes.  Take in a breath.
  5. Think about what 1-second action you could take to instantly cut off the stimulus from whatever loop you’re stuck in. (EX: Turning off the display, closing the window, moving to somewhere else).
  6. Tense your muscles and clench, actually doing said action.
  7. Run a search through your head, looking for an action labeled “productive”.  Try to remember things you’ve told yourself you “should probably do” lately.  
    1. (If you can’t find anything, pattern-match to find something that seems “productive-ish”.)
  8. Take note of what time it is.  Write it down.
  9. Do the new thing.  Finish.
  10. Note the end time.  Calculate how long you did work.

Next, the other part is actually accessing the heuristic in the situations where you want it.  We want it to be habitual.

After doing some quick searches on the existing research on habits, it appears that many of the links go to Charles Duhigg, author of The Power of Habit, or B J Fogg of Tiny Habits. Both models focus on two things: identifying the Thing you want to do, and then setting triggers so you actually do it.  (There’s some similarity to CFAR’s Trigger Action Plans.)  

B J’s approach focuses on scaffolding new habits into existing routines, like brushing your teeth, which are already automatic.  Duhigg appears to be focused more on reinforcement and rewards, with several nods to Skinner.  CFAR views actions as self-reinforcing, so the reward isn’t even necessary— they see repetition as building automation.

Overlearning the material also seems to be useful in some contexts, for skills like acquiring procedural knowledge.  And mental notions do seem to be more like procedural knowledge.

For these mental skills specifically, we’d want them to go off regardless of the time of day, so anchoring them to an existing routine might not be best.  Having them trigger in response to an internal state (EX: “When I notice myself being ‘dragged’ into a spiral, or automatically making plans to do a thing”) may be more useful.


(Follow-up post forthcoming on concretely trying to apply habit research to implementing heuristics.)

 

 

 

[Link] Rationality 101 (An Intro Post to the Rationalist-Sphere for Friends: Kahneman, LW, etc.)

9 lifelonglearner 17 December 2016 10:13PM

[Link] Ozy's Thoughts on CFAR's Mission Statement

2 Raemon 14 December 2016 04:25PM

[Link] Take the Rationality Test to determine your rational thinking style

1 Gunnar_Zarncke 09 December 2016 11:10PM

Measuring the Sanity Waterline

4 moridinamael 06 December 2016 08:38PM

I've always appreciated the motto, "Raising the sanity waterline." Intentionally raising the ambient level of rationality in our civilization strikes me as a very inspiring and important goal.

It occurred to me some time ago that the "sanity waterline" could be more than just a metaphor, that it could be quantified. What gets measured gets managed. If we have metrics to aim at, we can talk concretely about strategies to effectively promulgate rationality by improving those metrics. A "rationality intervention" that effectively improves a targeted metric can be said to be effective.

It is relatively easy to concoct or discover second-order metrics. You would expect a variety of metrics to respond to the state of ambient sanity. For example, I would expect that, all things being equal, preventable deaths should decrease when overall sanity increases, because a sane society acts to effectively prevent the kinds of things that lead to preventable deaths. But of course other factors may also cause these contingent measures to fluctuate whichever way, so it's important to remember that these are only indirect measures of sanity.

The UN collects a lot of different types of data. Perusing their database, it becomes obvious that there are a lot of things that are probably worth caring about but which have only a very indirect relationship with what we could call "sanity". For example, one imagines that GDP would increase under conditions of high sanity, but that'd be a pretty noisy measure.

Take five minutes to think about how one might measure global sanity, and maybe brainstorm some potential metrics. Part of the prompt, of course, is to consider what we could mean by "sanity" in the first place.

~~~ THINK ABOUT THE PROBLEM FOR FIVE MINUTES ~~~

This is my first pass at brainstorming metrics which may more-or-less directly indicate the level of civilizational sanity:

  • (+) Literacy rate
  • (+) Enrollment rates in primary/secondary/tertiary education
  • (-) Deaths due to preventable disease
  • (-) QALYs lost due to preventable causes
  • (+) Median level of awareness about world events
  • (-) Religiosity rate
  • (-) Fundamentalist religiosity rate
  • (-) Per-capita spending on medical treatments that have not been proven to work
  • (-) Per-capita spending on medical treatments that have been proven not to work
  • (-) Adolescent fertility rate
  • (+) Human development index
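To make "what gets measured gets managed" a little more concrete, here is a toy sketch of how metrics like these might be rolled into a single tracked number. Every metric value and weight below is invented; the only point is that signed, weighted indicators can be combined and followed over time.

```python
# Toy "sanity index": combine normalized metrics with signed weights.
# Values and weights are made up for illustration only.
metrics = {
    # name: (value on a 0-1 scale, weight; negative weight = lower is saner)
    "literacy_rate":              (0.86, +1.0),
    "preventable_death_rate":     (0.12, -1.5),
    "fundamentalist_religiosity": (0.20, -1.0),
    "tertiary_enrollment_rate":   (0.40, +0.5),
}

sanity_index = sum(value * weight for value, weight in metrics.values())
print(f"Toy sanity index: {sanity_index:.2f}")
```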

It's potentially more productive (and probably more practically difficult) to talk concretely about how best to improve one or two of these metrics via specific rationality interventions, than it is to talk about popularizing abstract rationality concepts.

Sidebar: The CFAR approach may yield something like "trickle down rationality", where the top 0.0000001% of rational people are selected and taught to be even more rational, and maybe eventually good thinking habits will infect everybody in the world from the top down. But I wouldn't bet on that being the most efficient path to raising the global sanity waterline.

As to the question of the meaning of "sanity", it seems to me that this indicates a certain basic package of rationality.

In Eliezer's original post on the topic, he seems to suggest a platform that boils down to a comprehensive embrace of probability-based reasoning and reductionism, with enough caveats and asterisks applied to that summary that you might as well go back and read his original post to get his full point. The idea was that with a high enough sanity waterline, obvious irrationalities like religion would eventually "go underwater" and cease to be viable. I see no problem with any of the "curricula" Eliezer lists in his post.

It has become popular within the rationalsphere to push back against reductionism, positivism, Bayesianism, etc. While such critiques of "extreme rationality" have an important place in the discourse, I think for the sake of this discussion, we should remember that the median human being really would benefit from more rationality in their thinking, and that human societies would benefit from having more rational citizens. Maybe we can all agree on that, even if we continue to disagree on, e.g., the finer points of positivism.

"Sanity" shouldn't require dogmatic adherence to a particular description of rationality, but it must include at least a basic inoculation of rationality to be worthy of the name. The type of sanity that I would advocate for promoting is this more "basic" kind, where religion ends up underwater, but people are still socially allowed to be contrarian in certain regards. After all, a sane society is aware of the power of conformity, and should actively promote some level of contrarianism within its population to promote a diversity of ideas and therefor avoid letting itself become stuck on local maxima.

Epistemic Effort

29 Raemon 29 November 2016 04:08PM

Epistemic Effort: Thought seriously for 5 minutes about it. Thought a bit about how to test it empirically. Spelled out my model a little bit. I'm >80% confident this is worth trying and seeing what happens. Spent 45 min writing post.

I've been pleased to see "Epistemic Status" hit a critical mass of adoption - I think it's a good habit for us to have. In addition to letting you know how seriously to take an individual post, it sends a signal about what sort of discussion you want to have, and helps remind other people to think about their own thinking.

I have a suggestion for an evolution of it - "Epistemic Effort" instead of status. Instead of "how confident you are", it's more of a measure of "what steps did you actually take to make sure this was accurate?" with some examples including:

  • Thought about it musingly
  • Made a 5 minute timer and thought seriously about possible flaws or refinements
  • Had a conversation with other people you epistemically respect and who helped refine it
  • Thought about how to do an empirical test
  • Thought about how to build a model that would let you make predictions about the thing
  • Did some kind of empirical test
  • Did a review of relevant literature
  • Ran a Randomized Control Trial
[Edit: the intention with these examples is for it to start with things that are fairly easy to do to get people in the habit of thinking about how to think better, but to have it quickly escalate to "empirical tests, hard to fake evidence and exposure to falsifiability"]

A few reasons I think this is worth doing (most of these reasons are "things that seem likely to me" but which I haven't made any formal effort to test - they come from some background in game design and reading some books on habit formation, most of which weren't very well cited):
  • People are more likely to put effort into being rational if there's a relatively straightforward, understandable path to do so
  • People are more likely to put effort into being rational if they see other people doing it
  • People are more likely to put effort into being rational if they are rewarded (socially or otherwise) for doing so.
  • It's not obvious that people will get _especially_ socially rewarded for doing something like "Epistemic Effort" (or "Epistemic Status") but there are mild social rewards just for doing something you see other people doing, and a mild personal reward simply for doing something you believe to be virtuous (I wanted to say "dopamine" reward but then realized I honestly don't know if that's the mechanism, but "small internal brain happy feeling")
  • Less Wrong etc is a more valuable project if more people involved are putting more effort into thinking and communicating "rationally" (i.e. making an effort to make sure their beliefs align with the truth, and making sure to communicate so other people's beliefs align with the truth)
  • People range in their ability / time to put a lot of epistemic effort into things, but if there are easily achievable, well established "low end" efforts that are easy to remember and do, this reduces the barrier for newcomers to start building good habits. Having a nice range of recommended actions can provide a pseudo-gamified structure where there's always another slightly harder step available to you.
  • In the process of writing this very post, I actually went from planning a quick, 2 paragraph post to the current version, when I realized I should really eat my own dogfood and make a minimal effort to increase my epistemic effort here. I didn't have that much time so I did a couple simpler techniques. But even that I think provided a lot of value.
Results of thinking about it for 5 minutes.

  • It occurred to me that explicitly demonstrating the results of putting epistemic effort into something might be motivational both for me and for anyone else thinking about doing this, hence this entire section. (This is sort of stream of conscious-y because I didn't want to force myself to do so much that I ended up going 'ugh I don't have time for this right now I'll do it later.')
  • One failure mode is that people end up putting minimal, token effort into things (i.e. randomly trying something on a couple of double-blinded people and calling it a Randomized Control Trial).
  • Another is that people might end up defaulting to whatever the "common" sample efforts are, instead of thinking more creatively about how to refine their ideas. I think the benefit of providing a clear path to people who weren't thinking about this at all outweighs the risk that some people end up being less agenty about their epistemology, but it seems like something to be aware of.
  • I don't think it's worth the effort to run a "serious" empirical test of this, but I do think it'd be worth the effort, if a number of people started doing this on their posts, to run a followup informal survey asking "did you do this? Did it work out for you? Do you have feedback."
  • A neat nice-to-have, if people actually started adopting this and it proved useful, might be for it to automatically appear at the top of new posts, along with a link to a wiki entry that explained what the deal was.

Next actions, if you found this post persuasive:


Next time you're writing any kind of post intended to communicate an idea (whether on Less Wrong, Tumblr or Facebook), try adding "Epistemic Effort: " to the beginning of it. If it was intended to be a quick, lightweight post, just write it in its quick, lightweight form.

After the quick, lightweight post is complete, think about whether it'd be worth doing something as simple as "set a 5 minute timer and think about how to refine/refute the idea". If not, just write "thought about it musingly" after "Epistemic Effort". If so, start thinking about it more seriously and see where it leads.

While thinking about it for 5 minutes, some questions worth asking yourself:
  • If this were wrong, how would I know?
  • What actually led me to believe this was a good idea? Can I spell that out? In how much detail?
  • Where might I check to see if this idea has already been tried/discussed?
  • What pieces of the idea might I peel away or refine to make it stronger? Are there individual premises I might be wrong about? Do they invalidate the idea? Does removing them lead to a different idea?

[Link] Video using humor to spread rationality

-8 Gleb_Tsipursky 23 November 2016 02:18AM

[Link] Irrationality is the worst problem in politics

-14 Gleb_Tsipursky 21 November 2016 04:53PM

[Link] Major Life Course Change: Making Politics Less Irrational

-8 Gleb_Tsipursky 11 November 2016 03:30AM

[Link] Raising the sanity waterline in politics

-15 Gleb_Tsipursky 08 November 2016 04:10PM

[Link] Voting is like donating hundreds of thousands to charity

-6 Gleb_Tsipursky 02 November 2016 09:22PM

[Link] Trying to make politics less irrational by cognitive bias-checking the US presidential debates

-6 Gleb_Tsipursky 22 October 2016 02:32AM

June Outreach Thread

-7 Gleb_Tsipursky 06 June 2016 01:47PM

Please share about any outreach that you have done to convey rationality and effective altruism-themed ideas broadly, whether recent or not, which you have not yet shared on previous Outreach threads. The goal of having this thread is to organize information about outreach and provide community support and recognition for raising the sanity waterline, a form of cognitive altruism that contributes to creating a flourishing world. Likewise, doing so can help inspire others to emulate some aspects of these good deeds through social proof and network effects.

Review and Thoughts on Current Version of CFAR Workshop

11 Gleb_Tsipursky 06 June 2016 01:44PM

Outline: I will discuss my background and how I prepared for the workshop (and how I would prepare differently with hindsight); my experience at the CFAR workshop itself (and what I would do differently given the chance); my take-aways and what I am doing to integrate CFAR strategies into my life; and finally my assessment of the benefits and what others who attend the workshop might expect to get out of it.


 

Acknowledgments: Thanks to fellow CFAR alumni and CFAR staff for feedback on earlier versions of this post


 

Introduction

 

Many aspiring rationalists have heard about the Center for Applied Rationality, an organization devoted to teaching applied rationality skills that help people improve their thinking, feeling, and behavior patterns. This nonprofit does so primarily through its intense workshops, and is funded by donations and workshop revenue. It fulfills its social mission by conducting rationality research and by giving discounted or free workshops to people its staff judge likely to help make the world a better place, mainly those associated with various Effective Altruist cause areas, especially existential risk.

 

To be fully transparent: even before attending the workshop, I already had a strong belief that CFAR is a great organization and have been a monthly donor to CFAR for years. So keep that in mind as you read my description of my experience (you can become a donor here).


Preparation

 

First, some background about myself, so you know where I’m coming from in attending the workshop. I’m a professor specializing in the intersection of history, psychology, behavioral economics, sociology, and cognitive neuroscience. I discovered the rationality movement several years ago through a combination of my research and attending a LessWrong meetup in Columbus, OH, and so come from a background of both academic and LW-style rationality. Since discovering the movement, I have become an activist in the movement as the President of Intentional Insights, a nonprofit devoted to popularizing rationality and effective altruism (see here for our EA work). So I came to the workshop with some training and knowledge of rationality, including some CFAR techniques.

 

To help myself prepare for the workshop, I reviewed existing posts about CFAR materials, while being careful not to assume that the actual techniques would match their descriptions in those posts.

 

I also delayed a number of tasks until after the workshop, tying up loose ends. In retrospect, I wish I had not left myself ongoing tasks to do during the workshop. As part of my leadership of InIn, I coordinate about 50ish volunteers, and I wish I had handed those responsibilities to someone else for the duration of the workshop.

 

Before the workshop, I worked intensely on finishing up some projects. In retrospect, it would have been better to get some rest and come to the workshop as fresh as possible.

 

There were some communication snafus over logistics details before the workshop. It all worked out in the end, but in retrospect I would tell myself to get the logistics hammered out in advance, so as not to experience anxiety about how to get there.


Experience

 

The classes were well put together, had interesting examples, and provided useful techniques. For what it's worth, my experience was that reading about the techniques in advance was not harmful, but the versions taught in the CFAR classes were quite a bit better than the existing posts about them, so don't assume you can get the same benefit from reading posts as from attending the workshop. So while I was already aware of the techniques, the classes clearly presented more refined versions - maybe the posts suffer from a "broken telephone" effect, or maybe CFAR has optimized the techniques over previous workshops; I'm not sure. I was glad to learn that CFAR considers the workshop they gave us in May satisfactory enough to scale up their workshops, while still improving the content over time.

 

Just as useful as the classes were the conversations held between and after the official classes. Talking through the techniques with fellow aspiring rationalists, and seeing how they were thinking about applying them to their lives, helped spark ideas about how to apply them to mine. The latter half of the CFAR workshop was especially great, as it focused on pairing people off to help each other figure out how to apply CFAR techniques and address various problems in their lives. It was especially helpful to have conversations with CFAR staff and trained volunteers, of whom there were plenty - probably about 20 volunteers/staff for the 50ish workshop attendees.

 

Another super-helpful aspect of the conversations was networking and community building. This may have been more useful to some participants than others, so YMMV. As an activist in the movement, I talked with many folks at the CFAR workshop about promoting EA and rationality to a broad audience. I was happy to introduce some people to EA; my most positive conversation there was encouraging someone to switch his x-risk efforts from nuclear disarmament to AI safety research as a means of addressing long/medium-term risk, and to promoting rationality as a means of addressing short/medium-term risk. Others who were already familiar with EA were interested in ways of promoting it broadly, while some aspiring rationalists expressed enthusiasm about becoming rationality communicators.

 

Looking back at my experience, I wish I had been more aware of the benefits of these conversations. I went to sleep early the first couple of nights; in retrospect I would have taken supplements to stay awake and keep the conversations going instead.


Take-Aways and Integration

 

The aspects of the workshop that I think will help me most were what CFAR staff called "5-second" strategies - brief tactics that can be executed in 5 seconds or less to address various problems. The material I was already familiar with - Trigger Action Plans, Goal Factoring, Murphyjitsu, Pre-Hindsight - takes time to learn and practice, often with pen and paper as part of the work. With sufficient practice, however, one can develop brief techniques that mimic aspects of the more thorough ones and apply them quickly to in-the-moment decision-making.

 

Now, this doesn't mean the longer techniques are not helpful. They are very important, but they are things I was already generally familiar with and already practice. The 5-second versions were more of a revelation for me, and I anticipate they will be more helpful precisely because I did not know about them previously.

 

Now, CFAR does a very nice job of helping people integrate the techniques into daily life, since a common failure mode for attendees is going home and not practicing what they learned. So they offer 6 Google Hangouts with CFAR staff for all attendees who want to participate, 4 one-on-one sessions with CFAR-trained volunteers or staff, and they pair you with another attendee for post-workshop conversations. I plan to take advantage of all of these, although my pairing did not work out.

 

For integrating CFAR techniques into my life, I found the CFAR strategy of "Overlearning" especially helpful. Overlearning means applying a single technique intensely for a while, to all aspects of one's activities, so that it gets internalized thoroughly. Following CFAR's advice, I will first focus on overlearning Trigger Action Plans.

 

I also plan to teach CFAR techniques in my local rationality dojo, as teaching is a great way to learn, naturally.

 

Finally, I plan to integrate some CFAR techniques into Intentional Insights content - at least the simpler techniques that are a good fit for the broad audience with which InIn communicates.


Benefits

 

I have a strong probabilistic belief that attending the workshop will improve my capacity to achieve my goals for doing good in the world. I anticipate being better able to figure out whether the projects I take on are the best uses of my time and energy. I will be more capable of avoiding procrastination and other forms of akrasia. I believe I will make better plans and act on them well. I will also be more in touch with my emotions and intuitions, and able to trust them more, as I will have more alignment among the different components of my mind.

 

Another benefit is meeting the many other people at CFAR who have similar mindsets. Here in Columbus, we have a flourishing rationality community, but it’s still relatively small. Getting to know 70ish people, attendees and staff/volunteers, passionate about rationality was a blast. It was especially great to see people who were involved in creating new rationality strategies, something that I am engaged in myself in addition to popularizing rationality - it’s really heartening to envision how the rationality movement is growing.

 

These benefits should resonate strongly with aspiring rationalists, but they are really important for EA participants as well. I think one of the best things that EA movement members can do is study rationality, and it's something we promote to the EA movement as part of InIn's work. What we offer is articles and videos, but coming to a CFAR workshop is a much more intense and cohesive way of getting these benefits. Imagine all the good you can do for the world if you are better at planning, organizing, and enacting EA-related tasks. Rationality is what has helped me and other InIn participants make the impact we have been able to make, and a number of EA movement members who have rationality training report similar benefits. Remember, as an EA participant you can likely get a scholarship covering part or all of the regular $3,900 price of the workshop, as I did myself, and you are highly likely to be able to save more lives over time as a result of attending, even if you have to pay some costs upfront.

 

Hope these thoughts prove helpful to you all, and please contact me at gleb@intentionalinsights.org if you want to chat with me about my experience.

 

rationalfiction.io - publish, discover, and discuss rational fiction

7 rayalez 31 May 2016 12:02PM

Hey, everyone! I want to share with you a project I've been working on for a while - http://rationalfiction.io.

I want it to become the perfect place to publish, discover, and discuss rational fiction.

We already have a lot of awesome stories, and I invite you to join and post more! =)

May Outreach Thread

-2 Gleb_Tsipursky 06 May 2016 08:02PM

Please share about any outreach that you have done to convey rationality-style ideas broadly, whether recent or not, which you have not yet shared on previous Outreach threads. The goal of having this thread is to organize information about outreach and provide community support and recognition for raising the sanity waterline, a form of cognitive altruism that contributes to creating a flourishing world. Likewise, doing so can help inspire others to emulate some aspects of these good deeds through social proof and network effects.

Collaborative Truth-Seeking

11 Gleb_Tsipursky 04 May 2016 11:28PM

Summary: We frequently use debates to resolve differences of opinion about the truth. However, debates are not always the best way of figuring out the truth. In some situations, the technique of collaborative truth-seeking may work better.

 

Acknowledgments: Thanks to Pete Michaud, Michael Dickens, Denis Drescher, Claire Zabel, Boris Yakubchik, Szun S. Tay, Alfredo Parra, Michael Estes, Aaron Thoma, Alex Weissenfels, Peter Livingstone, Jacob Bryan, Roy Wallace, and other readers who prefer to remain anonymous for providing feedback on this post. The author takes full responsibility for all opinions expressed here and any mistakes or oversights.

 

The Problem with Debates

 

Aspiring rationalists generally aim to figure out the truth, and often disagree about it. The usual method of hashing out such disagreements in order to discover the truth is through debates, in person or online.

 

Yet more often than not, people on opposing sides of a debate end up seeking to persuade rather than prioritizing truth discovery. Indeed, research suggests that debates have a specific evolutionary function – not to discover the truth but to ensure that our perspective prevails within a tribal social context. No wonder debates are often compared to wars.

 

We may hope that, as aspiring rationalists, we would strive to discover the truth during debates. Yet given that we are not always fully rational and strategic in our social engagements, it is easy to slip into debate mode and orient toward winning instead of uncovering the truth. Heck, I know that I sometimes forget in the midst of a heated debate that I may be the one who is wrong – I'd be surprised if this didn't happen to you. So while we should certainly continue to engage in debates, we should also use additional strategies – less natural and intuitive ones – that put us in a better mindset for updating our beliefs and improving our perspective on the truth. One such solution is a mode of engagement called collaborative truth-seeking.


Collaborative Truth-Seeking

 

Collaborative truth-seeking is one way of describing a more intentional approach in which two or more people with different opinions engage in a process that focuses on finding out the truth. Collaborative truth-seeking is a modality that should be used among people with shared goals and a shared sense of trust.

 

Some important features of collaborative truth-seeking, which are often not present in debates, are: focusing on a desire to change one's own mind toward the truth; a curious attitude; being sensitive to others' emotions; striving to avoid arousing emotions that will hinder updating beliefs and truth discovery; and trusting that all other participants are doing the same. These can contribute to increased social sensitivity, which, together with other attributes, correlates with higher group performance on a variety of activities.

 

The process of collaborative truth-seeking starts with establishing trust, which will help increase social sensitivity, lower barriers to updating beliefs, increase willingness to be vulnerable, and calm emotional arousal. The following techniques are helpful for establishing trust in collaborative truth-seeking:

  • Share weaknesses and uncertainties in your own position

  • Share your biases about your position

  • Share your social context and background as relevant to the discussion

    • For instance, I grew up poor after my family immigrated to the US when I was 10, and this naturally influences me to care about poverty more than some other issues, and to have some biases around it - this is one reason I prioritize poverty in my Effective Altruism engagement

  • Vocalize curiosity and the desire to learn

  • Ask the other person to call you out if they think you're getting emotional or engaging in emotive debate instead of collaborative truth-seeking, and consider using a safe word



Here are additional techniques that can help you stay in collaborative truth-seeking mode after establishing trust:

  • Self-signal: signal to yourself that you want to engage in collaborative truth-seeking, instead of debating

  • Empathize: try to empathize with the other perspective that you do not hold by considering where their viewpoint came from, why they think what they do, and recognizing that they feel that their viewpoint is correct

  • Keep calm: be prepared with emotional management to calm your emotions and those of the people you engage with when a desire for debate arises

    • watch out for defensiveness and aggressiveness in particular

  • Go slow: take the time to listen fully and think fully

  • Consider pausing: have an escape route for complex thoughts and emotions if you can’t deal with them in the moment by pausing and picking up the discussion later

    • say “I will take some time to think about this,” and/or write things down

  • Echo: paraphrase the other person’s position to indicate and check whether you’ve fully understood their thoughts

  • Be open: orient toward improving the other person’s points to argue against their strongest form

  • Stay the course: be passionate about wanting to update your beliefs, maintain the most truthful perspective, and adopt the best evidence and arguments, no matter if they are yours or those of others

  • Be diplomatic: when you think the other person is wrong, strive to avoid saying "you're wrong because of X" but instead to use questions, such as "what do you think X implies about your argument?"

  • Be specific and concrete: go down levels of abstraction

  • Be clear: make sure the semantics are clear to all by defining terms

  • Be probabilistic: use probabilistic thinking and probabilistic language, to help get at the extent of disagreement and be as specific and concrete as possible

    • For instance, avoid saying that X is absolutely true, but say that you think there's an 80% chance it's the true position

    • Consider adding what evidence and reasoning led you to believe so, for both you and the other participants to examine this chain of thought

  • When people whose perspective you respect fail to update their beliefs in response to your clear chain of reasoning and evidence, update somewhat toward their position, since that is evidence that your position is not as convincing as you thought (see the worked example after this list)

  • Confirm your sources: look up information when it's possible to do so (Google is your friend)

  • Charity mode: strive to be more charitable to others and their expertise than seems intuitive to you

  • Use the reversal test to check for status quo bias

    • If you are discussing whether to change some specific numeric parameter - say increase by 50% the money donated to charity X - state the reverse of your positions, for example decreasing the amount of money donated to charity X by 50%, and see how that impacts your perspective

  • Use CFAR’s double crux technique

    • In this technique, two parties who hold different positions on an argument each write down the fundamental reason for their position (the crux of their position). This reason has to be the key one, so that if it were proven incorrect, they would change their perspective. Then, look for experiments that can test the crux. Repeat as needed. If a person identifies more than one reason as crucial, you can go through each in turn. More details are here.

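To make the "be probabilistic" and "update somewhat toward their position" suggestions concrete, here is a minimal worked sketch in Python (the numbers are illustrative assumptions of my own, not anything measured or established): it treats a respected peer's failure to update, despite hearing your best evidence, as weak evidence that you are the one who is wrong.

prior = 0.80                 # my credence that my position is right
p_resist_if_right = 0.30     # chance they'd still not budge even if I'm right
p_resist_if_wrong = 0.70     # chance they'd not budge if I'm wrong

# Bayes' rule: P(right | they resisted)
posterior = (p_resist_if_right * prior) / (
    p_resist_if_right * prior + p_resist_if_wrong * (1 - prior)
)
print(round(posterior, 2))   # ~0.63: a modest shift toward their view

The exact numbers don't matter; the point is the direction and rough size of the shift - resistance from someone whose judgment you respect should move your credence somewhat, not leave it untouched and not collapse it.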

Of course, not all of these techniques are necessary for high-quality collaborative truth-seeking. Some are easier than others, and different techniques apply better to different kinds of truth-seeking discussions. You can apply some of these techniques during debates as well, such as double crux and the reversal test. Try some out and see how they work for you.


Conclusion

 

Engaging in collaborative truth-seeking goes against our natural impulses to win in a debate, and is thus more cognitively costly. It also tends to take more time and effort than just debating. It is also easy to slip into debate mode even when using collaborative truth-seeking, because of the intuitive nature of debate mode.

 

Moreover, collaborative truth-seeking need not replace debates at all times. This non-intuitive mode of engagement can be chosen when discussing issues that relate to deeply-held beliefs and/or ones that risk emotional triggering for the people involved. Because of my own background, I would prefer to discuss poverty in collaborative truth-seeking mode rather than debate mode, for example. On such issues, collaborative truth-seeking can provide a shortcut to resolution, in comparison to protracted, tiring, and emotionally challenging debates. Likewise, using collaborative truth-seeking to resolve differing opinions on all issues holds the danger of creating a community oriented excessively toward sensitivity to the perspectives of others, which might result in important issues not being discussed candidly. After all, research shows the importance of having disagreement in order to make wise decisions and to figure out the truth. Of course, collaborative truth-seeking is well suited to expressing disagreements in a sensitive way, so if used appropriately, it might permit even people with triggers around certain topics to express their opinions.

 

Taking these caveats into consideration, collaborative truth-seeking is a great tool to use to discover the truth and to update our beliefs, as it can get past the high emotional barriers to altering our perspectives that have been put up by evolution. Rationality venues are natural places to try out collaborative truth-seeking.


Monthly Outreach Thread

0 Gleb_Tsipursky 17 April 2016 11:18PM

Please share about any outreach that you have done to convey rationality-style ideas broadly, whether recent or not, which you have not yet shared on previous Outreach threads. The goal of having this thread is to organize information about outreach and provide community support and recognition for raising the sanity waterline, a form of cognitive altruism that contributes to creating a flourishing world. Likewise, doing so can help inspire others to emulate some aspects of these good deeds through social proof and network effects.

[Link] Op-Ed on Brussels Attacks

-6 Gleb_Tsipursky 02 April 2016 05:38PM

Trigger warning: politics is hard mode.


"How to you make America safer from terrorists" is the title of my op-ed published in Sun Sentinel, a very prominent newspaper in Florida, one of the most swingiest of the swing states in the US for the presidential election, and the one with the most votes. The maximum length of the op-ed was 450 words, and it was significantly edited by the editor, so it doesn't convey the full message I wanted with all the nuances, but such is life. My primary goal with the piece was to convey methods of thinking more rationally about politics, such as to use probabilistic thinking, evaluating the full consequences of our actions, and avoiding attention bias. I used the example of the proposal to police heavily Muslim neighborhoods as a case study. Hope this helps Floridians think more rationally and raises the sanity waterline regarding politics!

 

 

EDIT: To be totally clear, I used guesstimates for the numbers I suggested. Following Yvain/Scott Alexander's advice, I prefer to use guesstimates rather than vague statements.

[Video] The Essential Strategies To Debiasing From Academic Rationality

1 Gleb_Tsipursky 27 March 2016 03:04AM
A lifetime of work by a world expert in debiasing boiled down into four broad strategies in this video. A nice approach to this topic from the academic side of rationality.

Disclosure - the academic is Dr. Hal Arkes, a personal friend and Advisory Board Member of Intentional Insights, which I run.

EDIT: Seems like the sound quality is low. Anyone willing to do a transcript of this video as a volunteer activity for the rationality community? We can then subtitle the video.

Outreach Thread

6 Gleb_Tsipursky 06 March 2016 10:18PM

Based on an earlier suggestion, here's an outreach thread where you can leave comments about any recent outreach that you have done to convey rationality-style ideas broadly. The goal of having this thread is to organize information about outreach and provide community support and recognition for raising the sanity waterline. Likewise, doing so can help inspire others to emulate some aspects of these good deeds through social proof and network effects.


 

Religious and Rational?

3 Gleb_Tsipursky 09 February 2016 08:12PM

Reverend Caleb Pitkin, an aspiring rationalist and United Methodist Minister, wrote an article about combining religion and rationality which was recently published on the Intentional Insights blog. He's the only Minister I know who is also an aspiring rationalist, so I thought it would be an interesting piece for Less Wrong as well. Besides, it prompted an interesting discussion on the Less Wrong Facebook group, so I thought some people here who don't look at the Facebook group might be interested in checking it out as well. Caleb does not have enough karma to post, so I am posting it on his behalf, but he will engage with the comments.

______________________________________________________________________________

 

Religious and Rational?

 

“Wisdom shouts in the street; in the public square she raises her voice.”

Proverbs 1:20 Common English Bible

The Biblical book of Proverbs is full of imagery of wisdom personified as a woman calling and exhorting people to come to her and listen.  The wisdom contained in Proverbs is not just spiritual wisdom; it also contains a large amount of practical wisdom and advice.  What might the wisdom of Proverbs and rationality have in common?  The wisdom literature in scripture was meant to help people make better and more effective decisions.  In today's complex and rapidly changing world we have the same need for tools and resources to help us make good decisions.  One great source of wisdom is methods of better thinking that are informed by science.

Now, not everyone would agree with comparing the wisdom of Proverbs with scientific insights.  Doing so may not sit well with some in the secular rationality community who view all religion as inherently irrational and as hindering clear thinking. It also might not sit well with some in my own religious community who are suspicious of scientific thinking as undermining traditional faith.  While it would take a much longer piece to completely defend either religion or secular rationality, I'm going to try to demonstrate some ways that rationality is useful for a religious person.

The first way that rationality can be useful for a religious person is in the living of our daily lives.  We are faced with tasks and decisions each day in which we try to do our best.  Learning to recognize common logical fallacies and other biases, like those that cause us to fail to understand other people, will improve our decision making as much as it improves the thinking of non-religious people. For example, a mother driving her kids to Sunday School might benefit from avoiding the assumption that the person who cuts her off is definitely a jerk, one common type of thinking error.  Someone doing volunteer work for their church could be more effective by avoiding problematic communication with other volunteers. This use of rationality to lead our daily lives in the best way is one that most would find fairly unobjectionable.  It's easy to say that the way we all achieve our personal goals and objectives could be improved, and that we can all gain greater agency.

Rationality can also be of use in theological commentary and discourse.  Many of the theological and religious greats used the available philosophical and intellectual tools of their day to examine their faith. Examples include John Wesley, Thomas Aquinas, and even the Apostle Paul when he debated Epicurean and Stoic philosophers.  They also made sure that their theologies were internally rational and logical.  This means that, from the perspective of a religious person, keeping up with rationality can help with the pursuit of a deeper understanding of our faith.  For a secular person, acknowledging the ways in which religious people use rationality within their worldview may be difficult, but it can help to build common ground. The starting point is different.  Secular people start with the faith that they can trust their sensory experience.  Religious people start with conceptions of the divine.  Yet, after each starting point, both seek to proceed in a rational, logical manner.

It is not just our personal lives that can be improved by rationality; it's also the ways in which we interact with communities.  One of the goals of many religious communities is to make a positive impact on the world around them.  When we work to do good in community, we want that work to be as effective as possible.  Often when we work in community we find that we are not meeting our goals or having the kind of significant impact that we wish to have.  In my experience this is often due to a failure to really examine and gather the facts on the ground.  We set off full of good intentions but with limited resources and time.  Rational examination helps us figure out how to match our good intentions with our limited resources in the most effective way possible.  For example, as the Pastor of two small churches, money and people power can be in short supply.  So when we examine all the needs of our community, we have to acknowledge that we cannot begin to meet all or even most of them.  So we take one issue, hunger, and devote our time and resources to having one big impact on that issue, as opposed to trying to do a little bit to alleviate a lot of problems.

One other way that rationality can inform our work in the community is in recognizing that a scarcity of resources means we need to work together with others in our community.  The inter-faith movement has done a lot of good work in bringing together people of faith to work on common goals.  This has meant setting aside traditional differences for the sake of shared goals.  Let us examine the world we live in today, though. The number of nonreligious people is on the rise, and there is every indication that it will continue to grow.  On the other hand, religion does not seem to be going anywhere either.  Which is good news for a pastor.  Looking at this situation, the rational thing to do is to work together - for religious people to build bridges toward the non-religious and vice versa.

Wisdom still stands on the street calling and imploring us to be improved - not in the form of rationalist street preachers, though that idea has a certain appeal - but in the form of the growing number of tools being offered to help us improve our capacity for logic and reasoning, and to enable us to take part in the world we live in.

Everyone wants to make good decisions.  This means that everyone tries to make rational decisions.  We all try but we don’t always hit the mark.  Religious people seek to achieve their goals and make good decisions.  Secular people seek to achieve their goals and make good decisions.  Yes, we have different starting points and it’s important to acknowledge that.  Yet, there are similarities in what each group wants out of their lives and maybe we have more in common than we think we do.

On a final note it is my belief that what religious people and what non-religious people fear about each other is the same thing.  The non-religious look at the religious and say God could ask them to do anything... scary.  The religious look at the non-religious and say without God they could do anything... scary.  If we remember though that most people are rational and want to live a good life we have less to be scared of, and are more likely to find common ground.

____________________________________________________________________________________________________________

 

Bio: Caleb Pitkin is a Provisional Elder with the United Methodist Church, appointed to Signal Mountain United Methodist Church. Caleb is a huge fan of the theology of John Wesley, which asks that Christians use reason in their faith journey.  This helped lead Caleb to rationality and to participation in Columbus Rationality, a Less Wrong meetup that is part of the Humanist Community of Central Ohio. Through that, Caleb got involved with Intentional Insights. Caleb spends his time trying to live a faithful and rational life.

Conveying rational thinking about long-term goals to youth and young adults

8 Gleb_Tsipursky 07 February 2016 01:54AM
More than a year ago, I discussed here how we at Intentional Insights intended to convey rationality to young adults through our collaboration with the Secular Student Alliance. This international organization unites over 270 clubs at colleges and high schools in English-speaking countries, mainly the US, with its clubs spanning from a few students to a few hundred students. The SSA's Executive Director is an aspiring rationalist and CFAR alum who is on our Advisory Board.

Well, we've been working on a project with the SSA for the last 8 months to create and evaluate an event aimed at helping its student members figure out and orient toward the long term, thus both fighting Moloch on a societal level and helping them become more individually rational (the long-term perspective is couched in the language of finding purpose using science). It's finally done, and here is the link to the event packet. The SSA will be distributing this packet broadly, but in the meantime, if you have any connections to secular student groups, consider encouraging them to hold this event. The event would also fit well for adult secular groups with minor editing, in case any of you are involved with them. It's also easy to strip the secular language from the packet and have it as an event for a philosophy/science club of any sort, at any level from youth to adult. Although I would prefer you cite Intentional Insights when you do it, I'm comfortable with you not doing so if circumstances don't permit it for some reason.

We're also working on similar projects with the SSA, focusing on being rational in the area of giving, so promoting Effective Altruism. I'll post it here when it's ready.  

[Link] How I Escaped The Darkness of Mental Illness

5 Gleb_Tsipursky 04 February 2016 11:08PM
A deeply personal account by aspiring rationalist Agnes Vishnevkin, who shares the broad overview of how she used rationality-informed strategies to recover from mental illness. She will also appear on the Unbelievers Radio podcast today live at 10:30 PM EST (-5 UTC), together with JT Eberhard, to speak about mental illness and recovery.

**EDIT** Based on feedback from gjm below, I want to clarify that Agnes is my wife and fellow co-founder of Intentional Insights.


[Link] Huffington Post article about dual process theory

9 Gleb_Tsipursky 06 January 2016 01:44AM

Published a piece in The Huffington Post popularizing dual-process theory in layman's language.

 

P.S. I know some don't like using terms like Autopilot and Intentional to describe System 1 and System 2, but I find from long experience that these terms resonate well with a broad audience. Also, I know dual process theory is criticized by some, but we have to start somewhere, and just explaining dual process theory is a way to start bridging the inference gap to higher meta-cognition.

Forecasting and recursive Inhibition within a decision cycle

1 [deleted] 20 December 2015 05:37AM

When we anticipate the future, we have the opportunity to inhibit behaviours which we anticipate will lead to counterfactual outcomes. Those of us with sufficiently low latencies in our decision cycles may recursively anticipate the consequences of counterfactuating (neologism) interventions, and recursively intervene against our own interventions.

This may be difficult for some. Try modelling that decision cycle as a nano-scale approximation of time travel. One relevant paradox from popular culture is the farther future paradox described in the TV cartoon Family Guy.

Watch this clip: https://www.youtube.com/watch?v=4btAggXRB_Q

Relating the satire back to our abstraction of the decision cycle, one may ponder:

What is a satisfactory stopping rule for the far anticipation of self-referential consequence?

That is:

(1) what are the inherent harmful implications of inhibiting actions in and of themselves: stress?

(2) what are their inherent merits: self-determination?

and (3) what are the favourable and disfavourable consequences x points into the future, given y points of self-reference at points z, a, b and c?

I see no ready solution to this problem in terms of human rationality, and no ready solution in artificial intelligence either, where the problem would also apply - hence its relevance to MIRI (since CFAR doesn't seem to work on open problems in the same way).
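
One way to make the stopping-rule question concrete is the following toy sketch in Python (my own framing and numbers, added purely for illustration - "stop reflecting when the expected gain from one more level of self-reference no longer exceeds its cost" is an assumption, not an established answer):

REFLECTION_COST = 0.05   # assumed per-level cost of further reflection (stress, time)
MAX_DEPTH = 10           # hard cap on levels of self-reference

def reflect(plan_value, estimate_gain, revise_plan, depth=0):
    # Recursively reconsider a plan until further reflection isn't worth it.
    expected_gain = estimate_gain(plan_value, depth)
    if depth >= MAX_DEPTH or expected_gain <= REFLECTION_COST:
        return plan_value, depth   # the stopping rule triggers here
    return reflect(revise_plan(plan_value), estimate_gain, revise_plan, depth + 1)

# Toy example: each level of reflection recovers half the remaining gap to an
# ideal plan worth 1.0, so marginal gains shrink and reflection soon halts.
value, depth = reflect(
    plan_value=0.4,
    estimate_gain=lambda v, d: (1.0 - v) / 2,
    revise_plan=lambda v: v + (1.0 - v) / 2,
)
print(value, depth)   # stops at depth 3 with plan value 0.925

Of course this only restates the problem in miniature: the estimated gains and the reflection cost are themselves produced by the very anticipation process they are supposed to bound, which is exactly the self-reference the questions above point at.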

I would also like to take this opportunity to open this as an experimental thread for the community to generate a list of "open problems" in human rationality that are otherwise scattered across the community blog and wiki.
