Less Wrong is a community blog devoted to refining the art of human rationality.

Project Hufflepuff: Planting the Flag

36 Raemon 03 April 2017 06:37PM

This is the first in a series of posts about improving group dynamics within the rationality community. (The previous "checking for interest post" is here).


The Berkeley Hufflepuff Unconference is on April 28th. RSVPing on this Facebook Event is helpful, as is filling out this form.


“Clever kids in Ravenclaw, evil kids in Slytherin, wannabe heroes in Gryffindor, and everyone who does the actual work in Hufflepuff.”

- Harry Potter and the Methods of Rationality, Chapter 9

“It is a common misconception that the best rationalists are Sorted into Ravenclaw, leaving none for other Houses. This is not so; being Sorted into Ravenclaw indicates that your strongest virtue is curiosity, wondering and desiring to know the true answer. And this is not the only virtue a rationalist needs. Sometimes you have to work hard on a problem, and stick to it for a while. Sometimes you need a clever plan for finding out. And sometimes what you need more than anything else to see an answer, is the courage to face it…”

- Harry Potter and the Methods of Rationality, Chapter 45


I’m a Ravenclaw and Slytherin by nature. I like being clever. I like pursuing ambitious goals. But over the past few years, I’ve been cultivating the skills and attitudes of Hufflepuff, by choice.

I think those skills are woefully under-appreciated in the Rationality Community. The problem cuts across many dimensions:

continue reading »

Double Crux — A Strategy for Resolving Disagreement

58 Duncan_Sabien 29 November 2016 09:23PM


Double crux is one of CFAR's newer concepts, and one that's forced a re-examination and refactoring of a lot of our curriculum (in the same way that the introduction of TAPs and Inner Simulator did previously).  It rapidly became a part of our organizational social fabric, and is one of our highest-EV threads for outreach and dissemination, so it's long overdue for a public, formal explanation.

Note that while the core concept is fairly settled, the execution remains somewhat in flux, with notable experimentation coming from Julia Galef, Kenzi Amodei, Andrew Critch, Eli Tyre, Anna Salamon, myself, and others.  Because of that, this post will be less of a cake and more of a folk recipe—this is long and meandering on purpose, because the priority is to transmit the generators of the thing over the thing itself.  Accordingly, if you think you see stuff that's wrong or missing, you're probably onto something, and we'd appreciate having it added here as commentary.

Casus belli

To a first approximation, a human can be thought of as a black box that takes in data from its environment, and outputs beliefs and behaviors (that black box isn't really "opaque" given that we do have access to a lot of what's going on inside of it, but our understanding of our own cognition seems uncontroversially incomplete).

When two humans disagree—when their black boxes output different answers—there are often a handful of unproductive things that can occur.

The most obvious (and tiresome) is that they'll simply repeatedly bash those outputs together without making any progress (think most disagreements over sports or politics; two people just shouting "triangle!" and "circle!" louder and louder).  On the second level, people can (and often do) take the difference in output as evidence that the other person's black box is broken (i.e. they're bad, dumb, crazy) or that the other person doesn't see the universe clearly (i.e. they're biased, oblivious, unobservant).  On the third level, people will often agree to disagree, a move which preserves the social fabric at the cost of truth-seeking and actual progress.

Double crux in the ideal solves all of these problems, and in practice even fumbling and inexpert steps toward that ideal seem to produce a lot of marginal value, both in increasing understanding and in decreasing conflict-due-to-disagreement.


This post will occasionally delineate two versions of double crux: a strong version, in which both parties have a shared understanding of double crux and have explicitly agreed to work within that framework, and a weak version, in which only one party has access to the concept, and is attempting to improve the conversational dynamic unilaterally.

In either case, the following things seem to be required:

  • Epistemic humility. The foundational backbone of rationality seems, to me, to be how readily one is able to think "It's possible that I might be the one who's wrong, here."  Viewed another way, this is the ability to take one's beliefs as object, rather than being subject to them and unable to set them aside (and then try on some other belief and productively imagine "what would the world be like if this were true, instead of that?").
  • Good faith. An assumption that people believe things for causal reasons; a recognition that having been exposed to the same set of stimuli would have caused one to hold approximately the same beliefs; a default stance of holding-with-skepticism what seems to be evidence that the other party is bad or wants the world to be bad (because as monkeys it's not hard for us to convince ourselves that we have such evidence when we really don't).1
  • Confidence in the existence of objective truth. I was tempted to call this "objectivity," "empiricism," or "the Mulder principle," but in the end none of those quite fit.  In essence: a conviction that for almost any well-defined question, there really truly is a clear-cut answer.  That answer may be impractically or even impossibly difficult to find, such that we can't actually go looking for it and have to fall back on heuristics (e.g. how many grasshoppers are alive on Earth at this exact moment, is the color orange superior to the color green, why isn't there an audio book of Fight Club narrated by Edward Norton), but it nevertheless exists.
  • Curiosity and/or a desire to uncover truth.  Originally, I had this listed as truth-seeking alone, but my colleagues pointed out that one can move in the right direction simply by being curious about the other person and the contents of their map, without focusing directly on the territory.

At CFAR workshops, we hit on the first and second through specific lectures, the third through osmosis, and the fourth through osmosis and a lot of relational dynamics work that gets people curious and comfortable with one another.  Other qualities (such as the ability to regulate and transcend one's emotions in the heat of the moment, or the ability to commit to a thought experiment and really wrestle with it) are also helpful, but not as critical as the above.  

How to play

Let's say you have a belief, which we can label A (for instance, "middle school students should wear uniforms"), and that you're in disagreement with someone who believes some form of ¬A.  Double cruxing with that person means that you're both in search of a second statement B, with the following properties:

  • You and your partner both disagree about B as well (you think B, your partner thinks ¬B).
  • The belief B is crucial for your belief in A; it is one of the cruxes of the argument.  If it turned out that B was not true, that would be sufficient to make you think A was false, too.
  • The belief ¬B is crucial for your partner's belief in ¬A, in a similar fashion.

In the example about school uniforms, B might be a statement like "uniforms help smooth out unhelpful class distinctions by making it harder for rich and poor students to judge one another through clothing," which your partner might sum up as "optimistic bullshit."  Ideally, B is a statement that is somewhat closer to reality than A—it's more concrete, grounded, well-defined, discoverable, etc.  It's less about principles and summed-up, induced conclusions, and more of a glimpse into the structure that led to those conclusions.

(It doesn't have to be concrete and discoverable, though—often after finding B it's productive to start over in search of a C, and then a D, and then an E, and so forth, until you end up with something you can research or run an experiment on).

At first glance, it might not be clear why simply finding B counts as victory—shouldn't you settle B, so that you can conclusively choose between A and ¬A?  But it's important to recognize that arriving at B means you've already dissolved a significant chunk of your disagreement, in that you and your partner now share a belief about the causal nature of the universe.

If B, then A.  Furthermore, if ¬B, then ¬A.  You've both agreed that the states of B are crucial for the states of A, and in this way your continuing "agreement to disagree" isn't just "well, you take your truth and I'll take mine," but rather "okay, well, let's see what the evidence shows."  Progress!  And (more importantly) collaboration!
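In propositional terms, agreeing on both "if B then A" and "if not-B then not-A" makes B logically equivalent to A, which is exactly why settling B would settle A. A quick truth-table check of that claim (purely illustrative; real cruxes are fuzzier than Boolean logic):

```python
# Verify that (B -> A) together with (~B -> ~A) holds exactly when
# A and B share a truth value. Purely illustrative.
from itertools import product

for a, b in product([True, False], repeat=2):
    crux_relation = ((not b) or a) and (b or (not a))  # (B -> A) and (~B -> ~A)
    assert crux_relation == (a == b)

print("B is a two-way crux for A exactly when B and A share a truth value")
```

So a shared double crux isn't just a related sub-question; under this idealization it carries the full weight of the original disagreement.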


This is where CFAR's versions of the double crux unit are currently weakest—there's some form of magic in the search for cruxes that we haven't quite locked down.  In general, the method is "search through your cruxes for ones that your partner is likely to disagree with, and then compare lists."  For some people and some topics, clearly identifying your own cruxes is easy; for others, it very quickly starts to feel like one's position is fundamental/objective/un-break-downable.  A few moves that tend to help:


  • Increase noticing of subtle tastes, judgments, and "karma scores."  Often, people suppress a lot of their opinions and judgments due to social mores and so forth.  Generally loosening up one's inner censors can make it easier to notice why we think X, Y, or Z.
  • Look forward rather than backward.  In places where the question "why?" fails to produce meaningful answers, it's often more productive to try making predictions about the future.  For example, I might not know why I think school uniforms are a good idea, but if I turn on my narrative engine and start describing the better world I think will result, I can often sort of feel my way toward the underlying causal models.
  • Narrow the scope.  A specific test case of "Steve should've said hello to us when he got off the elevator yesterday" is easier to wrestle with than "Steve should be more sociable."  Similarly, it's often easier to answer questions like "How much of our next $10,000 should we spend on research, as opposed to advertising?" than to answer "Which is more important right now, research or advertising?"
  • Do "Focusing" and other resonance checks.  It's often useful to try on a perspective, hypothetically, and then pay attention to your intuition and bodily responses to refine your actual stance.  For instance: (wildly asserts) "I bet if everyone wore uniforms there would be a fifty percent reduction in bullying." (pauses, listens to inner doubts)  "Actually, scratch that—that doesn't seem true, now that I say it out loud, but there is something in the vein of reducing overt bullying, maybe?"
  • Seek cruxes independently before anchoring on your partner's thoughts.  This one is fairly straightforward.  It's also worth noting that if you're attempting to find disagreements in the first place (e.g. in order to practice double cruxing with friends) this is an excellent way to start—give everyone the same ten or fifteen open-ended questions, and have everyone write down their own answers based on their own thinking, crystallizing opinions before opening the discussion.

Overall, it helps to keep the ideal of a perfect double crux in the front of your mind, while holding the realities of your actual conversation somewhat separate.  We've found that, at any given moment, increasing the "double cruxiness" of a conversation tends to be useful, but worrying about how far from the ideal you are in absolute terms doesn't.  It's all about doing what's useful and productive in the moment, and that often means making sane compromises—if one of you has clear cruxes and the other is floundering, it's fine to focus on one side.  If neither of you can find a single crux, but instead each of you has something like eight co-cruxes of which any five are sufficient, just say so and then move forward in whatever way seems best.

(Variant: a "trio" double crux conversation in which, at any given moment, if you're the least-active participant, your job is to squint at your two partners and try to model what each of them is saying, and where/why/how they're talking past one another and failing to see each other's points.  Once you have a rough "translation" to offer, do so—at that point, you'll likely become more central to the conversation and someone else will rotate out into the squinter/translator role.)

Ultimately, each move should be in service of reversing the usual antagonistic, warlike, "win at all costs" dynamic of most disagreements.  Usually, we spend a significant chunk of our mental resources guessing at the shape of our opponent's belief structure, forming hypotheses about what things are crucial and lobbing arguments at them in the hopes of knocking the whole edifice over.  Meanwhile, we're incentivized to obfuscate our own belief structure, so that our opponent's attacks will be ineffective.

(This is also terrible because it means that we often fail to even find the crux of the argument, and waste time in the weeds.  If you've ever had the experience of awkwardly fidgeting while someone spends ten minutes assembling a conclusive proof of some tangential sub-point that never even had the potential of changing your mind, then you know the value of someone being willing to say "Nope, this isn't going to be relevant for me; try speaking to that instead.")

If we can move the debate to a place where, instead of fighting over the truth, we're collaborating on a search for understanding, then we can recoup a lot of wasted resources.  You have a tremendous comparative advantage at knowing the shape of your own belief structure—if we can switch to a mode where we're each looking inward and candidly sharing insights, we'll move forward much more efficiently than if we're each engaged in guesswork about the other person.  This requires that we want to know the actual truth (such that we're incentivized to seek out flaws and falsify wrong beliefs in ourselves just as much as in others) and that we feel emotionally and socially safe with our partner, but there's a doubly-causal dynamic where a tiny bit of double crux spirit up front can produce safety and truth-seeking, which allows for more double crux, which produces more safety and truth-seeking, etc.


First and foremost, it matters whether you're in the strong version of double crux (cooperative, consent-based) or the weak version (you, as an agent, trying to improve the conversational dynamic, possibly in the face of direct opposition).  In particular, if someone is currently riled up and conceives of you as rude/hostile/the enemy, then saying something like "I just think we'd make better progress if we talked about the underlying reasons for our beliefs" doesn't sound like a plea for cooperation—it sounds like a trap.

So, if you're in the weak version, the primary strategy is to embody the question "What do you see that I don't?"  In other words, approach from a place of explicit humility and good faith, drawing out their belief structure for its own sake, to see and appreciate it rather than to undermine or attack it.  In my experience, people can "smell it" if you're just playing at good faith to get them to expose themselves; if you're having trouble really getting into the spirit, I recommend meditating on times in your past when you were embarrassingly wrong, and how you felt prior to realizing it compared to after realizing it.

(If you're unable or unwilling to swallow your pride or set aside your sense of justice or fairness hard enough to really do this, that's actually fine; not every disagreement benefits from the double-crux-nature.  But if your actual goal is improving the conversational dynamic, then this is a cost you want to be prepared to pay—going the extra mile, because a) going what feels like an appropriate distance is more often an undershoot, and b) going an actually appropriate distance may not be enough to overturn their entrenched model in which you are The Enemy.  Patience- and sanity-inducing rituals recommended.)

As a further tip that's good for either version but particularly important for the weak one, model the behavior you'd like your partner to exhibit.  Expose your own belief structure, show how your own beliefs might be falsified, highlight points where you're uncertain and visibly integrate their perspective and information, etc.  In particular, if you don't want people running amok with wrong models of what's going on in your head, make sure you're not acting like you're the authority on what's going on in their head.

Beware, too, of getting lost in the fog.  The very first step in double crux should always be to operationalize and clarify terms.  Try attaching numbers to things rather than using misinterpretable qualifiers; try to talk about what would be observable in the world rather than how things feel or what's good or bad.  In the school uniforms example, saying "uniforms make students feel better about themselves" is a start, but it's not enough, and going further into quantifiability (if you think you could actually get numbers someday) would be even better.  Often, disagreements will "dissolve" as soon as you remove ambiguity—this is success, not failure!

Finally, use paper and pencil, or whiteboards, or get people to treat specific predictions and conclusions as immutable objects (if you or they want to change or update the wording, that's encouraged, but make sure that at any given moment, you're working with a clear, unambiguous statement).  Part of the value of double crux is that it's the opposite of the weaselly, score-points, hide-in-ambiguity-and-look-clever dynamic of, say, a public political debate.  The goal is to have everyone understand, at all times and as much as possible, what the other person is actually trying to say—not to try to get a straw version of their argument to stick to them and make them look silly.  Recognize that you yourself may be tempted or incentivized to fall back to that familiar, fun dynamic, and take steps to keep yourself in "scout mindset" rather than "soldier mindset."


This is the double crux algorithm as it currently exists in our handbook.  It's not strictly connected to all of the discussion above; it was designed to be read in context with an hour-long lecture and several practice activities (so it has some holes and weirdnesses) and is presented here more for completeness and as food for thought than as an actual conclusion to the above.

1. Find a disagreement with another person

  • A case where you believe one thing and they believe the other

  • A case where you and the other person have different confidences (e.g. you think X is 60% likely to be true, and they think it’s 90%)

2. Operationalize the disagreement

  • Define terms to avoid getting lost in semantic confusions that miss the real point

  • Find specific test cases—instead of (e.g.) discussing whether you should be more outgoing, evaluate whether you should have said hello to Steve in the office yesterday morning

  • Wherever possible, try to think in terms of actions rather than beliefs—it’s easier to evaluate arguments like “we should do X before Y” than it is to converge on “X is better than Y.”

3. Seek double cruxes

  • Seek your own cruxes independently, and compare with those of the other person to find overlap

  • Seek cruxes collaboratively, by making claims (“I believe that X will happen because Y”) and focusing on falsifiability (“It would take A, B, or C to make me stop believing X”)

4. Resonate

  • Spend time “inhabiting” both sides of the double crux, to confirm that you’ve found the core of the disagreement (as opposed to something that will ultimately fail to produce an update)

  • Imagine the resolution as an if-then statement, and use your inner sim and other checks to see if there are any unspoken hesitations about the truth of that statement

5. Repeat!
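For concreteness, the five steps can be sketched as a loop that descends from the original claim toward an empirically checkable crux. This is only a skeleton: every callable here is a hypothetical stand-in for a step of a human conversation, not part of any real library.

```python
# A sketch of the double crux loop. Each callable is a hypothetical stand-in
# for a conversational step; nothing here is a real API.

def double_crux(claim, my_view, their_view, is_testable, find_shared_crux):
    """Descend from a disagreement toward a checkable crux (or agreement)."""
    while my_view(claim) != their_view(claim):  # Step 1: a live disagreement
        if is_testable(claim):                  # Step 2: operationalized enough
            return claim                        # ...so go gather evidence on it
        crux = find_shared_crux(claim)          # Steps 3-4: seek and resonate
        if crux is None:
            return claim                        # no deeper shared crux found
        claim = crux                            # Step 5: repeat on the crux
    return None                                 # no disagreement remains

# Toy run on the school-uniforms example:
me = {"uniforms are good": True, "uniforms reduce clothing-based judging": True}
them = {"uniforms are good": False, "uniforms reduce clothing-based judging": False}
cruxes = {"uniforms are good": "uniforms reduce clothing-based judging"}

result = double_crux(
    "uniforms are good",
    my_view=me.get,
    their_view=them.get,
    is_testable=lambda c: "reduce" in c,  # crude stand-in for "researchable"
    find_shared_crux=cruxes.get,
)
print(result)  # -> uniforms reduce clothing-based judging
```

The toy run terminates at the first claim that is both disputed and testable, which mirrors the point above: finding B is already a kind of victory, because it converts "you take your truth and I'll take mine" into "let's see what the evidence shows."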


We think double crux is super sweet.  To the extent that you see flaws in it, we want to find them and repair them, and we're currently betting that repairing and refining double crux is going to pay off better than trying something totally different.  In particular, we believe that embracing the spirit of this mental move has huge potential for unlocking people's abilities to wrestle with all sorts of complex and heavy hard-to-parse topics (like existential risk, for instance), because it provides a format for holding a bunch of partly-wrong models at the same time while you distill the value out of each.

Comments appreciated; critiques highly appreciated; anecdotal data from experimental attempts to teach yourself double crux, or teach it to others, or use it on the down-low without telling other people what you're doing extremely appreciated.

 - Duncan Sabien

1. One reason good faith is important is that even when people are "wrong," they are usually partially right—there are flecks of gold mixed in with their false belief that can be productively mined by an agent who's interested in getting the whole picture.  Normal disagreement-navigation methods have some tendency to throw out that gold, either by allowing everyone to protect their original belief set or by replacing everyone's view with whichever view is shown to be "best," thereby throwing out data, causing information cascades, disincentivizing "noticing your confusion," etc.

The central assumption is that the universe is like a large and complex maze that each of us can only see parts of.  To the extent that language and communication allow us to gather info about parts of the maze without having to investigate them ourselves, that's great.  But when we disagree on what to do because we each see a different slice of reality, it's nice to adopt methods that allow us to integrate and synthesize, rather than methods that force us to pick and pare down.  It's like the parable of the three blind men and the elephant—whenever possible, avoid generating a bottom-line conclusion until you've accounted for all of the available data.


The agent at the top mistakenly believes that the correct move is to head to the left, since that seems to be the most direct path toward the goal.  The agent on the right can see that this is a mistake, but it would never have been able to navigate to that particular node of the maze on its own. 

"3 Reasons It’s Irrational to Demand ‘Rationalism’ in Social Justice Activism"

11 PhilGoetz 29 March 2016 03:16PM

The lead article on everydayfeminism.com on March 25:

3 Reasons It’s Irrational to Demand ‘Rationalism’ in Social Justice Activism

The scenario is always the same: I say we should abolish prisons, police, and the American settler state — someone tells me I’m irrational. I say we need decolonization of the land — someone tells me I’m not being realistic.... When those who are the loudest, the most disruptive — the ones who want to destroy America and all of the oppression it has brought into the world — are being silenced even by others in social justice groups, that is unacceptable.

(The link from "decolonization" is to "Decolonization is not a metaphor", to make it clear s/he means actually giving the land back to the Native Americans.)

I regularly see people accused of setting up a straw man when they describe how social justice activists act.  This article shows that the bias of some SJWs against reason is impossible to strawman.  The author argues at length that rationality is bad, and that justice arguments shouldn't be rational or be defended rationally.  Ze is, or was, confused about what "rationality" means, but clearly now means it to include reason-based argumentation.

This isn't just some wacko's blog; it was chosen as the headline article for the website.  I had to click around to a few other articles to make sure it wasn't a parody site.

But it isn't just a sign of how irrational the social justice movement is—it has clues to how it got that way.

continue reading »

Is Spirituality Irrational?

5 lisper 09 February 2016 01:42AM

[Originally published at Intentional Insights in response to Religious and Rational]

Spirituality and rationality seem completely opposed. But are they really?


To get at this question, let's start with a little thought experiment.  Consider the following two questions:


1.  If you were given a choice between reading a physical book (or an e-book) or listening to an audiobook, which would you prefer?


2.  If you were given a choice between listening to music, or looking at the grooves of a phonograph record through a microscope, which would you prefer?


But I am more interested in the answer to a third question:


3.  For which of the first two questions do you have a stronger preference between the two options?


Most people will have a stronger preference in the second case than the first.  But why?  Both situations are in some sense the same: there is information being fed into your brain, in one case through your ears and in the other through your eyes.  So why should people's preference for ears be so much stronger in the case of music than books?


There is something in the essence of music that is lost in the translation between an audio and a visual rendering.  The same loss happens for words too, but to a much lesser extent.  Subtle shades of emphasis and tone of voice can convey essential information in spoken language. This is one of the reasons that email is so notorious for amplifying misunderstandings.  But the loss is much greater in the case of music.


The same is true for other senses.  Color is one example.  A blind person can abstractly understand what light is, and that color is a byproduct of the wavelength of light, and that light is a form of electromagnetic radiation... yet there is no way for a blind person to experience subjectively the difference between red and blue and green.  But just because some people can't see colors doesn't mean that colors aren't real.


The same is true for spiritual experiences.


Now, before I expand that thought, I want to give you my bona fides.  I am a committed rationalist, and an atheist (though I don't like to self-identify as an atheist because I'd rather focus on what I *do* believe in rather than what I don't).  So I am not trying to convince you that God exists.  What I want to say is rather that certain kinds of spiritual experiences *might* be more than mere fantasies made up out of whole cloth. If we ignore this possibility we risk shutting ourselves off from a vital part of the human experience.


I grew up in the deep south (Kentucky and Tennessee) in a secular Jewish family.  When I was 12 my parents sent me to a Christian summer camp (there were no other kinds in Kentucky back in those days).  After a week of being relentlessly proselytized (read: teased and ostracized), I decided I was tired of being the camp punching bag and so I relented and gave my heart to Jesus.  I prayed, confessed my sins, and just like that I was a member of the club.


I experienced a euphoria that I cannot render into words, in exactly the same way that one cannot render into words the subjective experience of listening to music or seeing colors or eating chocolate or having sex.  If you have not experienced these things for yourself, no amount of description can fill the gap.  Of course, you can come to an *intellectual* understanding that "feeling the presence of the holy spirit" has nothing to do with any holy spirit. You can intellectually grasp that it is an internal mental process resulting from (probably) some kind of neurotransmitter released in response to social and internal mental stimulus.  But that won't allow you to understand *what it is like* any more than understanding physics will let you understand what colors look like or what music sounds like.


Happily, there are ways to stimulate the subjective experience that I'm describing other than accepting Jesus as your Lord and Savior. Meditation, for example, can produce similar results.  It can be a very powerful experience.  It can even become addictive, almost like a drug.


I am not necessarily advocating that you go try to get yourself a hit of religious euphoria (though I wouldn’t discourage you either -- the experience can give you some interesting and useful perspective on life).  Instead, I simply want to convince you to entertain the possibility that people might profess to believe in God for reasons other than indoctrination or stupidity.  Religious texts and rituals might be attempts to share real subjective experiences that, in the absence of a detailed modern understanding of neuroscience, can appear to originate from mysterious, subtle external sources.


The reason I want to convince you to entertain this notion is that an awful lot of energy gets wasted by arguing against religious beliefs on logical grounds, pointing out contradictions in the Bible and whatnot. Such arguments tend to be ineffective, which can be very frustrating for those who advance them. The antidote for this frustration is to realize that spirituality is not about logic.  It's about subjective experiences that not everyone is privy to.  Logic is about looking at the grooves.  Spirituality is about hearing the music.


The good news is that adopting science and reason doesn’t mean you have to give up on spirituality any more than you have to give up on music. There are myriad paths to spiritual experience, to a sense of awe and wonder at the grand tapestry of creation, to the essential existential mysteries of life and consciousness, to what religious people call “God.” Walking in the woods. Seeing the moons of Jupiter through a telescope. Gathering with friends to listen to music, or to sing, or simply to share the experience of being alive. Meditation. Any of these can be spiritual experiences if you allow them to be. In this sense, God is everywhere.


Rationality Merchandise - First Set

11 Gleb_Tsipursky 05 November 2015 06:12AM

As part of my broader project of promoting rationality to a wide audience, we developed clothing with rationality-themed slogans. This apparel is suited for aspiring rationalists to wear to show their affiliation with rationality, to remind themselves and other aspiring rationalists to improve, and to spread positive memes broadly.


My gratitude to all those who gave suggestions about and voted on these slogans, both on LW itself and the LW Facebook group. This is the first set of seven slogans that had the most popular support from Less Wrongers, and more will be coming soon.


The apparel is pretty affordable, starting at under $15. All profits will go to funding nonprofit work dedicated to spreading rationality to a broad audience.


Links to Clothing with Slogans:

1) Less Wrong Every Day

This slogan conveys a key aspiration of every aspiring rationalist - to grow less wrong every day and have a clearer map of the territory. This is not only a positive meme, but also a clear sign of affiliation with rationality and the Less Wrong community in particular.


2) Growing Mentally Stronger

This slogan conveys the broad goal of rationality, namely for its participants to grow mentally stronger. This shirt helps prime the wearer and those around the wearer to focus on growing more rational, both epistemically and instrumentally. It is more broadly accessible than something like "Less Wrong Every Day."


3) Living On Purpose

This slogan conveys the intentional nature of how aspiring rationalists live their life, with a clear set of terminal goals and strategies to reach those goals.


4) Please Provide An Example

This slogan and its variants received a lot of support from aspiring rationalists tired of discussions and debates with people who talked in broad abstract terms and failed to provide examples. It automatically reminds those you are talking with, both aspiring rationalists and non-rationalists, to be concrete and specific in their engagement with you, and minimizes wasted airtime and inefficient discussions.


5) I Notice I'm Confused

This slogan reminds the wearer and those around the wearer of the vital skill of noticing confusion for becoming aware of gaps between one's map and the reality of the territory. Moreover, in field testing this design, this slogan proved especially fruitful for prompting conversations about rationality from those curious about it.


6) Glad To Change My Mind

This slogan conveys and reinforces one of the most fundamental aspects of rationality - the eagerness and yearning to change one's mind based on evidence. The slogan is an especially impactful way of conveying rationality broadly, as the sentiment of updating beliefs based on evidence is something that many intelligent people wish society shared. Thus, it helps attract intellectually-oriented people into discussions about rationality.


7) Changed Your Mind? Achievement Unlocked!

This slogan has the same benefits as the above slogan, except being more outwardly oriented and expressing the message in a more meme-style format.


Other ideas for slogans that had support, in no particular order (Note that we limited the number of words to 4 longer words or 7 shorter words to fit on a T-shirt, and some of these combine Effective Altruism and Rationality):


  • How Much Do You Believe That?
  • Reach Your Goals Using Science
  • Truth Is Not Partisan
  • Glad To Give Citations
  • What is True is Already So
  • Reality Doesn’t Take Sides
  • In Math We Trust
  • In Reason We Trust
  • Seeking Constructive Feedback
  • Make New Mistakes Only
  • Constantly Optimizing
  • Absence Of Evidence Is Evidence Of Absence
  • Rationality: Accurate Beliefs + Winning Decisions
  • I Chose This Rationally
  • Combining Heart And Head
  • Effective Altruism
  • Doing the Most Good Per Dollar
  • Optimizing QALYs
  • Superdonor
  • Making My Life Meaningful
  • Purpose Comes from Within


I would appreciate feedback on the current designs. As you get and wear them, I'd appreciate learning about your experience wearing them, to learn what kind of reaction you get. So far, we've had quite positive reports from our field tests of the merchandise, with good conversations prompted by wearing these slogans.


Also, please share which of the additional slogans are your favorites, so we can get them done sooner. If you have additional ideas for slogans, list them in comments below, and remember the guidelines of 4 longer words to 7 short words, and making them accessible to a broad audience to spread rationality memes.


Besides clothing, what other kind of merchandise would you like to buy?


I look forward to your feedback! If you want to contact me privately about the merchandise or the broader project of spreading rationality to a broad audience, my email is gleb@intentionalinsights.org



Five Worlds Collide 2015 - unconference in Vienna

23 AnnaLeptikon 30 September 2015 01:34PM

Welcome to Five Worlds Collide, the (un)conference for effective altruism, quantified self, rationality/scientific thinking, transhumanism and artificial intelligence.

Based on feedback about the EA Global events, where people said they wanted more opportunities to present their own thoughts, and because I (co-)organize multiple meetups in Vienna that to my mind have a huge overlap, I am planning and organizing this event for December 2015.

What: Present your own thoughts and projects, get inspired by new input, discuss, disagree, change your mind and grow – and connect with new amazing people and form ideas and projects together. In practice this means there will be five keynote talks and a lot of opportunity to give short lightning talks yourself.

When: it starts on the evening of Friday, the 4th of December, and ends on the evening of Sunday, the 6th of December 2015 (so it's 2.5 days).

Where: sektor5 is an amazing and huge coworking space in Vienna. They even won the “Best Coworking Space” in the Austrian national round of the Central European Startup Awards 2015! Vienna is a city worth visiting – it is especially beautiful during Christmas season and interesting because of its history („Vienna Circle“, Gödel, Schrödinger – it’s even mentioned in the „Logicomix“).

How much: the ticket for the whole event will be 50 Euro. This includes lunch on Saturday and Sunday - it does not include accommodation, breakfast, or dinner (but I can offer advice and recommendations for those). Still, this is the absolute minimum needed to create this event, so there is also the option on Eventbrite to donate additional money to make the event as great as possible. (Any surplus will be used for “Effective Altruism Austria” and/or donated effectively to GiveWell top charities.)

Always updated version on Facebook here.

Get your ticket here.

I am very thankful for all the great events I attended in the last months, for example the European LessWrong Community Weekend 2015 and EA Global in San Francisco and Oxford. They added value to my life and gave me the opportunity to learn new things, exchange thoughts, and get to know amazing humans as well as meet friends again. I hope I can give the same back to others.

I am also happy about feedback and helping hands – right now it’s mostly a one-(wo)man show.

Looking forward to seeing you,

P.S.: If you have any questions about the event, you can reach me via email or on Facebook

Optimizing the Twelve Virtues of Rationality

24 Gleb_Tsipursky 09 June 2015 03:08AM

At the Less Wrong Meetup in Columbus, OH over the last couple of months, we discussed optimizing the Twelve Virtues of Rationality. In doing so, we were inspired by what Eliezer himself said in the essay:

  • Perhaps your conception of rationality is that it is rational to believe the words of the Great Teacher, and the Great Teacher says, “The sky is green,” and you look up at the sky and see blue. If you think: “It may look like the sky is blue, but rationality is to believe the words of the Great Teacher,” you lose a chance to discover your mistake.

So we first decided on the purpose of optimizing, and settled on yielding virtues that would be most impactful and effective at motivating people to become more rational - in other words, optimizations that would produce the most utilons and hedons for the purpose of winning. There were a bunch of different suggestions. I tried to apply them to myself over the last few weeks and want to share my findings.


First Suggestion

Replace Perfectionism with Improvement


Motivation for Replacement

Perfectionism, both in how it pattern matches and in its actual description in the essay, orients toward focusing on defects and errors in oneself. By depicting the self as always flawed, and portraying the aspiring rationalist's job as seeking to find the flaws, the virtue of perfectionism is framed negatively, and is bound to result in negative reinforcement. Finding a flaw feels bad, and in many people that creates ugh fields around actually doing that search, as reported by participants at the Meetup. Instead, a positive framing of this virtue would be Improvement. Then, the aspiring rationalist can feel ok about where s/he is right now, but orient toward improving and growing mentally stronger - Tsuyoku Naritai! All improvement would be about gaining more hedons, and thus use the power of positive reinforcement. Generally, research suggests that positive reinforcement is effective in motivating the repetition of behavior, whereas negative reinforcement works best to stop people from doing a certain behavior. No wonder that Meetup participants reported that Perfectionism was not very effective in motivating them to grow more rational. So to get both more hedons, and thereby more utilons in the sense of the utility of seeking to grow more rational, Improvement might be a better term and virtue than perfectionism.



I've been orienting myself toward improvement instead of perfectionism for the last few weeks, and it's been a really noticeable difference. I've become much more motivated to seek ways that I can improve my ability to find the truth. I've been more excited and enthused about finding flaws and errors in myself, because they are now an opportunity to improve and grow stronger, not become less weak and imperfect. It's the same outcome as the virtue of Perfectionism, but deploying the power of positive reinforcement.


Second Suggestion

Replace Argument with Community


Motivation for Replacement

Argument is an important virtue, and a vital way of getting ourselves to see the truth is to rely on others, who help us see the truth through debates, highlight mistaken beliefs, and help us update on them, as the virtue describes. Yet orienting toward a rationalist Community has additional benefits besides the benefits of argument, which is only one part of a rationalist Community. Such a community would help provide an external perspective that research suggests would be especially beneficial for pointing out flaws and biases in one's ability to evaluate reality rationally, even without an argument. A community can help provide wise advice on making decisions, and it’s especially beneficial to have a community of diverse and intelligent people of all sorts in order to get the benefits of a wide variety of private information that one can aggregate to help make the best decisions. Moreover, a community can provide systematic ways to improve, through giving each other systematic feedback, through compensating for each other's weaknesses in rationality, through learning difficult things together, and other ways of supporting each other's pursuit of ever-greater rationality. Likewise, a community can collaborate, with different people fulfilling different functions in supporting all others in growing mentally stronger - not everybody has to be the "hero," after all, and different people can specialize in various tasks related to supporting others growing mentally stronger, gaining comparative advantage as a result. Studies show that social relationships impact us powerfully in numerous ways, contribute to our mental and physical wellbeing, and make us become more like our social network over time (1, 2, 3). This further highlights the benefits of building a rationalist-oriented community of diverse people around ourselves to help us grow mentally stronger and get to the correct answer, and to gain hedons and utilons alike for the purpose of winning.



After I updated my beliefs toward Community from Argument, I've been working more intentionally to create a systematic way for other aspiring rationalists in my LW meetup, and even non-rationalists, to point out my flaws and biases to me. I've noticed that by taking advantage of outside perspectives, I've been able to make quite a bit more headway on uncovering my own false beliefs and biases. I asked friends, both fellow aspiring rationalists and other wise friends not currently in the rationalist movement, to help me by pointing out when my biases might be at play, and they were happy to do so. For example, I tend to have an optimism bias, and I have told people around me to watch for me exhibiting this bias. They pointed out a number of times when this occurred, and I was able to improve gradually my ability to notice and deal with this bias.


Third Suggestion

Expand Empiricism to include Experimentation


Motivation for Expansion

This would not be a replacement of a virtue, but an expansion of the definition of Empiricism. As currently stated, Empiricism focuses on observation and prediction, and implicitly on making beliefs pay rent in anticipated experience. This is a very important virtue, and fundamental to rationality. It can be improved, however, by adding experimentation to the description of empiricism. By experimentation I mean expanding beyond the passive observation currently described in the essay to actually running experiments and testing things out in order to update our maps, both of ourselves and of the world around us. This would help us take the initiative in gathering data about the world, rather than relying passively on observation. My perspective on this topic was further strengthened by this recent discussion post, which caused me to further update my beliefs toward experimentation as a really valuable part of empiricism. Thus, including experimentation as part of empiricism would get us more utilons for getting at the correct answer and winning.



I have been running experiments on myself and the world around me long before this discussion took place. The discussion itself helped me connect the benefits of experimentation to the virtue of Empiricism, and also see the gap currently present in that virtue. I strengthened my commitment to experimentation, and have been running more concrete experiments, where I both predict the results in advance in order to make my beliefs pay rent, and then run an experiment to test whether my beliefs actually correlated to the outcome of the experiments. I have been humbled several times and got some great opportunities to update my beliefs by combining prediction of anticipated experience with active experimentation.



The Twelve Virtues of Rationality can be optimized to be more effective and impactful for getting at the correct answer and thus winning. There are many ways of doing so, but we need to be careful to choose the optimizations that would work best for the most people, based on the research on how our minds actually work. The suggestions I shared above are just some ways of doing so. What do you think of these suggestions? What are your ideas for optimizing the Twelve Virtues of Rationality?


In Praise of Maximizing – With Some Caveats

22 wallowinmaya 15 March 2015 07:40PM

Most of you are probably familiar with the two contrasting decision making strategies "maximizing" and "satisficing", but a short recap won't hurt (you can skip the first two paragraphs if you get bored): Satisficing means selecting the first option that is good enough, i.e. that meets or exceeds a certain threshold of acceptability. In contrast, maximizing means searching until the best possible option is found.

Research indicates (Schwartz et al., 2002) that there are individual differences with regard to these two decision making strategies. That is, some individuals – so-called ‘maximizers’ – tend to extensively search for the optimal solution. Other people – ‘satisficers’ – settle for good enough [1]. Satisficers, in contrast to maximizers, tend to accept the status quo and see no need to change their circumstances [2].
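As a toy illustration (my own sketch, not anything from the research cited), the two strategies can be written as selection procedures over a set of scored options: satisficing stops at the first option clearing a threshold, while maximizing examines every option.

```python
def satisfice(options, score, threshold):
    """Return the first option whose score meets the threshold, else None."""
    for option in options:
        if score(option) >= threshold:
            return option
    return None


def maximize(options, score):
    """Examine every option and return the one with the highest score."""
    return max(options, key=score)


# Hypothetical example: picking a restaurant rated 0-10.
ratings = {"cafe": 6, "diner": 8, "bistro": 9}
options = list(ratings)

print(satisfice(options, ratings.get, threshold=7))  # "diner" - first good-enough option
print(maximize(options, ratings.get))                # "bistro" - best option overall
```

Note the asymmetry in cost: the satisficer may stop after inspecting one option, while the maximizer must always score them all, which is where the extra search effort (and, per Schwartz et al., some of the regret) comes from.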

When the subject is raised, maximizing usually gets a bad rap. For example, Schwartz et al. (2002) found "negative correlations between maximization and happiness, optimism, self-esteem, and life satisfaction, and positive correlations between maximization and depression, perfectionism, and regret."

So should we all try to become satisficers? At least some scientists and the popular press seem to draw this conclusion:

Maximisers miss out on the psychological benefits of commitment, leaving them less satisfied than their more contented counterparts, the satisficers. ...Current research is trying to understand whether they can change. High-level maximisers certainly cause themselves a lot of grief.

I beg to differ. Satisficers may be more content with their lives, but most of us don't live for the sake of happiness alone. Of course, satisficing makes sense when not much is at stake [3]. However, maximizing can also prove beneficial, for the maximizers themselves and for the people around them, especially in the realm of knowledge, ethics, and relationships, and when it comes to more existential issues – as I will argue below [4].

Belief systems and Epistemology

Ideal rationalists could be thought of as epistemic maximizers: They try to notice slight inconsistencies in their worldview, take ideas seriously, beware wishful thinking, compartmentalization, rationalizations, motivated reasoning, cognitive biases and other epistemic sins. Driven by curiosity, they don't try to confirm their prior beliefs, but wish to update them until they are maximally consistent and maximally correspondent with reality. To put it poetically, ideal rationalists as well as great scientists don't content themselves to wallow in the mire of ignorance but are imbued with the Faustian yearning to ultimately understand whatever holds the world together in its inmost folds.

In contrast, consider the epistemic habits of the average Joe Christian: He will certainly profess that having true beliefs is important to him. But he doesn't go to great lengths to actually make this happen. For example, he probably believes in an omnipotent and benevolent being that created our universe. Did he impartially weigh all available evidence to reach this conclusion? Probably not. More likely is that he merely shares the beliefs of his parents and his peers. However, isn't he bothered by the problem of evil or Occam's razor? And what about all those other religions whose adherents believe with the same certainty in different doctrines?

Many people don’t have good answers to these questions. Their model of how the world works is neither very coherent nor accurate but it's comforting and good enough. They see little need to fill the epistemic gaps and inconsistencies in their worldview or to search for a better alternative. Thus, one could view them as epistemic satisficers. Of course, all of us exhibit this sort of epistemic laziness from time to time. In the words of Jonathan Haidt (2013):

We take a position, look for evidence that supports it, and if we find some evidence—enough so that our position “makes sense”—we stop thinking.

Usually, I try to avoid taking cheap shots at religion and therefore I want to note that similar points apply to many non-theistic belief systems.


Let's go back to average Joe: he presumably obeys the dictates of the law and his religion and occasionally donates to (ineffective) charities. Joe probably thinks that he is a “good” person and many people would likely agree. This leads us to an interesting question: how do we typically judge the morality of our own actions?

Let's delve into the academic literature and see what it has to offer: In one exemplary study, Sachdeva et al. (2009) asked participants to write a story about themselves using either morally positive words (e.g. fair, nice) or morally negative words (e.g. selfish, mean). Afterwards, the participants were asked if and how much they would like to donate to a charity of their choice. The result: Participants who wrote a story containing the positive words donated only one fifth as much as those who wrote a story with negative words.

This effect is commonly referred to as moral licensing: People with a recently boosted moral self-concept feel like they have done enough and see no need to improve the world even further. Or, as McGonigal (2011) puts it (emphasis mine):

When it comes to right and wrong, most of us are not striving for moral perfection. We just want to feel good enough – which then gives us permission to do whatever we want.

Another well-known phenomenon is scope neglect. One explanation for scope neglect is the "purchase of moral satisfaction" proposed by Kahneman and Knetsch (1992): most people don't try to do as much good as possible with their money; they only spend just enough cash to create a "warm-fuzzy feeling" in themselves.

Phenomena like "moral licensing" and "purchase of moral satisfaction" indicate that it is all too human to act only as altruistically as is necessary to feel or seem good enough. This could be described as "ethical satisficing", because people just follow a course of action that meets or exceeds a certain threshold of moral goodness. They don't try to carry out the morally optimal action or an approximation thereof (as measured by their own axiology).

I think I cited enough academic papers in the last paragraphs, so let's get more speculative: Many, if not most people [5] tend to be intuitive deontologists [6]. Deontology basically posits that some actions are morally required, and some actions are morally forbidden. As long as you do perform the morally required ones and don't engage in morally wrong actions you are off the hook. There is no need to do more, no need to perform supererogatory acts. Not neglecting your duties is good enough. In short, deontology could also be viewed as ethical satisficing (see footnote 7 for further elaboration).

In contrast, consider deontology's arch-enemy: Utilitarianism. Almost all branches of utilitarianism share the same principal idea: That one should maximize something for as many entities as possible. Thus, utilitarianism could be thought of as ethical maximizing [8].

Effective altruists are an even better example of ethical maximizers because they actually try to identify and implement (or at least pretend to try) the most effective approaches to improve the world. Some conduct in-depth research and compare the effectiveness of hundreds of different charities to find the ones that save the most lives with as little money as possible. And rumor has it there are people who have even weirder ideas about how to ethically optimize literally everything. But more on this later.

Friendships and conversations

Humans intuitively assume that the desires and needs of other people are similar to their own. Consequently, I thought that everyone secretly yearns to find like-minded companions with whom one can talk about one’s biggest hopes as well as one’s greatest fears, and form deep, lasting friendships.

But experience tells me that I was probably wrong, at least to some degree: I found it quite difficult to have these sorts of conversations with a certain kind of person, especially in groups (luckily, I’ve also found enough exceptions). It seems that some people are satisfied as long as their conversations meet a certain, not very high threshold of acceptability. Similar observations could be made about their friendships in general. One could call them social or conversational satisficers. By the way, this time research actually suggests that conversational maximizing is probably better for your happiness than small talk (Mehl et al., 2008).

Interestingly, what could be called "pluralistic superficiality" may account for many instances of small talk and superficial friendships since everyone experiences this atmosphere of boring triviality but thinks that the others seem to enjoy the conversations. So everyone is careful not to voice their yearning for a more profound conversation, not realizing that the others are suppressing similar desires.

Crucial Considerations and the Big Picture

On to the last section of this essay. It’s even more speculative and half-baked than the previous ones, but it may be the most interesting, so bear with me.

Research suggests that many people don’t even bother to search for answers to the big questions of existence. For example, in a representative sample of 603 Germans, 35% of the participants could be classified as existentially indifferent; that is, they neither think their lives are meaningful nor suffer from this lack of meaning (T. Schnell, 2008).

The existential thirst of the remaining 65% is presumably harder to satisfy, but how much harder? Many people don't invest much time or cognitive resources in order to ascertain their actual terminal values and how to optimally reach them – which is arguably of the utmost importance. Instead they appear to follow a mental checklist containing common life goals (one could call them "cached goals") such as a nice job, a romantic partner, a house and probably kids. I’m not saying that such goals are “bad” – I also prefer having a job to sleeping under the bridge and having a partner to being alone. But people usually acquire and pursue their (life) goals unsystematically and without much reflection, which makes it unlikely that such goals exhaustively reflect their idealized preferences. Unfortunately, many humans are so occupied by the pursuit of such goals that they are forced to abandon further contemplation of the big picture.

Furthermore, many of them lack the financial, intellectual or psychological capacities to ponder complex existential questions. I'm not blaming subsistence farmers in Bangladesh for not reading more about philosophy, rationality or the far future. But there are more than enough affluent, highly intelligent and inquisitive people who certainly would be able to reflect about crucial considerations. Instead, they spend most of their waking hours maximizing nothing but the money in their bank accounts or interpreting the poems of some Arab guy from the 7th century [9].

Generally, many people seem to take the current rules of our existence for granted and content themselves with the fundamental evils of the human condition such as aging, needless suffering or death. Whatever the reason may be, they don't try to radically change the rules of life and their everyday behavior seems to indicate that they’ve (gladly?) accepted their current existence and the human condition in general. One could call them existential satisficers.

Contrast this with the mindset of transhumanism. Generally, transhumanists are not willing to accept the horrors of nature and realize that human nature itself is deeply flawed. Thus, transhumanists want to fundamentally alter the human condition and aim to eradicate, for example, aging, unnecessary suffering and ultimately death. Through various technologies transhumanists desire to create a utopia for everyone. Thus, transhumanism could be thought of as existential maximizing [10].

However, existential maximizing and transhumanism are not very popular. Quite the opposite: existential satisficing – accepting the seemingly unalterable human condition – has a long philosophical tradition. To give some examples: The otherwise admirable Stoics believed that the whole universe is pervaded and animated by divine reason. Consequently, one should cultivate apatheia and calmly accept one's fate. Leibniz even argued that we already live in the best of all possible worlds. The mindset of existential satisficing can also be found in Epicureanism and arguably in Buddhism. Lastly, religions like Christianity or Islam are generally against transhumanism, partly because it amounts to “playing God” – which is understandable from their point of view, because why bother fundamentally transforming the human condition if everything will be perfect in heaven anyway?

One has to grant ancient philosophers that they couldn't even imagine that one day humanity would acquire the technological means to fundamentally alter the human condition. Thus it is no wonder that Epicurus argued that death is not to be feared or that the Stoics believed that disease or poverty are not really bad: It is all too human to invent rationalizations for the desirability of actually undesirable, but (seemingly) inevitable things – be it death or the human condition itself.

But many contemporary intellectuals can't be given the benefit of the doubt. They argue explicitly against trying to change the human condition. To name a few: Bernard Williams believed that death gives life meaning. Francis Fukuyama called transhumanism the world's most dangerous idea. And even Richard Dawkins thinks that the fear of death is "whining" and that the desire for immortality is "presumptuous" [11]:

Be thankful that you have a life, and forsake your vain and presumptuous desire for a second one.

With all that said, "run-of-the-mill" transhumanism arguably still doesn't go far enough. There are at least two problems I can see: 1) Without a benevolent superintelligent singleton, "Moloch" (to use Scott Alexander's excellent wording) will never be defeated. 2) We are still uncertain about ontology, decision theory, epistemology and our own terminal values. Consequently, we need some kind of process which can help us to understand those things, or we will probably fail to rearrange reality until it conforms with our idealized preferences.

Therefore, it could be argued that the ultimate goal is the creation of a benevolent superintelligence or Friendly AI (FAI) whose values are aligned with ours. There are of course numerous objections to the whole superintelligence strategy in general and to FAI in particular, but I won’t go into detail here because this essay is already too long.

Nevertheless – however unlikely – it seems possible that with the help of a benevolent superintelligence we could abolish all gratuitous suffering and achieve an optimal mode of existence. We could become posthuman beings with god-like intellects, our ecstasy outshining the surrounding stars, transforming the universe until one happy day all wounds are healed, all despair dispelled and every (idealized) desire fulfilled. To many this seems like sentimental and wishful eschatological speculation, but for me it amounts to ultimate existential maximizing [12, 13].


The previous paragraphs shouldn’t fool one into believing that maximizing has no serious disadvantages. The desire to aim higher, become stronger and to always behave in an optimally goal-tracking way can easily result in psychological overload and subsequent surrender. Furthermore, it seems that adopting the mindset of a maximizer increases the tendency to engage in upward social comparisons and counterfactual thinking, which, as research has shown, contribute to depression.

Moreover, there is much to be learnt from stoicism and satisficing in general: Life isn't always perfect and there are things one cannot change; one should accept one's shortcomings – if they are indeed unalterable; one should make the best of one's circumstances. In conclusion, better be a happy satisficer whose moderate productivity is sustainable than be a stressed maximizer who burns out after one year. See also these two essays which make similar points.

All that being said, I still favor maximizing over satisficing. If our ancestors had all been satisficers, we would still be picking lice off each other’s backs [14]. And only by means of existential maximizing can we hope to abolish the aforementioned existential evils and all needless suffering – even if the chances seem slim.

[Originally posted a longer, more personal version of this essay on my own blog]


[1] Obviously this is not a categorical classification, but a dimensional one.

[2] To put it more formally: The utility function of the ultimate satisficer would assign the same (positive) number to each possible world, i.e. the ultimate satisficer would be satisfied with every possible world. The fewer possible worlds you are satisfied with (i.e. the higher your threshold of acceptability), the fewer possible worlds exist between which you are indifferent, and the less of a satisficer and the more of a maximizer you are. Also note: Satisficing is not irrational in itself. Furthermore, I’m talking about the somewhat messy psychological characteristics and (revealed) preferences of human satisficers/maximizers. Read these posts if you want to know more about satisficing vs. maximizing with regard to AIs.

[3] Rational maximizers take the value of information and opportunity costs into account.

[4] Instead of "maximizer" I could also have used the term "optimizer".

[5] E.g. in the "Fat Man" version of the famous trolley dilemma, something like 90% of subjects don't push a fat man onto the track, in order to save 5 other people. Also, utilitarians like Peter Singer don't exactly get rave reviews from most folks. Although there is some conflicting research (Johansson-Stenman, 2012). Furthermore, the deontology vs. utilitarianism distinction itself is limited. See e.g. "The Righteous Mind" by Jonathan Haidt.

[6] Of course, most people are not strict deontologists. They are also intuitive virtue ethicists and care about the consequences of their actions.

[7] Admittedly, one could argue that certain versions of deontology are about maximally not violating certain rules and thus could be viewed as ethical maximizing. However, in the space of all possible moral actions there exist many actions between which a deontologist is indifferent, namely all those actions that exceed the threshold of moral acceptability (i.e. those actions that are not violating any deontological rule). To illustrate this with an example: Visiting a friend and comforting him for 4 hours or using the same time to work and subsequently donating the earned money to a charity are both morally equivalent from the perspective of (many) deontological theories – as long as one doesn’t violate any deontological rule in the process. We can see that this parallels satisficing.

Contrast this with (classical) utilitarianism: In the space of all possible moral actions there is only one optimal moral action for a utilitarian, and all other actions are morally worse. An (ideal) utilitarian searches for and implements the optimal moral action (or tries to approximate it, because in real life one is basically never able to identify, let alone carry out, the optimal moral action). This amounts to maximizing. Interestingly, this inherent demandingness has often been put forward as a critique of utilitarianism (and other sorts of consequentialism), and satisficing consequentialism has been proposed as a solution (Slote, 1984). Further evidence for the claim that maximizing is generally viewed with suspicion.

[8] The obligatory word of caution here: following utilitarianism to the letter can be self-defeating if done in a naive way.

[9] Nick Bostrom (2014) expresses this point somewhat harshly:

A colleague of mine likes to point out that a Fields Medal (the highest honor in mathematics) indicates two things about the recipient: that he was capable of accomplishing something important, and that he didn't.

As a general point: Too many people end up as money-, academia-, career- or status-maximizers although those things often don’t reflect their (idealized) preferences.

[10] Of course there are lots of utopian movements like socialism, communism or the Zeitgeist movement. But all those movements make the fundamental mistake of ignoring, or at least heavily underestimating, the importance of human nature. Creating utopia merely through social means is impossible because most of us are, by our very nature, too selfish, status-obsessed and hypocritical, and cultural indoctrination can hardly change this. To deny this is to misunderstand the process of natural selection and evolutionary psychology. Secondly, even if a socialist utopia were to come true, there would still exist unrequited love, disease, depression and of course death. To abolish those things one has to radically transform the human condition itself.

[11] Here is another quote:

We are going to die, and that makes us the lucky ones. Most people are never going to die because they are never going to be born. […] We privileged few, who won the lottery of birth against all odds, how dare we whine at our inevitable return to that prior state from which the vast majority have never stirred?

― Richard Dawkins in "Unweaving the Rainbow"

[12] It’s probably no coincidence that Yudkowsky named his blog "Optimize Literally Everything" which adequately encapsulates the sentiment I tried to express here.

[13] Those interested in or skeptical of the prospect of superintelligent AI, I refer to "Superintelligence: Paths, Dangers and Strategies" by Nick Bostrom.

[14] I stole this line from Bostrom’s “In Defense of Posthuman Dignity”.


Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.

Haidt, J. (2013). The righteous mind: Why good people are divided by politics and religion. Random House LLC.

Johansson-Stenman, O. (2012). Are most people consequentialists? Economics Letters, 115 (2), 225-228.

Kahneman, D., & Knetsch, J. L. (1992). Valuing public goods: the purchase of moral satisfaction. Journal of environmental economics and management, 22(1), 57-70.

McGonigal, K. (2011). The Willpower Instinct: How Self-Control Works, Why It Matters, and What You Can Do to Get More of It. Penguin.

Mehl, M. R., Vazire, S., Holleran, S. E., & Clark, C. S. (2010). Eavesdropping on happiness: Well-being is related to having less small talk and more substantive conversations. Psychological Science, 21(4), 539-541.

Sachdeva, S., Iliev, R., & Medin, D. L. (2009). Sinning saints and saintly sinners: The paradox of moral self-regulation. Psychological Science, 20(4), 523-528.

Schnell, T. (2010). Existential indifference: Another quality of meaning in life. Journal of Humanistic Psychology, 50(3), 351-373.

Schwartz, B. (2000). Self-determination: The tyranny of freedom. American Psychologist, 55, 79–88.

Schwartz, B., Ward, A., Monterosso, J., Lyubomirsky, S., White, K., & Lehman, D. R. (2002). Maximizing versus satisficing: happiness is a matter of choice. Journal of personality and social psychology, 83(5), 1178.

Slote, M. (1984). “Satisficing Consequentialism”. Proceedings of the Aristotelian Society, 58: 139–63.

Agency and Life Domains

5 Gleb_Tsipursky 16 November 2014 01:38AM


The purpose of this essay is to propose an enriched framework of thinking to help optimize the pursuit of agency, the quality of living intentionally. I posit that pursuing and gaining agency involves 3 components:

1. Evaluating reality clearly, to

2. Make effective decisions, that

3. Achieve our short and long-term goals.

In other words, agency refers to the combination of assessing reality accurately and achieving goals effectively: epistemic and instrumental rationality, respectively. The essay will first explore the concept of agency more thoroughly, and will then consider the application of this concept in different life domains, by which I mean life areas such as work, romance, friendships, fitness, leisure, and others.

The concepts laid out here sprang from a collaboration between myself and Don Sutterfield, and also discussions with Max Harms, Rita Messer, Carlos Cabrera, Michael Riggs, Ben Thomas, Elissa Fleming, Agnes Vishnevkin, Jeff Dubin, and other members of the Columbus, OH, Rationality Meetup, as well as former members of this Meetup such as Jesse Galef and Erica Edelman. Members of this meetup are also collaborating to organize Intentional Insights, a new nonprofit dedicated to raising the sanity waterline through popularizing Rationality concepts in ways that create cognitive ease for a broad public audience (for more on Intentional Insights, see a fuller description here).


This section describes a framework of thinking that helps assess reality accurately and achieve goals effectively, in other words gain agency. After all, insofar as human thinking suffers from many biases, working to achieve greater agenty-ness would help us lead better lives. First, I will consider agency in relation to epistemic rationality, and then instrumental rationality: while acknowledging fully that these overlap in some ways, I believe it is helpful to handle them in distinct sections.

This essay proposes that gaining agency from the epistemic perspective involves individuals making an intentional evaluation of their environment and situation, in the moment and more broadly in life, sufficient to understand the full extent of one’s options within it and how these options relate to one’s personal short-term and long-term goals. People often make their decisions, both in the moment and major life decisions, based on socially-prescribed life paths and roles, whether due to the social expectations imposed by others or internalized preconceptions, often a combination of both. Such socially-prescribed life roles limit one’s options and thus the capacity to optimize one’s utility in reaching personal goals and preferences. Instead of going on autopilot in making decisions about one’s options, agency involves intentionally evaluating the full extent of one’s options to pursue the ones most conducive to one’s actual personal goals. To be clear, this may often mean choosing options that are socially prescribed, if they also happen to fit within one’s goal set. This intentional evaluation also means updating one’s beliefs based on evidence and facing the truth of reality even when it may seem ugly.

By gaining agency from the instrumental perspective, this essay refers to the ability to achieve one’s short-term and long-term goals. Doing so requires that one first gain a thorough understanding of one’s short-term and long-term goals, through an intentional process of self-evaluation of one’s values, preferences, and intended life course. Next, it involves learning effective strategies to make and carry out decisions conducive to achieving one’s personal goals and thus win at life. In the moment, that involves having an intentional response to situations, as opposed to relying on autopilot reflexes. This statement certainly does not mean going by System 2 at all times, as doing so would lead to rapid ego depletion, whether through actual willpower drain or through other related mechanisms. Agency involves using System 2 to evaluate System 1 and decide when one’s System 1 may be trusted to make good enough decisions and take appropriate actions with minimal oversight, in other words when System 1 has functional cached thinking, feeling, and behavior patterns. In cases where System 1 habits are problematic, agency involves using System 2 to change System 1 habits into more functional ones conducive to one’s goal set, not only behaviors but also changing one's emotions and thoughts. For the long term, agency involves intentionally making plans about one’s time and activities so that one can accomplish one’s goals. This involves learning about and adopting intentional strategies for discovering, setting, and achieving your goals, and implementing these strategies effectively in your life on a daily level.

Life Domains

Much of the discourse on agency in Rationality circles focuses on this notion as a broad category, and the level of agenty-ness for any individual is treated as a single point on a broad continuum of agency (she’s highly agenty, 8/10; he’s not very agenty, 3/10). After all, if someone has a thorough understanding of the concept of agency as demonstrated by the way they talk about agency and goal achievement, combined with their actual abilities to solve problems and achieve their goals in life domains such as their career or romantic relationships, then that qualifies that individual as a pretty high-level agent, right? Indeed, this is what I and others in the Columbus Rationality Meetup believed in the past about agency.

However, in an insight that now seems obvious to us (hello, hindsight bias) and may seem obvious to you after reading this post, we have come to understand that this is far from the case: just because someone has a high level of agency and success in one life domain does not mean that they have agency in other domains. Our previous belief, that those who understand the concept of agency well and seem highly agenty in one life domain must be agenty across the board, created a dangerous halo effect in evaluating individuals. This halo effect led to highly problematic predictions and normative expectations about the capacities of others, which undermined social relationships by creating misunderstandings, conflicts, and general interpersonal stress. It also led to highly problematic predictions and normative expectations about ourselves, as inflated conceptions of our capacities in a given life domain collided with the mistakes we then made in efforts at optimization, costing us time, energy, and motivation, and causing personal stress.

Since that realization, we have come across studies on the difference between rationality and intelligence, as well as on broader re-evaluations of dual process theory, and also on the difference between task-oriented thinking and socio-relationship thinking, indicating the usefulness of parsing out the heuristic of “smart” and “rational,” and examining the various skills and abilities covered by that term. However, such research has not yet explored how significant skill in rational thinking and agency in one life domain may (or may not) transfer to those same skills and abilities in other areas of life. In other words, individuals may not be intentional and agenty about their application of rational thinking across various life domains, something that might be conveyed through the term “intentionality quotient.” So let me tell you a bit about ourselves as case studies in how the concept of domains of agency has proved to be useful in thinking rationally about our lives and gaining agency more quickly and effectively in varied domains.

For example, I have a high level of agency in my career area and in time management and organization, both knowing quite a lot about these areas and achieving my goals within them pretty well. Moreover, I am thoroughly familiar with the concept of agency, both from the Rationality perspective and from my own academic research. From that, I and others who know me expect me to express high levels of agency across all of my life domains.

However, I have many challenges in being rational about maximizing my utility gains in relationships with others. Only relatively recently, within the last couple of years or so, have I begun to consider and pursue intentional efforts to reflect on the value that relationships with others have for my life. These intentional efforts resulted from conversations with members of the Columbus Rationality Meetup about their own approaches to relationships, and from reading Less Wrong posts on the topic of relationships. As a result of these efforts, I have begun to deliberately invest resources into cultivating some relationships while withdrawing from others. My System 1 self still has a pretty strong ugh field about doing the latter, and my System 2 has to have a very serious talk with my System 1 every time I make a move to distance myself from extant relationships that no longer serve me well.

This personal example illustrates one major reason why people who have a high level of agency in one life domain may not have it in another life domain. Namely, “ugh” fields and cached thinking patterns prevent many who are quite rational and utility-optimizing in certain domains from applying the same level of intentional analysis to another life domain. For myself, as an introverted bookish child, I had few friends. This was further exacerbated by my family’s immigration to the United States from the former Soviet Union when I was 10, with the consequent deep disruption of interpersonal social development. Thus, my cached beliefs about relationships and my role in them served me poorly in optimizing relationship utility, and only with significant struggle can I apply rational analysis and intentional decision-making to my relationship circles. Still, since starting to apply rationality to my relationships here, I have substantially leveled up my abilities in that domain.

Another major reason why people who have a high level of agency in one life domain may not have it in another life domain results from the fact that people have domain-specific vulnerabilities to specific kinds of biases and cognitive distortions. For example, despite knowing quite a bit about self-control and willpower management, I suffer from challenges managing impulse control over food. I have worked to apply both rational analysis and proven habit management and change strategies to modify my vulnerability to the Kryptonite of food and especially sweets. I know well what I should be doing to exhibit greater agency in that field and have made very slow progress, but the challenges in that domain continually surprise me.

My assessment of my level of agency, which sprang from the areas where I had high agency, caused me to greatly overestimate my ability to optimize in areas where I had low levels of agency, e.g., in relationships and impulse control. As a result, I applied incorrect strategies to level up in those domains, and caused myself a great deal of unnecessary stress, and much loss of time, energy, and motivation.

My realization of the differentiated agency I had across different domains resulted in much more accurate evaluations and optimization strategies. For some domains, such as relationships, the problem resulted primarily from a lack of rational self-reflection. This suggests one major fix to differentiated levels of agency across different life domains – namely, a project that involves rationally evaluating one’s utility optimization in each life area. For some domains, the problem stems from domain-specific vulnerability to certain biases, and that requires applying self-awareness, data gathering, and tolerance toward one’s personally slow optimization in these areas.

My evaluation of the levels of agency of others underwent a similar transformation after the realization that they had different levels of agency in different life domains. Previously, mistaken assessments resulting from the halo effect about agency undermined my social relationships through misunderstandings, conflicts, and general interpersonal stress. For instance, before this realization I found it difficult to understand how one member of the Columbus Rationality Meetup excelled in some life areas, such as managing relationships and social interactions, but suffered from deep challenges in time management and organization. Caring about this individual deeply as a close friend and collaborator, I invested much time and energy resources to help improve this life domain. The painfully slow improvement and many setbacks experienced by this individual caused me to experience much frustration and stress, and resulted in conflicts and tensions between us. However, after making the discovery of differentiated agency across domains, I realized that not only was such frustration misplaced, but that the strategies I was suggesting were targeted too high for this individual, in this domain. A much more accurate assessment of his current capacities and the actual efforts required to level up resulted in much less interpersonal stress and much more effective strategies that helped this individual. Besides myself, other Columbus Rationality Meetup members have experienced similar benefits in applying this paradigm to themselves and to others.

Final Thoughts

To sum up, this essay provided an overview and some strategies for achieving greater agency - a highly instrumental framework of thinking that helps empower individuals to optimize their ability to assess reality accurately and achieve goals effectively. The essay in particular aims to enrich current discourse on agency by highlighting how individuals have different levels of agency across various life domains, and by underscoring the epistemic and instrumental implications of this perspective on agency. While the strategies listed above help achieve specific skills and abilities required to gain greater agency, I would suggest that one can benefit greatly from tying positive emotions to the framework of thinking about agency described above. For instance, one might think to oneself, “It is awesome to take an appropriately fine-grained perspective on how agency works, and I’m awesome for dedicating cycles to that project.” Doing so motivates one’s System 1 to pursue increasing levels of agency: it’s the emotionally rational step to assess reality accurately, achieve goals effectively, and thus gain greater agency in all life domains.





The Truth and Instrumental Rationality

11 the-citizen 01 November 2014 11:05AM

One of the central focuses of LW is instrumental rationality. It's been suggested, rather famously, that this isn't about having true beliefs, but rather it's about "winning". Systematized winning. True beliefs are often useful to this goal, but an obsession with "truthiness" is seen as counter-productive. The brilliant scientist or philosopher may know the truth, yet be ineffective. This is seen as unacceptable to many who see instrumental rationality as the critical path to achieving one's goals. Should we all discard our philosophical obsession with the truth and become "winners"?

The River Instrumentus

You are leading a group of five people away from a deadly threat which is slowly advancing behind you. You come to a river. It looks too dangerous to wade through, but through the spray of the water you see a number of stones. They are dotted across the river in a way that might allow you to cross. However, the five people you are helping are extremely nervous, and in order to convince them to cross you will not only have to show them it's possible to cross, you will also need to look calm enough after doing it to convince them that it's safe. All five of them must cross, as they insist on living or dying together.

Just as you are about to step out onto the first stone, it splutters and moves in the mist of the spraying water. It looks a little different from the others, now you think about it. After a moment you realise it's actually a person, struggling to keep their head above water. Your best guess is that this person would probably drown if they got stepped on by five more people. You think for a moment, and decide that, being a consequentialist concerned primarily with the preservation of life, it is ultimately better that this person dies so the others waiting to cross might live. After all, what is one life compared with five?

However, given your need for calm and the horror of their imminent death at your hands (or feet), you decide it is better not to think of them as a person, and so you instead imagine them being simply a stone. You know you'll have to be really convincingly calm about this, so you look at the top of the head for a full hour until you utterly convince yourself that the shape you see before you is factually indicative not of a person, but of a stone. In your mind, tops of heads aren't people - now they're stones. This is instrumentally rational - when you weigh things up, the self-deception ultimately increases the number of people who will likely live, and there is no specific harm you can identify as a result.

After you have finished convincing yourself you step out onto the per... stone... and start crossing. However, as you step out onto the subsequent stones, you notice they all shift a little under your feet. You look down and see the stones spluttering and struggling. You think to yourself "lucky those stones are stones and not people, otherwise I'd be really upset". You lead the five very grateful people over the stones and across the river. Twenty dead stones drift silently downstream.

When we weigh situations on pure instrumentality, small self-deceptions make sense. The only problem is, in an ambiguous and complex world, self-deceptions have a notorious way of compounding each other, and leave a gaping hole for cognitive bias to work its magic. Many false but deeply-held beliefs throughout human history have been quite justifiable on these grounds. Yet when we forget the value of truth, we can be instrumental, but we are not instrumentally rational. Rationality implies, or ought to imply, a value of the truth.

Winning and survival

In the jungle of our evolutionary childhood, humanity formed groups to survive. In these groups there was a hierarchy of importance, status and power. Predators, starvation, rival groups and disease all took the weak on a regular basis, but the groups afforded a partial protection. However, a violent or unpleasant death still remained a constant threat, particularly to the lowest and weakest members of the group. Sometimes these individuals were weak because they were physically weak. However, over time, groups that allowed and rewarded things other than physical strength became more successful. In these groups, discussion played a much greater role in power and status. The truly strong individuals, the winners in this new arena, were the ones that could direct conversation in their favour - conversations about who would do what, about who got what, and about who would be punished for what. Debates were fought with words, but they could end in death all the same.

In this environment, one's social status was intertwined with one's ability to win. In a debate, it was not so much a matter of what was true, but of what facts and beliefs achieved one's goals. Supporting the factual position that suited one's own goals was most important. Even where the stakes were low or irrelevant, it paid to prevail socially, because one's reputation guided others' limited cognition about who was best to listen to. Winning didn't mean knowing the most, it meant social victory. So when competition bubbled to the surface, it paid to ignore what one's opponent said and instead focus on appearing superior in any way possible. Sure, truth sometimes helped, but for the charismatic it was strictly optional. Politics was born.

Yet as groups got larger, and as technology began to advance for the first time, there appeared a new phenomenon. Where a group's power dynamics meant that it systematically held false beliefs, it became more likely to fail. The group that believed fire spirits guided a fire's advancement fared poorly compared with those who checked the wind and planned their means of escape accordingly. The truth finally came into its own. Yet truth, as opposed to simple belief by politics, could not be so easily manipulated for personal gain. The truth had no master. In this way it was both dangerous and liberating. And so slowly but surely the capacity for complex truth-pursuit became evolutionarily impressed upon the human blueprint.

However, in evolutionary terms there was little time for the completion of this new mental adaptation. Some people had it more than others. It also required the right circumstances to rise to the forefront of human thought, and other conditions could easily destroy it. For example, should a person's thoughts be primed by an environment of competition, the old ways come bubbling up to the surface: in a highly competitive environment, the social brain reverts to its primitive state. Learning and updating of views become increasingly difficult, because to the more primitive aspects of a person's social brain, updating one's views is a social defeat.

When we focus an organisation's culture on winning, there can be many benefits. It can create an air of achievement, to a degree. Hard work and the challenging of norms can be increased. However, we also prime the brain for social conflict. We create an environment where complexity and subtlety in conversation, and consequently in thought, is greatly reduced. In organisations where the goals and means are largely intellectual, a competitive environment creates useless conversations, meaningless debates, pointless tribalism, and little meaningful learning. There are many great examples, but I think you'd be best served watching our elected representatives at work to gain a real insight.

Rationality and truth

Rationality ought to contain an implication of truthfulness. Without it, our little self-deceptions start to gather and compound one another. Slowly but surely, they start to reinforce, join, and form an unbreakable, unchallengeable, yet utterly false belief system. I need not point out the more obvious examples, for in human society there are many. To avoid this on LW and elsewhere, truthfulness of belief ought to inform all our rational decisions, methods and goals. Of course true beliefs do not guarantee influence or power or achievement, or anything really. In a world of half-evolved truth-seeking equipment, why would we expect that? What we can expect is that, if our goals have anything to do with the modern world in all its complexity, the truth isn't sufficient, but it is necessary.

Instrumental rationality is about achieving one's goals, but in our complex world goals manifest in many ways - and we can never really predict how a false belief will distort our actions to utterly destroy our actual achievements. In the end, without truth, we never really see the stones floating down the river for what they are.
