
Background Reading: The Real Hufflepuff Sequence Was The Posts We Made Along The Way

18 Raemon 26 April 2017 06:15PM

This is the fourth post of the Project Hufflepuff sequence. Previous posts:

Epistemic Status: Tries to get away with making nuanced points about social reality by using cute graphics of geometric objects. All models are wrong. Some models are useful. 

Traditionally, when nerds try to understand social systems and fix the obvious problems in them, they end up looking something like this:

Social dynamics is hard to understand with your system 2 (i.e. deliberative/logical) brain. There are a lot of subtle nuances going on, and typically, nerds tend to see the obvious stuff, maybe go one or two levels deeper than the obvious stuff, and miss that it's in fact 4+ levels deep and happening in realtime, faster than you can deliberate. Human brains are pretty good (most of the time) at responding to the nuances intuitively. But in the rationality community, we've self-selected for a lot of people who:

  1. Don't really trust things that they can't understand fully with their system 2 brain. 
  2. Tend not to be as naturally skilled at intuitive mainstream social styles. 
  3. Are trying to accomplish things that mainstream social interactions aren't designed to accomplish (i.e. thinking deeply and clearly on a regular basis).

This post is an overview of essays that rationalist-types have written over the past several years, which I think add up to a "secret sequence" exploring why social dynamics are hard, and why they are important to get right. This may be useful both for understanding some previous attempts by the rationality community to change social dynamics on purpose, and for current endeavors to improve things.

(Note: I occasionally have words in [brackets], where I think original jargon was pointing in a misleading direction and I think it's worth changing)

To start with, a word of caution:

Armchair sociology can be harmful - Ozy's post is pertinent - most essays below fall into the category of "armchair sociology": attempts by nerds to understand and articulate social dynamics that they aren't actually that good at. Several times when an outsider has looked in at rationalist attempts to understand human interaction, they've said "Oh my god, this is the blind leading the blind", and often that seemed to me like a fair assessment.

I think all the essays that follow are useful, and are pointing at something real. But taken individually, they're kinda like the blind men groping at the elephant, each coming away with the distinct impression that an elephant is like a snake, a tree, or a boulder, depending on which aspect they're looking at.

[Fake Edit: Ozy informs me that they were specifically warning against amateur sociology and not psychology. I think the idea still roughly applies]

Part 1. Cultural Assumptions of Trust

Guess [Infer Culture], Ask Culture, and Tell [Reveal] Culture (Malcolm Ocean)

Different people have different ways of articulating their needs and asking for help. Different ways of asking require different assumptions of trust. If people are bringing different expectations of trust into an interaction, they may feel that that trust is being violated, which can seem rude, passive aggressive or oppressive.

I'm listing this article, instead of numerous others about Ask/Guess/Tell, because I think: a) Malcolm does a good job of explaining how all the cultures work, and b) his presentation of Reveal culture is a good, clearer upgrade of Brienne's Tell culture, and I'm a bit sad it hasn't made it into the zeitgeist yet.

I also like the suggestion to call Guess Culture "Infer Culture" (implying a bit more about what skills the culture actually emphasizes).

Guess Culture Screens for Trying to Cooperate (Ben Hoffman)

Rationality folk (and more generally, nerds), tend to prefer explicit communication over implicit, and generally see Guess culture as strictly inferior to Ask culture once you've learned to assert yourself. 

But there is something Guess culture does that Ask culture doesn't: give you evidence of how much people understand you and are trying to cooperate. Guess culture filters for people who have either invested effort into understanding your culture overall, or who are good at inferring your own wants.

Sharp Culture and Soft Culture (Sam Rosen)

[WARNING: It turned out lots of people thought this meant something different than what I thought it meant. Some people thought it meant soft culture doesn't involve giving people feedback or criticism at all. I don't think Soft/Sharp are totally-natural clusters in the first place, and the distinction I'm interested in (as it applies to rationality-culture) is how you give feedback.

(i.e. "Dude, your art sucks. It has no perspective." vs "oh, cool. Nice colors. For the next drawing, you might try incorporating perspective", as a simplified example)]

Somewhat orthogonal to Infer/Ask/Reveal culture is "Soft" vs "Sharp" culture. Sharp culture tends to have more biting humor, ribbing each other, and criticism. Soft culture tends to value kindness and social harmony more. Sam says that Sharp culture "values honesty more." Robby Bensinger counters in the comments: "My own experience is that sharp culture makes it more OK to be open about certain things (e.g., anger, disgust, power disparities, disagreements), but less OK to be open about other things (e.g., weakness, pain, fear, loneliness, things that are true but not funny or provocative or badass)."

Handshakes, Hi, and What's New: What's Going on With Small Talk? (Ben Hoffman)

Small talk often sounds nonsensical to literally-minded people, but it serves a fairly important function: giving people a structured path to figure out how much time/sympathy/interest they want to give each other. And even when the answer is "not much", it still is, significantly, nonzero - you regard each other as persons, not faceless strangers.

Personhood [Social Interfaces?] (Kevin Simler)

This essay gets a lot of mixed reactions, much of which I think has to do with its use of the word "Person." The essay is aimed at explaining how people end up treating each other as persons or nonpersons, without making any kind of judgment about it. This includes noting some things humans tend to do that you might consider horrible.

Like many grand theories, I think it overstates its case and ignores some places where the explanation breaks down, but I think it points at a useful concept, which is summarized by this adorable graphic:

The essay uses the word "personhood". In the original context, this was useful: it gets at why cultures develop, why it matters whether you're able to demonstrate reliability, trust, etc. It helps explain outgroups and xenophobia: outsiders do not share your social norms, so you can't reliably interact with them, and it's easier to think of them as non-people than try to figure out how to have positive interactions.

But what I'm most interested in is "how can we use this to make it easier for groups with different norms to interact with each other"? And for that, I think using the word "personhood" makes it way more likely to veer into judging each other for having different preferences and communication styles.

What makes a person is... arbitrary, but not fully arbitrary. 

Rationalist culture tends to attract people who prefer a particular style of “social interface”, often favoring explicit communication and discussing ideas in extreme detail. There's a lot of value to those things, but they have some problems:

a) this social interface does NOT mesh well with the rest of world (this is a problem if you have any goals that involve the rest of the world)

b) this social interface does not uniformly mesh well with all the people who are interested in, and valuable to, the rationality community.

I don't actually think it's possible to develop a set of assumptions that fits everyone's needs. But I do think it's possible to develop better tools for navigating different social contexts. I think it may be possible to tweak sets-of-norms so that they mesh better together, or at least so that when they bump into each other, there's greater awareness of what's happening and people's default response is "oh, we seem to have different preferences, let's figure out how to navigate that."

Maybe we can end up with something that looks kinda like this:

Against Being Against or For Tell Culture (Brienne Yudkowsky)

Having said a bunch of things about different cultural interfaces, I think this post by Brienne is really important, and highlights the end goal of all of this.

"Cultures" are a crutch. They are there to help you get your bearings. They're better than nothing. But they are not a substitute for actually having the skills needed to navigate arbitrary social situations as they come up so you can achieve whatever it is you want to achieve. 

To master communication, you can't just be like, "I prefer Tell Culture, which is better than Guess Culture, so my disabilities in Guess Culture are therefore justified." Justified shmustified, you're still missing an arm.

My advice to you - my request of you, even - if you find yourself fueling these debates [about which culture is better], is to (for the love of god) move on. If you've already applied cognitive first aid, you've created an affordance for further advancement. Using even more tourniquets doesn't help.

Part 2. Game Theory, Recursion and Trust

(or, "Social dynamics are really complicated, you are not getting away with the things you think you are getting away with, stop trying to be clever, manipulative, act-utilitarian or naive-consequentialist without actually understanding what is going on")

Grokking Newcomb's Problem and Deserving Trust (Andrew Critch)

Critch argues that it is not just "morally wrong" but an intellectual mistake to violate someone’s trust (even when you don’t expect any repercussions in the future).

When someone decides whether to trust you (say, giving you a huge opportunity), on the expectation that you’ll refrain from exploiting them, they’ve already run a low-grade simulation of you in their imagination. And the thing is that you don’t know whether you’re in a simulation or not when you make the decision whether to repay them. 

Some people argue “but I can tell that I’m a conscious being, and they aren’t a literal super-intelligent AI, they’re just a human. They can’t possibly be simulating me in this high fidelity. I must be real.” This is true. But their simulation of you is not based on your thoughts, it’s based on your actions. It’s really hard to fake. 

One way to think about it, not expounded on in the article: Yes, if you pause to think about it you can notice that you’re conscious and probably not being simulated in their imagination. But by the time you notice that, it’s too late. People build up models of each other all the time, based on very subtle cues such as how fast you respond to something. Conscious you knows that you’re conscious. But their decision of whether to trust you was based off the half-second it took for unconscious you to reply to questions like “Hey, do you think you can handle Project X while I’m away?”

The best way to convince people you’re trustworthy is to actually be trustworthy.
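One way to make the expected-value logic concrete is a toy Newcomb-style calculation. This is my own sketch, not from Critch's post, and all the payoffs and the predictor accuracy are made-up numbers: the person deciding whether to trust you predicts your disposition from subtle cues with some accuracy, and trusts you only if they predict "trustworthy".

```python
# Toy sketch (illustrative numbers, not from the original post):
# expected payoff of genuinely being trustworthy vs. planning to
# exploit, when the other person's "simulation" of you predicts
# your true disposition with some accuracy.

ACCURACY = 0.8          # how often their read of your disposition is right
REWARD_COOPERATE = 10   # value of being trusted and repaying that trust
REWARD_EXPLOIT = 15     # value of being trusted and then exploiting it
REWARD_NONE = 0         # value when they decide not to trust you

def expected_value(trustworthy: bool) -> float:
    """Expected payoff given your actual disposition.

    They trust you iff they predict "trustworthy", and their
    prediction matches your true disposition with prob ACCURACY.
    """
    p_trusted = ACCURACY if trustworthy else 1 - ACCURACY
    payoff_if_trusted = REWARD_COOPERATE if trustworthy else REWARD_EXPLOIT
    return p_trusted * payoff_if_trusted + (1 - p_trusted) * REWARD_NONE

# Being trustworthy wins (8.0 vs roughly 3.0), even though exploiting
# pays more *conditional* on already having been trusted.
print(expected_value(trustworthy=True))
print(expected_value(trustworthy=False))
```

The point of the sketch is that the exploiter's higher conditional payoff is swamped by how rarely they get trusted in the first place, once the predictor is even moderately accurate.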

You May Not Believe In Guess [Infer] Culture But It Believes In You (Scott Alexander)

This is short enough to just include the whole thing:

Consider an "ask culture" where employees consider themselves totally allowed to say "no" without repercussions. The boss would prefer people work unpaid overtime so ey gets more work done without having to pay anything, so ey asks everyone. Most people say no, because they hate unpaid overtime. The only people who agree will be those who really love the company or their job - they end up looking really good. More and more workers realize the value of lying and agreeing to work unpaid overtime so the boss thinks they really love the company. Eventually, the few workers who continue refusing look really bad, like they're the only ones who aren't team players, and they grudgingly accept.

Only now the boss notices that the employees hate their jobs and hate the boss. The boss decides to only ask employees if they will work unpaid overtime when it's absolutely necessary. The ask culture has become a guess culture.

How this applies to friendship is left as an exercise for the reader.
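Scott's story is, mechanically, a threshold cascade. Here's a tiny model of it - my own construction with arbitrary numbers, not anything from his post: each worker refuses overtime only while the social cost of refusing (which grows with the fraction of coworkers who have already agreed) stays below their dislike of unpaid overtime.

```python
# Toy threshold-cascade model (my own construction; numbers arbitrary)
# of how an "ask culture" request can snowball until everyone says yes.

def run_cascade(dislike_costs, social_pressure=10.0, max_rounds=100):
    """Return the set of workers agreeing once the dynamics settle.

    Worker i agrees once social_pressure * (fraction already agreeing)
    reaches their personal dislike cost. Agreement is sticky.
    """
    agreed = {i for i, cost in enumerate(dislike_costs) if cost <= 0}
    for _ in range(max_rounds):
        frac = len(agreed) / len(dislike_costs)
        new_agreed = {
            i for i, cost in enumerate(dislike_costs)
            if cost <= social_pressure * frac or i in agreed
        }
        if new_agreed == agreed:
            break
        agreed = new_agreed
    return agreed

# One eager worker (cost 0) is enough to start the cascade; with no
# eager worker, nobody agrees and the ask stays a genuine ask.
workers = [0.0, 2.0, 3.0, 5.0, 8.0]
print(sorted(run_cascade(workers)))  # → [0, 1, 2, 3, 4]
```

The same structure applies whenever agreeing is visible and refusing gets costlier as more people agree, which is why the equilibrium ends up being "only ask when it's absolutely necessary".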

The Social Substrate (Lahwran)

A fairly in-depth look at how common knowledge, signaling, Newcomb-like problems, and recursive modeling of each other interact to produce "regular social interaction."

I think there's a lot of interesting stuff here - I'm not sure if it's exactly accurate but it points in directions that seem useful. But I actually think the most important takeaway is the warning at the beginning:

WARNING: An easy instinct, on learning these things, is to try to become more complicated yourself, to deal with the complicated territory. However, my primary conclusion is "simplify, simplify, simplify": try to make fewer decisions that depend on other people's state of mind. You can see more about why and how in the posts in the "Related" section, at the bottom.

When you're trying to make decisions about people, you're reading a lot of subtle cues off them to get a sense of how you feel about them. When you [generic person you, not necessarily you in particular] can tell someone is making complex decisions based on game theory and trying to model all of this explicitly, it a) often comes across as a bit off, and b) even if it doesn't, you still have to invest a lot of cognitive resources figuring out how they are modeling things and whether they are actually doing a good job or missing key insights or subtle cues. The result can be draining, and it can output a general response of "ugh, something about this feels untrustworthy."

Whereas when people are able to cache this knowledge down into a system-1 level, you're able to execute a simpler algorithm that looks more like "just try to be a good trustworthy person", that people can easily read off your facial expression, and which reduces overall cognitive burden.

System 1 and System 2 Morality (Sophie Grouchy)

There’s some confusion over what “moral” means, because there’s two kinds of morality: 

 - System 1 morality is noticing-in-realtime when people need help, or when you’re being an asshole, and then doing something about it. 

 - System 2 morality is when you have a complex problem and a lot of time to think about it. 

System 1 moralists will pay back Parfit’s Hitchhiker because doing otherwise would be being a jerk. System 2 moralists invent Timeless [Functional?] decision theory. You want a lot of people with System 2 morality in the world, trying to fix complex problems. You want people with System 1 morality in your social circle.

The person who wrote this post eventually left the rationality community, in part out of frustration with people constantly violating small boundaries that seemed pretty obvious (things in the vein of “if you’re going to be 2 hours late, text me so I don’t have to sit around waiting for you.”)

Final Remarks

I want to reiterate - all models are wrong. Some models are useful. The most important takeaway from this is not that any particular one of these perspectives is true, but that social dynamics has a lot of stuff going on that is more complicated than you're naively imagining, and that this stuff is important enough to put the time into getting right.

April '17 I Care About Thread

4 MaryCh 18 April 2017 02:08PM

As an experiment, here's a thread for people to post about things they care about. Specifically, for things that are possible to contribute to, in some way, and preferably, to invite others to join.

Mine is buying and donating highschool textbooks to schools in the 'grey zone' of Ukraine (where the war kinda isn't fought, but few people would be surprised if it started.) I don't deliver them myself, though.

What's yours?

Straw Hufflepuffs and Lone Heroes

24 Raemon 16 April 2017 11:48PM
I was hoping the next Project Hufflepuff post would involve more "explain concretely what I think we should do", but as it turns out I'm still hashing out some thoughts about that. In the meanwhile, this is the post I actually have ready to go, which is as good as any to post for now.

Epistemic Status: Mythmaking. This is tailored for the sort of person for whom the "Lone Hero" mindset is attractive. If that isn't something you're concerned with and this post feels irrelevant or missing some important things, note that my vision for Project Hufflepuff has multiple facets and I expect different people to approach it in different ways.

The Berkeley Hufflepuff Unconference is on April 28th. RSVPing on this Facebook Event is helpful, as is filling out this form.

For good or for ill, the founding mythology of our community is a Harry Potter fanfiction.

This has a few ramifications I’ll delve into at some point, but the most pertinent bit is: for a community to change itself, the impulse to change needs to come from within the community. I think it’s easier to build change off of stories that are already a part of our cultural identity.*

* with an understanding that maybe part of the problem is that our cultural identity needs to change, or be more accessible, but I’m running with this mythos for the time being.

In J.K Rowling’s original Harry Potter story, Hufflepuffs are treated like “generic background characters” at best and as a joke at worst. All the main characters are Gryffindors, courageous and true. All the bad guys are Slytherin. And this is strange - Rowling clearly was setting out to create a complex world with nuanced virtues and vices. But it almost seems to me like Rowling’s story takes place in an alternate, explicitly “Pro-Gryffindor propaganda” universe instead of the “real” Harry Potter world. 

People have trouble taking Hufflepuff seriously, because they’ve never actually seen the real thing - only lame, strawman caricatures.

Harry Potter and the Methods of Rationality is… well, Pro-Ravenclaw propaganda. But part of being Ravenclaw is trying to understand things, and to use that knowledge. Eliezer makes an earnest effort to steelman each house. What wisdom does it offer that actually makes sense? What virtues does it cultivate that are rare and valuable?

When Harry goes under the sorting hat, it actually tries to convince him not to go into Ravenclaw, and specifically pushes towards Hufflepuff House:

Where would I go, if not Ravenclaw?

"Ahem. 'Clever kids in Ravenclaw, evil kids in Slytherin, wannabe heroes in Gryffindor, and everyone who does the actual work in Hufflepuff.' This indicates a certain amount of respect. You are well aware that Conscientiousness is just about as important as raw intelligence in determining life outcomes, you think you will be extremely loyal to your friends if you ever have some, you are not frightened by the expectation that your chosen scientific problems may take decades to solve -"

I'm lazy! I hate work! Hate hard work in all its forms! Clever shortcuts, that's all I'm about!

"And you would find loyalty and friendship in Hufflepuff, a camaraderie that you have never had before. You would find that you could rely on others, and that would heal something inside you that is broken."

But my plans -

"So replan! Don't let your life be steered by your reluctance to do a little extra thinking. You know that."

In the end, Harry chooses to go to Ravenclaw - the obvious house, the place that seemed most straightforward and comfortable. And ultimately… a hundred+ chapters later, I think he’s still visibly lacking in the strengths that Hufflepuff might have helped him develop. 

He does work hard and is incredibly loyal to his friends… but he operates in a fundamentally lone-wolf mindset. He’s still manipulating people for their own good. He’s still too caught up in his own cleverness. He never really has true friends other than Hermione, and when she is unable to be his friend for an extended period of time, it takes a huge toll on him that he doesn’t have the support network to recover from in a healthy way. 

The story does showcase Hufflepuff virtue. Hermione’s army is strong precisely because people work hard, trust each other and help each other - not just in big, dramatic gestures, but in small moments throughout the day. 

But… none of that ends up really mattering. And in the end, Harry faces his enemy alone. Lip service is paid to the concepts of friendship and group coordination, but the dominant narrative is Godric Gryffindor’s Nihil Supernum:

No rescuer hath the rescuer.
No lord hath the champion.
No mother or father.
Only nothingness above.

The Sequences and HPMOR both talk about the importance of groups, of emotions, of avoiding the biases that plague overly-clever people in particular. But I feel like the communities descended from Less Wrong, as a whole, are still basically that eleven-year-old Harry Potter: abstractly understanding that these things are important, but not really believing in them seriously enough to actually change their plans and priorities.

Lone Heroes

In Methods of Rationality, there’s a pretty good reason for Harry to focus on being a lone hero: he literally is alone. Nobody else really cares about the things he cares about or tries to do things on his level. It’s like a group project in high school, which is supposed to teach cooperation but actually just results in one kid doing all the work while the others either halfheartedly try to help (at best) or deliberately goof off.

Harry doesn’t bother turning to others for help, because they won’t give him the help he needs.

He does the only thing he can do reliably: focus on himself, pushing himself as hard as he can. The world is full of impossible challenges and nobody else is stepping up, so he shuts up and does the impossible as best he can. Learning higher level magic. Learning higher level strategy. Training, physically and mentally. 

This proves to be barely enough to survive, and not nearly enough to actually play the game. The last chapters are Harry realizing his best still isn’t good enough, and no, this isn’t fair, but it’s how the world is, and there’s nothing to do but keep trying.

He helps others level up as best they can. Hermione and Neville and some others show promise. But they’re not ready to work together as equals.

And frankly, this does match my experience of the real world. When you have a dream burning in your heart... it is incredibly hard to find someone who shares it, who will not just pitch in and help but will actually move heaven and earth to achieve it - and, if they aren’t capable, level themselves up until they are.

In my own projects, I have tried to find people to work alongside me and at best I’ve found temporary allies. And it is frustrating. And it is incredibly tempting to say “well, the only person I can rely on is myself.”

But… here’s the thing.

Yes, the world is horribly unfair. It is full of poverty, and people trapped in demoralizing jobs. It is full of stupid bureaucracies and corruption and people dying for no good reason. It is full of beautiful things that could exist but don’t. And there are terribly few people who are able and willing to do the work needed to make a dent in reality.

But as long as we’re willing to look at monstrously unfair things and roll up our sleeves and get to work anyway, consider this:

It may be that one of the unfair things is that one person can never be enough to solve these problems. That one of the things we need to roll up our sleeves and do even though it seems impossible is figure out how to coordinate and level up together and rely on each other in a way that actually works.

And maybe, while we’re at it, find meaningful relationships that actually make us happy. Because it's not a coincidence that Hufflepuff is about both hard work and warmth and camaraderie. The warmth is what makes the hard work sustainable.

Godric Gryffindor has a point, but Nihil Supernum feels incomplete to me. There are no parents to step in and help us, but if we look to our left, or right…

Yes, you are only one
No, it is not enough—
But if you lift your eyes,
I am your brother

Vienna Teng, Level Up 


Reminder that the Berkeley Hufflepuff Unconference is on April 28th. RSVPing on this Facebook Event is helpful, as is filling out this form.

What exactly is the "Rationality Community?"

18 Raemon 09 April 2017 12:11AM

This is the second post in the Project Hufflepuff sequence. It’s also probably the most standalone and relevant to other interests. The introduction post is here.

The Berkeley Hufflepuff Unconference is on April 28th. RSVPing on this Facebook Event is helpful, as is filling out this form.


I used to use the phrase "Rationality Community" to mean three different things. Now I only use it to mean two different things, which is... well, a mild improvement at least. In practice, I was lumping a lot of people together, many of whom neither wanted to get lumped together nor had much in common.


As Project Hufflepuff took shape, I thought a lot about who I was trying to help and why. And I decided the relevant part of the world looks something like this:

I. The Rationalsphere

The Rationalsphere is defined in the broadest possible sense - a loose cluster of overlapping interest groups, communities and individuals. It includes people who disagree wildly with each other - some who are radically opposed to one another. It includes people who don’t identify as “rationalist” or even as especially interested in “rationality” - but who interact with each other on a semi-regular basis. I think it's useful to be able to look at that ecosystem as a whole, and talk about it without bringing in implications of community.


5 Project Hufflepuff Suggestions for the Rationality Community

9 lifelonglearner 04 March 2017 02:23AM

<cross-posted on Facebook>

In the spirit of Project Hufflepuff, I’m listing out some ideas for things I would like to see in the rationality community, which seem like perhaps useful things to have. I dunno if all of these are actually good ideas, but it seems better to throw some things out there and iterate.



Idea 1) A more coherent summary of all the different ideas that are happening across all the rationalist blogs. I know LessWrong is trying to become more of a Schelling point, but I think a central forum is still suboptimal for what I want. I’d like something that just takes the best ideas everyone’s been brewing and centralizes them in one place so I can quickly browse them all and dive deep if something looks interesting.


A) A bi-weekly (or some other period) newsletter where rationalists can summarize their best insights of the past weeks in 100 words or less, with links to their content.

B) An actual section of LessWrong that does the above, so people can comment / respond to the ideas.


This seems straightforward and doable, conditional on commitment from 5-10 people in the community. If other people are also excited, I’m happy to reach out and get this thing started.

Idea 2) A general tool/app for being able to coordinate. I’d be happy to lend some fraction of my time/effort in order to help solve coordination problems. It’s likely other people feel the same way. I’d like a way to both pledge my commitment and stay updated on things that I might be able to plausibly Cooperate on.


A) An app that is managed by someone, which sends out broadcasts for action every so often. I’m aware that similar things / platforms already exist, so maybe we could just leverage an existing one for this purpose.


In abstract, this seems good. Wondering what others think / what sorts of coordination problems this would be good for. The main value here is being confident in *actually* getting coordination from the X people who’ve signed up.

Idea 3) More rationality materials that aren’t blogs. The rationality community seems fairly saturated with blogs. Maybe we could do with more webcomics, videos, or something else?


A) Brainstorm good content from other mediums, benefits / drawbacks, and see why we might want content in other forms.

B) Convince people who already work in such mediums to touch on rationalist ideas, sort of like what SMBC does.


I’d be willing to start up either a webcomic or a video series, conditional on funding. Anyone interested in sponsoring? Happy to have a discussion below.



Links to things I've done for additional evidence:


Idea 4) More systematic tools to master rationality techniques. To my knowledge, only a small handful of people have really tried to apply systematization to learning rationality, of whom Malcolm and Brienne are the most visible. I’d like to see some more attempts at Actually Trying to learn techniques.


A) General meeting place to discuss the learning / practice.

B) Accountability partners + Skype check-ins.

C) List of examples of actually using the techniques + quantified self to get stats.


I think finding more optimal ways to do this is very important. There is a big step between knowing how techniques work and actually finding ways to do them. I'd be excited to talk more about this Idea.

Idea 5) More online tools that facilitate rationality-things. A lot of rationality techniques seem like they could be operationalized to plausibly provide value.


A) An online site for Double Cruxing, where people can search for someone to DC with, look at other ongoing DC’s, or propose topics to DC on.

B) Chatbots that integrate things like Murphyjitsu or ask debugging questions.


I’m working on building a Murphyjitsu chatbot for building up my coding skill. The Double Crux site sounds really cool, and I’d be happy to do some visual mockups if that would help people’s internal picture of how that might work out. I am unsure of my ability to do the actual coding, though.




Those are the ideas I currently have. Very excited to hear what other people think of them, and how we might be able to get the awesome ones into place. Also, feel free to comment on the FB post, too, if you want to signal boost.

Project Hufflepuff

31 Raemon 18 January 2017 06:57PM

(This is a crossposted FB post, so it might read a bit weird)

My goal this year (in particular, my main focus once I arrive in the Bay, but also my focus in NY and online in the meanwhile), is to join and champion the growing cause of people trying to fix some systemic problems in EA and Rationalsphere relating to "lack of Hufflepuff virtue".

I want Hufflepuff Virtue to feel exciting and important, because it is, and I want it to be something that flows naturally into our pursuit of epistemic integrity, intellectual creativity, and concrete action.

Some concrete examples:

- on the 5 second reflex level, notice when people need help or when things need doing, and do those things.

- have an integrated understanding that being kind to people is *part* of helping them (and you!) to learn more, and have better ideas.

(There are a bunch of ways to be kind to people that do NOT do this, e.g. politely agreeing to disagree. That's not what I'm talking about. We need to hold each other to higher standards, but not talk down to people in a fashion that gets in the way of understanding. There are tradeoffs and I'm not sure of the best approach, but there's a lot of room for improvement.)

- be excited and willing to be the person doing the grunt work to make something happen

- foster a sense that the community encourages people to try new events, and to actively take personal responsibility for noticing and fixing community-wide problems that aren't necessarily sexy.

- when starting new projects, try to have mentorship and teamwork built into their ethos from the get-go, rather than hastily tacked on later

I want these sorts of things to come easily to mind when the future people of 2019 think about the rationality community, and have them feel like central examples of the community rather than things that we talk about wanting-more-of.

Are You a Paralyzed Subordinate Monkey?

26 Eliezer_Yudkowsky 02 March 2011 09:12PM

During a discussion today about the bizarre "can't get crap done" phenomenon that afflicts large fractions of our community, the suggestion came up that most people can't do anything where there is a perceived choice that includes the null option / "do nothing" as an option.  Of which Michael Vassar made the following observation:

In a monkey tribe, there's no verbal communication - they can't discuss where to go using language.  So if you get up and start going anywhere, you must be the leader.

And if you're not the leader, it is not good for your reproductive fitness to act like one.  In modern times the penalties for standing up are much lower, but our instincts haven't updated.

Interesting to reconsider the events of "To lead, you must stand up" in this light.  It makes more sense if you read it as "None of those people had instincts saying it was a good idea to declare themselves the leader of the monkey tribe, in order to solve this particular coordination problem where 'do nothing' felt like a viable option" instead of "nobody had the initiative".