Maintaining already-existing cultural traits that are off-putting to outsiders may be more effective than intentionally designing filters. The former are already part of the community, so keeping them doesn't dilute the culture, while the process of designing filters is likely to cause contestation within the community about which of its traits are essential and which are peripheral.
It's hard to explicitly describe what the current barriers to entry are, but they include familiarity with LW ideas (and agreement with a lot of them), enjoying the analytical style of discussion and thought, etc. I occasionally see someone come across rationalistsphere and respond with something like "Ugh, a community of robots/autists started by essays written for aliens" - I want to keep whatever it is that repulses them.
I think it is both the case that:
1) a really valuable thing the community provides is a place to talk about ideas at a deep level. This is pretty rare, and it's valuable both to the sort of people who explicitly crave that and (I believe) to the world, for generating ideas that are really important. I do think this is something that is at risk of being destroyed if we lowered barriers to entry and scaled up without thinking too hard about it.
but, 2) it's also the case that
2a) there are a lot of smart people who I know would contribute valuable...
I’m a Ravenclaw and Slytherin by nature. I like being clever. I like pursuing ambitious goals. But over the past few years, I’ve been cultivating the skills and attitudes of Hufflepuff, by choice.
I think those skills are woefully under-appreciated in the Rationality Community. The problem cuts across many dimensions:
In a nutshell, the emotional vibe of the community is preventing people from feeling happy and connected, and a swath of skillsets that are essential for group intelligence and ambition to flourish are undersupplied.
If any one of these things were a problem on its own, we might troubleshoot it in an isolated way. But collectively they seem to add up to a cultural problem that I can't think of any way to express other than "Hufflepuff skills are insufficiently understood and respected."
There are two things I mean by “insufficiently respected”:
And while this is difficult to explain, it feels to me that there is a central way of being, that encompasses emotional/operational intelligence and deeply integrates it with rationality, that we are missing as a community.
This is the first in a series of posts, attempting to plant a flag down and say “Let’s work together to try and resolve these problems, and if possible, find that central way-of-being.”
I’m decidedly not saying “this is the New Way that rationality Should Be”. The flag is not planted at the summit of a mountain we’re definitively heading towards. It’s planted on a beach where we’re building ships, preparing to embark on some social experiments. We may not all be traveling on the same boat, or in the exact same direction. But the flag is gesturing in a direction that can only be reached by multiple people working together.
A First Step: The Hufflepuff Unconference, and Parallel Projects
I’ll be visiting Berkeley during April, and while I’m there, I’d like to kickstart things with a Hufflepuff Unconference. We’ll be sharing ideas, talking about potential concerns, and brainstorming next actions. (I’d like to avoid settling on a long-term trajectory for the project - I think that’d be premature. But I’d also like to start building some momentum towards some kind of action.)
My hope is to have both attendees who are positively inclined towards the concept of “A Hufflepuff Way”, and people for whom it feels a bit alien. For this to succeed as a long-term cultural project, it needs to have buy-in from many corners of the rationality community. If people have nagging concerns that feel hard to articulate, I’d like to try to tease them out, and address them directly rather than ignoring them.
At the same time, I don’t want to get bogged down in endless debates, or focus so much on criticism that we can’t actually move forward. I don’t expect total consensus, so my goal for the unconference is to get multiple projects and social experiments running in parallel.
Some of those projects might be high-barrier-to-entry, for people who want to hold themselves to a particular standard. Others might be explicitly open to all, with radical inclusiveness part of their approach. Others might be weird experiments nobody had imagined yet.
In a few months, there’ll be a followup event to check in on how those projects are going, evaluate them, and see what else we can try or further refine.
[Edit: The Unconference has been completed. Notes from the conference are here]
Thanks to Duncan Sabien, Lauren Horne, Ben Hoffman and Davis Kingsley for comments