I agree that "cohorts happen automatically", and that the organisations which prevent this usually care explicitly about the next generations, whether we are talking about the Scout movement, religious groups, or academia. Ignoring this would be detrimental to the rationalist movement in the long term.
Understandably, most of us have negative connotations associated with "spreading the word". It is yet another "motte and bailey" situation: on some level it is true that increasing the number of people who e.g. read Less Wrong is not our terminal value, that gaining followers is almost orthogonal to being 'less wrong', and that trying to be attractive to too many people could dilute the message. On the other hand, this attitude can easily become reversed stupidity, something like people refusing to eat food just because Hitler did.
There are two basic ways the rationality movement could disappear from the world. One is gradual shrinking: people individually deciding that e.g. Pascal's wager actually makes sense, or that making their political faction win is more important than getting statistics and logic right, or otherwise trading rationality for something more appealing. The other is gradually becoming a group of old farts whose debates are reduced to talking over and over again about the things that happened decades ago. Where do we see ourselves, as a group, 50 years from now? (Conditional on the Singularity not happening, humanity not going extinct, etc., of course.)
Of course, if we are not willing to enter a "loose confederation" with the previous generations, we should not expect a different approach from the next generations. Telling them to "read the Sequences" would be like telling us to "read Science and Sanity"; maybe one in a hundred would, but nothing would change as a result anyway.
Seems like two things need to be done, probably in this order:
1) Agree on a larger definition of "confederation of reason", "scions of Bacon", or whatever we decide to call it. Yes, this will be difficult, it goes against our nitpicking instinct, and it is going to rub many people the wrong way.
2) Make a strategic effort to recruit people, a lot of them (not just a few mathematical prodigies), into the "confederation of reason". This could mean joining what other organisations are already doing, instead of reinventing the wheel. This again goes against our instincts.
I expect that many rationalists will not be able to overcome their instincts on these matters, so we should not expect a wide consensus here. Instead, a few people who like this idea should simply form a team and do it. Which is generally how things get done.
I’m a Ravenclaw and Slytherin by nature. I like being clever. I like pursuing ambitious goals. But over the past few years, I’ve been cultivating the skills and attitudes of Hufflepuff, by choice.
I think those skills are woefully under-appreciated in the Rationality Community. The problem cuts across many dimensions:
In a nutshell, the emotional vibe of the community is preventing people from feeling happy and connected, and a swath of skillsets essential for group intelligence and ambition to flourish is undersupplied.
If any one of these things were a problem on its own, we might troubleshoot it in an isolated way. But collectively they seem to add up to a cultural problem that I can't think of any way to express other than "Hufflepuff skills are insufficiently understood and respected."
There are two things I mean by “insufficiently respected”:
And while this is difficult to explain, it feels to me that there is a central way of being, that encompasses emotional/operational intelligence and deeply integrates it with rationality, that we are missing as a community.
This is the first in a series of posts, attempting to plant a flag down and say “Let’s work together to try and resolve these problems, and if possible, find that central way-of-being.”
I’m decidedly not saying “this is the New Way that rationality Should Be”. The flag is not planted at the summit of a mountain we’re definitively heading towards. It’s planted on a beach where we’re building ships, preparing to embark on some social experiments. We may not all be traveling on the same boat, or in the exact same direction. But the flag is gesturing in a direction that can only be reached by multiple people working together.
A First Step: The Hufflepuff Unconference, and Parallel Projects
I’ll be visiting Berkeley during April, and while I’m there, I’d like to kickstart things with a Hufflepuff Unconference. We’ll be sharing ideas, talking about potential concerns, and brainstorming next actions. (I’d like to avoid settling on a long-term trajectory for the project; I think that’d be premature. But I’d also like to start building some momentum towards some kind of action.)
My hope is to have both attendees who are positively inclined towards the concept of “A Hufflepuff Way”, and people for whom it feels a bit alien. For this to succeed as a long-term cultural project, it needs to have buy-in from many corners of the rationality community. If people have nagging concerns that feel hard to articulate, I’d like to try to tease them out, and address them directly rather than ignoring them.
At the same time, I don’t want to get bogged down in endless debates, or focus so much on criticism that we can’t actually move forward. I don’t expect total consensus, so my goal for the unconference is to get multiple projects and social experiments running in parallel.
Some of those projects might be high-barrier-to-entry, for people who want to hold themselves to a particular standard. Others might be explicitly open to all, with radical inclusiveness part of their approach. Others might be weird experiments nobody had imagined yet.
In a few months, there’ll be a followup event to check in on how those projects are going, evaluate, and see what more things we can try or further refine.
[Edit: The Unconference has been completed. Notes from the conference are here]
Thanks to Duncan Sabien, Lauren Horne, Ben Hoffman and Davis Kingsley for comments