I was talking to someone the other day about the ways in which I've noticed the [Berkeley] rationalist community changing. One of the main ways was that group houses seemed to be disappearing. People were getting older, moving away, or just moving into their own houses to have kids. It then occurred to me that this doesn't seem to be happening on the EA side of the community. Thinking about it more, it seems to me that EA has quite a strong funnel in the form of student groups. I semi-regularly hear about events, companies, projects, or just impressive people coming out of EA student groups. Meanwhile, I'm not even aware of any rationalist student groups (although I'm sure some exist).
When I think about where rationalists came from, my answer is 1) EY writing the original Sequences, and 2) EY writing HPMOR. It feels like those things happened, tons of people joined, then they stopped happening and people stopped joining. Then people got older, and now we have a population pyramid problem.
I think this is something of a problem for the mission of preventing AI x-risk. It is of course great to have lots of EAs around, but the kind of people the rationalist community differentially appeals to would provide a lot of value that EA-leaning people would be much less likely to provide (a focus on AI, obsessive investigation of confusing yet important subjects, etc.).
Do others agree with the pattern? Do you also see it as a problem? Any suggestions for what we could do about it? Why aren't there many rationalist student groups?
I'm 22 (±0.35) years old and have been seriously getting involved with AI safety over the last few months. However, I first chanced upon LW via SSC a few years ago (directed to SSC by Guzey), when I was 19.
The generational shift is a concern to me because as we start losing people who've accumulated decades of knowledge (of which only a small fraction is available to read or watch), a lot of time could be wasted re-deriving ideas along routes that have already been explored. Of course, there's a lot of utility in working out ideas from the ground up, but at some point you have to accept and build upon an existing framework of true statements. Whether or not timelines turn out shorter than we expect, this is cause for concern.