A couple of friends of mine who were early attendees of a CFAR workshop lived in the Bay Area for several months in 2013, and returned home with stories of how wondrous the Bay Area was. They convinced several of us to attend CFAR workshops as well, and we too returned home with a sense of wonderment after our brief immersion in the Berkeley rationality community. But when my friends and I each returned, somehow our ambition transformed into depression. I tried rallying my friends to carry back or reignite the spark that made the Berkeley rationalist community thrive, to really spread the rationalist project beyond the Bay Area.
You seem to be conflating "CFAR workshop atmosphere" with "Berkeley Rationalist Community" in this section, which makes me wonder if you are conflating those things more generally.
The depressive slump post-CFAR happens *in Berkeley* too. The thriving community you envision Berkeley as having *does not exist,* except at CFAR workshops. The problem you're identifying isn't a Bay-Area-vs-the-world issue, it's a general issue with the way CFAR operates, building up intense social connections over the course of a weekend, then dropping them suddenly.
it's a general issue with the way CFAR operates, building up intense social connections over the course of a weekend, then dropping them suddenly.
So, this is definitely a thing that happens, and I'm aware of and sad about it, but it's worth pointing out that this is a generic property of all sufficiently good workshops and things like workshops (e.g. summer camps) everywhere (the ones that aren't sufficiently good don't build the intense social connections in the first place), and to the extent that it's a problem CFAR runs into, 1) I think it's a little unfair to characterize it as the result of something CFAR is particularly doing that other similar organizations aren't doing, and 2) as far as I know nobody else knows what to do about this either.
Or are you suggesting that the workshops shouldn't be trying to build intense social connections?
Thank you for writing this. I think your statement of the fundamental puzzle is basically accurate. I don't know what to do about it. If I felt that by investing in NYC (or some other place) I could build up a community I'd want to be a part of in the long term, I'd devote effort to that, but I don't know how to prevent my work from being raided and destroyed by Berkeley, so I don't do the work. Hell, I don't even know how to get those people to stop recruiting me, or my wife, every chance they get. Mentioning 'the fire of a thousand suns' and writing many articles about this does not seem to prevent it causing direct stress and serious damage to my life, on an ongoing basis, even after the posts this references.
Hell, the latest such attempt was yesterday.
[Brainstorming]
One idea is to try to differentiate the NYC 'product' from the Berkeley 'product'. For example, the advantage of Vancouver over the Bay Area is that you can live in Vancouver if you're Canadian. The kernel project attempted to differentiate itself through e.g. a manifesto. In the same way, you could try to create an identity that contrasts with the Bay Area's somehow (for example, figure out the top complaints people have about the Bay Area, then figure out which ones you are best positioned to solve--what keeps you in NYC?) Academic departments at different universities are known for different things; I could imagine a world where rationalist communities in different cities are known for different things too.
The perception of the Centre for Effective Altruism slighting the Berkeley REACH
I had hoped this was clear in my original post, but apparently it wasn't - I'm not saying CEA owes Berkeley REACH anything. I'm just saying we shouldn't conflate CEA with the sort of organization that would support the Berkeley REACH, and that Bay Area locals should fund the neglected cause of themselves having nice things locally.
CEA turned down my proposal because there were other, more established groups than REACH with clearer track records of success and better thought out metrics for success/failure applying for the same round of grants. I am working on building up a track record and metrics/data capture so that I can reapply later.
I read this as "CEA cares more about procedures that appear objective and fair and that can be defended, and not making mistakes, than doing the right/best thing." That may or may not be fair to them.
I do know that someone recently claiming to have been brought in to work for CEA (and raided by SF from NYC, who proceeded to raid additional people from NYC) claimed that CEA is explicitly looking to do exactly this sort of thing, and was enthusiastic about supporting an NYC-based version of this (this was before either of us knew about REACH, I believe), despite my obvious lack of a track record on such matters, or any source of good metrics.
If they'll only support REACH after it has a proven track record that can point to observable metrics to demonstrate an impact lower bound, it's the proverbial bank that only gives you a loan you don't need.
I do think Benquo was clear he wasn't calling on CEA to do anything, just observing that they'd told us who they were. And we were free to not like who they were, but the onus remained on us. That sounds right.
I can appreciate that. If CEA is budget constrained, and used all its resources on proven community builders doing valuable projects, I can't really argue with that too hard. However...
If CEA did it because you had personal resources available to sacrifice in their place, knowing you would, that seems like a really bad principle to follow.
If CEA feels it can't take 'risk' on this scale, in the sense that they might fund something that isn't effective or doesn't work out, that implies curiously high risk aversion where there shouldn't be any - this would be a very small percent of their budget, so there isn't much effective risk even if CEA's effectiveness was something to be risk averse about, which given its role in the overall ecosystem is itself questionable. It's a much smaller risk for them to take than for you to take!
I can understand the frustrations of people like Zvi who don't want to invest in local rationality communities, but I don't think that reaction is inevitable.
I went to a CFAR mentor's workshop in March and it didn't make me sad that the average Tuesday NYC rationality meetup isn't as awesome. It gave me the agency-inspiration to make Tuesdays in NYC more awesome, at least by my own selfish metrics. Since March we've connected with several new people, established a secondary location for meetups in a beautiful penthouse (and have a possible tertiary location), hosted a famous writer, and even forced Zvi to sit through another circle. The personal payoff for investing in the local community isn't just in decades-long friendships, it's also in how cool next Tuesday will be. It pays off fast.
And besides, on a scale of decades people will move in and out of NYC/Berkeley/anywhere else several times anyway as jobs, schools, and residential zoning laws come and go. Several of my best friends, including my wife, came to NYC from the Bay Area. Should the Bay Areans complain that NYC is draining them of wonderful people?
One of my favorite things about this community is that we're all geographically diverse rootless cosmopolitans. I could move to a shack in Montana next year and probably find a couple of people I met at NYC/CFAR/Solstice/Putanumonit to start a meetup with. Losing friends sucks, but it doesn't mean that investing in the local rationality community is pointless.
Reading this I was reminded of something. Now, not to say rationality or EA are exactly religions, but the two function in a lot of the same ways, especially with respect to providing shared meaning and building community. And if you look at new, not-state-sponsored religions, they typically go through an early period where they are small and geographically colocated, and only later have a chance to grow after sufficient time with everyone together, if they are to avoid fracturing such that we would no longer consider the growth "growth" per se and would instead call it dispersion. Consider for example Jews in the desert, English Puritans moving to North America, and Mormons settling in Utah. Counterexamples that perhaps prove the rule (because they produced different sorts of communities) include early Christians spread through the Roman empire and various missionaries in the Americas.
To me this suggests that much of the conflict people feel today about Berkeley is around this unhappiness at being rationalists who aren't living in Berkeley when the rationality movement is getting itself together in preparation for later growth, because importantly for what I think many peo...
On a sufficiently meta level, the cause of the problem may be that both rationality and EA thought leaders have roots in disciplines like game theory, microeconomics, and similar fields. These styles of analysis usually disregard topology (the structure of interactions).
For better or worse, rationalists and effective altruists actually orient themselves based on such models.
On a less meta level
Possibly I'm overconfident, but from a network science inspired perspective, the problem with the current global movement structure seems quite easily visible, and the solutions are also kind of obvious (but possibly hard to see if people are looking mainly through models like "comparative advantage"?).
So what is the solution? A healthy topology of the field should have an approximately power-law distribution of hub sizes. This should be true also for related research fields we are trying to advance, like AI alignment or x-risk. If the structure is very far from that (e.g. one or two very big hubs, then nothing, then a lot of two-orders-of-magnitude-smaller groups fighting for mere existence), the movement should try to re-balance, supporting growth of medium-tier hubs.
It seems this view is now gradually spreading, at least in the European effective altruism community, so the structure will get better.
(Possible caveat: if people have very short AGI timelines and high risk estimates, they may want to burn whatever is available, sacrificing future options.)
A healthy topology of the field should have an approximately power-law distribution of hub sizes. This should be true also for related research fields we are trying to advance, like AI alignment or x-risk. If the structure is very far from that (e.g. one or two very big hubs, then nothing, then a lot of two-orders-of-magnitude-smaller groups fighting for mere existence), the movement should try to re-balance, supporting growth of medium-tier hubs.
Although my understanding of network science is abecedarian, I'm unsure both of whether this feature is diagnostic (i.e. whether divergence from power-law distributions should be a warning sign) and of whether we in fact observe overdispersion even relative to a power law. The latter first.
1) 'One or two big hubs, then lots of very small groups' is close to what a power-law distribution should look like. If anything, it's plausible the current topology doesn't look power-law-y enough. The EA community overlaps with the rationalist community, and it has somewhat better data on topology: if anything, the hub sizes of the EA community are pretty even. This also agrees with my impression: although the Bay Area can be identified as the...
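(An illustrative aside that isn't from the comments above: here is a minimal sketch, assuming Python with numpy and an arbitrarily chosen exponent, of what power-law-distributed hub sizes look like when sampled. It tends to produce exactly the pattern described: one or two large hubs and a long tail of tiny groups.)

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 2.0   # illustrative exponent; real community data would be needed to estimate this
n_hubs = 30   # hypothetical number of local communities

# Zipf-distributed "hub sizes": typically a few large values and many 1s and 2s.
sizes = sorted(rng.zipf(alpha, size=n_hubs), reverse=True)
print(sizes)
```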
This isn't about the Berkeley rationalist community, but about rationalist communities everywhere. In reading about the experiences of rationalists in Berkeley and elsewhere, I've learned their internal coordination problems are paralleled in rationalist communities everywhere.
I'm not sure to what extent that's true. It seems to me like Berkeley has problems of status competition that come with scale, which I don't see in my local LessWrong community the way I see them described when I talk with people about the Bay Area.
If there are more people interested in going to an event than there are spaces for the event, you need to restrict entry, and thus people have to compete over entry.
I actually think this happens fairly frequently, although it may be happening sort of invisibly:
Two somewhat independent thoughts:
1) If you think tech money is important, you need to be in the bay area. Just accept that. There's money elsewhere, but not with the same concentration and openness.
2) Are you focused on saving the world, or on building community/ies who are satisfied with their identity as world-savers? "bring them in, use them up" _may_ be the way to get the most value from volunteer sacrifices. It may not - I haven't seen a growth plan for any org that explicitly has many orders of magnitude of increase while...
The money in the Bay uses 'if you're not in the Bay you're not serious, and even if you are other Bay money won't take you seriously so I can't afford to'
Right. That's my "just accept it" point. If you want that money, you (currently) have to play by those rules. If you don't want to play that way, you need to stand up and say that your plan isn't based on bay-area money/support levels.
as a coercive strategy to draw people there.
It's hard for me to understand the use of "coercive" here. Other than choosing not to give you money/attention, what coercion is being applied?
Even so, I think that strategy (to draw the serious people who have the capability to contribute) is a small part of it. It's mostly just a simple acknowledgement that distance matters. It's just a bit more hassle to coordinate with distant partners, and that's enough to make many want to invest time/effort/money more locally, all else equal. This is compounded by the (weak but real) signals about your seriousness if you won't find a way to be in the center of things.
I am sort of agnostic about whether the Berkeley community is a good idea or not. On one hand it certainly feels pointless to try to build up any non-Berkeley community. If someone is a committed rationalist they are pretty likely to move to Berkeley in the near future. In addition, it is very hard to constantly lose friends. This post probably best captures the emotional reality:
"I have lost motivation to put any effort into preserving the local community – my friends have moved away and left me behind – new members are about a decade younger than my...
One pattern I'm noticing is that, because of the relative advantages of citizenship in other countries and the relative difficulty of attaining permanent residency in the United States, the communities of rationalists abroad are more stable over time: it's simply impractical to convince people to move to the United States. For example, post-secondary education that is more heavily subsidized, not just in undergrad but in graduate studies as well, keeps non-American rationalists in their home countries until their mid-to-late twenties. That's young enough that I know rationalists who are musing about moving to Berkeley to work on AI alignment or another community project someday, but I also know a lot of rationalists who have set down roots where they are by then, and aren't inclined to move. Another thing: if a rationalist doesn't have a university degree or skills in high demand at big corporations (e.g., STEM), getting health insurance and a visa in the United States is difficult enough that for a lot of rationalists it doesn't make sense to try emigrating. This first post I wr...
So it's hard to tell people to refrain from moving to Berkeley
I apologize for possibly/probably twisting your words a bit here, but I never have trouble telling people to refrain from moving to the Bay/Berkeley. I tell them I lived there for a few years and it’s a pretty unpleasant place, objectively, along any of ten different metrics relevant to comfort and peace of mind. I tell them I never actually developed any sense of belonging with the local Rationalist Community, so it’s not guaranteed that that will happen. I tell them I make a pretty good amount of money in many cities, but since I’m not a Comp Sci grad that doesn’t translate to a decent living in Berkeley. I tell them on top of that, Berkeley is one of the most expensive places to live in the world, and if there were some kind of objective ratio of cost of living divided by objective comfort/quality/value-of-a-dollar, Berkeley would be near the top worldwide.
I also don’t find the proposition that you have to literally move to an expensive unpleasant overcrowded dystopian city in order to be rational to be particularly, uh, rational.
I'm confused by either your Seattle timeline or your use of the term "Rationality Reading Group."
As far as I know, I started the Rationality Reading Group in 2015, after my Jan CFAR Workshop. We read through a bunch of the Sequences.
I left Seattle in late 2016 and left RRG in some other capable hands. To this day, RRG (afaik) is still going and hasn't had any significant breaks, unless they did and I just didn't know about it.
In any case, I'd appreciate some kind of update to your post such that it is either more accurate or less confusing...
As a local community organizer, I developed tactics for doing so, designed so that if they worked in Vancouver, they should work for any rationalist community.
As a fellow community organizer (Berlin), I would be happy to read about them.
Thanks for writing this post, this is a worry that I have as well.
I also believe that more could be done to build the global rationality community. I mean, I'm certainly keen to see the progress with LW2.0 and the new community section, but if we really want rationality to grow as a movement, we at least need some kind of volunteer organisation responsible for bringing this about. I think the community would be much more likely to grow if there was a group doing things like advising newly started groups, producing materials that groups could use or cr...
Background Context, And How to Read This Post
This post is inspired by and a continuation of comments I made on the post 'What is the Rationalist Berkeley Community's Culture?' by Zvi on his blog Don't Worry About the Vase. As a community organizer both online and in-person in Vancouver, Canada, my goal was to fill in what appeared to be some gaps in the conversation among rationalists mostly focused on the Berkeley community. Zvi's post was part of a broader conversation pertaining to rationalist community dynamics within Berkeley.
My commentary pertains to the dynamics between the Bay Area and other local rationality communities, informed by my own experience in Vancouver and those of rationalists elsewhere. The below should not be taken as commentary on rationalist community dynamics within the Bay Area. This post should be considered an off-shoot from the original conversation Zvi was contributing to. For full context, please read Zvi's original post.
I. The Rationality Community: Berkeley vs. The World
While I didn't respond to them at the time, several community members commented on Zvi's post that they had similar experiences: while some local rationality communities and their members perceive themselves as being in a zero-sum game with Berkeley they didn't sign up for (and, to be fair, that the Berkeley community didn't consciously initiate as though it were a single agent), and some don't, a sense of what Zvi was trying to point at appears ubiquitous. An example:
Similar anecdata from local rationality communities around the world:
Melbourne. When I met several rationalists originally from Melbourne in Berkeley a few years ago, they gave a mixed assessment of the exodus of the core of the Melbourne rationality community to the Bay Area. Melbourne is an example of a very successful local rationality community outside the Bay Area, with the usual milestones like successful EA non-profits, for-profit start-ups, and rationalist sharehouses. That so many rationalists from Melbourne left for the Bay Area passed a cost-benefit analysis: as high-impact individuals, it was obvious to them that they should be reducing existential risks on the other side of the world.
In conversation, Helen Toner expressed some unease that a local rationality community which had successfully become a rationality hub second only to the Bay Area had had a whole generation of rationalists leave at once. This left open the possibility that a system of rationalist development that had been sustained for years had been gutted. My impression since then is that around this time the independent organization of the Melbourne EA community began to pick up, and between that and the remaining rationalists, the Melbourne community is doing well. If past or present members of the Melbourne rationality community would like to add their two cents, it would be greatly appreciated.
The rationality community's growth strategy out of Berkeley by default became recruiting the best rationalists from local communities around the world at a rate faster than rationalist organizers could replenish the strength of those local communities. Given that the stories I've heard from outside Melbourne are more lopsided, with the organization of local rationality communities utterly collapsing and only recovering after multiple years, if ever, I'd consider the case of the Melbourne rationality community surviving the exit of its leadership for Berkeley to have been a lucky outlier.
Seattle. The Seattle rationality community has experienced a bad case of exodus to Berkeley over the last few years. My understanding of this story is as follows:
Vancouver. The experience in Vancouver has in the past certainly felt like "if you don't want to move to Berkeley as soon as possible, you are not *really* rational". The biggest reason Vancouver may not have sent as many rationalists to the Bay Area as cities in the United States have is the difficulty being Canadian poses to gaining permanent residence in the United States, and hence to moving to the Bay Area. A couple of friends of mine who were early attendees of a CFAR workshop lived in the Bay Area for several months in 2013, and returned home with stories of how wondrous the Bay Area was. They convinced several of us to attend CFAR workshops as well, and we too returned home with a sense of wonderment after our brief immersion in the Berkeley rationality community. But when my friends and I each returned, somehow our ambition transformed into depression. I tried rallying my friends to carry back or reignite the spark that made the Berkeley rationalist community thrive, to really spread the rationalist project beyond the Bay Area.
But the apparent consensus was it just wasn't possible. Maybe the rationality community a few years ago lacked the language to talk about it, but rationalists who'd lived in Berkeley for a time only to return felt the rationality-shaped hole in their hearts could only be filled in Berkeley. A malaise had fallen over the Vancouver rationality community. All of us were still around, but with a couple of local EA organizations in town, many of us were drawn to that crowd. Those of us who weren't were alienated from any personal connection to the rationality community. I saw in my friends a bunch of individual heroes who together were strangely less than, not greater than, the sum of their parts.
Things have been better lately, and a friend remarked they're certainly better than a few years ago, when everyone was depressed about the fact it was too difficult for us all to move to the Bay Area. In the last several months, the local rationality community has taken on our own development as our mission, and we've not so much rebounded as flourished like never before. But it took the sorts of conversations Zvi and others had last year about the Berkeley rationalist community to break the spell we had cast on ourselves: that Berkeley apparently had running a rationalist community down to an art and a science, like a well-oiled machine.
II. The Berkeley Community and the Mission of Rationality
Benquo commented on Zvi's post:
Since he wrote this comment, Benquo has actually continued to participate in the rationality community. This conversation was mired in so much tension in the rationality community that it must have been difficult to think about impersonally, so a charitable interpretation would be that, while these problems exist, Benquo and others are generally not as fatalistic about the rationality community as they were at the time they wrote their comments. While I and others in the thread saw grains of truth in Benquo's statement, precision nonetheless remains a virtue of rationality, and I felt compelled to clarify. I commented:
Other comments in-thread from community members who had been around longer than Benquo or me confirmed my impression from their own personal experiences, so unless Benquo would further dispute these accounts, this thread seems put to rest. However, Zvi then replied to me:
To respond to Zvi here, it does indeed appear to be an uncannily ubiquitous problem. I've collected a few stories and described them in some detail above. Between that and several comments from independent rationalists on Zvi's original post giving the impression that members of their local communities were being sucked to Berkeley as though through a pneumatic tube, leaving a vacuum of community and organization in their wake, it appears these many local stories could be a single global one.
The original mission of the rationality community was to raise the sanity waterline to ensure human values get carried to the stars, but we're still godshatter, so doing so can and should take different forms than just ensuring superintelligence is aligned with human values. If ever the goal was to seed successful, stable rationalist communities outside Berkeley to coordinate projects beyond the Bay Area, it's been two steps forward, one step back, at best. Even if we assume for the sake of argument it's a good idea for rationalists worldwide to view Berkeley as a nucleus and their own rationalist communities as recruitment centres to drive promising individuals to Berkeley for the mission of AI alignment or whatever, the plan isn't working super well. That's because local rationalist communities appear to be sending their highest-level rationalists to Berkeley much faster than they can level up more rationalists to replenish their leadership and sustain the local community at all.
The state of affairs could be worse than it is now. But it creates the possibility that if enough local rationalist communities around the world outside the Bay Area simultaneously collapsed, the Berkeley rationalist community (BRC) could lose sufficient channels for recruitment to sustain itself. Like all things, communities tend toward entropy and decay over time. Even if the BRC weren't rubbing any of its members the wrong way, we would probably still observe some naturally occurring attrition. In a scenario where the BRC's decay rate was greater than its rate of replenishment, which has historically depended largely on rationalists from outside communities, the BRC would start to shrink. If we assume the BRC acts as a single agent, it's in the BRC's self-interest as the nucleus of the worldwide rationality movement to sustain communities-as-recruitment-centres, at least to the extent that they can sustainably drive their highest-level rationalists to Berkeley over the long term.
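(To make the replenishment-versus-attrition argument concrete, here is a toy sketch of my own, not anything from the original discussion; every number and name in it is hypothetical. It models the hub's population with a simple difference equation where inflow scales with the number of still-functioning feeder communities and outflow is a fixed attrition rate.)

```python
# Hypothetical toy model (illustration only): hub population over time.
def simulate_hub(years, feeder_communities, recruits_per_community=2,
                 attrition_rate=0.1, initial_population=300):
    population = initial_population
    history = []
    for _ in range(years):
        inflow = feeder_communities * recruits_per_community   # recruitment from local communities
        outflow = attrition_rate * population                  # natural attrition in the hub
        population += inflow - outflow
        history.append(round(population))
    return history

# With many feeder communities the hub grows; with few, it shrinks toward
# inflow / attrition_rate, its sustainable size under these assumptions.
print(simulate_hub(10, feeder_communities=20))
print(simulate_hub(10, feeder_communities=2))
```

Under these assumptions the hub's sustainable size is simply inflow divided by the attrition rate, so every feeder community that collapses lowers the ceiling of what the hub can maintain.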
While this worst-case scenario could apply to any large-scale rationalist project, with regard to AI alignment, if the locus of control for the field falls out of the hands of the rationality community, someone else might notice and decide to pick up that slack. This could be a sufficiently bad outcome that rationalists everywhere should pay more attention to decreasing the chances of it happening.
So whether a rationalist sees local rationalist communities acting primarily as recruitment centres for the Berkeley rationalist community as an excellent plan or an awful failure mode, there's a significant chance it's unsustainable either way. It appears to be a high-risk strategy that's far from foolproof, and as far as I know virtually nobody is consciously monitoring the situation to prevent further failure.
III. Effective Altruism and the Rationalist Community
In another thread, I responded directly to Zvi. I commented:
Zvi replied:
Since then, Zvi and others have made good on their intentions to point out said problems with effective altruism. I intend to engage these thoughts at length in the future, but suffice it to say for now that local rationalist communities outside the Bay Area appear to have experienced being 'eaten' by EA even worse than Berkeley has.
I never bothered to tie up the loose ends I saw in the comments on Zvi's post last year, but something recently spurred me to do so. From Benquo's recent post 'Humans need places':
It's important for rationalists in Berkeley to know that, to rationalists around the world, these statements could ring hollow. The perception of the Centre for Effective Altruism slighting the Berkeley REACH is mirrored many times over in rationalists feeling like Berkeley pulled in, used up, and burned out whole rationalist communities. The capital of a nation receives resources from everyone across the land. If the capital city recruits more citizens to the nation, is it not morally obligatory for the capital city to offer a corresponding level of support for taking care of them once they've joined the nation? Is it not the case that if the rationality community can't afford to take care of our people, then we can't afford to recruit them?
The worldwide rationalist project stands between two alternatives:
This isn't about the Berkeley rationalist community, but about rationalist communities everywhere. In reading about the experiences of rationalists in Berkeley and elsewhere, I've learned their internal coordination problems are paralleled in rationalist communities everywhere. The good news in the bad news is that if all rationalist communities face common problems, we can all benefit from working towards common solutions. So global coordination may not be as difficult as one might think. I wrote above that the Vancouver rationality community has recently taken on our own development as our mission, and we're not so much recovering from years of past failure as flourishing like never before. We haven't solved all the problems a rationalist community might face, but we've been solving a lot. As a local community organizer, I developed tactics for doing so, designed so that if they worked in Vancouver, they should work for any rationalist community. And they worked in Vancouver. I think they're some of the pieces of the puzzle of building global infrastructure to match the rationality community's global ambitions. Laying that out will be the subject of my next post.