Disclaimer: These are entirely my thoughts. I'm posting this before it's fully polished because it never will be.
Epistemic status: Moderately confident. Deliberately provocative title.
Apparently, the Bay Area rationalist community has a burnout problem. I have no idea if it's worse than base rate, but I've been told it's pretty bad. I suspect that the way burnout manifests in the rationalist community is uniquely screwed up.
I was crying the other night because our light cone is about to get ripped to shreds. I'm gonna do everything I can to do battle against the forces that threaten to destroy us. You've heard this story before. Short timelines. Tick. Tick. I've been taking alignment seriously for about a year now, and I'm ready to get serious. I've thought hard about what my strengths are. I've thought hard about what I'm capable of. I'm dropping out of Stanford, I've got something that looks like a plan, I've got the Rocky theme song playing, and I'm ready to do this.
A few days later, I saw this post. And it reminded me of everything that bothers me about the EA community. Habryka covered the object level problems pretty well, but I need to communicate something a little more... delicate.
I understand that everyone is totally depressed because qualia is doomed. I understand that we really want to creatively reprioritize. I completely sympathize with this.
I want to address the central flaw of Akash+Olivia+Thomas's argument in the Buying Time post, which is that actually, people can improve at things.
There's something deeply discouraging about being told "you're an X% researcher, and if X>Y, then you should stay in alignment. Otherwise, do a different intervention." No other effective/productive community does this. I don't know how to put this, but the vibes are deeply off.
The appropriate level of confidence to have about a statement like "I can tell how good of an alignment researcher you will be after a year of you doing alignment research" feels like it should be pretty low. At a year, there are almost certainly ways to improve that haven't been tried. Especially in a community so memetically allergic to the idea of malleable human potential.
Here's a hypothesis. I in no way mean to imply that this is the only mechanism by which burnout happens in our community, but I think it's probably a pretty big one. It's not nice to be in a community that constantly hints that you might just not be good enough and that you can't get good enough.
Our community seems to love treating people like mass-produced automatons with a fixed and easily assessable "ability" attribute. (Maybe you flippantly read that sentence and went "yeah it's called g factor lulz." In that case, maybe reflect on how good of a correlate g is in absolute terms for the things you care about.)
If we want to actually accomplish anything, we need to encourage people to make bigger bets, and to stop stacking up credentials so that fellow EAs think they have a chance. It's not hubris to believe in yourself.
I am (was) an X% researcher, where X<Y. I wish I had given up on AI safety earlier. I suspect it would've been better for me if AI safety resources had explicitly said things like "if you're less than Y, don't even try," although I'm not sure I would've believed them. Now, I'm glad that I'm not trying to do AI safety anymore, and instead I work a well-paying, relaxed job doing practical machine learning. So I think pushing too many EAs into AI safety will lead to those EAs suffering much more, as happened to me. I don't want that to happen, and I don't want the AI Alignment community to stop saying "you should stay if and only if you're better than Y."
Actually, I wish there were more selfish-oriented resources for AI Alignment. Like, with normal universities and jobs, people analyze how to get into them, have a fulfilling career, earn good money, not burn out, etc. As a result, people can read this and properly analyze whether it makes sense for them to try to get into those jobs or universities for their own good. But with a career in AI safety, this is not the case. All the resources look out not only for the reader, but also for the whole EA project. I think this can easily burn people out.
In 2017, I remember reading 80K and thinking I was obviously unqualified for AI alignment work. I am glad that I did not heed that first impression. The best way to test goodness-of-fit is to try thinking about alignment and see if you're any good at it.
That said, I am apparently the only person of whom [community-respected friend of mine] initially had an unfavorable impression that later became strongly positive.