(Probably somebody else has said most of this. But I personally haven't read it, and felt like writing it down myself, so here we go.)
I think that EA [editor note: "Effective Altruism"] burnout usually results from prolonged dedication to satisfying the values you think you should have, while neglecting the values you actually have.
Setting aside for the moment what “values” are and what it means to “actually” have one, suppose that I actually value these things (among others):
True Values
- Abundance
- Power
- Novelty
- Social Harmony
- Beauty
- Growth
- Comfort
- The Wellbeing Of Others
- Excitement
- Personal Longevity
- Accuracy
One day I learn about “global catastrophic risk”: Perhaps we’ll all die in a nuclear war, or an AI apocalypse, or a bioengineered global pandemic, and perhaps one of these things will happen quite soon.
I recognize that GCR is a direct threat to The Wellbeing Of Others and to Personal Longevity, and as I do, I get scared. I get scared in a way I have never been scared before, because I’ve never before taken seriously the possibility that everyone might die, leaving nobody to continue the species or even to remember that we ever existed—and because this new perspective on the future of humanity has caused my own personal mortality to hit me harder than the lingering perspective of my Christian upbringing ever allowed. For the first time in my life, I’m really aware that I, and everyone I will ever care about, may die.
My fear has me very focused on just two of my values: The Wellbeing Of Others and Personal Longevity. But as I read, think, and process, I realize that pretty much regardless of what my other values might be, they cannot possibly be satisfied if the entire species—or the planet, or the lightcone—is destroyed.
[This is, of course, a version of EA that’s especially focused on the far future; but I think it’s common for a very similar thing to happen when someone transitions from “soup kitchens” to “global poverty and animal welfare”. There’s an exponential increase in stakes, accompanied by a corresponding increase in the fear of lost value.]
So I reason that a new life strategy is called for.
Over time, under the influence of my “Accuracy” value as well as my “Social Harmony” value (since I’m now surrounded by people who are thinking about this stuff), I come to believe that I should value the following:
Should Values
- Impact*
- Calibration
- Openness*
- Collaboration*
- Empiricism*
- The Wellbeing Of Others
- Personal Longevity
(The values on this new list with an asterisk beside them have a correlate on the original list (impact→power, collaboration→social harmony, empiricism→accuracy), but these new values are routed through The New Strategy, and are not necessarily plugged into their correlates from the first list.)
Over a couple of years, I change my career, my friend group, and my hobbies to reflect my new values. I spend as little time as possible on Things That Don’t Matter, because now I care about Impact, and designing computer games has very little Impact since it takes a lot of time and definitely doesn’t save the world (even though it’s pretty good on novelty, beauty, growth, and excitement).
Ok, so let’s talk now about what “values” are.
I think that in humans at least, values are drives to action. They are things that motivate a person to choose one possible action over another. If I value loyalty over honesty, I’ll readily lie to help my friend save face; if I value both about equally, I may be a little paralyzed in some situations while I consult the overall balance of my whole value system and try to figure out what to do. When I go for a hike with my field kit of watercolor paints, I tend to feel really good about that decision as I make it, as I hike and paint, and also as I look back on the experience, because it satisfies several of my values (such as novelty, growth, and beauty). When I choose to stay in and watch a movie rather than run errands out in the cold rain, that’s my comfort value expressing itself. Values are the engines of motivation.
It is one thing to recognize that a version of you who strategically prioritizes “collaboration” will be more effective at accomplishing goals that you really do care about. But it’s another to incorrectly believe that “collaboration” directly motivates your actions.
Perhaps “collaboration” really is one of your true values. Indeed, perhaps your true values just happen to exactly match the central set of EA values, and that is why you are an EA.
However, I think it’s much more common for people to be EAs because their true values have some overlap with the EA values; and I think it’s also common for EAs to dramatically overestimate the magnitude of that overlap. According to my model, this is why “EA burnout” is a thing.
[ETA: My working model is incomplete. I think there are probably other reasons also that EA burnout is a thing. But I'm nowhere near as satisfied with my understanding of the other reasons.]
If I am wrong about what I value, then I will mismanage my motivational resources. Chronic mismanagement of motivational resources results in some really bad stuff.
Recall that in my hypothetical, I’ve oriented my whole life around The Should Values for my longtermist EA strategy—and I’ve done so by fiat, in a way that does not converse much with the values that drove me before. My career, my social connections, and my daily habits and routines all aim to satisfy my Should Values, while neglecting my True Values. As a result, my engines of motivation are hardly ever receiving any fuel.
Gradually, I find myself less and less able to take any actions whatsoever. Not at work, not with friends, not even when I’m by myself and could theoretically do anything I want. I can’t even think about my work without panicking. I am just so exhausted all of the time. Even when apparently excellent opportunities are right in front of me, I just cannot bring myself to care.
I think there are ways to prioritize world-saving or EA-type strategies without deceiving ourselves about what motivates us. I think it is possible to put skill points into calibration, for example, even when you’re not directly motivated by a drive to be well calibrated. It is often possible to choose a job that satisfices for your true values while also accomplishing instrumental goals. In fact, I think it’s crucial that many of us do this kind of thing a bunch of the time.
I also think it is devastatingly dangerous for most of us to be incorrect about what really drives us to act.
It is probably possible to recover even from severe cases of EA burnout. I think I’ve done a decent job of it myself, though there’s certainly room for improvement. But it takes years. Perhaps several of them, perhaps a whole decade. And that is time I am not at all confident our species has.
I am a bit wary of telling EAs what I think they Should do. It seems to me that as a movement, EAs are awfully tangled up about Shoulds, especially when it comes to the thoughts of other EAs.
Still, it seems awfully important to me that EAs put fuel into their gas tanks (or electricity into their batteries, if you prefer), rather than dumping that fuel onto the pavement beneath cars that exist only in their imaginations.
And not just a little bit of fuel! Not just when you’re too exhausted to go on without a little hit. I think that no matter what you hope to accomplish, it is wise to act from your true values ALL of the time—to recognize instrumental principles as instrumental, and to coordinate with allies without allowing them to overwrite your self concept.
My advice to my past self would be: First, know who you are. If you’re in this for the long haul, build a life in which the real you can thrive. And then, from the abundance of that thriving, put the excess toward Impact (or wherever else you would like for it to go).
Maybe you think that you lack the time to read fiction, or to go rock climbing, or to spend the whole weekend playing board games with friends.
I think that you may lack the time not to.
I saw the development of EA as a[nother] naive series of misapprehensions about how to live in what EA adherents see as a scary world. Something akin to: "let's abstract this scary world; it'll become less scary, and we can affect and control the outcomes." That's weak adolescent thinking, and bad math besides. And this was after reading MacAskill's book, which likewise struck me as naive academia-ism: as unpragmatic as any woolly professor unable to manage basic household chores, the basic tasks of living with others, of simply putting one's shoulder to the job individually and then collectively. Very little in the way of skills for surviving in the world we actually live in. [And so it was an amusing though unfortunate confirmation to read a profile of MacAskill in the New Yorker; and I don't care about lowbrow, middlebrow (New Yorker/Atlantic), highbrow... it's all just brow. Despite being 'the most upcoming young philosopher', his daily practical life sounds like a mild abstract chaos of ineptitude, though clearly he's a lovely human.]
The 'values' inherent in EA seemed to me like some extremely core form of Christian penance: induced suffering and the avoidance of meaningful engagement in one's work and one's life, in exchange for an abstract "good" at some unspecified future time; a 'state of grace' - and grace, again, is a piece of Christian dogma, granted by god [Leroy?]. Storing up 'riches' for the future. Instead of hoarding gold, like preppers and other religious zealots saving up for Armageddon prior to salvation, let's use crypto! Easily subverted by a mountebank like Sam Bankman-Fried (Bankman! Now that's funny).
Then there's the idea of working a job that pays well but that you don't care about, or even hate (e.g. 'I work at Goldman'), in order to gift a lot of cash at some future point to an intangible and unspecified "general good". In other words: sacrifice your personal integrity, let your personal core corrode through hateful work, in order to help some guy in 50 years not get malaria. Nope; that's an edifice built by naive lummoxes who've never really 'worked'. Finding the beauty and meaning in one's current life and current work - and extending that beauty and meaning to others - THAT is what spurs altruism. The desire to share one's own personal handiwork.
I can use EA's own "metrics" to show that my personal 'work' - far smaller in scale than SBF-style 'earn big for the future' - will continue to create positive generational change. The old words, the old concepts: they're old because they work. EA was and is novelty for youngsters who want to avoid engagement.
I thought the core of effective altruism was not the specific lifestyle of "earning to give", but rather the attempt to find the best use of each dollar that is available for charity.