There is danger in learning about existential risk. I don’t mean that as some kind of bad joke, either: some people experience personal, psychological harm after learning that the world might end in preventable ways. They may become depressed, anxious, or nihilistic over not just their own death but the potential death of everything that means anything to them, and they may wallow in that despair. But, if they can wade their way through such feelings — and many people do! — they might seek to take action against existential risks. One such group of action-takers is effective altruists.

There are many existential risks, and so there are many ways effective altruists try to address them. Some tackle fairly prosaic risks and so can use prosaic methods, like engaging in activism to address risks from climate change or raising money for emergency response. Others look at more exotic risks, from nanotechnology and superintelligence, and so may need more exotic mitigation strategies, like performing original research that would otherwise not happen or teaching people to be more rational. And it’s among those who look at exotic risks and find need only of exotic interventions that a particular sort of angst, ennui, and guilt can arise.

Now I don’t much think of myself as an effective altruist, but I quack like one, so you might as well count me among their ranks, and many of my friends identify as effective altruists, so I speak from a place of gnosis when I say effective altruists suffer in these specific ways. And it seems to happen because people follow reasoning along these lines:

  • We face existential risk X.
  • The best option to mitigate X is action Y.
  • I should do Y.

Sometimes this works out great and we get fabulous folks I’m glad to know working at places like CSER, FHI, FRI, MIRI, and others doing vital work on addressing existential risks. Other times it doesn’t, not for lack of will or clarity of thought, but for lack of fit to perform the action Y. But, if you’ve reasoned thusly, you’re now stuck in a spot where you know the world is at grave risk, you know something specific to do about it, and yet you don’t do it. If you feel frustrated, disappointed, or as though you’ve caused harm, then you may feel existential angst, ennui, or guilt, respectively, over not addressing X.

I see lots of evidence of this in rationalist and effective altruist culture. Perhaps no groups have ever worried more over akrasia and ways to combat it, or over the potential impact of their possible work. Nate Soares wrote a whole series of posts on guilt because people kept coming to him saying “help, I feel guilty for not doing more,” and there’s lots of self-questioning about how effective or altruistic the whole effective altruism movement is anyway. Even at EA and rationalist parties I hear many conversations about what people think they should be doing with their lives and how they can make themselves do those things. A cynic might say the community actively encourages feelings of angst, ennui, and guilt in its members. Yet somehow I feel none of these emotions. How do I do it?

Well, if I’m honest, I dramatically oversolve the problem with meaningness, but that doesn’t mean I don’t do things that specifically address the angst I used to feel over not doing AI safety research. After all, I think risks from superintelligence generate the greatest potential negative outcomes for the world, I think the best thing I could do about that is to dedicate my energies to researching and otherwise working to reduce those risks and weaken the badness of the outcomes, yet I am not working on AI safety research. Instead I do some other stuff that I like and give a little money to fund AI safety. How do I live with myself?

I think the answer is that I fully accept the concept of comparative advantage into my heart. Sure, I could make less money doing work I find less interesting and that I’m less good at in order to directly advance AI safety, but instead I make more money doing things I find more interesting and that I’m better at, and I give the excess to fund others working on AI safety. As a result, everyone gets more of what they want: I get more life satisfaction, someone else gets more life satisfaction because I help pay them to do the AI safety research they love, and we both still see an increase in the total amount of effort put into AI safety. If it were just me making this tradeoff it might be a bad deal, since I could ignore my comparative advantage, work on AI safety directly, and we’d make a little more progress on it. But there are others making the same trade, and together we are able to mine the vast wealth of our dreamtime to get everyone more of everything, including AI safety research, than we could otherwise have had.
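
To put rough numbers on that trade, here’s a toy sketch of the arithmetic (every figure below is made up purely for illustration; none of them reflect actual salaries, funding costs, or anyone’s research productivity):

```python
# A toy sketch of the comparative advantage arithmetic. Every number here is
# invented for illustration; none are claims about real salaries, grant
# costs, or research output.

# Option A: I switch careers and do AI safety research directly.
my_direct_output = 0.5  # units of safety research per year (poor personal fit)

# Option B: I keep the better-fitting, better-paying job and donate the surplus.
donated_surplus = 60_000           # dollars per year I can give away
cost_per_researcher_year = 90_000  # dollars to fund one researcher-year
their_output = 1.5                 # units per year from someone well-suited to it

funded_output = (donated_surplus / cost_per_researcher_year) * their_output

print(f"Direct work:     {my_direct_output:.2f} units/year")  # 0.50
print(f"Earning to give: {funded_output:.2f} units/year")     # 1.00
```

The particular numbers are beside the point; the shape of the comparison is what matters. Whenever the research my surplus can fund exceeds what I could produce directly, the trade leaves the world with more total safety work, and that’s before counting the extra life satisfaction on both sides.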

But if economics has taught me anything, it’s that economic reasoning is unsatisfying to most people, so allow me to reframe my position in more human terms. Ability is unequally distributed: it’s sad and true. Even with lots of hard work we cannot all do all the things we might like to do as well as we’d like to do them, even if we are all free to pursue them as much as we want. And although you can feel angst that the world is unfair in this way, ennui for that which you can never have, and guilt over not fighting back harder against reality, there is great dignity in accepting the world as it is and finding contentment with it. So maybe you won’t save the world and personally prevent existential disaster. That’s fine. You can still do what you can towards preventing X, even if all you can do is be a silent supporter who at least doesn’t oppose Y or other interventions to address X.

And, despite my own advice, I have some hope that the future may be different. Right now there does not seem to be a Gordon-shaped hole in AI safety research, but I keep an eye out to see if one appears. Maybe one day the cosmos will give me the opportunity to defend it against risk from superintelligence or another existential threat, but if it never needs to, so much the better: I can be “merely” happy with getting to live in the wonderful world others create for me. I hope such a “terrible” fate befalls us all.

3 comments

This topic was actually covered a bit in the 2016 LessWrong Survey, in which respondents were asked if they'd ever had any 'moral anxiety' over Effective Altruism:

http://lesswrong.com/r/discussion/lw/nx2/2016_lesswrong_diaspora_survey_analysis_part_four/

Are there any 2017 LessWrong surveys planned?

Sorry for the late response, but yes, I’m just finishing one up now.