~5 months ago I formally quit EA (formally here means “I made an announcement on Facebook”). My friend Timothy was very curious as to why; I felt my reasons applied to him as well. This disagreement eventually led to a podcast episode, where he and I try to convince each other to change sides on Effective Altruism: he tries to convince me to rejoin, and I try to convince him to quit.
Some highlights:
- My story of falling in love with Effective Altruism, trying to change it, and then falling out of love with it. That middle part draws heavily on past posts of mine, including EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem and Truthseeking is the ground in which other principles grow
- Why Timothy still believes in EA
Spoilers: Timothy agrees leaving EA was right for me, but he wants to invest more in fixing it.
Thanks to my Patreon patrons for supporting my part of this work.
[I have only read Elizabeth’s comment that I’m responding to here (so far); apologies if it would have been less confusing for me to read the entire thread before responding.]
I have always capitalized both EA and Rationality, and have never thought about it before. The first justification for capitalizing R that comes to mind is all the intentionality and intelligence that I perceive was invested into the proto-“AI Safety” community under EY’s (and others’) leadership. Isn’t it fair to describe the “Rationalist/Rationality” community as the branch of AI Safety/X-risk that is downstream of MIRI, LW, the Sequences, HPMOR, etc.?