see also: https://www.lesswrong.com/posts/Wiz4eKi5fsomRsMbx/change-my-mind-veganism-entails-trade-offs-and-health-is-one
There’s a lot here, and if my existing writing didn’t answer your questions, I’m not optimistic another comment will help[1]. Instead, how about we find something to bet on? It’s difficult to identify something both cruxy and measurable, but here are two ideas:
I see a pattern of:
1. CEA takes some action with the best of intentions
2. It takes a few years for the toll to become clear, but eventually there’s a negative consensus on it.
3. A representative of CEA agrees the negative consensus is deserved, but since it occurred under old leadership, doesn’t think anyone should draw conclusions about new leadership from it.
4. CEA announces a new program with the best of intentions.
So I would bet that within 3 years, a CEA representative will repudiate a major project that occurred under Zach’s watch.
I would also bet on more posts similar to Bad Omens in Current Community Building or University Groups Need Fixing coming out in a few years, talking about 2024 recruiting.
Although you might like Change my mind: Veganism entails trade-offs, and health is one of the axes (the predecessor to EA Vegan Advocacy is not Truthseeking), Truthseeking when your disagreements lie in moral philosophy, and Love, Reverence, and Life (dialogues with a vegan commenter on the same post).
Seeing my statements reflected back is helpful, thank you.
I think Effective Altruism is upper case and has been for a long time, in part because it aggressively recruited people who wanted to follow[1]. In my ideal world it both has better leadership and needs less of it, because members are less dependent.
I think rationality does a decent job here. There are strong leaders of individual fiefdoms, and networks of respect and trust, but it's much more federated.
Which is noble and should be respected: the world needs more followers than leaders. But if you actively recruit them, you need to take responsibility for providing leadership.
I'm curious why this feels better, and for other opinions on this.
How much are you arguing about wording, versus genuinely believing (and being willing to bet money) that in 3-5 years my work will have moved EA to something I can live with?
The desire for crowdfunding is less about avoiding bias[1] and more that this is only worth doing if people are listening, and small donors are much better evidence on that question than grants. If EV gave explicit instructions to donate to me, it would be more like a grant than spontaneous small donors, although in general I agree people should be looking for opportunities where they can beat GiveWell.
ETA: we were planning on waiting on this, but since there's interest I might as well post the fundraiser now.
I'm fortunate to have both a long runway and sources of income outside of EA and rationality. One reason I've pushed as hard as I have on EA is that I had a rare combination of deep knowledge of, and financial independence from, EA. If I couldn't do it, who could?
There are links in the description of the video.
Maybe you just don't see the effects yet? It takes a long time for things to take effect, even internally in places you wouldn't have access to, and even longer for them to be externally visible. Personally, I read approximately everything you (Elizabeth) write on the Forum and LW, and occasionally cite it to others in EA leadership world. That's why I'm pretty sure your work has had nontrivial impact. I am not too surprised that its impact hasn't become apparent to you though.
I've repeatedly had interactions with ~leadership EA that asks me to assume there's a shadow EA cabal (positive valence) that is both skilled and aligned with my values. Or puts the burden on me to prove it doesn't exist, which of course I can't do. And what you're saying here is close enough to trigger the rant.
I would love for the aligned shadow cabal to be real. I would especially love it if the reason I didn't know how wonderful it was was that it was so hypercompetent I wasn't worth including, despite the value match. But I'm not going to assume it exists just because I can't definitively prove otherwise.
If shadow EA wants my approval, it can show me the evidence. If it decides my approval isn't worth the work, it can accept my disapproval while continuing its more important work. I am being 100% sincere here: I treasure the right to take action without having to reach consensus. But this doesn't spare you from the consequences of hidden action or reasoning.
This is a good point. In my ideal movement, it makes perfect sense to disagree with every leader and yet still be a central member of the group. LessWrong has basically pulled that off. EA somehow managed to be bad at having leaders (both in the sense that the closest things to leaders don't want to be closer, and that I don't respect them), while being the sort of thing that requires leaders.
Can you elaborate on "this format"?