~5 months ago I formally quit EA (formally here means “I made an announcement on Facebook”). My friend Timothy was very curious as to why; I felt my reasons applied to him as well. This disagreement eventually led to a podcast episode, where he and I try to convince each other to change sides on Effective Altruism: he tries to convince me to rejoin, and I try to convince him to quit.

Audio recording

Transcript

Some highlights:

Spoilers: Timothy agrees leaving EA was right for me, but he wants to invest more in fixing it.

Thanks to my Patreon patrons for supporting my part of this work. 


Some notes from the transcript:

I believe there are ways to recruit college students responsibly. I don't believe the way EA is doing it really has a chance to be responsible. I would say, the way EA is doing it can't filter and inform the way healthy recruiting needs to. And they're funneling people into something that naivete hurts you in. I think aggressive recruiting is bad for both the students and for EA itself.

Enjoyed this point -- I would guess that the feedback loop from EA college recruiting is super long and is weakly aligned.  Those in charge of setting recruiting strategy (eg CEA Groups team, and then university organizers) don't see the downstream impacts of their choices, unlike in a startup where you work directly with your hires, and quickly see whether your choices were good or bad.

Might be worth examining how other recruiting-driven companies (like Google) or movements (...early Christianity?) maintain their values, or degrade over time.

Seattle EA watched a couple of the animal farming suffering documentaries. And everyone was of course horrified. But not everyone was ready to just jump on "let's give this up entirely forever." So we started doing

…
TsviBT

don't see the downstream impacts of their choices,

This could be part of it... but I think a hypothesis that does have to be kept in mind is that some people don't care. They aren't trying to follow action-policies that lead to good outcomes, they're doing something else. Primarily, acting on an addiction to Steam. If a recruitment strategy works, that's a justification in and of itself, full stop. EA is good because it has power, more people in EA means more power to EA, therefore more people in EA is good. Given a choice between recruiting 2 agents and turning them both into zombies, vs recruiting 1 agent and keeping them an agent, you of course choose the first one--2 is more than 1.

Mm I'm extremely skeptical that the inner experience of an EA college organizer or CEA groups team is usefully modeled as "I want recruits at all costs". I predict that if you talked to one and asked them about it, you'd find the same.

I do think that it's easy to accidentally goodhart or be unreflective about the outcomes of pursuing a particular policy -- but I'd encourage y'all to extend somewhat more charity to these folks, who I generally find to be very kind and well-intentioned.

I haven't grokked the notion of "an addiction to steam" yet, so I'm not sure whether I agree with that account, but I have a feeling that when you write "I'd encourage y'all to extend somewhat more charity to these folks, who I generally find to be very kind and well-intentioned" you are papering over real values differences.

Tons of EAs will tell you that honesty and integrity and truth-seeking are of course 'important', but if you observe their behavior they'll trade them off pretty harshly with PR concerns or QALYs bought or plan-changes. I think there's a difference in the culture and values between (on one hand) people around rationalist circles who worry a lot about how to give honest answers to things like 'How are you doing today?', who hold themselves to the standards of intent to inform rather than simply whether they out and out lied, who will show up and have long arguments with people who have moral critiques of them, and (on the other hand) most of the people in the EA culture and positions of power who don't do this, and so the latter can much more easily deceive and take advantage of people by funneling them into career paths which basically boil down to 'd...

Austin Chen
Mm I basically agree that:

* there are real value differences between EA folks and rationalists
* good intentions do not substitute for good outcomes

However:

* I don't think differences in values explain much of the differences in results - sure, truthseeking vs impact can hypothetically lead one in different directions, but in practice I think most EAs and rationalists are extremely value aligned
* I'm pushing back against Tsvi's claims that "some people don't care" or "EA recruiters would consciously choose 2 zombies over 1 agent" - I think ascribing bad intentions to individuals ends up pretty mindkilly

Basically, insofar as EA is screwed up, it's mostly caused by bad systems, not bad people, as far as I can tell.

Basically, insofar as EA is screwed up, it's mostly caused by bad systems, not bad people, as far as I can tell.

Insofar as you're thinking I said bad people, please don't let yourself make that mistake: I said bad values.

There are occasional bad people like SBF but that's not what I'm talking about here. I'm talking about a lot of perfectly kind people who don't hold the values of integrity and truth-seeking as part of who they are, and who couldn't give a good account for why many rationalists value those things so much (and might well call rationalists weird and autistic if you asked them to try).

I don't think differences in values explain much of the differences in results - sure, truthseeking vs impact can hypothetically lead one in different directions, but in practice I think most EAs and rationalists are extremely value aligned

This is a crux. I acknowledge I probably share more values with a random EA than a random university student, but I don't think that's actually saying that much, and I believe there's a lot of massively impactful difference in culture and values.

I'm pushing back against Tsvi's claims that "some people don't care" or "EA recruiters would consciously

…
Screwtape
My best guess is something like a third of rationalists are also EAs, at least going by identification. (I'm being lazy for the moment and not cross checking "Identifies as Rationalist" against "Identifies as EA" but I can if you want me to and I'm like 85% sure the less-lazy check will bear that out.) My educated but irresponsible guess is something like 10% of EAs are rationalists. Last time I did a straw poll at an ACX meetup, more than half the people attending were also EAs. Whatever the differences are, it's not stopping a substantial overlap on membership, and I don't think that's just at the level of random members but includes a lot of the notable members. I'd be pretty open to a definition of 'rationalist' that was about more than self-identification, but to my knowledge we don't have a workable definition better than that. It's plausible to me that the differences matter as you lean on them a lot, but I think it's more likely the two groups are aligned for most purposes. 
Ben Pace
Thanks for the data! I agree there's a fair bit of overlap in clusters of people. Two points:

1. I am talking about the cultural values more than simply the individuals. I think a person's environment really brings very different things out of them. The same person(s) working at Amazon, in DC politics, and at a global-health non-profit will get invited to live out different values and build quite different identities for themselves. The same person in-person and on Twitter can also behave as quite different people. I think LessWrong has a distinct culture from the EA Forum, and I think EAG has a distinct culture from ACX meetups.
2. Not every person in a scene strongly embodies the ideals and aspirations of that scene. There are lots of values I have yet to get on the same page about with many people who come to rationalist meetups, e.g. I still somewhat regularly have to give arguments against various reasons why people sometimes endorse self-deception, even to folks who have been around for many years.

The ideals of EA and LW are different. So even though the two scenes have overlap in people, I still think the scenes live out and aspire to different values and different cultures, and this explains a lot of difference in outcomes.
Austin Chen
I appreciate you drawing the distinction! The bit about "bad people" was more directed at Tsvi, or possibly the voters who agreevoted with Tsvi.

Mm, I think if the question is "what accounts for the differences between the EA and rationalist movements today, wrt number of adherents, reputation, amount of influence, achievements" I would assign credit in the ratio of ~1:3 to differences in (values held by individuals):systems. Where systems are roughly: how the organizations are set up, how funding and information flows through the ecosystem.

(As I write this, I realize that maybe even caring about adherents/reputation/influence/achievement in the first place is an impact-based EA frame, and the thing that Ben cares about is more like "what accounts for the differences in their philosophies or gestalt of what it feels like to be in the movement"; I feel like I'm lowkey failing an ITT here...)
Ben Pace
I can think about that question if it seems relevant, but the initial claim of Elizabeth's was "I believe there are ways to recruit college students responsibly. I don't believe the way EA is doing it really has a chance to be responsible". So I was trying to give an account of the root cause there. Also — and I recognize that I'm saying something relatively trivial here — the root cause of a problem in a system can of course be any seemingly minor part of it. Just because I'm saying one part of the system is causing problems (the culture's values) doesn't mean I'm saying that's what's primarily responsible for the output. The current cause of a software company's current problems might be the slow speed with which PR reviews are happening, but this shouldn't be mistaken for the claim that the credit allocation for the company's success is primarily that it can do PR reviews fast. So to repeat, I'm saying that IMO the root cause of irresponsible movement growth and ponzi-scheme-like recruitment strategies was a lack of IMO very important values like dialogue and candor and respecting other people's sense-making and courage and so on, rather than an explanation more like 'those doing recruitment had poor feedback loops so had a hard time knowing what tradeoffs to make' (my paraphrase of your suggestion).  I would have to think harder about which specific values I believe caused this particular issue, but that's my broad point.
TsviBT
Ben's responses largely cover what I would have wanted to say. But on a meta note: I wrote specifically that this is "a hypothesis that does have to be kept in mind". I do also think the hypothesis is true (and it's reasonable for this thread to discuss that claim, of course). But the reason I said it that way is that it's a relatively hard hypothesis to evaluate. You'd probably have to have several long conversations with several different people, in which you successfully listen intensely to who they are / what they're thinking / how they're processing what you say. Probably only then could you even have a chance at reasonably concluding something like "they actually don't care about X", as distinct from "they know something that implies X isn't so important here" or "they just don't get that I'm talking about X" or "they do care about X but I wasn't hearing how" or "they're defensive in this moment, but will update later" or "they just hadn't heard why X is important (but would be open to learning that)", etc.

I agree that it's a potentially mindkilly hypothesis. And because it's hard to evaluate, the implicature of assertions about it is awkward: I wanted to acknowledge that it would be difficult to find a consensus belief state, and I wanted to avoid implying that the assertion is something we ought to be able to come to consensus about right now. And, more simply, it would take substantial work to explain the evidence for the hypothesis being true (in large part because I'd have to sort out my thoughts).

For these reasons, my implied request is less like "let's evaluate this hypothesis right now", and more like "would you please file this hypothesis away in your head, and then if you're in a long conversation on the relevant topic with someone in the relevant category, maybe try holding up the hypothesis next to your observations and seeing if it explains things or not". In other words, it's a request for more data and a request for someone to think through the hypothesis more. It's far from perfectly neutral--if so…
ChristianKl
The problem is that even small differences in values can have massive differences in outcomes when the difference is caring about truth while keeping the other values similar. As Elizabeth wrote, "Truthseeking is the ground in which other principles grow."

Was there ever a time when CEA was focusing on truth-alignment?

It doesn't seem to me like "they used to be truth-aligned, and then they did recruiting in a way that caused a value shift" is a good explanation of what happened. They always optimized for PR instead of optimizing for truth-alignment.

It's been quite a while since they edited Leverage Research out of the photos they published on their website, but the kind of organization where people consider it reasonable to edit photos that way is far from truth-aligned.

Edit:

Julia Wise messaged me and made me aware that I confused CEA with the other CEA. The photo incident happened on the 80,000 Hours website, and the page talks about promoting CEA events like EA Global and the local EA groups that CEA supports (at the time, 80,000 Hours was part of the CEA that's now called EV). I don't think this makes CEA completely innocent, because they should see to it that people who promote their events under the banner of their organization name behave ethically. But I do think it gives a valid explanation for why this wouldn't be central to CEA's mistakes page, and why they want to focus that page on mistakes made by direct employees of the entity that's now called CEA.

I think not enforcing an "in or out" boundary is a big contributor to this degradation -- like, majorly successful religions required all kinds of sacrifice.

I feel ambivalent about this. On one hand, yes, you need to have standards, and I think EA's move towards big-tentism degraded it significantly. On the other hand, I think having a sharp inclusion function is bad for people in a movement[1], cuts the movement off from useful work done outside itself, selects for people searching for validation and belonging, and selects against thoughtful people with other options.

I think I'm reasonably Catholic, even though I don't know anything about the living Catholic leaders.

I think being a Catholic with no connection to living leaders makes more sense than being an EA who doesn't have a leader they trust and respect, because Catholicism has a longer tradition, and you can work within that. On the other hand... I wouldn't say this to most people, but my model is you'd prefer I be this blunt... my understanding is Catholicism is about submission to the hierarchy, and if you're not doing that or don't actively believe they are worthy of that, you're LARPing. I don't think this is tru...


I think AIS might have been what poisoned EA? The global development people seem much more grounded (to this day), and AFAIK the ponzi scheme recruiting is all aimed at AIS and meta.

I agree, am fairly worried about AI safety taking over too much of EA. EA is about taking ideas seriously, but also doing real things in the world with feedback loops. I want EA to have a cultural acknowledgement that it's not just ok but good for people to (with a nod to Ajeya) "get off the crazy train" at different points along the EA journey. We currently have too many people taking it all the way into AI town. I again don't know what to do to fix it.

I think it's good to want to have moderating impulses on people doing extreme things to fit in. But insofar as you're saying that believing 'AI is an existential threat to our civilization' is 'crazy town', I don't really know what to say. I don't believe it's crazy town, and I don't think that thinking it's crazy town is a reasonable position. Civilization is investing billions of dollars into growing AI systems that we don't understand and they're getting more capable by the month. They talk and beat us at Go and speed up our code significantly. This is just the start, companies are raising massive amounts of money to scale these systems.

I worry you're caught up worrying what people might've thought about you thinking that ten years ago. Not only is this idea now well within the Overton window, my sense is that people saying it's 'crazy town' either haven't engaged with the arguments (e.g.) or are somehow throwing their own ability to do basic reasoning out of the window.

Added: I recognize it's rude to suggest any psychologizing here but I read the thing you wrote as saying that the thing I expect to kill me and everyone I love doesn't exist and I'm crazy for thinking it, and so I'm naturally a bit scared by you asserting it as though it's the default and correct position.

lincolnquirk
(Just clarifying that I don't personally believe working on AI is crazy town. I'm quoting a thing that made an impact on me awhile back and I still think is relevant culturally for the EA movement.)

We currently have too many people taking it all the way into AI town.

I reject the implication that AI town is the last stop on the crazy train.

habryka
I think feedback loops are good, but how is that incompatible with taking AI seriously? At this point, even if you want to work on things with tighter feedback loops, AI seems like the central game in town (probably by developing technology that leverages it, while thinking carefully about the indirect effects of that, or at the very least, by being in touch with how it will affect whatever other problem you are trying to solve, since it will probably affect all of them).
khafra
Catholic EA: You have a leader you trust and respect, and defer to their judgement.
Sola Fide EA: You read 80k hours and Givewell, but you keep your own spreadsheet of EV calculations.

This is a good point. In my ideal movement it makes perfect sense to disagree with every leader and yet still be a central member of the group. LessWrong has basically pulled that off. EA somehow managed to be bad at having leaders (both in the sense that the closest things to leaders don't want to be closer, and that I don't respect them), while being the sort of thing that requires leaders.

Vaniver
As an additional comment, few organizations have splintered more publicly than Catholicism; it seems sort of surreal to me to not check whether or not you ended up on the right side of the splintering. [This is probably more about theological questions than it is about leadership, but as you say, the leadership is relevant!]
Lorenzo
This might be a bit off-topic, but I'm very confused by this. I was raised Catholic, and the Wikipedia description matches my understanding of Catholicism (compared to other Christian denominations). Do you not know who the living Pope is, while still believing he's the successor to Saint Peter and has authority delegated from Jesus to rule over the entire Church? Or do you disagree with the Wikipedia and the Catholic Church definitions of the core beliefs of Catholicism?

I'm confused by this as well. All the people I know who worked on those trips (either as an organiser or as a volunteer) don't think it helped their epistemics at all, compared to e.g. reading the literature on development economics. I definitely think on-the-ground experience is extremely valuable (see this recent comment and this classic post) but I think watching vegan documentaries, visiting farms, and doing voluntourism are all bad ways to improve the accuracy of your map of actual reality.
Austin Chen
I understand that the current pope is Pope Francis, but I know much much more about the worldviews of folks like Joe Carlsmith or Holden Karnofsky, compared to the pope. I don't feel this makes me not Catholic; I continue to go to church every Sunday, live my life (mostly) in accordance with Catholic teaching, etc. Similarly, I can't name my senator or representative and barely know what Biden stands for, but I think I'm reasonably American.

I went on one of those trips as a middle schooler (to Mexico, not Africa). I don't know that it helped my epistemics much, but I did get like, a visceral experience of what the life of someone in a third-world country would be like, that I wouldn't have gotten otherwise and no amount of research literature reading would replicate.

I don't literally think that every EA should book plane tickets to Africa, or break into a factory farm, or whatnot. (though: I would love to see some folks try this!) I do think there's an overreliance on consuming research and data, and an underreliance on just doing things and having reality give you feedback.
Lorenzo
That makes sense, thanks. I would say that compared to Catholicism, in EA you have much less reason to care about the movement leaders, as them having authority to rule over EA is not part of its beliefs.

For what it's worth, I've talked with several people I've met through EA who regularly "break" into factory farms[1] or who regularly work in developing countries. It's definitely possible that it should be more, but I would claim that the percentage of people doing this is much higher than baseline among people who know about EA, and I think it can have downsides for the reasons mentioned in 'Against Empathy.'

[1] They claim that they enter them without any breaking. I can't verify that claim, but I can verify that they have videos of themselves inside factory farms.
Screwtape
Counterargument, I think there's enough different streams of EA that this would not be especially helpful. There exists a president of GiveWell. There exists a president of 80k Hours. There exists a president of Open Philanthropy. Those three organizations seem pretty close to each other, and there's a lot of others further afield. I think there would be a lot of debating, some of it acrimonious, about who counted as 'in the movement' enough to vote on a president of EA, and it would be easy to wind up with a president that nobody with a big mailing list or a pile of money actually had to listen to. 

(Commenting as myself, not representing any org)

Thanks Elizabeth and Timothy for doing this! Lots of valuable ideas in this transcript.

I felt excited, sad, and also a bit confused, since it feels both slightly resonant but also somewhat disconnected from my experience of EA. Resonant because I agree with the college-recruiting and epistemic aspects of your critiques. Disconnected, because while collectively the community doesn't seem to be going in the direction that I would hope, I do see many individuals in EA leadership positions who I deeply respect and trust to have good individual views and good process and I'm sad you don't see them (maybe they are people who aren't at their best online, and mostly aren't in the Bay).

I am pretty worried about the Forum and social media more broadly. We need better forms of engagement online - like this article + your other critiques. In the last few years, it's become clearer and clearer to me that EA's online strategy is not really serving the community well. If I knew what the right strategy was, I would try to nudge it. Regardless I still see lots of good in EA's work and overall trajectory.

[my critiques] dropped like a stone through water

…

Maybe you just don't see the effects yet? It takes a long time for things to take effect, even internally in places you wouldn't have access to, and even longer for them to be externally visible. Personally, I read approximately everything you (Elizabeth) write on the Forum and LW, and occasionally cite it to others in EA leadership world. That's why I'm pretty sure your work has had nontrivial impact. I am not too surprised that its impact hasn't become apparent to you though.

I've repeatedly had interactions with ~leadership EA that asks me to assume there's a shadow EA cabal (positive valence) that is both skilled and aligned with my values. Or puts the burden on me to prove it doesn't exist, which of course I can't do. And what you're saying here is close enough to trigger the rant.

I would love for the aligned shadow cabal to be real. I would especially love if the reason I didn't know how wonderful it was was that it was so hypercompetent I wasn't worth including, despite the value match. But I'm not going to assume it exists just because I can't definitively prove otherwise. 

If shadow EA wants my approval, it can show me the evidence. If it decides my approval isn't worth the work, it can accept my disapproval while continuing its more important work. I am being 100% sincere here, I treasure the right to take action without having to reach consensus, but this doesn't spare you from the consequences of hidden action or reasoning.

Raemon
I think I actually agree with Lincoln here and think he was saying a different thing than your comment here seems to be oriented around. I don't think Lincoln's comment had much to do with assuming there was a shadow EA cabal that was aligned with your values. He said "your words are having an impact." Words having impacts just does actually take time.

I updated from stuff Ben Hoffman said, but it did take 3-4 years or something for the update to fully happen (for me in particular), and when I did ~finish updating the amount I was going to update, it wasn't exactly the way Ben Hoffman wanted. In the first 3 years, it's not like I can show Ben Hoffman "I am ready for your approval", or even that I've concretely updated any particular way, because it was a slow messy process and it wasn't like I knew for sure how close to his camp I was going to land. But, it wouldn't have been true to say "his critiques dropped like a stone through water". (Habryka has said they also affected him, and this seems generally to have actually reverberated a lot.)

I don't know whether or not your critiques have landed, but I think it is too soon to judge.
Elizabeth
How much are you arguing about wording, vs genuinely believe and would bet money that in 3-5 years my work will have moved EA to something I can live with?
Raemon
I definitely wouldn't bet money that EA will have evolved into something you can live with. (Neither EA nor the threads of rationality that he affected evolved into things Ben Hoffman could live with.) But I do think there is something important about the fact that, despite that, it is inaccurate to say "the critiques dropped like a stone through water" (or what I interpret that poetry to mean, which is something like "basically nobody listened at all"). I don't think I misunderstood that part, but if I did then I do retract my claim.
Raemon
The thing I would bet is "your 'build a lifeboat for some people-like-you to move to somewhere other than EA' plan will work at least a bit, and, one of the important mechanisms for it working will be those effortful posts you wrote."

I liked Zach's recent talk/Forum post about EA's commitment to principles first. I hope this is at least a bit hope-inspiring, since I get the sense that a big part of your critique is that EA has lost its principles.

The problem is that Zach does not mention being truth-aligned as one of the core principles that he wants to uphold.

He writes "CEA focuses on scope sensitivity, scout mindset, impartiality, and the recognition of tradeoffs".

If we take an act like deleting inconvenient information, such as the phrase Leverage Research, from a photo on the CEA website, it violates the principle of being truth-aligned but not any of the ones Zach mentioned.

If I asked Zach whether he would release the people CEA binds with nondisclosure agreements about that one episode with Leverage (about which we unfortunately know nothing more than that there are nondisclosure agreements), I don't think he would. A sign of being truth-aligned would be to release the information, but none of the principles Zach names point in the direction of releasing people from the nondisclosure agreements.

Saying that your principle is "impartiality" instead of saying that it is "un...

jefftk
I don't know the details in the Leverage case, but usually the way this sort of non-disclosure works is that both parties in a dispute, including employees, have non-disclosure obligations. But one party isn't able to release their (ex) employees unilaterally; the other party would need to agree as well. That is, I suspect the agreements are structured such that CEA releasing people the way you propose (without Leverage also agreeing, which I doubt they would) would be a willful contract violation.
habryka
All the involved Leverage employees told me that they would be fine having the thing released, and that it was CEA who wanted to keep things private (this might be inaccurate, this situation sure involves a lot of people saying contradictory things).
ChristianKl
I did talk with Geoff Anders about this. He told me that there's no legal agreement between CEA and Leverage. However, there are Leverage employees who are ex-CEA and thus bound by legal agreement. Geoff himself said that he would consider it positive for the information to be public, but he would not want to pick another fight with CEA by publicly talking about what happened.
ChristianKl
That does sound like learned helplessness, and like the EA leadership filters out people who would see ways forward. Let me give you one: if people in EA consider her critiques to have real value, then the obvious step is to give Elizabeth money to write more. Given that she has a Patreon, the way to give her money is pretty straightforward. If the writing influences what happens in EV board discussions, paying Elizabeth for the value she provides to the board would be straightforward. If she were paid decently, I would expect she would feel she's making an impact.

Paying Elizabeth might not be the solution to all of EA's problems, but it's a way to signal priorities. Estimate the value she provides to EA, pay her for that value, and publicly publish, as EV, a writeup saying that this is the amount of value EV thinks she provides to EA and was paid for.
Elizabeth
First of all, thank you, love it when people suggest I receive money. Timothy and I have talked about fundraising for a continued podcast. I would strongly prefer most of the funding be crowdfunding, for the reason you say. If we did this it would almost certainly be through Manifund. Signing up for Patreon and noting this as the reason also works, although for my own sanity this will always be a side project. I should note that my work on EA up through May was covered by a Lightspeed grant, but I don't consider that EA money.
ChristianKl
Yes, giving money in the form of a grant might not be the best way to fund good posts, as it makes it harder to criticize the entity that funds you, and decentralized crowdfunding is better. Maybe an EV blog post saying something like that. If the problem is, as lincolnquirk describes, that in general they don't have many ideas about how to do better, and your writing had nontrivial impact by giving ideas about what to do better, that would be the straightforward way forward.
Elizabeth
The desire for crowdfunding is less about avoiding bias[1] and more that this is only worth doing if people are listening, and small donors are much better evidence on that question than grants. If EV gave explicit instructions to donate to me it would be more like a grant than spontaneous small donors, although in general I agree people should be looking for opportunities to beat GiveWell.  ETA: we were planning on waiting on this but since there's interest I might as well post the fundraiser now.   1. ^ I'm fortunate to have both a long runway and sources of income outside of EA and rationality. One reason I've pushed as hard as I have on EA is that I had a rare combination of deep knowledge of, and financial independence from, EA. If I couldn't do it, who could?
Chris_Leong
  Why do you think that this is the case?
Elizabeth
Reading this makes me feel really sad, because I’d like to believe it, but I can’t, for all the reasons outlined in the OP.  I could get into more detail, but it would be pretty costly for me for (I think) no benefit. The only reason I came back to EA criticism is that talking to Timothy feels wholesome and good, as opposed to the battery-acid feeling I get from most discussions of EA. 

I want to register high appreciation of Elizabeth for her efforts and intentions described here. <3

The remainder of this post is speculation about solutions: "if one were to try to fix the problem", or perhaps "if one were to try to preempt this problem in a fresh community".  I'm agnostic about whether one should try.

Notes on the general problem:

  • I suspect lots of our kind of people are not enthusiastic about kicking people out.  I think several people have commented, on some cases of seriously bad actors, that it took way too long to a
... (read more)
ChristianKl
It's worth noting that Jacy was sort-of kicked out (see https://nonprofitchroniclesdotcom.wordpress.com/2019/04/02/the-peculiar-metoo-story-of-animal-activist-jacy-reese/ )
ChristianKl
To me, that will lead to an environment where people think they are engaging with criticism without having to really engage with the criticism that actually matters.  From Scott's Criticism Of Criticism Of Criticism: if you frame the criticism as having to be about the mission of psychiatry, it's easy for people to see "Is it ethical to charge poor patients three-digit fees for no-shows?" as off-topic.  In an organization like GiveWell, people who criticize GiveWell's mission in such a way are unlikely to talk about the ways in which GiveWell favors raising more donations over being more truthseeking, which Ben Hoffman described. 
localdeity
This is a possible outcome, especially if the above tactic were the only tactic to be employed.  That tactic helps reduce ignorance of the "other side" on the issues that get the steelmanning discussion, and hopefully also pushes away low-curiosity tribalistic partisans while retaining members who value deepening understanding and intellectual integrity.  There are lots of different ways for things to go wrong, and any complete strategy probably needs to use lots of tactics.  Perhaps the most important tactic would be to notice when things are going wrong (ideally early) and adjust what you're doing, possibly designing new tactics in the process. Also, in judging a strategy, we should know what resources we assume we have (e.g. "the meetup leader is following the practice we've specified and is willing to follow 'reasonable' requests or suggestions from us"), and know what threats we're modeling.  In principle, we might sort the dangers by [impact if it happens] x [probability of it happening], enumerate tactics to handle the top several, do some cost-benefit analysis, decide on some practices, and repeat. My understanding/guess is that "Is it ethical to charge poor patients three-digit fees for no-shows?" is an issue where the psychiatrists know the options and the impacts of the options, and the "likelihood of people actually coming to blows" comes from social signaling things like "If I say I don't charge them, this shows I'm in a comfortable financial position and that I'm compassionate for poor patients"/"If I say I do charge them, this opens me up to accusations (tinged with social justice advocacy) of heartlessness and greed".  I would guess that many psychiatrists do charge the fees, but would hate being forced to admit it in public.  Anyway, the problem here is not that psychiatrists are unaware of information on the issue, so there'd be little point in doing a steelmanning exercise about it. That said, as you suggest, it is possible that people would sp
ChristianKl
If I say that other psychiatrists at the conference are engaging in an ethical lapse when they charge late fees to poor people, then I'm engaging in an uncomfortable interpersonal conflict. It's about personal incentives that actually matter a lot to the day-to-day practice of psychiatry.  While the psychiatrists are certainly aware that they charge poor people, they likely think of it as business as usual rather than as an ethical issue.  If we take Scott's example of psychiatrists talking about racism being a problem in psychiatry, I don't think the problem is that racism is unimportant. The problem is rather that you can score points by virtue signaling about the problem, and find common ground around the virtue signaling if you are willing to burn a few scapegoats, while talking about the issue of charging poor people late fees is divisive.  Washington DC is one of the most liberal places in the US, with people who are good at virtue signaling and pretending they care about "solving systematic racism", yet they passed a bill to require college degrees for childcare services. If you apply the textbook definition of systematic racism, requiring college degrees for childcare services creates a system that prevents poor Black people from looking after children.  Systematic racism that prevents poor Black people from offering childcare services is bad, but the people in Washington DC are good at rationalizing. The whole discourse about racism is of a nature where people score points by virtue signaling about how much they care about fighting racism. They practice steelmanning racism all the time, and steelmanning the concept of systematic racism, and yet they pass systematically racist laws because they don't like poor Black people looking after their children.  If you tell White people in Washington DC who are already steelmanning systematic racism to the best of their ability that they should steelman it more because they

Issues in transcript labeling (I'm curious how much of it was done by machine):

  • After 00:07:55, a line is unattributed to either speaker; looks like it should be Timothy.
  • 00:09:43 is attributed to Timothy but I think must be Elizabeth.
  • Then the next line is unattributed (should be Timothy).
  • After 00:14:00, unattributed (should be Timothy).
  • After 00:23:38, unattributed (should be Timothy)
  • After 00:32:34, unattributed (probably Elizabeth)
Timothy Telleen-Lawton
Awesome, thank you! I'm not sure if we're going to correct this; it's a pain in the butt to fix, especially in the YouTube version, and Elizabeth (who has been doing all the editing herself) is sick right now.

I work at CEA, and I recently became the Interim EA Forum Project Lead. I’m writing this in a personal capacity. This does not necessarily represent the views of anyone else at CEA.

I’m responding partly because my new title implies some non-zero amount of “EA leadership”. I don’t think I’m the person anyone would think of when they think “EA leadership”, but I do in fact have a large amount of say wrt what happens on the EA Forum, so if you are seriously interested in making change I’m happy to engage with you. You’re welcome to send me a doc and ask me to... (read more)

There’s a lot here and if my existing writing didn’t answer your questions, I’m not optimistic another comment will help[1]. Instead, how about we find something to bet on? It’s difficult to identify something both cruxy and measurable, but here are two ideas:

I see a pattern of:
1. CEA takes some action with the best of intentions
2. It takes a few years for the toll to come out, but eventually there’s a negative consensus on it.
3. A representative of CEA agrees the negative consensus is deserved, but since it occurred under old leadership, doesn’t think anyone should draw conclusions about new leadership from it.
4. CEA announces new program with the best of intentions.

So I would bet that within 3 years, a CEA representative will repudiate a major project occurring under Zach’s watch.

I would also bet on more posts similar to Bad Omens in Current Community Building or University Groups Need Fixing coming out in a few years, talking about 2024 recruiting. 
 

  1. ^
... (read more)
Sarah Cheng
Thanks! I'm down to bet, though I don't feel like it would make sense for me to take either of those specific bets. I feel pretty clueless about whether "a CEA representative will repudiate a major project occurring under Zach’s watch". I guess I think it's reasonable for someone who was just hired at CEA not to be held personally responsible for projects that started and ended before they were hired (though I may be misunderstanding your proposed bet). I also have very little information about the current state of EA university group recruiting, so I wouldn't be that surprised by "more posts similar to Bad Omens in Current Community Building or University Groups Need Fixing coming out in a few years, talking about 2024 recruiting". TBH I'm still not clear on what we disagree about, or even whether we actually disagree about anything. 😅 Apologies if I wasn't clear about this, but my main comment was primarily a summary of my personal perspective, which is based on a tiny fraction of all the relevant information. I'm very open to the possibility that, for example, EA university group recruiting is pressuring students more than I would find appropriate. It's just that, based on the tiny fraction of information I have, I see no evidence of that and only see evidence of the opposite. I would be really interested to hear if you have done a recent investigation and have evidence to support your claims, because you would have a fair chance of convincing me to take some action. Anyway, I appreciate you responding and no worries if you want to drop this. :) My offer to chat synchronously still stands, if you're ever interested. Though since I'm in an interim position, I'm not sure how long I will have the "EA Forum Project Lead" title.
Lorenzo
I think this would be a mistake (or, more likely, I think you and Elizabeth mean different things here).  As you mention in other parts of your comment, most people who consider themselves aligned with EA don't know or care much about CEA, and coupling their alignment with EA as principles to an alignment with CEA as an organization seems counterproductive.
Sarah Cheng
Ah interesting, yeah it's certainly possible that I misunderstood Elizabeth here. Apologies if that's the case! I'll try to explain what I mean more, since I'm not sure I understand how my interpretation differs from Elizabeth's original intent. So in the past, CEA's general stance was one more like "providing services" to help people in the EA community improve the world. Under Zach, we are shifting in the direction of "stewardship of EA". I feel that this implies CEA should be more proactive and take more responsibility for the trajectory of EA than it has in the past (to be clear, I don't think this means we should try to be the sole leader, or give people orders, or be the only voice speaking for EA). One concrete example is about how much steering the Forum team does: in the past, I would have been more hesitant to steer discussions on the Forum, but now it feels more appropriate (and perhaps even necessary) for the Forum team to be more opinionated and steer discussions in that space. Sorry, I don't feel like I understand this point — could you expand on this, or rephrase?
Lorenzo
  As a personal example, I feel really aligned with EA principles,[1] but I feel much less sure about CEA as an organization.[2] If the frame becomes "EA is what CEA does", you would lose a lot of the value of the term "EA", and I think very few people would find it useful. See why effective altruism is always lowercase, and William MacAskill's "effective altruism is not a package of particular views." My understanding is that you agree with me, while Elizabeth would want effective altruism to be uppercase in a sense, with a package of particular views that she can clearly agree or disagree with, and an EA Leader who says "this is EA" and "this is not EA." (Apologies if I misunderstood your views.) "CEA as an institution is taking more of a leadership role" could be interpreted as saying that CEA is now more empowered to be the "EA Leader" that decides what is EA, but I think that's not what you mean from the rest of your comment. Does that make sense? 1. ^ For me EA principles are these ones: I think these are principles that most people disagree with, and most people are importantly wrong. I think they are directionally importantly right in my particular social context (while of course they could be dangerous in other theoretical contexts) 2. ^ Despite thinking that all the people I've interacted with who work there greatly care about those same principles.
Elizabeth
Seeing my statements reflected back is helpful, thank you. I think Effective Altruism is uppercase and has been for a long time, in part because it aggressively recruited people who wanted to follow[1]. In my ideal world it both has better leadership and needs less of it, because members are less dependent.  I think rationality does a decent job here. There are strong leaders of individual fiefdoms, and networks of respect and trust, but it's much more federated.  1. ^ Which is noble and should be respected; the world needs more followers than leaders. But if you actively recruit them, you need to take responsibility for providing leadership. 
Sarah Cheng
Thanks, that's very helpful! Yeah I believe you've correctly described my views. To me, EA is defined by the principles. I'll update my original comment, since now it seems that bit is misleading. (I still think there is something there that gestures in the direction that Elizabeth is going. When I say "CEA is taking more of a leadership role", I simply mean that literally — like, previously CEA was not viewing itself as being in a leadership role, and now it is doing that a non-zero amount. I think it matters that someone views themselves as even slightly responsible for the trajectory of EA, and you can't really be responsible without wielding some power. So that's how I read the "willing to own their power more" quote.)
Raemon

fwiw, I think it'd be helpful if this post had the transcript posted as part of the main post body.

Elizabeth
I'm curious why this feels better, and for other opinions on this. 
Ben Pace
You could put it in a collapsible section, so that it's easy to get to the comment section by-default.

I still consider myself to be EA, but I do feel like a lot of people calling themselves that and interacting with the EA forum aren't what I would consider EA. Amusingly, my attempts to engage with people on the EA forum recently resulted in someone telling me that my views weren't EA. So they also see a divide. What to do about two different groups wanting to claim the same movement? I don't yet feel ready to abandon EA. I feel like I'm a grumpy old man saying "I was here first, and you young'uns don't understand what the true EA is!"

A link to a comment I... (read more)

What I think is more likely than EA pivoting is a handful of people launching a lifeboat and recreating a high-integrity version of EA.

Thoughts on how this might be done:

  • Interview a bunch of people who became disillusioned. Try to identify common complaints.

  • For each common complaint, research organizational psychology, history of high-performing organizations, etc. and brainstorm institutional solutions to address that complaint. By "institutional solutions", I mean approaches which claim to e.g. fix an underlying bad incentive structure, so it won't

... (read more)

[00:31:25] Timothy:... This is going to be like, they didn't talk about any content, like there's no specific evidence, 

[00:31:48] Elizabeth: I wrote down my evidence ahead of time.

[00:31:49] Timothy: Yeah, you already wrote down your evidence

I feel pretty uncertain to what extent I agree with your views on EA. But this podcast didn't really help me decide because there wasn't much discussion of specific evidence. Where is all of it written down? I'm aware of your post on vegan advocacy but unclear if there are lots more examples. I also hea... (read more)

Elizabeth
there are links in the description of the video

That was an interesting conversation.

I do have some worries about the EA community.

At the same time, I'm excited to see that Zach Robinson has taken the reins at CEA and I'm looking forward to seeing how things develop under his leadership. The early signs have been promising.

ChristianKl
What concrete things did he change at CEA that are promising signs?
Chris_Leong
I thought that this post on strategy and this talk were well done. Obviously, I'll have to see how this translates into practice.

The post basically says that taking actions like "running EA Global" is the "principles-first" approach because it is not "cause-first". None of the actions he advocates as principles-first are about rewarding people for upholding principles or holding people accountable for violating them. 

How can a strategy for "principles-first" that does not deal with the question of how to set incentives for people to uphold principles be a good strategy?

If you read the discussion on this page about university groups not upholding principles, there are issues. Zach's proposed strategy sees funding them in the way they currently operate as a good example of what he sees as principles-first because:

Our Groups program supports EA groups that engage with members who prioritize a variety of causes.
Our current training for facilitators for the intro program emphasizes framing EA as a question and not acting as if there is a clear answer.

This suggests that Zach sees the current training for facilitators as already working well, not as something that should be changed. Suggesting that just because EA groups prioritize a variety of causes they are principles-first seems to me lik... (read more)

Chris_Leong
I recommend rereading his post. I believe his use of the term makes sense.
ChristianKl
I did read his post. The question is not whether the term makes sense but whether it's a good strategy.  It's not about getting people to act according to principles, but about rebranding what previously would be called cause-neutral as principles-first and continuing to do the same things CEA did in the past.
Chris_Leong
Sadly, cause-neutral was an even more confusing term, so this is better than the alternative. I also think that the two notions of principles-first are less disconnected than you think, but through somewhat indirect effects.
ChristianKl
Even if the term is an improvement, why would swapping one term for another make you say "early signs have been promising"? Promising in the sense that he will come up with new terms, because the core problem of EA is not having the right labels to speak about what EAs are doing?  I would find a perspective on EA where the biggest problem is using the wrong labels to be quite strange.  A good post about a strategy that attempts to produce indirect effects would lay out the theory of change through which the indirect effects would be created.
Chris_Leong
I would suggest adopting a different method of interpretation, one more grounded in what was actually said. Anyway, I think it's probably best that we leave this thread here.
ROM

Elizabeth: So I got them nutritional testing. It showed roughly what I thought. And this was like a whole thing. I applied for a grant. I had to test a lot of people. It's a logistical nightmare. I found exactly what I thought I would: that there were serious nutritional issues, not in everyone, but enough that people should have been concerned. 

How many people in total were tested? From the Interim report, it looks like only six people got tested, so I assume you're referencing something else. 

Elizabeth
There were ~20 in round 2, and I've gotten reports of other people being inspired by the post to get tested themselves, which I estimate at least doubles that number.  
ROM
Nice! I really like that you did that work, and I agree that too many vegans in general (not just EA vegans) suck at managing their diet. Of the four former vegans whom I've known personally, all of them stopped for health reasons (though not necessarily health reasons induced by being vegan).  That said, I don't see round 1 or round 2 as being particularly strong evidence of anything. The sample sizes seem too small to draw much inference from. There are 7k+ people in the EA movement,[1] around 46% of whom are vegan or vegetarian. Two surveys, one of six people from Lightcone, another of 20 people (also from Lightcone?), just don't have enough participants to make strong claims about ~3000 people. You say as much in the second post: This seems at odds with what you claim in the podcast:[2] Separately, it's unclear to me how many people in the second survey actually are vegan/vegetarian rather than people with fatigue problems:   1. ^ This was published back in 2021, so I expect the numbers to be even higher now.  2. ^ A separate point/nitpick: this part of the transcript incorrectly attributes your words to Timothy:
Elizabeth
see also: https://www.lesswrong.com/posts/Wiz4eKi5fsomRsMbx/change-my-mind-veganism-entails-trade-offs-and-health-is-one
ROM
This post seems to be arguing that veganism involves trade-offs (I didn't read through the comments). I don't disagree with that claim[1] (and am grateful you took the time to write it up). The part I take issue with is the claim that the two surveys you conducted were strong evidence, which I don't think they are. 1. ^ Though I lean towards thinking most people, or even everyone, should bite the bullet and accept the reduced health to spare the animals.