I wouldn’t be surprised if a lot of EAs see my takes here as a slippery slope to warm glow thinking and wanton spending that needs to be protected against.
I didn't have this reaction at all. The four lessons you present are points about execution, not principles. IMO a lot of these ideas are cheap or free while being super high-value. We can absolutely continue our borg-like utilitarianism and coldhearted cost-benefit analysis while projecting hospitality, building reputation, conserving slack, and promoting inter-institutional cooperation!
But I do think they'll require an EA spin. For example, EA can't eschew high-value cause areas (like X-risk) just because it would look weird to be associated with them. But we can and should take reputation into account when selecting interventions (i.e. we should have weighed the benefits of a chance at getting an EA-aligned congressman against the reputational risk of putting millions of cryptobucks into a congressional election; not that we realistically had any control over SBF's actions or his identity as an EA).
For hospitality, I think one thing EAs can do is to distinguish the "controlling reason" we do an intervention vs. the "felt reason" we do it. What do I mean by that? An EA may choose to donate to Against Malaria Foundation for coldhearted cost-benefit analysis reasons. But that EA can also have other motivations, feelings and values alongside the analysis - being able to tell a visceral, vivid, felt story about why they personally feel connected to that cause is a way to come across as not borglike.
We can donate a little money locally just to project warmth and connection to the people around us, because we do believe in helping locally; we just try to prioritize helping globally even more. But if people are concerned that we've shut off our compassion and feel alienated from EA on that basis, this is a way to counteract that impression, and it might even improve EA engagement, since it's honestly a little difficult to relentlessly reject local appeals for aid in order to give 100% of your charity to EA causes. Like, donate 9% of your income to EA-aligned charities and 1% of your income to local charities. If you make $80,000/year, that 1% is still $800, roughly an average American's annual charitable donation on its own. And now, instead of the story being "you give zero dollars to local charities so you can do borglike optimization for X-risk-related donations," the story can be "you give as much as the next person to local charities, while also donating a very substantial portion of your income to X-risk-related charities."
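To make the arithmetic concrete, here's a trivial Python sketch of the split (the 9/1 ratio and the $80,000 income are just the illustrative numbers from above):

```python
def donation_split(income, ea_rate=0.09, local_rate=0.01):
    """Split a 10% giving pledge between EA-aligned and local charities."""
    return income * ea_rate, income * local_rate

ea_amount, local_amount = donation_split(80_000)
print(f"EA-aligned: ${ea_amount:,.0f}/year")   # EA-aligned: $7,200/year
print(f"Local:      ${local_amount:,.0f}/year")  # Local:      $800/year
```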
To me this just seems like the same line of thinking that leads us to limit the EA donation appeal to 10% of the typical person's income, instead of demanding that people donate until they're living like the global poor. We relax the demands we make on our members in order to make our movement human-compatible. Encouraging a fraction of EA donations to be local or warm-fuzzy-optimized is another way of being human-compatible while still doing a huge amount of good.
not that we realistically had any control over SBF's actions or his identity as an EA
Agree little could be done then. But since then, I've noticed the community has an attitude of "Well I'll just keep an eye out next time" or "I'll be less trusting next time" or something. This is inadequate; we can do better.
I'm offering decision markets that will make it harder for frauds to go unnoticed, prioritizing crypto (I'm still experimenting with criteria). But when I show EAs these, I'm kind of stunned by the lack of interest - as if their personal judgment were supposed to be less corruptible at detecting fraud than a prediction market. This has been very alarming for me to see.
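For anyone who hasn't seen the mechanism before, here's a minimal sketch in Python of the logarithmic market scoring rule (LMSR), the standard automated market maker behind many prediction markets. To be clear, this is a generic illustration and not my actual system; the liquidity parameter and trade size are made-up numbers.

```python
import math

class LMSRMarket:
    """Two-outcome market ([NO, YES]) run by an LMSR market maker."""

    def __init__(self, liquidity=100.0):
        self.b = liquidity   # higher b = deeper market; prices move more slowly
        self.q = [0.0, 0.0]  # shares outstanding for each outcome

    def cost(self, q):
        # LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))
        return self.b * math.log(sum(math.exp(qi / self.b) for qi in q))

    def price(self, outcome):
        """Current price of an outcome, interpretable as a probability."""
        exp_q = [math.exp(qi / self.b) for qi in self.q]
        return exp_q[outcome] / sum(exp_q)

    def buy(self, outcome, shares):
        """Buy shares of an outcome; returns the cost the trader pays."""
        before = self.cost(self.q)
        self.q[outcome] += shares
        return self.cost(self.q) - before

market = LMSRMarket()
print(market.price(1))            # 0.5 before any trades
paid = market.buy(1, 50)          # someone bets YES on "this org is a fraud"
print(round(market.price(1), 3))  # ~0.622: the market's estimate rises
```

The point is just that anyone who suspects fraud can profitably move the price, so private suspicion gets aggregated into a public probability instead of staying locked in individual heads.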
But who knows -- riffing off the post, maybe that just means prediction markets haven't built up enough reputation for LW/EA to trust them.
Thanks for your super thought out response! I agree with all of it, especially the final paragraph about making EA more human-compatible. Also, I really love this passage:
We can absolutely continue our borg-like utilitarianism and coldhearted cost-benefit analysis while projecting hospitality, building reputation, conserving slack, and promoting inter-institutional cooperation!
Yes. You get me :')
You inspired me to write this up over at EA forum, where it’s getting a terrible reception :D All the best ideas start out unpopular?
“what is your fantasy partner/complement organization?”
I love this question and would never have thought of it on my own.
Samaritans (fake name)
In the UK, this is a suicide helpline charity, so just to flag that it's a fake name in case anyone else missed this initially.
1. Long Term Reputation is Priceless
I agree long-term reputation is valuable, but the hard question is "how valuable?". It isn't priceless, but yes, I agree it's possibly underrated by EAs. But, like, when should we use it? What actions should it stop?
what is your fantasy partner/complement organization?
I like this question
I’m anticipating writing several posts on this topic in the coming weeks on the EA forum. I just want to flag that I think your questions about how to think about and value reputation are important, that the EA community is rife with contradictory ideas and inadequate models on this too, and that we can do a lot better by getting a grip on this subject. I don’t have all the answers, but right now it seems like people are afraid to even talk about the issue openly.
I share your sense that EAs should be thinking about reputation a lot more. A lot of the current thinking has also been very reactive/defensive, and I think that's due both to external factors and to the fact that the community doesn't realize how valuable an actually good reputation can be - though Nathan is right that it's not literally priceless. Still, I'd love to see the discourse develop in a more proactive direction.
I agree long-term reputation is valuable, but the hard question is "how valuable?". It isn't priceless, but yes, I agree it's possibly underrated by EAs. But, like, when should we use it? What actions should it stop?
You can't buy reputation, as the OP pointed out, and you can't spend it either, e.g. by lending one's name to dodgy projects, or getting people to take a lie on trust. You use reputation by having it, and the OP described things flowing towards those of good reputation. The actions that maintaining your reputation should stop are those that would damage it. The question is rather, what qualities do EAs want themselves and the EA movement to have a reputation for?
The question is rather, what qualities do EAs want themselves and the EA movement to have a reputation for?
Yes, I think this is a pretty central question. To cross the streams a little, I did talk about this a bit more in the EA Forums comments section: https://forum.effectivealtruism.org/posts/5oTr4ExwpvhjrSgFi/things-i-learned-by-spending-five-thousand-hours-in-non-ea?commentId=KNCg8LHn7sPpQPcR2
"The Good Samaritans" (oft abrebiated to "Good Sammys") is the name of a major local poverty charity here in australia run by the uniting church Generally well regarded and tend not to push religion too hard (compared to the salvation army). So yeah, it would appear to be a fairly recurring name.
Well, the Good Samaritan parable is the most well-known, most important, and most striking parable in the Gospels on the very specific topic of who you should help and how you should help them. It's no wonder it's a recurring name for Christian-inspired charities.
I think understanding why traditional charities are allergic to EA could conceivably cause major changes that help reduce the borg-like-ness of EA, which I agree with the critics is a serious problem in these communities.
I don't think the answer is super mysterious; a lot of people are in the field for the fuzzies, and it weirds them out that there are some weirdos who seem to be in the field but missing "heart".
It is definitely a serious problem because it gates a lot of resources that could otherwise come to EA, but I think this might be a case where the cure could be worse than the disease if we're not careful - how much funding needs to be dangled before you're willing to risk EA's assimilation into the current nonprofit industrial complex?
I think being in it for the fuzzies is in some way actually pretty important to effectiveness, and bridging these viewpoints would unlock more effective reasoning patterns. Of course don't give up on effectiveness, but the majority of altruists and solidarity-seekers in the world are fuzzies or anger-at-injustice motivated, and I don't think that's actually bad. Seeing it as bad strikes me as a very negative consequence of the current shallow-thought version of the effectiveness mindset; finding the approaches-to-thinking which can combine their benefits reliably seems exciting for a number of reasons, most centrally that it would be compatible with both memeplexes and thereby allow the coordination groups to merge without borging each other.

EA has only been around for a few years, but it's already had some very serious negative impacts under its brand, which I think is in fact a result of having cold calculations at the core. Hmm. "Finally. Warm calculations"
This is awesome. Thank you for writing it up (and thank you more for actually doing the work).
It strikes me that most of these successes stem from the very fact that it's NOT a rigorous cost-benefit calculation, and certainly not an abstract long-term benefit calculation. These activities provide actual interactions with real people, and that leads to the higher level of trust and cooperation that they experience.
I know it sucks for nerds to hear that reputation (popularity) is important
I'm not sure we should be thinking of these as the same thing. For example I'd say this is reputation:
Samaritans has a much better, easier time at city hall compared to newer organizations, because of a decades-long productive relationship where we were really helpful with issues surrounding unemployment and homelessness. Permits get back to us really fast, applications get waved through with tedious steps bypassed, and fees are frequently waived. And it made sense that this was happening! Cities also deal with budget and staffing issues, why waste more time and effort than necessary on someone who you *know* knows the proper procedure and will ethically follow it to the letter?
and this is popularity:
the NIMBYs were reluctant to come out against us
And they probably have overlapping causes and effects, but they're not the same.
I think most of what you describe feels more like it comes from reputation than popularity, and also I feel like EA as a whole simply can't ever have that kind of reputation. Individual EA orgs can, but with no barrier to entry, it will never be the case that someone can reliably say "this organization calls itself an EA organization, so I can be confident it will execute competently and clean up after itself". But also, if EA as a whole is unpopular, that's also going to cause problems for well-reputed EA orgs.
Relatedly, I'd be curious about the mechanism behind Samaritans keeping its reputation for however long. (I get a sense that the org is probably between 15 and 50 years old? Not sure where I get that from.) It's probably been through a bunch of CEOs, or whatever equivalent it has, in that time. Those CEOs probably weren't selected on the basis of "who will pick the best successor to themselves". Why has no one decided "we can help people better like this, even if that means breaking some (implicit?) promises we've made" and then oops, no one really trusts them any more?
So some questions I'd be interested in, I guess:

- Have they had any major fuck ups? If so, did that cost them reputationally? How did they regain trust?
- If not, how did they avoid them? Luck? Tending to hire the sorts of people who don't gamble with reputation? (Which might be easier because that sort of person will instead play the power game in a for-profit company?) Just not being old enough yet for that to be a serious concern?

(Acknowledging that this is super reductive about the concept of reputation.)
I get a sense that the org is probably between 15 and 50 years old
Yep, close to the top end of that.
It's probably been through a bunch of CEOs, or whatever equivalent it has, in that time. Those CEOs probably weren't selected on the basis of "who will pick the best successor to themselves". Why has no one decided "we can help people better like this, even if that means breaking some (implicit?) promises we've made" and then oops, no one really trusts them any more?
That's a really great observation. Samaritans has sidestepped this problem so far simply by having no change in leadership throughout the entire run of the organization. They'll have to deal with a transition soon as the founders are nearing retirement age, but I think they'll be okay; there are lots of well-aligned people in the org who have worked there for decades.
Have they had any major fuck ups? If so, did that cost them reputationally? How did they regain trust?
If not, how did they avoid them? Luck? Tending to hire the sorts of people who don't gamble with reputation? (Which might be easier because that sort of person will instead play the power game in a for-profit company?) Just not being old enough yet for that to be a serious concern?
They haven't had any major fuck ups, and there are two main reasons for that imo:
[reputation and popularity] probably have overlapping causes and effects, but they're not the same.
I'm inclined to think that this is a distinction without a difference, but I'm open to having my mind changed on this. Can you expand on this point further? I'm struggling to model what an organization that has a good reputation but is unpopular, or vice versa, might look like.
If EA as a whole is unpopular, that's also going to cause problems for well-reputed EA orgs.
Yes, I think that's the important part, even though you're right that we can't do much about individual orgs choosing to associate themselves with EA branding.
This feels vague and not very well pinned down, but:
I think a large part of it is the difference between "how positively do people feel about you" (popularity) and "how confidently can people predict what you'll do" (reputation). Of course both of these also depend on "what people/organizations are we even talking about here".
So when NIMBYs don't want to fight you, that feels like a combination of
both of which are because you're popular with the general public, and I can imagine you being popular with the general public even if you're incompetent at your relations with other organizations.
But when your dealings with government get fast-tracked, that's probably also related to people at city hall thinking well of you. But it also seems to me that a lot of it is "we know what happens when these people get to do the thing they're asking to do". And I can imagine that being the case even if the general public basically doesn't know you, or doesn't like you.
("Reputation" doesn't feel like quite the word for that, because I'm more imagining it being a "you have a history of working with us" thing than a "we heard about you from someone else you worked with" thing.)
Sketches for EA integration
Thinking of money as a universal means of exchange slightly less. Money can buy many goods and services, but not all of them. I know it sucks for nerds to hear that reputation (popularity) is important but I think it’s unfortunately a real thing, and not just on the margin.
Thinking more about what actions and trade-offs EA organizations should take such that they’re beloved institutions in 25 years’ time – and if such a thing is worth it to pursue.
The larger point relevant to EA, I'd say, is that PR scandals are very bad, and thus it's worth doing a lot of work to make sure that what you're doing won't come back to haunt you.
This is why the recent scandals around FTX, Nick Bostrom's comment, etc. have been so bad for EA: they blew up EA's good reputation, and it's going to be very tricky to get it back, if ever.
This fundamentally matters because, IMO, EAs underestimate how valuable a good reputation or good PR is, and underestimate how bad the consequences of a bad reputation and bad PR are.
So from my perspective, it means that looking good matters as much as actually doing things.
Ehhh, I think this stuff is easy to overrate. Most people don't know what EA is. There have been some missteps but I think it's still very early on reputationally.
Also, it really depends on the amount of good.
If EA as a brand had to be retired, I'd still stand by the impact. Looking back, I don't think I would have wanted to be much more cautious in order to maintain a good brand.
Most people don't know what EA is. There have been some missteps
Naive extrapolation says that when EA becomes larger and more known, there will be more missteps, unless something changes dramatically.
I agree that it is unfortunately easy to overrate reputation, or at least slide down the gradient towards reputation/looking good so hard that nothing much real gets done.
However, the point that matters most is that one should be somewhat wary of associating with and praising people who are a big PR risk, since it can blow up your own reputation.
At the very least, you need to be able to know when to separate the person's/other organization's reputation from your organization's reputation. That's the big issue with people like SBF/FTX and, to a lesser extent, Nick Bostrom: They essentially tied their reputation to EA's reputation, such that if there was a crisis in their reputation, EA's reputation would fall too. And it did happen.
It's usually OK to take money from even PR-risky people or organizations, but you absolutely should keep it quiet, and in particular you shouldn't tie their reputation to yours. You also need a plan for how to respond to bad PR, which may mean cutting ties, or at least not advertising the most PR-risky people/organizations you take money from.
Great post!
I wouldn’t be surprised if a lot of EAs see my takes here as a slippery slope to warm glow thinking and wanton spending that needs to be protected against.
It's a shame that we have to caveat things like this when we write. Some readers, perversely, see what they don't want to see and will attack you for it.
I don't think you meant for it to be, but this, like a lot of EA stuff, reads like it was written by a psychopath slightly obsessed with helping people.
Though, to be fair, a lot of EA stuff seems more like an obsession with 'making the world a better place' than helping people, so this is actually less disturbing than a lot of EA stuff.
Edit: which is probably part of why so many people are turned off by EA stuff.
Seems the same as a thousand other reports written by people at the intersection of volunteer work and organized charity, trying to ameliorate poverty, domestic violence, you name it. I really don't see what's "disturbing" about it (let alone "psychopathic"!).
Have nonprofits that are public-facing, and EA infrastructure orgs, care more about customer service.
Can you clarify who would be the 'customers'?
From late 2020 to last month, I worked at grassroots-level non-profits in operational roles. Over that time, I’ve seen surprisingly effective deployments of strategies that were counter-intuitive to my EA and rationalist sensibilities.
I spent 6 months as the on-shift operations manager at one of the largest food banks in Toronto (~50 staff/volunteers), and 2 years doing logistics work at Samaritans (fake name), a long-lived charity so multi-armed that it was basically operating as a supplementary social services department for the city it was in (~200 staff and 200 volunteers). Both orgs were well-run, though both dealt with the traditional non-profit double whammy of being underfunded and understaffed.
Neither place was super open to many EA concepts (explicit cost-benefit analyses, the ITN framework, geographic impartiality, the general sense that talent was the constraining factor instead of money, etc). Samaritans in particular is a spectacular non-profit, despite(?) having basically anti-EA philosophies, such as:
Throughout this post, I’ll be largely focusing on Samaritans, as I worked there longer and in a more central role, and it’s also a more interesting case study due to its stronger anti-EA sentiment.
Things I Learned
For each lesson, I include a section of sketches for EA integration – I hesitate to call them anything as strong as recommendations, because the point is to give more concrete examples of what each idea could look like integrated into an EA framework, rather than to say that it’s the correct way forward.
1. Long Term Reputation is Priceless
Institutional trust unlocks a stupid amount of value, and you can’t buy it with money. Lots of resources (amenity rentals; the mayor’s endorsement; business services; pro-bono and monetary donations) are priced/offered based on tail risk. If you can establish that you’re not a risk by having a longstanding, unblemished reputation, costs go way down for you, and opportunities way up. This is the world that Samaritans now operate in.
Samaritans has a much better, easier time at city hall compared to newer organizations, because of a decades-long productive relationship where we were really helpful with issues surrounding unemployment and homelessness. Permits get back to us really fast, applications get waved through with tedious steps bypassed, and fees are frequently waived. And it made sense that this was happening! Cities also deal with budget and staffing issues, why waste more time and effort than necessary on someone who you know knows the proper procedure and will ethically follow it to the letter?
It’s not just city hall. A few years ago, a local church offered up their recreation space for us to run an emergency winter shelter in – an incredibly generous move on their part, as using a space as a shelter puts a lot of wear on it. They made the offer only to Samaritans, and would not have made it to any other org. But we had a good reputation for treating the unhoused well, and for cleaning up after ourselves when moving out of temporary spaces donated for our use, so we got an exception.
Several companies with good reputations of their own and deep expertise on topics we weren’t as familiar with also approached us to do pro-bono work, both for their staff to get some fuzzies and to improve their own reputation as ethical companies who give back to the community.
Samaritans also leveraged their reputation proactively. Recently, we established a respectful and novel way of supporting the homeless in our city. The solution (in short, tiny homes on public land) would have been deadlocked for possibly years if the organization’s name didn’t grease the wheels significantly on many fronts. The city was eager to work with us, the NIMBYs were reluctant to come out against us, and the city’s unhoused community had a level of trust in us that made them willing to leave their established encampments.
I can see how it’s unfair for Samaritans to have gotten this kind of special treatment from everyone, and it’s the exact same dynamic that leads to entrenchment of older and less efficient institutions over newer ones. However, these dynamics are inevitable in any system or industry, and hard to overcome with brute cash. I am not very thrilled about having this take, but I think it may be worth figuring out how to gain similar kinds of advantage or leverage these dynamics for EA causes.
Sketches for EA integration
Thinking of money as a universal means of exchange slightly less. Money can buy many goods and services, but not all of them. I know it sucks for nerds to hear that reputation (popularity) is important but I think it’s unfortunately a real thing, and not just on the margin.
Thinking more about what actions and trade-offs EA organizations should take such that they’re beloved institutions in 25 years’ time – and if such a thing is worth it to pursue.
2. Non-Profits Shouldn’t Be Islands
Effective altruists consider the overall neglectedness of a cause area in terms of total field capacity, but when it’s time to donate, they support specific charities within that space. This approach makes sense, but it risks missing the bigger picture. Multiple organizations working on parts of the same problem can achieve more collectively than one big charity alone.
The non-profits I worked at communicated closely with community partners. This is good for the people we help. For example, when our own beds are at capacity, knowing which shelters still have beds open (and what restrictions they impose around couples, pets, and drug use) lets us send people with very limited means for travel to places that can take them. Or knowing which nearby food banks are open late, for when people arrive 5 minutes after we close.
It’s also good for us, the service operators. It leads to better resource allocation and decision making on a community-wide scale. People who need the help of one charitable organization often need the help of other ones (e.g. food banks, affordable housing, job search support, and possibly translation support to access the above). When someone comes to your non-profit for a service, you can direct them to other services that they need.
When I operated the seasonal tax clinic, I could often see from people’s financial information that participants were eligible for benefits they were not getting. I was trained to spot this by another non-profit that was focused on increasing benefits access for all Canadians. Providing assistance with benefits applications was out of scope for the tax clinic, but I was able to integrate a very streamlined path for referring people out to get those additional benefits at basically zero cost to us. I really don’t want to sound like I’m bragging here; it’s less that I was able to do that and more that there was a concerted effort by all community organizations to cross-train and communicate with each other, to maximize the help we can all provide to the community with the least amount of effort.
We were also able to take advantage of specialization, such as providing supervised injection sites for harm reduction purposes with staff trained by the non-profit that was focused on harm reduction specifically. Having another org provide training once every month or two was a lot more cost effective than having to have our own specialists.
Sketches for EA integration
Evaluate single charities slightly less, and [non-profit + government] networks for specific regions or cause areas slightly more, and think of possibly shoring up weak links. When evaluating new cause areas and how to best approach them, think about potential groupings of charities instead of single charities.
One question I often see on EA grant applications is something along the lines of “if we gave you 10x the money you requested, what would you do with it?” I think another useful question to ask could be something like, “what is your fantasy partner/complement organization?” Lots of nonprofits are doing their thing with no intention of expanding to do an entire other thing; if you give them more money, they will just do more of their own thing. But I’d bet that a lot of them have recurring problems just outside their own scope that they would love having another org to refer out to, and a sense of what those problems are could be useful for the EA community as a whole.
3. Slack is Powerful
This was a really interesting lesson from Samaritans. Because we had staff for what were basically 20 semi-autonomous organizations doing almost uncorrelated things, we ended up with a lot of organizational slack. Different parts of the organization underwent crunch at different times, and people were temporarily re-allocated to smooth out the spikiness regularly. If you’re an organization of like 20 people and you can occasionally, with minimal friction, harness the efforts of 20 more people who are aligned with you, you can do some really significant barn-raising moves that you couldn’t if you were just an organization with 25 FTEs.
The coolest example I participated in was when 30 people from various departments showed up to help move an emergency shelter we were running from one location to another. The work included deep cleaning the previous space and the new space, doing last minute construction work in the new space, packing and unpacking a bunch of cot beds, sleeping mats and bedding, a boatload of laundry, re-assembling all of the beds and making them, moving in all of the kitchen supplies and sundry, setting up the phone system, and dozens of other miscellaneous tasks. What would have taken a week to do if it was just shelter staff ended up taking only two days, which was great for the people who were depending on us for shelter. In addition to this, the shelter folks were relatively well rested despite the ordeal and able to continue their work without burning out.
Sketches for EA integration
Thinking more about what sorts of resources can be constrained besides money. I know, I know, the EA thing is about how money beats other interventions in like 99.9% of cases, but I do think that there could be some exceptions – especially when it comes to staffing.
Creating a group of EA free agents that can be allocated/rented to EA-aligned non-profits? One thing that might make sense is to have lawyers/payroll/HR people on retainer to consult with fledgling nonprofits that aren’t big enough to hire them full-time.
4. Hospitality is Pretty Important
People won’t use your service if it seems impersonal and cold, even if, like, their livelihoods depend on it? Samaritans had a policy of trying to help people as much as we can and saying no as infrequently as possible. As a result, people line up for up to six hours a day, or come back three or four days in a row, to use Samaritans’ services. And while we were drowning in this demand, competing service providers as close as a 5-minute walk away had no wait times.
This didn’t really make sense to me as we were helping with some pretty urgent things. Things like emergency benefits applications so a person can make rent and not get evicted, or helping new refugees find jobs before their savings run out.
Despite all this, trying to refer people out was a pretty futile practice. A lot of them would come back a few days later and say stuff like “I’m here because Samaritans are the only ones that will actually listen to my problems.”
From this, I’ve realized that it’s actually really important to make the people you help feel comfortable – especially since a lot of them likely had terrible experiences with other service providers previously.
Sketches for EA integration
Have nonprofits that are public-facing, and EA infrastructure orgs, care more about customer service.
This take is so basic that I honestly feel a little dumb giving it. But honestly, yeah, I now think that organizations interfacing directly with the public can increase uptake pretty significantly just by strongly signalling to the people they are helping that they care about them. Be warm, caring, convivial presences.
Final Thoughts
Effective altruism aims to avoid the pitfalls of human brains and traditional charities by using optimized, data-driven approaches as much as possible. I wouldn’t be surprised if a lot of EAs see my takes here as a slippery slope to warm glow thinking and wanton spending that needs to be protected against.
My goal is that this post provides insight into why many relatively well run, non-EA organizations adopt these strategies. They recognize that reputation, relationships and culture, while seemingly intangible, can become viable vehicles for realizing impact. And when implemented responsibly, based on evidence, I think there’s room for compatibility with EA.
To be clear, I don’t actually expect that most of the strategies outlined here will pass muster when thrown into the cost-benefit analysis machine, most of the time. On the other hand, if there exists no marginal case in which they are useful at all, that would also be pretty surprising to me.
I hope that it’s clear that I am not aiming to strong-arm EA towards these practices; I only want to bring them to the community’s attention because I think they’re pretty neat. Better understanding of diverse approaches will only benefit this community, making it stronger, wiser and more able to do the most good.
Thanks for reading <3