Stories such as Peter Singer's "drowning child" hypothetical frequently imply that there is a major funding gap for health interventions in poor countries, such that there is a moral imperative for people in rich countries to give a large portion of their income to charity. There are simply not enough excess deaths for these claims to be plausible.

Much of this is a restatement of part of my series on GiveWell and the problem of partial funding, so if you read that carefully and in detail, this may not be new to you, but it's important enough to have its own concise post. This post has been edited after its initial publication for clarity and tone.

People still make the funding gap claim

In his 1997 essay The Drowning Child and the Expanding Circle, Peter Singer laid out the basic argument for a moral obligation to give much more than most people do, for the good of poor foreigners:

To challenge my students to think about the ethics of what we owe to people in need, I ask them to imagine that their route to the university takes them past a shallow pond. One morning, I say to them, you notice a child has fallen in and appears to be drowning. To wade in and pull the child out would be easy but it will mean that you get your clothes wet and muddy, and by the time you go home and change you will have missed your first class.
I then ask the students: do you have any obligation to rescue the child? Unanimously, the students say they do. The importance of saving a child so far outweighs the cost of getting one’s clothes muddy and missing a class, that they refuse to consider it any kind of excuse for not saving the child. Does it make a difference, I ask, that there are other people walking past the pond who would equally be able to rescue the child but are not doing so? No, the students reply, the fact that others are not doing what they ought to do is no reason why I should not do what I ought to do.
Once we are all clear about our obligations to rescue the drowning child in front of us, I ask: would it make any difference if the child were far away, in another country perhaps, but similarly in danger of death, and equally within your means to save, at no great cost – and absolutely no danger – to yourself? Virtually all agree that distance and nationality make no moral difference to the situation. I then point out that we are all in that situation of the person passing the shallow pond: we can all save lives of people, both children and adults, who would otherwise die, and we can do so at a very small cost to us: the cost of a new CD, a shirt or a night out at a restaurant or concert, can mean the difference between life and death to more than one person somewhere in the world – and overseas aid agencies like Oxfam overcome the problem of acting at a distance.

Singer no longer consistently endorses cost-effectiveness estimates that are so low, but still endorses the basic argument. Nor is this limited to him. As of 2019, GiveWell claims that its top charities can avert a death for a few thousand dollars, and the Center for Effective Altruism claims that someone with a typical American income can save dozens of lives over their lifetime by donating 10% of their income to the Against Malaria Foundation, which points to GiveWell's analysis for support. (This despite GiveWell's long-standing disclaimer that you shouldn't take its expected value calculations literally). The 2014 Slate Star Codex post Infinite Debt describes the Giving What We Can pledge as effectively a negotiated compromise between the perceived moral imperative to give literally everything you can to alleviate Bottomless Pits of Suffering, and the understandable desire to still have some nice things.

How many excess deaths can developing-world interventions plausibly avert?

According to the 2017 Global Burden of Disease report, around 10 million people die per year, globally, of "Communicable, maternal, neonatal, and nutritional diseases."* This is roughly the category that the low cost-per-life-saved interventions target. If we assume that all of this is treatable at current cost-per-life-saved numbers - the most generous possible assumption for the claim that there's a funding gap - then at $5,000 per life saved (substantially higher than GiveWell's current estimates), averting all of these deaths would cost about $50 billion.
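
As a back-of-envelope check on that figure, here is a minimal Python sketch. Both inputs are the rough assumptions stated above (about 10 million relevant deaths per year and a deliberately generous $5,000 per life saved), not precise estimates:

```python
# Back-of-envelope check on the $50 billion figure.
annual_deaths = 10_000_000        # ~10M deaths/year in the GBD category quoted above
cost_per_life_saved = 5_000       # USD; deliberately higher than GiveWell's estimates

total_cost = annual_deaths * cost_per_life_saved
print(f"${total_cost / 1e9:.0f} billion per year")   # -> $50 billion per year
```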

This is already well within the capacity of funds available to the Gates Foundation alone, and the Open Philanthropy Project / GiveWell is the main advisor of another multi-billion-dollar foundation, Good Ventures. The true number is almost certainly much smaller because many communicable, maternal, neonatal, and nutritional diseases do not admit of the kinds of cheap mass-administered cures that justify current cost-effectiveness numbers.

Of course, that’s an annual number, not a total number. But if we think that there is a present, rather than a future, funding gap of that size, that would have to mean that it’s within the power of the Gates Foundation alone to wipe out all fatalities due to communicable diseases immediately, a couple times over - in which case the progress really would be permanent, or at least quite lasting. And infections are the major target of current mass-market donor recommendations.

Even if we assume no long-run direct effects (no reduction in infection rates the next year, no flow-through effects, the people whose lives are saved just sit around not contributing to their communities), a large funding gap implies opportunities to demonstrate impact empirically with existing funds. Take the example of malaria alone (the target of the intervention specifically mentioned by CEA in its "dozens of lives" claim). The GBD report estimates 619,800 annual deaths - at $5,000 per life saved, averting all of them would cost about $3 billion per year, and a reduction by half about $1.5 billion - an annual outlay that the Gates Foundation alone could sustain for over a decade, and Good Ventures could certainly maintain for a couple of years on its own.
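
The same arithmetic for the malaria example, again as a minimal sketch treating the GBD death count and the $5,000 figure as the rough assumptions above:

```python
# Back-of-envelope for malaria alone, using the GBD 2017 estimate quoted above.
malaria_deaths = 619_800          # annual deaths, GBD 2017
cost_per_life_saved = 5_000       # USD; same generous assumption as before

cost_to_avert_all = malaria_deaths * cost_per_life_saved
cost_to_halve = cost_to_avert_all / 2
print(f"avert all: ${cost_to_avert_all / 1e9:.1f} billion per year")   # ~ $3.1 billion
print(f"halve deaths: ${cost_to_halve / 1e9:.2f} billion per year")    # ~ $1.55 billion
```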

GiveWell's stated reason for not bothering to monitor statistical data on outcomes (such as malaria incidence and mortality, in the case of AMF) is that the data are too noisy. A reduction like that ought to be very noticeable, and therefore ought to make filling the next year's funding gap much more appealing to other potential donors. (And if the intervention doesn't do what we thought, then potential donors are less motivated to step in - but that's good, because it doesn't work!)

Imagine the world in which funds already allocated are enough to bring deaths due to communicable, maternal, neonatal, and nutritional diseases to zero or nearly zero even for one year. What else would be possible? And if you think that people's revealed preferences correctly assume that this is far from possible, what specifically does that imply about the cost per life saved?

What does this mean?

If the low cost-per-life-saved numbers are meaningful and accurate, then charities like the Gates Foundation and Good Ventures are hoarding money at the price of millions of preventable deaths. If the Gates Foundation and Good Ventures are behaving properly because they know better, then the opportunity to save additional lives cheaply has been greatly exaggerated. My former employer GiveWell in particular stands out, since it publishes such cost-per-life-saved numbers, and yet recommended to Good Ventures that it not fully fund GiveWell's top charities; they were worried that Good Ventures would be saving more than their "fair share" of lives.

In either case, we're not getting these estimates from a source that behaves as though it both cared about and believed them. The process that promoted them to your attention is more like advertising than like science or business accounting. Basic epistemic self-defense requires us to interpret them as marketing copy designed to control your behavior, not unbiased estimates designed to improve the quality of your decisionmaking process.

We should be more skeptical, not less, of vague claims by the same parties to even more spectacular returns on investment for speculative, hard to evaluate interventions, especially ones that promise to do the opposite of what the argument justifying the intervention recommends.

If you give based on mass-marketed high-cost-effectiveness representations, you're buying mass-marketed high-cost-effectiveness representations, not lives saved. Doing a little good is better than buying a symbolic representation of a large amount of good. There's no substitute for developing and acting on your own models of the world.

As far as I can see, this pretty much destroys the generic utilitarian imperative to live like a monk while maximizing earnings, and give all your excess money to the global poor or something even more urgent. Insofar as there's a way to fix these problems as a low-info donor, there's already enough money. Claims to the contrary are either obvious nonsense, or marketing copy by the same people who brought you the obvious nonsense. Spend money on taking care of yourself and your friends and the people around you and your community and trying specific concrete things that might have specific concrete benefits. And try to fix the underlying systems problems that got you so confused in the first place.

* A previous version of this post erroneously read a decadal rate of decline as an annual rate of decline, which implied a stronger conclusion than is warranted. Thanks to Alexander Gordon-Brown for pointing out the error.

Comments (173)

Either scenario clearly implies that these estimates are severely distorted and have to be interpreted as marketing copy designed to control your behavior, not unbiased estimates designed to improve the quality of your decisionmaking process.

While most things have at least some motive to control your behavior, I do think GiveWell outlines a pretty reasonable motivation here, which they explained in detail in the exact blogpost that you linked (and I know that you critiqued that reasoning on your blog, though I haven't found the arguments there particularly compelling). Even if their reasoning is wrong, they might still genuinely believe that their reasoning is right, which I do think is very important to distinguish from "marketing copy designed to control your behavior".

I am often wrong and still try to explain to others why I am right. Sometimes this is caused by bad external incentives, but sometimes it's also just a genuine mistake. Humans are not perfect reasoners and they make mistakes for reasons other than to take advantage of other people (sometimes they are tired, or sometimes they haven't invented physics yet and try to build planes anyway, or sometimes they haven't figured out what good game theory actually looks like and try their best anyways).

For clarity, the claim Givewell-at-the-time made was:

For giving opportunities that are above the benchmark we’ve set but not “must-fund” opportunities, we want to recommend that Good Ventures funds 50%. It’s hard to say what the right long-term split should be between Good Ventures (a major foundation) and a large number of individual donors, and we’ve chosen 50% largely because we don’t want to engineer – or appear to be engineering – the figure around how much we project that individuals will give this year (which would create the problematic incentives associated with “funging” approaches). A figure of 50% seems reasonable for the split between (a) one major, “anchor” donor who has substantial resources and great conviction in a giving opportunity; (b) all other donors combined.

The two claims I've heard about why they chose the 50% split are:

1. There's still more than $8 billion worth of good to do, and they expect their last dollar to be worth more than current dollars.

(I agree that this is at least somewhat sketchy, esp. when you think about Gates Foundation and others, although I think the case is less strong than Benquo is presenting here)

2. Having a charity ha... (read more)

Here's the part of the old series that dealt with this consideration: http://benjaminrosshoffman.com/givewell-case-study-effective-altruism-4/

The problem already exists on multiple levels, and the decision GiveWell made doesn't really alleviate it much. We should expect that GiveWell / Open Philanthropy Project is already distorting its judgment to match its idea of what Good Ventures wants, and the programs it's funding are already distorting their behavior to match their idea of what GiveWell / Open Philanthropy Project wants (since many of the "other" donors aren't actually uncorrelated with GiveWell's recommendations either!).

This line of thinking also seems like pretty much the opposite of the one that suggests that making a large grant to OpenAI in order to influence it would be a good idea, as I pointed out here. The whole arrangement is very much not what someone who was trying to avoid this kind of problem would build, so I don't buy it as an ad hoc justification for this particular decision.

I find this general pattern (providing reasons for things, that if taken seriously would actually recommend a quite different course of action t... (read more)

6Raemon
This ended up taking awhile (and renewed some of my sympathy for the "I tried to discuss this all clearly and dispassionately and basically nobody listened" issue). First, to save future people some effort, here is my abridged summary of what you said relating to "independence." (Also: here is a link directly to the relevant part of the blogpost)

* Relying on a single donor does come with issues.
* There are separate issues for:
  * Givewell's Independence (from Good Ventures)
  * Top Charity Independence (from Givewell)

Top Charity Independence

This section mostly summarized the bits I and Benquo covered in this thread, with Ben's takeaways being: I'm not sure I understand these suggestions yet, but they seem worth mulling over.

GiveWell independence

This section was fairly long (much longer than the previous one). I'm tempted to say "the thing I really cared about was the answer to the first problem". But I've tried to build a habit where, when I ask a question and someone responds in a different frame, I try to grok why their frame is different since that's often more illuminating (and at least seems like good form, building good will so that when I'm confident my frame makes more sense I can cash in and get others to try to understand mine)

Summarizing the section will take awhile and for now I think I just recommend people read the whole thing.
4Raemon
My off-the-cuff, high level response to the Givewell independence section + final conclusions (without having fully digested them) is: Ben seems to be arguing that Givewell should either become much more independent from Good Ventures and OpenPhil (and probably move to a separate office), so that it can actually present the average donor with unbiased, relevant information (rather than information entangled with Good Ventures' goals/models), or

I can see both of these as valid options to explore, and that going to either extreme would probably maximize particular values. But it's not obvious either of those maximizes area-under-the-curve-of-total-values. There's value to people with deep models being able to share those models. Bell Labs worked by having people being able to bounce ideas off each other, casually run into each other, and explain things to each other iteratively.

My current sense is that I wish there was more opportunity for people in the EA landscape to share models more deeply with each other on a casual, day-to-day basis, rather than less (while still sharing as much as possible with the general public so people in the general public can also get engaged). This does come with tradeoffs of neither maximizing independent judgment nor maximizing output nor most easily avoiding particular epistemic and integrity pitfalls, but it's where I expect the most total value to lie.
4Benquo
Trying to build something kind of like Bell Labs would be great! I don't see how it's relevant to the current discussion, though.
2Raemon
Right now, we (maybe? I'm not sure) have something like a few different mini-Bell-labs, that each have their own paradigm (and specialists within that paradigm). The world where Givewell, Good Ventures and OpenPhil share an office is more Bell Labs like than one where they all have different offices. (FHI and UK CEA is a similar situation, as is CFAR/MIRI/LW). One of your suggestions in the blogpost was specifically that they split up into different, fully separate entities. I'm proposing that Bell Labs exists on a spectrum, that sharing office space is a mechanism to be more Bell Labs like, and that generally being more Bell Labs like is better (at least in a vacuum)

(My shoulder Benquo now says something like "but if your models are closely entangled with those of your funders, don't pretend like you are offering neutral services." Or maybe "it's good to share office space with people thinking about physics, because that's object level. It's bad to share office space with the people funding you." Which seems plausible but not overwhelmingly obvious given the other tradeoffs at play)
4Benquo
People working at Bell Labs were trying to solve technical problems, not marketing or political problems. Sharing ideas across different technical disciplines is potentially a good thing, and I can see how FHI and MIRI in particular are a little bit like this, though writing white papers is a very different thing, even within a technical field, from figuring out how to make a thing work. But it doesn't seem like any of the other orgs substantially resemble Bell Labs at all, and the benefits of collocation for nontechnical projects are very different from the benefits for technical projects - they have more to do with narrative alignment (checking whether you're selling the same story), and less to do with opportunities to learn things of value outside the context of a shared story. Collocation of groups representing (others') conflicting interests represents increased opportunity for corruption, not for generative collaboration.
2Raemon
Okay. I'm not sure whether I agree precisely but agree that that's the valid hypothesis, which I hadn't considered before in quite these terms, and it updates my model a bit. The version of this that I'd more obviously endorse goes:

Collocation of groups representing conflicting interests represents increased opportunity for corruption.

Collocation of people who are building models represents increased opportunity for generative collaboration.

Collocation of people who are strategizing together represents increased opportunity for working on complex goals that require shared complex models, and/or shared complex plans.

(Again, as said elsethread, I agree that plans and models are different, but I think they are subject to a lot of the same forces, with plans being subject to some additional forces as well)

These are all true, and indeed in tension.
2Raemon
I also think "sharing a narrative" and "building technical social models" are different, although easily confused (both from the outside and inside – I'm not actually sure which confusion is easier). But you do actually need social models if you're tackling social domains, which do actually benefit from interpersonal generativity.
2Benquo
I think these are a much stronger objection jointly than separately. If Cari Tuna wants to run her own foundation, then it's probably good for her to collocate with the staff of that foundation.
6Raemon
(I do want to note that this is a domain where I'm quite confused about the right answer. I think I stand by the individual comments I made last night but somewhat regret posting them as quickly as I did without thinking about it more and it seems moderately likely that some pieces of my current take on the situation are incoherent)
4Raemon
Thanks. Will re-read the original post and think a bit more.

Some further thoughts on that: I agree social-reality-distortions are a big problem, although I don't think the werewolf/villager-distinction is the best frame. (In answer to Wei_Dai's comment elsethread, "am I a werewolf" isn't a very useful question. You almost certainly are at least slightly cognitively-distorted due to social reality, at least some of the time. You almost certainly sometimes employ obfuscatory techniques in order to give yourself room to maneuver, at least sort of, at least some times.)

But I think thinking in terms of villagers and werewolves leads you to ask the question 'who is a werewolf' moreso than 'how do we systematically disincentivize obfuscatory or manipulative behavior', which seems a more useful question.

I bring this all up in this particular subthread because I think it's important that one thing that incentivizes obfuscatory behavior is giving away billions of dollars.

My sense (not backed up by much legible argument) is that a major source of inefficiencies of the Gates Foundation (and OpenPhil to a lesser degree) is that they've created an entire ecosystem, which both attracts people motivate... (read more)

It seems like there's this general pattern, that occurs over and over, where people follow a path going:

1. Woah. Drowning child argument!

2. Woah. Lives are cheap!

3. Woah, obviously this is important to take action on and scale up now. Mass media! Get the message out!

4. Oh. This is more complicated.

5. Oh, I see, it's even more complicated. (where complication can include moving from global poverty to x-risk as a major focus, as well as realizing that global poverty isn't as simple to solve)

6. Person has transitioned into a more nuanced and careful thinker, and now is one of the people in charge of some kind of org or at least a local community somewhere. (for one example, see CEA's article on shifting from mass media to higher fidelity methods of transmission)

But, the mass media (and generally simplified types of thinking independent of strategy) are more memetically virulent than the more careful thinking, and new people keep getting excited about them in waves that are self-sustaining and hard to clarify (esp. since the original EA infrastructure was created by people at the earlier stages of thinking). So it keeps on being something that a newcomer will bump into most often in EA spaces.

CEA continues to actively make the kinds of claims implied by taking GiveWell's cost per life saved numbers literally, as I pointed out in the post. Exact quote from the page I linked:

If you earn the typical income in the US, and donate 10% of your earnings each year to the Against Malaria Foundation, you will probably save dozens of lives over your lifetime.

Either CEA isn't run by people in stage 6, or ... it is, but keeps making claims like this anyway.

4Eli Tyre
I want to upvote this in particular.
2jessicata
Clearly, the second question is also useful, but there is little hope of understanding, much less effectively counteracting, obfuscatory behavior, unless at least some people can see it as it happens, i.e. detect who is (locally) acting like a werewolf. (Note that the same person can act more/less obfuscatory at different times, in different contexts, about different things, etc)

Sure, I just think the right frame here is "detect and counteract obfuscatory behavior" rather than "detect werewolves." I think the "detect werewolves", or even "detect werewolf behavior" frame is more likely to collapse into tribal and unhelpful behavior at scale [edit: and possibly before then]

(This is for very similar reasons to why EA arguments often collapse into "donate all your money to help people". It's not that the nuanced position isn't there, it just gets outcompeted by simpler versions of itself)

5jessicata
In your previous comment you're talking to Wei Dai, though. Do you think Wei Dai is going to misinterpret the werewolf concept in this manner? If so, why not link to the original post to counteract the possible misinterpretation, instead of implying that the werewolf frame itself is wrong?

(meta note: I'm worried here about the general pattern of people optimizing discourse for "the public" who is nonspecific and assumed to be highly uninformed / willfully misinterpreting / etc, in a way that makes it impossible for specific, informed people (such as you and Wei Dai) to communicate in a nuanced, high-information fashion)

[EDIT: also note that the frame you objected to (the villagers vs werewolf frame) contains important epistemic content that the "let's incentivize non-obfuscatory behavior" frame doesn't, as you agreed in your subsequent comment after I pointed it out. Which means I'm going to even more object to saying "the villagers/werewolf frame is bad" with the defense being that "people might misinterpret this", without offering a frame that contains the useful epistemic content of the misinterpretable frame]
I'm worried here about the general pattern of people optimizing discourse for "the public"

I do agree that this is a pattern to watch out for. I don't think it applies here, but could be wrong. I think it's very important that people be able to hold themselves to higher standards than what they can easily explain to the public, and it seems like a good reflex to notice when people might be trying to do that and point it out.

But I'm worried here about well-informed people caching ideas wrongly, not about the general public. More to say about this, but first want to note:

also note that the frame you objected to (the villagers vs werewolf frame) contains important epistemic content that the "let's incentivize non-obfuscatory behavior" frame doesn't, as you agreed in your subsequent comment after I pointed it out.

Huh - this just feels like a misinterpretation or reading odd things into what I said.

It had seemed obvious to me that to disincentivize obfuscatory behavior, you need people to be aware of what obfuscatory behavior looks like and what to do about it, and it felt weird that you saw that as something different.

It is fair that... (read more)

I'm worried about this, concretely, because after reading Effective Altruism is Self Recommending a while ago, despite the fact that I thought lots about it, and wrote up detailed responses to it (some of which I posted and some of which I just thought about privately), and I ran a meetup somewhat inspired by taking it seriously...
...despite all that, a year ago when I tried to remember what it was about, all I could remember was "givewell == ponzi scheme == bad", without any context of why the ponzi scheme metaphor mattered or how the principle was supposed to generalize. I'm similarly worried that a year from now, "werewolves == bad, hunt werewolves", is going to be the thing I remember about this.
The five-word-limit isn't just for the uninformed public, it's for serious people trying to coordinate. The public can only coordinate around 5-word things. Serious people trying to be informed still have to ingest lots of information and form detailed models, but those models are still going to have major bits that are compressed, out of pieces that end up being about five words. And this is a major part of why many people are confused about Effective Altruism and how to do it right in the first place.

If that's your outlook, it seems pointless to write anything longer than five words on any topic other than how to fix this problem.

I agree with the general urgency of the problem, although I think the frame of your comment is somewhat off. This problem seems... very information-theoretically-entrenched. I have some sense that you think of it as solvable in a way that it's fundamentally not actually solvable, just improvable, like you're trying to build a perpetual motion machine instead of a more efficient engine. There is only so much information people can process.

(This is based entirely off of reading between the lines of comments you've made, and I'm not confident what your outlook actually is here, and apologies for the armchair psychologizing).

I think you can make progress on it, which would look something like:

0) make sure people are aware of the problem

1) building better infrastructure (social or technological), probably could be grouped into a few goals:

  • nudge readers towards certain behavior
  • nudge writers towards certain behavior
  • provide tools that amplify readers' capabilities
  • provide tools that amplify writers' capabilities

2) meanwhile, as a writer, make sure that the concepts you create for the public discourse are optimized for the right kind of compression. Some ideas com... (read more)

Trying to nudge others seems like an attempt to route around the problem rather than solve it. It seems like you tried pretty hard to integrate the substantive points in my "Effective Altruism is self-recommending" post, and even with pretty extensive active engagement, your estimate is that you only retained a very superficial summary. I don't see how any compression tech for communication at scale can compete with what an engaged reader like you should be able to do for themselves while taking that kind of initiative.

We know this problem has been solved in the past in some domains - you can't do a thing like the Apollo project or build working hospitals where cardiovascular surgery is regularly successful based on a series of atomic five-word commands; some sort of recursive general grammar is required, and at least some of the participants need to share detailed models.

One way this could be compatible with your observation is that people have somewhat recently gotten worse at this sort of skill; another is that credit-assignment is an unusually difficult domain to do this in. My recent blog posts have argued that at least the latter is true.

In the former case... (read more)

4Raemon
I think I may have communicated somewhat poorly by phrasing this in terms of 5 words, rather than 5 chunks, and will try to write a new post sometime that presents a more formal theory of what's going on. I mentioned in the comments of the previous post: And:

I do in fact expect that the Apollo project worked via finding ways to cache things into manageable chunks, even for the people who kept the whole project in their head. Chunks can be nested, and chunks can include subtle neural-network-weights that are part of your background experience and aren't quite explicit knowledge. It can be very hard to communicate subtle nuances as part of the chunks if you don't have access to high-volume and preferably in-person communication.

I'd be interested in figuring out how to operationalize this as a bet and check how the project actually worked. What I have heard (epistemic status: heard it from some guy on the internet) is that actually, most people on the project did not have all the pieces in their head, and the only people who did were the pilots. My guess is that the pilots had a model of how to *use* and *repair* all the pieces of the ship, but couldn't have built it themselves. My guess is that "the people who actually designed and assembled the thing" had a model of how all the pieces fit together, but not as deep a model of how and when to use it, and may have only understood the inputs and outputs of each piece.

And meanwhile, while I'm not quite sure how to operationalize the bet, I would bet maybe $50 that (conditional on us finding a good operationalization) the number of people who had the full model or anything like it was quite small. ("You Have About Five Words" doesn't claim you can't have more than 5 words of nuance, it claims that you can't coordinate large groups of people that depend on more than 5 words of nuance. I bet there were less than 100 people and probably closer to 10 who had anything like a full model of everything going on.)
6Benquo
I think I'm unclear on how this constrains anticipations, and in particular it seems like there's substantial ambiguity as to what claim you're making, such that it could be any of these:

* You can't communicate recursive structures or models with more than five total chunks via mass media such as writing.
* You can't get humans to act (or in particular to take initiative) based on such models, so you're limited to direct commands when coordinating actions.
* There exist such people, but they're very few and stretched between very different projects and there's nothing we can do about that.
* ??? Something else ???
4Raemon
I think there are two different anticipation-constraining-claims, similar but not quite what you said there:

Working Memory Learning Hypothesis – people can learn complex or recursive concepts, but each chunk that they learn cannot be composed of more than 7 other chunks. You can learn a 49 chunk concept but first must distill it into seven 7-chunk-concepts, learn each one, and then combine them together.

Coordination Nuance Hypothesis – there are limits to how nuanced a model you can coordinate around, at various scales of coordination. I'm not sure precisely what the limits are, but it seems quite clear that the more people you are coordinating the harder it is to get them to share a nuanced model or strategy. It's easier to have a nuanced strategy with 10 people than 100, 1000, or 10,000.

I'm less confident of the Working Memory hypothesis (it's an armchair inside view based on my understanding of how working memory works). I'm fairly confident in the Coordination Nuance Hypothesis, which is based on observations about how people actually seem to coordinate at various scales and how much nuance they seem to preserve.

In both cases, there are tools available to improve your ability to learn (as an individual), disseminate information (as a communicator), and keep people organized (as a leader). But none of the tools change the fundamental equation, just the terms.

Anticipation Constraints: The anticipation-constraint of the WMLH is "if you try to learn a concept that requires more than 7 chunks, you will fail. If a concept requires 12 chunks, you will not successfully learn it (or will learn a simplified bastardization of it) until you find a way to compress the 12 chunks into 7. If you have to do this yourself, it will take longer than if an educator has optimized it for you in advance."

The anticipation constraint of the CNH is that if you try to coordinate with 100 people of a given level of intelligence, the shared complexity of the plan that you are e
2Benquo
CNH is still ambiguous between "nuanced plan" and "nuanced model" here, and those seem extremely different to me.
2Raemon
I agree they are different but think it is the case that with a larger group you have a harder time with either of them, for roughly the same reasons at roughly the same rate of increased difficulty.
2Raemon
The Working Memory Hypothesis says that Bell Labs is useful, in part, because whenever you need to combine multiple interdisciplinary concepts that are each complicated to invent a new concept... instead of having to read a textbook that explains it one-particular-way (and, if it's not your field, you'd need to get up to speed on the entire field in order to have any context at all) you can just walk down the hall and ask the guy who invented the concept "how does this work" and have them explain it to you multiple times until they find a way to compress it down into 7 chunks, optimized for your current level of understanding.
2Raemon
A slightly more accurate anticipation of the CNH is:

* people need to spend time learning a thing in order to coordinate around it. At the very least, the more time you need to spend getting people up to speed on a model, the less time they have to actually act on that model
* people have idiosyncratic learning styles, and are going to misinterpret some bits of your plan, and you won't know in advance which ones. Dealing with this requires individual attention, noticing their mistakes and correcting them. Middle managers (and middle "educators") can help to alleviate this, but every link in the chain reduces your control over what message gets distributed. If you need 10,000 people to all understand and act on the same plan/model, it needs to be simple or robust enough to survive 10,000 people misinterpreting it in slightly different ways
* This gets even worse if you need to change your plan over time in response to new information, since now people are getting it confused with the old plan, or they don't agree with the new plan because they signed up for the old plan, and then you have to Do Politics to get them on board with the new plan.
* At the very least, if you've coordinated perfectly, each time you change your plan you need to shift from "focusing on execution" to "focusing on getting people up to speed on the new model."

when I tried to remember what it was about, all I could remember [...] I'm similarly worried that a year from now

Make spaced repetition cards?

4Raemon
The way that I'd actually do this, and plan to do this (in line with Benquo's reply to you), is to repackage the concept into something that I understand more deeply and which I expect to unpack more easily in the future. Part of this requires me to do some work for myself (no amount of good authorship can replace putting at least some work into truly understanding something). Part of this has to do with me having my own framework (rooted in Robust Agency among other things) which is different from Benquo's framework, and Ben's personal experience playing werewolf. But a lot of my criticism of the current frame is that it naturally suggests compacting the model in the wrong way.

(to be clear, I think this is fine for a post that represents a low-friction strategy to post your thoughts and conversations as they form, without stressing too much about optimizing pedagogy. I'm glad Ben posted the Villager/Werewolf post. But I think the presentation makes it harder to learn than it needs to be, and is particularly ripe for being misinterpreted in a way that benefits rather than harms werewolves, and if it's going to be coming up in conversation a lot I think it'd be worth investing time in optimizing it better)
2Benquo
That seems like the sort of hack that lets you pass a test, not the sort of thing that makes knowledge truly a part of you. To achieve the latter, you have to bump it up against your anticipations, and constantly check to see not only whether the argument makes sense to you, but whether you understand it well enough to generate it in novel cases that don’t look like the one you’re currently concerned with.
6Zack_M_Davis
I think it's possible to use in a "mindful" way even if most people are doing it wrong? The system reminding you what you read n days ago gives you a chance to connect it to the real world today when you otherwise would have forgotten.

Holden Karnofsky explicitly disclaimed the "independence via multiple funders" consideration as not one that motivated the partial funding recommendation.

If you give based on mass-marketed high-cost-effectiveness representations, you're buying mass-marketed high-cost-effectiveness representations, not lives saved. Doing a little good is better than buying a symbolic representation of a large amount of good. There's no substitute for developing and acting on your own models of the world.
As far as I can see, this pretty much destroys the generic utilitarian imperative to live like a monk and give all your excess money to the global poor or something even more urgent. Insofar as there's a way to fix these problems as a low-info donor, there's already enough money. Claims to the contrary are either obvious nonsense, or marketing copy by the same people who brought you the obvious nonsense. Spend money on taking care of yourself and your friends and the people around you and your community and trying specific concrete things that might have specific concrete benefits. And try to fix the underlying systems problems that got you so confused in the first place.

Some people on your blog have noted that this doesn't seem true, at least, because GiveDirectly still exists (both literally and as a sort of metaphorical pri... (read more)

It seems to me that if attending to the ordinary business of your life, including career and hobbies, amounts to doing nothing, there's something very deeply wrong happening, and people would do well to attend to that problem first. On the other hand, doing nothing is preferable to doing harm, and it's entirely possible that many people are actually causing harm, e.g. by generating misinformation, and it would be better if they just stopped, even if they can't figure out how to do whatever they were pretending to do.

I certainly don't think that someone donating their surplus to GiveDirectly, or living more modestly in order to share more with others, is doing a wrong thing. It's admirable to want to share one's wealth with those who have less.

It seems to me that if attending to the ordinary business of your life, including career and hobbies, amounts to doing nothing, there’s something very deeply wrong happening, and people would do well to attend to that problem first.

I'm tempted to answer this statement by saying that something very deeply wrong is clearly happening, e.g., there's not nearly enough effort in the world to prevent coordination failures that could destroy most of the potential value of the universe, and attending to that problem would involve doing something besides or in addition to attending to the ordinary business of life. I feel like this is probably missing your point though. Do you want to spell out what you mean more, e.g., is there some other "something very deeply wrong happening" you have in mind, and if so what do you think people should do about it?

If people who can pay their own rent are actually doing nothing by default, that implies that our society's credit-allocation system is deeply broken. If so, then we can't reasonably hope to get right answers by applying simplified economic models that assume credit-allocation is approximately right, the way I see EAs doing, until we have a solid theoretical understanding of what kind of world we actually live in.

Here's a simple example: Robin Hanson's written a lot about how it's not clear that health care is beneficial on the margin. This is basically unsurprising if you think there are a lot of bullshit jobs. But 80,000 Hours's medical career advice assumes that the system basically knows what it's doing and that health care delivers health on the margin - the only question is how much.

It seems to me that if an intellectual community isn't resolving these kind of fundamental confusions (and at least one side has to be deeply confused here, or at least badly misinformed), then it should expect to be very deeply confused about philanthropy. Not just in the sense of "what is the optimal strategy," but in the sense of "what does giving away money even do."

[I wrote the 80k medical careers page]

I don't see there as being a 'fundamental confusion' here, and not even that much of a fundamental disagreement.

When I crunched the numbers on 'how much good do doctors do' it was meant to provide a rough handle on a plausible upper bound: even if we beg the question against critics of medicine (of which there are many), and even if we presume any observational marginal response is purely causal (and purely mediated by doctors), the numbers aren't (in EA terms) that exciting in terms of direct impact.

In talks, I generally use the upper 95% confidence bound or central estimate of the doctor coefficient as a rough steer (it isn't a significant predictor, and there's reasonable probability mass on the impact being negative): although I suspect there will be generally unaccounted confounders attenuating 'true' effect rather than colliders masking it, these sort of ecological studies are sufficiently insensitive to either to be no more than indications - alongside the qualitative factors - that the 'best (naive) case' for direct impact as a doctor isn't promising.

There's little ... (read more)

Something that nets out to a small or no effect because large benefits and harms cancel out is very different (with different potential for impact) than something like, say, faith healing, where you can’t outperform just by killing fewer patients. A marginalist analysis that assumes that the person making the decision doesn’t know their own intentions & is just another random draw of a ball from an urn totally misses this factor.

8Thrasymachus
Happily, this factor has not been missed by either my profile or 80k's work here more generally. Among other things, we looked at:

* Variance in impact between specialties and (intranational) location (1) (as well as variance in earnings for E2G reasons) (2, also, cf.)
* Areas within medicine which look particularly promising (3)
* Why 'direct' clinical impact (either between or within clinical specialties) probably has limited variance versus (e.g.) research (4, also)

I also cover this in talks I have given on medical careers, as well as when offering advice to people contemplating a medical career or how to have a greater impact staying within medicine. I still think trying to get a handle on the average case is a useful benchmark.
2Douglas_Knight
I just want to register disagreement.
4Eli Tyre
I want to double click on "credit-allocation system." It sounds like an important part of your model, but I don't really know what you mean. Something like "answering the question of 'who is responsible for the good in our world?'" Like I'm mis-allocating credit to the health sector, which is (maybe) not actually responsible for much good?

What does this have to do with whether people who can pay their rent are doing something or nothing by default? Is your claim that by participating in the economy, they should be helping by default (they pay their landlord, who buys goods, which pays manufacturers, etc.), and if that isn't having a positive impact, that must mean that society isn't collectively able to identify the places where value comes from? I think I don't get it.

helping by default (they pay their landlord, who buys goods, which pays manufacturers, etc.)

The exact opposite - getting paid should imply something. The naive Econ 101 view is that it implies producing something of value. "Production" is generally measured in terms of what people are willing to pay for.

If getting paid has little to do with helping others on net, then our society's official unit of account isn't tracking production (Talents), GDP is a measurement of the level of coercion in a society (There Is a War), the bullshit jobs hypothesis is true, we can't take job descriptions at face value, and CEA's advice to build career capital just means join a powerful gang.

This undermines enough of the core operating assumptions EAs seem to be using that the right thing to do in that case is try to build better models of what's going on, not act based on what your own models imply is disinformation.

2Eli Tyre
I'm trying to make sense of what you're saying here, but bear with me, we have a large inferential distance. Let's see.

* The Talents piece was interesting. I bet I'm still missing something, but I left a paraphrase as a comment over there.
* I read all of "There Is a War", but I still don't get the claim, "GDP is a measurement of the level of coercion in a society." I'm going to keep working at it.
* I basically already thought that lots of jobs are bullshit, but I might skim or listen to David Graeber's book to get more data.
* Oh. He's the guy that wrote Debt: the First 5000 Years! (Which makes a very similar point about money as the middle parts of this post.)

Given my current understanding, I don't get either the claim that "CEA's advice to build career capital just means join a powerful gang" or that "This undermines enough of the core operating assumptions EAs seem to be using." I do agree that the main work to be done is figuring out what is actually going on in the world and how the world actually works. I'm going to keep reading and thinking and try to get what you're saying.

. . .

My initial response before I followed your links, so this is at least partially obviated:

1. First of all... Yep, it does seem pretty weird that we maybe live in a world where most people are paid but produce no wealth. As a case in point, my understanding is that a large fraction of programmers actually add negative value, by adding bugs to code. It certainly seems correct to me, to stop and be like "There are millions of people up there in those skyscrapers, working in offices, and it seems like (maybe) a lot of them are producing literally no value. WTF?! How did we end up in a world like this! What is going on?"

My current best guess is the following: Some people are creating value, huge amounts of value in total (we live in a very rich society, by historical standards), but many (most?) people are doing useless work. But for employers, the overhead of iden
5Benquo
I think it's analytically pretty simple. GDP involves adding up all the "output" into a single metric. Output is measured based on others' willingness to pay. The more payments are motivated by violence rather than the production of something everyone is glad to have more of, the more GDP measures expropriation rather than production. There Is A War is mostly about working out the details & how this relates to macroeconomic ideas of "stimulus," "aggregate demand," etc, but if that analytic argument doesn't make sense to you, then that's the point we should be working out.

Ok. This makes sense to me. GDP measures a mix of trades that occur due to simple mutual benefit and "trades" that occur because of extortion or manipulation.

If you look at the combined metric, and interpret it to be a measure of only the first kind of trade, you're likely overstating how much value is being created, perhaps by a huge margin, depending on what percentage of trades are based on violence.

But I'm not really clear on why you're talking about GDP at all. It seems like you're taking the claim that "GDP is a bad metric for value creation", and concluding that "interventions like GiveDirectly are misguided."

Rereading this thread, I come to

If people who can pay their own rent are actually doing nothing by default, that implies that our society's credit-allocation system is deeply broken. If so, then we can't reasonably hope to get right answers by applying simplified economic models that assume credit-allocation is approximately right, the way I see EAs doing, until we have a solid theoretical understanding of what kind of world we actually live in.

Is the argument something like...

  • 1. GDP is irreparably
... (read more)
2Benquo
This is something like a 9 - gets the overall structure of the argument right with some important caveats:

I'd make a slightly weaker claim for 2 - that credit-allocation methods have to be presumed broken until established otherwise, and no adequate audit has entered common knowledge.

An important part of the reason for 3 is that, the larger the share of "knowledge work" that we think is mostly about creating disinformation, the more one should distrust any official representations one hasn't personally checked, when there's any profit or social incentive to make up such stories.

Based on my sense of the character of the people I met while working at GiveWell, and the kind of scrutiny they said they applied to charities, I'd personally be surprised if GiveDirectly didn't actually exist, or simply pocketed the money. But it's not at all obvious to me that people without my privileged knowledge should be sure of that.

Ok. Great.

credit-allocation methods have to be presumed broken until established otherwise, and no adequate audit has entered common knowledge.

That does not seem obvious to me. It certainly does not seem to follow from merely the fact that GDP is not a good measure of national welfare. (In large part, because my impression is that economists say all the time that GDP is not a good measure of national welfare.)

Presumably you believe that point 2 holds, not just because of the GDP example, but because you've seen many, many examples (like health care, which you mention above). Or maybe because you have an analytical argument that the sort of thing that happens with GDP has to generalize to other credit allocation systems?

Is that right? Can you say more about why you expect this to be a general problem?

. . .

I have a much higher credence that GiveDirectly exists and is doing basically what it says it is doing than you do.

If I do a stack trace on why I think that...

  • I have a background expectation that the most blatant kinds of fraudulence will be caught. I live in a society that has laws, including laws about what sorts of things non-profits are allowed to do, and not do, with m
... (read more)
5Benquo
Consider how long Theranos operated, its prestigious board of directors, and the fact that it managed to make a major sale to Walgreens before blowing up. Consider how prominent Three Cups of Tea was (promoted by a New York Times columnist), for how long, before it was exposed. Consider that official US government nutrition advice still reflects obviously distorted, politically motivated research from the early 20th Century. Consider that the MLM company Amway managed to bribe Harvard to get the right introductions to Chinese regulators. Scams can and do capture the official narrative and prosecute whistleblowers. Consider that pretty much by definition we're not aware of the most successful scams. Related: The Scams Are Winning
2Eli Tyre
[Note that I'm shifting the conversation some. The grandparent was about things like GiveDirectly, and this is mostly talking about large, rich companies like Theranos.]

One could look at this evidence and think:

Or a person might look at this evidence and think:

Because this is a situation involving hidden evidence, I'm not really sure how to distinguish between those worlds, except for something like a randomized audit: 0.001% of companies in the economy are randomly chosen for a detailed investigation, regardless of any allegations.

I would expect that we live in something closer to the second world, if for no other reason than that this world looks really rich, and that wealth has to be created by something other than outright scams (which is not to say that everyone isn't also dabbling in misinformation). I would be shocked if more than one of the S&P 500 companies was a scam on the level of Theranos. Does your world model predict that some of them are?
4Benquo
Coca-Cola produces something about as worthless as Theranos machines, substituting the experience of a thing for the thing itself, & is pretty blatant about it. The scams that “win” gerrymander our concept-boundaries to make it hard to see. Likewise Pepsi. JPMorgan Chase & Bank of America, in different ways, are scams structurally similar to Bernie Madoff but with a legitimate state subsidy to bail them out when they blow up. This is not an exhaustive list, just the first 4 that jumped out at me. Pharma is also mostly a scam these days, nearly all of the extant drugs that matter are already off-patent. Also Facebook, but “scam” is less obviously the right category.

Somewhat confused by the coca-cola example. I don't buy coke very often, but it seems usually worth it to me when I do buy it (in small amounts, since I do think it tastes pretty good). Is the claim that they are not providing any value some kind of assumption about my coherent extrapolated volition? 

2Benquo
It was originally marketed as a health tonic, but its apparent curative properties were due to the powerful stimulant and analgesic cocaine, not any health-enhancing ingredients. Later the cocaine was taken out (but the “Coca” in the name retained), so now it fools the subconscious into thinking it’s healthful with - on different timescales - mass media advertising, caffeine, and refined sugar. It’s less overtly a scam now, in large part because it has the endowment necessary to manipulate impressions more subtly at scale.

I mean, I agree that Coca Cola engages in marketing practices that try to fabricate associations that are not particularly truth-oriented, but that's very different from the thing with Theranos. 

I model Coca Cola mostly as damaging for my health, and model its short-term positive performance effects to be basically fully mediated via caffeine, but I still think it's providing me value above and beyond those those benefits, and outweighing the costs in certain situations. 

Theranos seems highly disanalogous, since I think almost no one who knew the actual extent of Theranos' capabilities, and had accurate beliefs about its technologies, would give money to them. I have pretty confident bounds on the effects of Coca-Cola, and still decide to sometimes give them my money, and would be really highly surprised if there turned out to be a fact about Coke that its internal executives are aware of (even subconsciously) that would drastically change that assessment for me, and it doesn't seem like that's what you are arguing for. 

3Benquo
Both - it would be worrying to have an analytic argument but not notice lots of examples, and it would require much more investigation (and skepticism) if it were happening all the time for no apparent reason. I tried to gesture at the gestalt of the argument in The Humility Argument for Honesty. Basically, all conflict between intelligent agents contains a large information component, so if we're fractally at war with each other, we should expect most info channels that aren't immediately life-support-critical to turn into disinformation, and we should expect this process to accelerate over time. For examples, important search terms are "preference falsification" and "Gell-Mann amnesia". I don't think I disagree with you on GiveDirectly, except that I suspect you aren't tracking some important ways your trust chain is likely to make correlated errors along the lines of assuming official statistics are correct. Quick check: what's your 90% confidence interval for global population, after Googling the official number, which is around 7.7 billion?
2Eli Tyre
Interesting. I don't know, certainly not off by more than a half billion in either direction? I don't know how hard it is to estimate the number of people on earth. It doesn't seem like there's much incentive to mess with the numbers here.
2Raemon
Guessing at potential confounders: there may be incentives for individual countries (or cities) to inflate their numbers (to seem more important), or to deflate their numbers (to avoid taxes). 
2Benquo
It's not really about how many jobs are bullshit, so much as what it means to do a bullshit job. On Graeber's model, bullshit jobs are mostly about propping up the story that bullshit jobs are necessary for production. Moral Mazes might help clarify the mechanism, and what I mean about gangs - a lot of white-collar work involves a kind of participatory business theater, to prop up the ego claims of one's patron. The more we think the white-collar world works this way, the more skeptical we should be of the literal truth of claims to be "working on" some problem or other using conventional structures.
2Eli Tyre
My intuitive answer to the question "What is a gang?":

  • A gang is an organization of thugs that claims resources, like territory or protection money, via force or the threat of force.

Is that close to how you are using the term? What's the important/relevant feature of a "gang", when you say "CEA’s advice to build career capital just means join a powerful gang"? Do you mean something like the following? (This is a probably incorrect paraphrase, not a quote) Am I on the right track at all? Or is it more direct than that? Is any of that right?
5Benquo
Overall your wording seems pretty close. I think it's actually a combination of this and actual coordination to freeze out marginal gangs, or things that aren't gangs, from access to the system. Venture capitalists, for example, will tend to fund people who feel like members of the right gang, use the right signifiers in the right ways, went to the right schools, etc. Everyone I've talked with about their experience pitching startups has reported that making judgments on the merits is at best highly noncentral behavior. If enough of the economy is cartelized, and the cartels are taxing noncartels indirectly via the state, then it doesn't much matter whether the cartels apply force directly, though sometimes they still do. Building career capital basically involves sending, or learning how to send, a costly signal of membership in a prestigious gang, including some mixture of job history, acculturation, and integrating socially into a network.
3Eli Tyre
If I replaced the word "gang" here with the word "ingroup" or "club" or "class", would that seem just as good? In these sentences in particular... and ...I'm tempted to replace the word "gang" with the word "ingroup". My guess is that you would say, "An ingroup that coordinates to exclude / freeze out non-ingroup-members from a market is a gang. Let's not mince words."

Maybe more specifically an ingroup that takes over a potentially real, profitable social niche, squeezes out everyone else, and uses the niche’s leverage to maximize rent extraction, is a gang.

5Raemon
While I'm not sure I get it either, I think Benquo's frame involves a high-level disagreement with the sort of question that utilitarianism asks in the first place (as well as the sort of questions that many non-utilitarian variants of EA are asking). Or rather, it objects to the frame in which the question is often asked. My attempt to summarize the objection (curious how close this lands for Benquo) is:

"Much of the time, people have internalized moral systems not as something they get to reason about and have agency over, but as something imposed from outside, that they need to submit to. This is a fundamentally unhealthy way to relate to morality. A person in a bad relationship is further away from a healthy relationship than a single person, because first they have to break up with their spouse, which is traumatic and exhausting. A person with a flawed moral foundation trying to figure out how to do good is further away from figuring out how to do good than a person who is just trying to make a generally good life for themselves.

This is important: a) because if you try to impose your morality on people who are "just making a good life for themselves", you are continuing to build societal momentum in a direction that alienates people from their own agency and wellbeing; and b) because "just making a good life for themselves" is, in fact, one of the core goods one can do, and in a just world it'd be what most people were doing."

I think There is A War is one of the earlier Benquo pieces exploring this (or: probably there are earlier-still ones, but it's the one I happened to re-read recently). A more recent comment is his objection to Habryka's take on Integrity (link to comment deep in the conversation that gets to the point, but might require reading the thread for context). My previous attempt to pass his ITT may also provide some context.  
3Said Achmiz
Why is it impossible to give money locally, yet spend some small amount of thought on where/how to do so? Is effectiveness incompatible with philanthrolocalism…?
gjm

Many people find that thinking about effectiveness rapidly makes local giving seem a less attractive option.
The thought processes I can see that might lead someone to give locally in pursuit of effectiveness are quite complex ones:

  • Trading off being able to do more good per dollar in poorer places against the difficulty of ensuring that useful things actually happen with those dollars. Requires careful thought about just how severe the principal/agent problems, lack of visibility, etc, are.
  • Giving explicitly higher weighting to the importance of people and causes located near to oneself, and trading off effectiveness against closeness. Requires careful thought about one's own values, and some sort of principled way of mapping closeness to importance.

Those are both certainly possible but I think they take more than a "small amount of thinking". Of course there are other ways to end up prioritizing local causes, but I think those go in the "without reflecting much" category. It seems to me that a modest amount of (serious) thinking about effectiveness makes local giving very hard to justify for its effectiveness, unless you happen to have a really exceptional local cause on your doorstep.

1Said Achmiz
I’m afraid I completely disagree, and in fact find this view somewhat ridiculous. “Giving explicitly higher weighting to the importance of people and causes located near to oneself” (the other clause in that sentence strikes me as tendentious and inaccurate…) is not, in fact, complex. It is a perfectly ordinary—and perfectly sensible—way of thinking about, and valuing, the world. That doing good in contexts distant from oneself (both in physical and in social/cultural space) is quite difficult (the problems you allude to are indeed very severe, and absolutely do not warrant a casual dismissal) merely turns the aforementioned perspective from “perfectly sensible” to “more sensible than any other view, absent some quite unusual extenuating circumstances or some quite unusual values”.

Now, it is true that there is a sort of “valley of bad moral philosophy”, where if you go in a certain philosophical direction, you will end up abandoning good sense, and embracing various forms of “globalist” perspectives on altruism (including the usual array of utilitarian views), until you reach a sufficient level of philosophical sophistication to realize the mistakes you were making. (Obviously, many people never make it out of the valley at all—or at least they haven’t yet…) So in that sense, it requires ‘more than a “small amount of thinking”’ to get to a “localist” view. But… another alternative is to simply not make the mistakes in question in the first place.

Finally, it is a historical and terminological distortion (and a most unfortunate one) to take “effectiveness” (in the context of discussions of charity/philanthropy) to mean only effectiveness relative to a moral value. There is nothing at all philosophically inconsistent in selecting a goal (on the basis, presumably, of your values), and then evaluating effectiveness relative to that goal. There is a good deal of thinking, and of research, to be done in service of discovering what sort of charitable activity most effectively serves that goal.

I haven't read your entire series of posts on GiveWell and effective altruism, so I'm basing this comment mostly on just this post. It seems to jump all over the place.

You say:

Either charities like the Gates Foundation and Good Ventures are hoarding money at the price of millions of preventable deaths, or the low cost-per-life-saved numbers are wildly exaggerated. My former employer GiveWell in particular stands out as a problem here, since it publishes such cost-per-life-saved numbers, and yet recommended to Good Ventures that it not fully fund GiveWell's top charities; they were worried that this would be an unfair way to save lives.

This sets up a false dichotomy. Both the Gates Foundation and Good Ventures are focused on areas in addition to funding interventions in the developing world. Obviously, they both believe those other areas, e.g., in Good Ventures' case, existential risk reduction, present them with the opportunity to prevent just as many, if not more, deaths than interventions in the developing world. Of course, a lot of people disagree with the idea that something like AI alignment, which Good Ventures funds, is in any way comparable ... (read more)

8Benquo
I think I can summarize my difficulties with this comment a bit better now. (1) It's quite long, and brings up many objections that I dealt with in detail in the longer series I linked to. There will always be more excuses someone can generate that sound facially plausible if you don't think them through. One has to limit scope somehow, and I'd be happy to get specific constructive suggestions about how to do that more clearly. (2) You're exaggerating the extent to which Open Philanthropy Project, Good Ventures, and GiveWell have been separate organizations. The original explanation of the partial funding decision, which was a decision about how to recommend allocating Good Ventures's capital, was published under the GiveWell brand, but under Holden's name. My experience working for the organizations was broadly consistent with this. If they've since segmented more, that sounds like an improvement, but doesn't help enough with the underlying revealed-preferences problem.
4Raemon
I don't know that this suggestion is best – it's a legitimately hard problem – but a policy I think would be pretty reasonable is: When responding to lengthy comments/posts that include at least 1-2 things you know you dealt with in a longer series, one option is to simply leave it at: "hmm, I think it'd make more sense for you to read through this longer series and think carefully about it before continuing the discussion" rather than trying to engage with any specific points. And then shifting the whole conversation into a slower mode, where people are expected to take a day or two in between replies to make sure they understand all the context. (I think I would have had similar difficulty responding to Evan's comment as what you describe here)
4Benquo
To clarify a bit - I'm more confused about how to make the original post more clearly scope-limited, than about how to improve my commenting policy. Evan's criticism in large part deals with the facts that there are specific possible scenarios I didn't discuss, which might make more sense of e.g. GiveWell's behavior. I think these are mostly not coherent alternatives, just differently incoherent ones that amount to changing the subject. It's obviously not possible to discuss every expressible scenario. A fully general excuse like "maybe the Illuminati ordered them to do it as part of a secret plot," for instance, doesn't help very much, since that posits an exogenous source of complications that isn't very strongly constrained by our observations, and doesn't constrain our future anticipations very well. We always have to allow for the possibility that something very weird is going on, but I think "X or Y" is a reasonable short hand for "very likely, X or Y" in this context. On the other hand, we can't exclude scenarios arbitrarily. It would have been unreasonable for me, on the basis of the stated cost-per-life-saved numbers, to suggest that the Gates Foundation is, for no good reason, withholding money that could save millions of lives this year, when there's a perfectly plausible alternative - that they simply don't think this amazing opportunity is real. This is especially plausible when GiveWell itself has said that its cost per life saved numbers don't refer to some specific factual claim. "Maybe partial funding because AI" occurred to enough people that I felt the need to discuss it in the long series (which addressed all the arguments I'd heard up to that point), but ultimately it amounts to a claim that all the discourse about saving "dozens of lives" per donor is beside the point since there's a much higher-leverage thing to allocate funds to - in which case, why even engage with the claim in the first place? Any time someone addresses a specific part
8Benquo
They share a physical office! Good Ventures pays for it! I'm not going to bother addressing comments this long in depth when they're full of basic errors like this.
4habryka
For the record, this is no longer going to be true starting in, I think, about a month, since GiveWell is moving to Oakland and Open Phil is staying in SF.
2Evan_Gaensbauer
Otherwise, here is what I was trying to say:

1. GiveWell focuses on developing-world interventions, and not on AI alignment or any of Open Phil's other focus areas, which means they aren't responsible for anything to do with OpenAI.

2. It's unclear from what you write what role, if any, Open Phil plays in the relationship between GiveWell and Good Ventures in GiveWell's annual recommendations to Good Ventures. If it were clear Open Phil was an intermediary in that regard somehow, then your treating all three projects under one umbrella, as one project with no independence between any of them, might make sense. You didn't establish that, so it doesn't make sense.

3. Good Ventures signs off on all the decisions GiveWell and Open Phil make, and they should be held responsible for the decisions of both GiveWell and Open Phil. Yet you know that there are people who work for GiveWell and Open Phil who make decisions that are completed before Good Ventures signs off on them. Or I assume you do, since you worked for GiveWell. If you somehow know it's all top-down both ways, that Good Ventures tells Open Phil and GiveWell each what it wants from them, and Open Phil and GiveWell just deliver the package, then say so.

Yes, they do share the same physical office. Yes, Good Ventures pays for it. Shall I point to mistakes made by one of MIRI, CFAR, or LW, but not more than one, and then link the mistake made, whenever, and however tenuously, to all of those organizations? Should I do the same to any two or more other AI alignment/x-risk organizations you favour, who share offices or budgets in some way? Shall I point out to all the communities of x-risk reduction, long-term world improvement, EA, and rationality that Michael Arc/Vassar and some of his friends formed a "Vassar crowd" that formed a cell aimed at unilaterally driving a wedge between x-risk/rationality and EA, which included you, Sarah Constantin, Michael Arc, Alyssa Vance, amo
8Benquo
Then why do Singer and CEA keep making those exaggerated claims? I don't see why they'd do that if they didn't think it was responsible for persuading at least some people.
2Evan_Gaensbauer
I don't know. Why don't you ask Singer and/or the CEA? They probably believe it is responsible for persuading at least some people. I imagine the CEA does it through some combo of revering Singer, thinking it's good for optics, and not thinking the level of precision at which the error is taking place is so grievous as to be objectionable in the context they're presented in.
2Benquo
I don't expect to get an honest answer to "why do you keep making dishonest claims?", for reasons I should hope are obvious. I had hoped I might have gotten any answer at all from you about why *you* (not Singer or CEA) claim that Singer's thesis is not based exclusively on a specific set of cost-effectiveness estimates about specific causes from specific organizations, or why you think it's relevant that Singer's thesis isn't the exclusive basis for the effective altruism movement.
6Benquo
Pretty weird that restating a bunch of things GiveWell says gets construed as an attack on GiveWell (rather than the people distorting what it says), and that people keep forgetting or not noticing those things, in directions that make giving based on GiveWell's recommendations seem like a better deal than it is. Why do you suppose that is?
2Evan_Gaensbauer
I believe it's because people get their identities very caught up in EA, and, for EAs focused on global poverty alleviation, in GiveWell and its recommended charities. So, when someone like you criticizes GiveWell, a lot of them react in primarily emotional ways, creating a noisy space where the sound of messages like yours gets lost. So the points you're trying to make about GiveWell, or the similar points many others have tried to make, don't stick for enough of the EA community, or whoever else the relevant groups of people are. Thus, in the collective memory of the community, these things are forgotten or not noticed. Then the cycle repeats itself each time you write another post like this.

So, EA largely isn't about actually doing altruism effectively (which requires having correct information about what things actually work, e.g. estimates of cost per life saved, and not adding noise to conversations about these), it's an aesthetic identity movement around GiveWell as a central node, similar to e.g. most popular environmentalism (which, for example, opposes nuclear power despite it being good for the environment, because nuclear power is discordant with the environmentalism identity/aesthetics, and Greenpeace is against it), which is also claiming credit for, literally, evaluating and acting towards the moral good (as environmentalism claims credit for evaluating and acting towards the health of the planet). This makes sense as an explanation of the sociological phenomenon, and also implies that, according to the stated values of EA, EA-as-it-is ought to be replaced with something very, very different.

[EDIT: noting that what you said in another comment also agrees with the aesthetic identity movement view: "I imagine the CEA does it through some combo of revering Singer, thinking it's good for optics, and not thinking the level of precision at which the error is taking place is so grievous as to be objectionable in the context they're presented in."]

I agree with your analysis of the situation, but I wonder whether it’s possible to replace EA with anything that won’t turn into exactly the same thing. After all, the EA movement is the result of some people noticing that much of existing charity is like this, and saying “we should replace that with something very, very different”…

And EA did better than the previous things, along some important dimensions! And people attempting to do the next thing will have EA as an example to learn from, which will (hopefully) prompt them to read and understand sociology, game theory, etc. The question of "why do so many things turn into aesthetic identity movements" is an interesting and important one, and, through study of this question (and related ones), it seems quite tractable to have a much better shot at creating something that produces long-term value than by not studying those questions.

Success is nowhere near guaranteed, and total success is quite unlikely, but, trying again (after a lot of study and reflection) seems like a better plan than just continuing to keep the current thing running.

The question of “why do so many things turn into aesthetic identity movements” is an interesting and important one, and, through study of this question (and related ones), it seems quite tractable to have a much better shot at creating something that produces long-term value than by not studying those questions.

I agree that studying this is quite important. (If, of course, such an endeavor is entered into with the understanding that everyone around the investigators, and indeed the investigators themselves, have an interest in subverting the investigation. The level of epistemic vigilance required for the task is very unusually high.)

It is not obvious to me that further attempts at successfully building the object-level structure (or even defining the object-level structure) are warranted, prior to having substantially advanced our knowledge on the topic of the above question. (It seems like you may already agree with me, on this; I am not sure if I’m interpreting your comment correctly.)

6Evan_Gaensbauer
I'm going to flip this comment on you, so you can understand how I'm seeing it, and why I fail to see why the point you're trying to make matters. One could nitpick about how HPMoR has done much more to save lives through AI alignment than GiveWell has ever done through developing-world interventions, and I'll go share that info, as coming from Jessica Taylor, in defence of (at least some of) what Ben Hoffman is trying to achieve, perhaps among other places on the public internet, and we'll see how that goes. The point I was trying to make is that much of the rationality community has nothing to do with the community's stated values. So, in stating your personal impression of EA, based on Sarah's blog post, as though it were a fact, and as if it meant something unique about EA that isn't true of other human communities, you've argued for too much. Also, in this comment I indicated my awareness of what was once known as the "Vassar crowd", which I recall you were a part of. While we're here, would you mind explaining to me what your beef is with the EA community as misleading in myriad ways to the point of menacing x-risk reduction efforts, and other pursuits of what is true and good, without applying the same pressure to parts of the rationality community that pose the same threat, or, for that matter, any other group of people who does the same? What makes EA special?

So, rationality largely isn't actually about doing thinking clearly [...] it's an aesthetic identity movement around HPMoR as a central node [...] This makes sense as an explanation of the sociological phenomenon, and also implies that, according to the stated value of rationality, rationality-as-it-is ought to be replaced with something very, very different.

This just seems obviously correct to me, and I think my failure to properly integrate this perspective until very recently has been extremely bad for my sanity and emotional well-being.

Specifically: if you fail to make a hard mental distinction between "rationality"-the-æsthetic-identity-movement and rationality-the-true-art-of-systematically-correct-reasoning, then finding yourself in a persistent disagreement with so-called "rationalists" about something sufficiently basic-seeming creates an enormous amount of cognitive dissonance ("Am I crazy? Are they crazy? What's going on?? Auuuuuugh") in a way that disagreeing with, say, secular humanists or arbitrary University of Chicago graduates, doesn't.

But ... it shouldn't. Sure, self-identification with the "rationalist" brand name is a signal that someone knows some things abou

... (read more)
6Evan_Gaensbauer
For Ben's criticisms of EA, it's my opinion that while I agree with many of his conclusions, I don't agree with some of the strongest conclusions he reaches, or with how he makes the arguments for them, simply because I believe they are not good arguments. This is common for interactions between EA and Ben these days, though Ben often doesn't respond to counter-arguments, as he often seems to be under the impression that, when a counter-argument disagrees with him in a way he doesn't himself agree with, his interlocutors are persistently acting in bad faith. I haven't interacted directly with Ben myself as much for a while, until he wrote the OP this week. So, I haven't been following as closely how Ben construes 'bad faith', and I haven't taken the opportunity to discover, if he were willing to relay it, what his model of bad faith is. I currently find confusing some of his sense that the EAs he discusses with are acting in bad faith. At least I don't find it a compelling account of people's real motivations in discourse.

I haven't been following as closely how Ben construes 'bad faith', and I haven't taken the opportunity to discover, if he were willing to relay it, what his model of bad faith is.

I think the most relevant post by Ben here is "Bad Intent Is a Disposition, Not a Feeling". (Highly recommended!)

Recently I've often found myself wishing for better (widely-understood) terminology for phenomena that it's otherwise tempting to call "bad faith", "intellectual dishonesty", &c. I think it's pretty rare for people to be consciously, deliberately lying, but motivated bad reasoning is horrifyingly ubiquitous and exhibits a lot of the same structural problems as deliberate dishonesty, in a way that's worth distinguishing from "innocent" mistakes because of the way it responds to incentives. (As Upton Sinclair wrote, "It is difficult to get a man to understand something when his salary depends upon his not understanding it.")

If our discourse norms require us to "assume good faith", but there's an important sense in which that assumption isn't true (because motivated misunderstandings resist correction in a way that simple mistakes don't), but we can't talk about the ways it isn't true without

... (read more)
6Evan_Gaensbauer
So, I've read the two posts on Benquo's blog you've linked to. The first one, "Bad Intent Is a Disposition, Not a Feeling", depended on the claim he made that mens rea is not a real thing. That claim was challenged in the comments, and he himself acknowledged those comments made some good points that would cause him to rethink the theme he was trying to impart with his original post. I looked up both the title of that post and 'mens rea' on his blog to see if he had posted any updated thoughts on the subject. There weren't results from the date of publication of that post onward on either topic, so it doesn't appear he has publicly updated his thoughts on these topics. That was over two years ago.

The second post was more abstract and figurative, and used analogy and metaphor to get its conclusion across. So I didn't totally understand the relevance of everything in the second post to the first post, even though the second was intended as a sequel to the first. It seemed to me the crux of the problem was Benquo's conclusion that, for public discourse and social epistemology, at least in his experience, being honest about your sense that someone else is arguing in bad faith is Officially Not OK, because it is always construed as a grave and inappropriate personal attack. So resolving the issue appears socially or practically impossible.

My experience is that that just isn't the case. It can lend itself to better modes of public discourse. For one thing, it can move communities to states of discourse that are much different from where the EA and rationality communities currently are. One problem is I'm not sure even those rationalists and EAs who are aware of such problems would prefer the options available, which would be just hopping onto different platforms with very different discourse norms. I would think that would be the most practical option, since the other viable alternative would be for these communities to adopt other
2Evan_Gaensbauer
I'll take a look at these links. Thanks.
4Evan_Gaensbauer
I understand the "Vassar Crowd" to be a group of Michael Vassar's friends who:

  • were highly critical of EA.
  • were critical, though somewhat less so, of the rationality community.
  • were partly at odds with the bulk of the rationality community for not being as hostile to EA as they thought it should have been.

Maybe you meet those qualifications, but as I understand it the "Vassar Crowd" started publishing blog posts on LessWrong and their own personal blogs, as well as on social media, over the course of a few months starting in the latter half of 2016. It was part of a semi-coordinated effort. While I wouldn't posit a conspiracy, it seems like a lot of these criticisms of EA were developed in conversations within this group, and, given the name of the group, I assume different people were primarily nudged by Vassar. This also precipitated Alyssa Vance's Long-Term World Improvement mailing list. It doesn't seem to have continued as a crowd to the present, as the lives of the people involved have obviously changed a lot, and it doesn't appear from the outside that it is as cohesive anymore, I assume in large part because of Vassar's decreased participation in the community. Ben seems to be one of the only people who is sustaining the effort to criticize EA as the others were before. So while I appreciate the disclosure, I don't know if my previous comment was precise enough: as far as I understand, the Vassar Crowd was a limited clique that manifested much more in the past than in the present.

The point I was trying to make is that much of the rationality community has nothing to do with the community’s stated values.

Yes, this is true, and also implies that the rationality community should be replaced with something very different, according to its stated goals. (Did you think I didn't think that?)

Geeks, Mops, Sociopaths happened to the rationality community, not just EA.

So, in stating your personal impression of EA, based on Sarah's blog post, as though it were a fact, and as if it meant something unique about EA that isn't true of other human communities, you've argued for too much.

I don't think it's unique! I think it's extremely, extremely common for things to become aesthetic identity movements! This makes the phenomenon matter more, not less!

I have about as many beefs with the rationality movement as I do with the EA movement. I am commenting on this post because Ben already wrote it and I had things to add.

It's possible that I should feel more moral pressure than I currently do to actively (not just, as a comment on other people's posts) say what's wrong about the current state of the rationality community publicly. I've already been saying things privately. (This is an invitation to try morally pressuring me, using arguments, if you think it would actually be good for me to do this)

3Evan_Gaensbauer
Thanks for acknowledging my point about the rationality community. However, I was trying to get across more generally that I think the 'aesthetic identity movement' model might be lacking. If a theory makes the same predictions everywhere, it's useless. I feel like the 'aesthetic identity movement' model might be one of those theories that is too general, and not specific enough for me to understand what I'm supposed to take away from its use. For example: maybe all kinds of things are aesthetic identity movements instead of being what they actually say they are, but I wouldn't be as confused if I knew what I am supposed to do with this information.
6jessicata
An aesthetic identity movement is one where everything is dominated by how things look on the surface, not what they actually do/mean in material reality. Performances of people having identities, not actions of people in reality. To some extent this is a spectrum, but I think there are attractor states of high/low performativity. It's possible for a state not to be an aesthetic identity movement, e.g. by having rule of law, actual infrastructure, etc. It's possible for a movement not to be an aesthetic identity movement, by actually doing the thing, choosing actions based on expected value rather than aesthetics alone, having infrastructure that isn't just doing signalling, etc. Academic fields have aesthetic elements, but also (some of the time) do actual investigation of reality (or of reasoning/logic, etc.) that turns up unexpected information. Mass movements are more likely to be aesthetic identity movements than obscure ones. Movements around gaining resources through signalling are more likely to be aesthetic identity movements than ones around accomplishing objectives in material reality. (Homesteading in the US is an example of a historical movement around material reality.) (Note, EA isn't only an aesthetic identity movement, but it is largely one, in terms of percentage of people, attention, etc.; this is an important distinction.) It seems like the concept of "aesthetic identity movement" I'm using hasn't been communicated to you well; if you want to see where I'm coming from in more detail, read the following:

  • Geeks, MOPs, and sociopaths
  • Identity and its Discontents
  • Naming the Nameless
  • On Drama
  • Optimizing for Stories (vs. Optimizing Reality)
  • Excerpts from a larger discussion about simulacra

(no need to read all of these if it doesn't seem interesting, of course)
2Evan_Gaensbauer
I will take a look at them. Thanks.
2Evan_Gaensbauer
I don't think you didn't think that. My question was to challenge you to answer why you, and the others if you would feel comfortable speaking to their perspectives, focus so much of your attention on EA instead of the rationality community (or other communities perhaps presenting the same kind and degree of problems), if you indeed understand they share similar problems and pose similarly high stakes (e.g., failure modes of x-risk reduction). I asked because it's frustrating to me how inconsistent it seems with your own efforts here to put way more pressure on EA than on rationality. I'm guessing part of the reason for your trepidation about the rationality community is that you feel a sense of how much disruption criticism could cause, and how much risk there is that nothing would change anyway. The same thing has happened when, not so much you, but some of your friends have criticized EA in the past. I was thinking it was because you are socially closer to the rationality community that you wouldn't be as willing to criticize them. I am not as invested in rationality as a community as I was in the past. So, while I feel some personal responsibility to seek to analyze the intellectual failure modes of rationality, I don't feel much of a moral urge anymore to correct its social failure modes. So I lack motivation to think through whether it would be "good" or not for you to do it, though.
6jessicata
I think I actually do much more criticism of the rationality community than the EA community nowadays, although that might be invisible to you since most of it is private. (Anyway, I don't do that much public criticism of EA either, so this seems like a strange complaint about me regardless)
2Evan_Gaensbauer
Well, this was a question more about your past activity than your present activity, and also about the greater activity of the same kind by some people you seem to know well, but I thought I would take the opportunity to ask you about it now. At any rate, thanks for taking the time to humour me.
4Benquo
It doesn't seem to me like anyone I interact with is still honestly confused about whether and to what extent e.g. CFAR can teach rationality, or rationality provides the promised superpowers. Whereas some people still believe a few core EA claims (like the one the OP criticizes) which I think are pretty implausible if you just look at them in conjunction and ask yourself what else would have to be true. If you or anyone else want to motivate me to criticize the Rationality movement more, pointing me at people who continue to labor under the impression that the initial promises were achievable is likely to work; rude and condescending "advice" about how the generic reader (but not any particular person) is likely to feel the wrong way about my posts on EA is not likely to work.
6Raemon
So, I agree with the claim that EA has a lot of aesthetic-identity-elements going on that compound (and in many cases cause) the problem. I think that's really important to acknowledge (although it's not obvious that the solution needs to include starting over).

But I also think, in the case of this particular post, that the answer is simpler. The OP says:

Either charities like the Gates Foundation and Good Ventures are hoarding money at the price of millions of preventable deaths, or the low cost-per-life-saved numbers are wildly exaggerated. My former employer GiveWell in particular stands out as a problem here, since it publishes such cost-per-life-saved numbers, and yet recommended to Good Ventures that it not fully fund GiveWell's top charities; they were worried that this would be an unfair way to save lives.

Which... sure uses language that sounds like it's an attack on GiveWell to me.

[edit] The above paragraph seems:

a) dishonest and/or false, in that it claims GiveWell publishes such cost-per-life numbers, but at the moment AFAICT GiveWell goes to great lengths to hide those numbers (i.e. to find the numbers for AMF you get redirected to a post about how to think about the numbers, which links to a spreadsheet, which seems like the right procedure to me for forcing people to actually think a bit about the numbers);

b) uses phrases like "hoarding" and "wildly exaggerated" that I generally associate with coalition politics rather than denotative-language-that-isn't-trying-to-be-enacting, while criticizing others for coalition politics, which seems a) like bad form, and b) not like a process that I expect to result in something better-than-EA at avoiding pathologies that stem from coalition politics.

[double edit] To be clear, I do think it's fair to criticize CEA and/or the EA community collectively for nonetheless taking the numbers as straightforward. And I think their approach to OpenAI deserves, at the very least, some serious scrutiny. (Although I think Ben's claims about how off they are are overstated. This critique by Kelsey seems pretty straightforwardly true to me. AFAICT in this post Ben has made a technical error of approximately the same order of magnitude as the one he's claiming others are making.)
5jessicata
My comment was a response to Evan's, in which he said people are reacting emotionally based on identity. Evan was not explaining people's response by referring to actual flaws in Ben's argumentation, so your explanation is distinct from Evan's.

a) GiveWell does publish cost-effectiveness estimates. I found them in a few clicks. So Ben's claim is neither dishonest nor false.

b) So, the fact that you associate these phrases with coalitional politics means Ben is attacking GiveWell? What? These phrases have denotative meanings! They're pretty clear to determine if you aren't willfully misinterpreting them! The fact that things with clear denotative meanings get interpreted as attacking people is at the core of the problem! To say that Ben creating clarity about what GiveWell is doing is an attack on GiveWell is to attribute bad motives to GiveWell. It says that GiveWell wants to maintain a positive impression of itself, regardless of the facts, i.e. to defraud nearly everyone. (If GiveWell wants correct information about charities and charity evaluations to be available, then Ben is acting in accordance with their interests [edit: assuming what he's saying is true], i.e. the opposite of attacking them.) Perhaps you endorse attributing bad motives to GiveWell, but in that case it would be hypocritical to criticize Ben for doing things that could be construed as doing that.

a) GiveWell does publish cost-effectiveness estimates. I found them in a few clicks. So Ben's claim is neither dishonest nor false.

While I agree that this is a sufficient rebuttal of Ray's "dishonest and/or false" charge (Ben said that GiveWell publishes such numbers, and GiveWell does, in fact, publish such numbers), it seems worth acknowledging Ray's point about context and reduced visibility: it's not misleading to publish potentially-untrustworthy (but arguably better than nothing) numbers surrounded by appropriate caveats and qualifiers, even when it would be misleading to loudly trumpet the numbers as if they were fully trustworthy.

That said, however, Ray's "GiveWell goes to great lengths to hide those numbers" claim seems false to me in light of an email I received from GiveWell today (the occasion of my posting this belated comment), which reads, in part:

GiveWell has made a lot of progress since your last recorded gift in 2015. Our current top charities continue to avert deaths and improve lives each day, and are the best giving opportunities we're aware of today. To illustrate, right now we estimate that for every $2,400 donated to Malaria Consortium for its seasonal malaria chemoprevention program, the death of a child will be averted.

(Bolding mine.)
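
Taken literally, figures like this are what drive the dichotomy discussed earlier in the thread. A rough back-of-envelope sketch (the grant size below is an arbitrary illustrative assumption, not a claim about any foundation's actual budget):

```python
# $2,400 per death averted is the figure from the quoted GiveWell email;
# the grant size is a purely illustrative assumption for scale.
cost_per_death_averted = 2_400
illustrative_grant = 1_000_000_000

deaths_averted_if_literal = illustrative_grant / cost_per_death_averted
print(f"${illustrative_grant:,} at ${cost_per_death_averted:,} per death averted "
      f"implies ~{deaths_averted_if_literal:,.0f} deaths averted, if taken literally")
```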

Further update on this. GiveWell has since posted this blogpost. I haven't yet reviewed it enough to have a strong opinion on it, but I think it at least explains some of the difference in epistemic state I had at the time of this discussion.

Relevant bit:

Although we don’t advise taking our cost-effectiveness estimates literally, we do think they are one of the best ways we can communicate about the rough magnitude of expected impact of donations to our recommended charities.
A few years ago, we decided not to feature our cost-effectiveness estimates prominently on our website. We had seen people using our estimates to make claims about the precise cost to save a life that lost the nuances of our analysis; it seemed they were understandably misinterpreting concrete numbers as conveying more certainty than we have. After seeing this happen repeatedly, we chose to deemphasize these figures. We continued to publish them but did not feature them prominently.
Over the past few years, we have incorporated more factors into our cost-effectiveness model and increased the amount of weight we place on its outputs in our reviews (see the contrast between our 2014 cost-effectiveness model ve
... (read more)
4Raemon
A friend also recently mentioned to me that they'd gotten this email, and yes, this does significantly change my outlook here.
6Zack_M_Davis
I wonder if it would help to play around with emotive conjugation? Write up the same denotative criticism twice, once using "aggressive" connotations ("hoarding", "wildly exaggerated") and again using "softer" words ("accumulating", "significantly overestimated"), with a postscript that says, "Look, I don't care which of these frames you pick; I'm trying to communicate the literal claims common to both frames."
4Evan_Gaensbauer
When he wrote: In most contexts when language like this is used, it's usually pretty clear that you are implying someone is doing something closer to deliberate lying than some softer kind of deception. I am aware Ben might have some model on which GiveWell or others in EA are acting in bad faith in some other manner, involving self-deception. If that is what he is implying GiveWell or Good Ventures are doing instead of deliberately lying, that isn't clear from the OP. He could also have stated that the organizations in question are not fully aware they're just marketing obvious nonsense, and have been immune to his attempts to point this out to them. If that is the case, he didn't state it in the OP either. So, based on their prior experience, I believe it would appear to many people like he was implying GiveWell, Good Ventures, and EA are deliberately lying. Deliberate lying is generally seen as a bad thing. So, to imply someone is deliberately lying seems clearly to be an attribution of bad motives to others. So if Ben didn't expect or think that is how people would construe part of what he was trying to say, I don't know what he was going for.
2Raemon
I think the current format isn't a good venue for me to continue this discussion. For now, roughly, I disagree with the framing in your most recent comment, and stand by my previous comment. I'll try to write up a top-level post that outlines more of my thinking here. I'd have some interest in a private discussion that gets turned into a google doc that gets turned into a post, or possibly some other format. I think public discussion threads are a uniquely bad format for this sort of thing.

I agree with a lot of this (maybe not to the same degree) but I'm not sure what's causing it. You link to something about villagers vs werewolves. Does that mean you think GiveWell has been effectively taken over by werewolves or was run by werewolves from the beginning?

Assuming some version of "yes", I'm pretty sure the people running GiveWell do not think of themselves as werewolves. How can I rule out the possibility that I myself am a werewolf and not aware of it?

ETA: How much would it help to actually play the game? I got a copy of The Resistance a few days ago which is supposed to be the same kind of game as Werewolf, but I don't have a ready group of people to play with. Would it be worth the extra effort to find/make a group just to get this experience?

I found that it made a big difference to my threat assessment for this kind of thing once I'd had the subjective experience of figuring out how to play successfully as a werewolf. YMMV.

I don't think many people have a self-image as a "werewolf" trying to sabotage building of shared maps. I don't think anyone in GiveWell sees themselves that way.

I do think that many people are much more motivated to avoid being blamed for things than to create clarity about credit-assignment, and that this is sufficient to produce the "werewolf" pattern. If I ask people what their actual anticipations are about how they and others are likely to behave, in a way that is grounded and concrete, it usually seems like they agree with this assessment. I've had several conversations in which one person has said both of the following:

  • It's going too far to accuse someone of werewolfy behavior.
  • Expecting nonwerewolfy behavior from people is an unreasonably high expectation that's setting people up to fail.

As far as I can tell, the "werewolf" thing is how large parts of normal, polite society work by default, and most people trying to do something that requires accurate collective credit-assignment in high stakes situations just haven't reflected on how far a departure that would require from normal behavior.

As far as I can tell, the "werewolf" thing is how large parts of normal, polite society work by default

This is true, and important. Except "werewolf" is a misleading analogy for it: they're not intentionally colluding with other secret werewolves, and it's not a permanent attribute of the participants. It's more that misdirection and obfuscation are key strategies for some social-competitive games, and these games are part of almost all humans' motivation sets, both explicitly (wanting to have a good job, be liked, etc.) and implicitly (trying to win every status game, whether it has any impact on their life or not).

The ones who are best at it (most visibly successful) firmly believe that the truth is aligned with their winning the games. They're werewolf-ing for the greater good, because they happen to be convincing the villagers to do the right things, not because they're eating villagers. And as such, calling it "werewolf behavior" is rejected.


6Benquo
I'm pretty sure this varies substantially depending on context - in contexts that demand internal coordination on simulacrum level 1 (e.g. a marginal agricultural community, or a hunting or raiding party, or a low-margin business in a very competitive domain), people often do succeed at putting the shared enterprise ahead of their egos.
4Dagon
This may be true: desperation encourages in-group cooperation (possibly with increased out-group competition) and wealth enables more visible social competition. Or it may be a myth, and there are just different forms of domination and information obfuscation in pursuit of power, based on different resources and luxuries to be competed over. We don't have much evidence either way about daily life in pre-literate societies (or illiterate subgroups within technically-literate "civilizations"). We do know that groups of apes have many of the same behaviors we're calling "werewolf", which is some indication that it's baked in rather than contextual.

I do think that many people are much more motivated to avoid being blamed for things than to create clarity about credit-assignment, and that this is sufficient to produce the “werewolf” pattern.

I think I myself am often more motivated to avoid being blamed for things than to create clarity about credit-assignment. I feel like overall I'm still doing more good than harm (and therefore "it would be better if they just stopped", as you put it in another comment, doesn't apply to me). How can I tell if I'm wrong about this?

As far as I can tell, the “werewolf” thing is how large parts of normal, polite society work by default, and most people trying to do something that requires accurate collective credit-assignment in high stakes situations just haven’t reflected on how far a departure that would require from normal behavior.

How optimistic are you that "accurate collective credit-assignment in high stakes situations" can be greatly improved from what people are currently doing? If you're optimistic, can you give some evidence or arguments for this, aside from the fact that villagers in Werewolf can win if they know what to do?

I'm more confident that we can't solve AI alignment without fixing this, than I am that we can fix it.

Accounts of late-20th-Century business practices seem like they report much more Werewolfing than accounts of late-19th-Century business practices - advice on how to get ahead has changed a lot, as have incidental accounts of how things work. If something's changed recently, we should have at least some hope of changing it back, though obviously we need to understand the reasons. Taking a longer view, new civilizations have emerged from time to time, and it looks to me like often rising civilizations have superior incentive-alignment and information-processing to the ones they displace or conquer. This suggests that at worst people get lucky from time to time.

Pragmatically, a higher than historically usual degree of freedom of speech, and liberalism more generally, seem like they ought both to make it easier to think collectively about political problems than it has been in the past, and to make doing so more obviously appealing, since public reason seemed to do really well at improving a lot of people's lives pretty recently.

I’m more confident that we can’t solve AI alignment without fixing this, than I am that we can fix it.

Can you give some examples of technical AI alignment efforts going wrong as a result of bad credit assignment (assuming that's what you mean)? To me it seems that to the extent things in that field aren't headed in the right direction, it's more a result of people underestimating the philosophical difficulty, or being too certain about some philosophical assumptions, or being too optimistic in general, that kind of thing.

Accounts of late-20th-Century business practices seem like they report much more Werewolfing than accounts of late-19th-Century business practices—advice on how to get ahead has changed a lot, as have incidental accounts of how things work.

This seems easily explainable by the fact that businesses have gotten a lot bigger to take advantage of economies of scale offered by new technologies, so coordination / principal-agent problems have gotten a lot worse as a result.

Taking a longer view, new civilizations have emerged from time to time, and it looks to me like often rising civilizations have superior incentive-alignment and information-processing to the

... (read more)
8Benquo
No, just saying that while I agree the problem looks quite hard - like, world-historically, a robust solution would be about as powerful as, well, cities - current conditions seem like they're unusually favorable to people trying to improve social coordination via explicit reasoning. Conditions are slightly less structurally favorable than the Enlightenment era, but on the other hand we have the advantage of being able to look at the Enlightenment's track record and try to explicitly account for its failures.
8Benquo
If orgs like OpenAI and Open Philanthropy Project are sincerely trying to promote technical AI alignment efforts, then they're obviously confused about the fundamental concept of differential intellectual progress. If, on the other hand, we think they're just being cynical and collecting social credit for labeling things AI safety rather than making a technical error, then the honest AI safety community seems to have failed to create clarity about this fact among their supporters. Not only would this level of coordination fail to create an FAI due to "treacherous turn" considerations, it can't even be bothered to try to deny resources to optimizing processes that are already known to be trying to deceive us!

If orgs like OpenAI and Open Philanthropy Project are sincerely trying to promote technical AI alignment efforts, then they’re obviously confused about the fundamental concept of differential intellectual progress.

I think I have more uncertainty than you do about whether OpenAI/OpenPhil is doing the right thing, but conditional on them not doing the right thing, and also not just being cynical, I don't think being confused about the fundamental concept of differential intellectual progress is the best explanation of why they're not doing the right thing. It seems more likely that they're wrong about how much of a broad base of ML expertise/capability is needed internally in an organization to make progress in AI safety, or about what is the best strategy to cause differential intellectual progress or bring about an aligned AGI or prevent AI risk.

If, on the other hand, we think they’re just being cynical and collecting social credit for labeling things AI safety rather than making a technical error, then the honest AI safety community seems to have failed to create clarity about this fact among their supporters.

I personally assign less than 20% probability that "they’re just

... (read more)
But in that case I probably have similar biases and I don't see a strong reason to think I'm less affected by them than OpenAI/OpenPhil, so it doesn't seem right to accuse them of that when I'm trying to argue for my own positions.

This seems backwards to me. Surely, if you're likely to make error X which you don't want to make, it would be helpful to build shared models of the incidence of error X and help establish a norm of pointing it out when it occurs in others, so that others will be willing and able to correct you in the analogous situation.

It doesn't make any sense to avoid trying to help someone by pointing out their mistake because you might need the same kind of help in the future, at least for nonrivalrous goods like criticism. If you don't think of correcting this kind of error as help, then you're actually just declaring intent to commit fraud. And if you'd find it helpful but expect others to think of it as unwanted interference, then we've found an asymmetric weapon that helps with honesty but not with dishonesty.

Accusing others of bias also seems less effective in terms of changing minds (i.e., it seems likely
... (read more)

This seems backwards to me. Surely, if you’re likely to make error X which you don’t want to make, it would be helpful to build shared models of the incidence of error X and help establish a norm of pointing it out when it occurs in others, so that others will be willing and able to correct you in the analogous situation.

I think that would make sense if I had a clear sense of how exactly a bias related to social credit is causing someone to make a technical error, but usually it's more like "someone disagrees with me on a technical issue and we can't resolve the disagreement; it seems pretty likely that one or both of us is affected by some sort of bias related to social credit, and that's the root cause of the disagreement, but it could also be something else, like being naturally optimistic vs pessimistic, or different past experiences/backgrounds". How am I supposed to "create clarity" in that case?

That’s a reason to be more clear, not less clear, about what’s going on—as long as people who understand what’s going on obscure the issue to be polite, this strategy will continue to work.

As I mentioned before, I don't entirely understand what is going on, in other words

... (read more)

In a competitive attention market without active policing of the behavior pattern I'm describing, it seems wrong to expect participants getting lots of favorable attention and resources to be honest, as that's not what's being selected for.

There's a weird thing going on when, if I try to discuss this, I either get replies like Raemon's claim elsewhere that the problem seems intractable at scale (and it seems like you're saying a similar thing at times), or replies to the effect that there are lots of other good reasons why people might be making mistakes, and it's likely to hurt people's feelings if we overtly assign substantial probability to dishonesty, which will make it harder to persuade them of the truth. The obvious thing that's missing is the intermediate stance of "this is probably a big pervasive problem, and we should try at all to fix it by the obvious means before giving up."

It doesn't seem very surprising to me that a serious problem has already been addressed to the extent that it's true that both 1) it's very hard to make any further progress on the problem and 2) the remaining cost from not fully solving the problem can be lived with.

The obvious thing that’s missing is the intermediate stance of “this is probably a big pervasive problem, and we should try at all to fix it by the obvious means before giving up.”

It seems to me that people like political scientists, business leaders, and economists have been attacking the problem for a while, so it doesn't seem that likely there's a lot of low-hanging fruit to be found by "obvious means". I have some more hope that the situation with AI alignment is different enough from what people thought about in the past (e.g., a lot of people involved are at least partly motivated by altruism, compared to the kinds of people described in Moral Mazes) that you can make progress on credit assignment as applied to AI alignment, but you still seem to be too optimistic.

5Benquo
What are a couple clear examples of people trying to fix the problem locally in an integrated way, rather than just talking about the problem or trying to fix it at scale using corrupt power structures for enforcement? It seems to me like the nearest thing to a direct attempt was the Quakers. As far as I understand, while they at least tried to coordinate around high-integrity discourse, they put very little work into explicitly modeling the problem of adversarial behavior or developing robust mechanisms for healing or routing around damage to shared information processing. I'd have much more hope about existing AI alignment efforts if it seemed like what we've learned so far had been integrated into the coordination methods of AI safety orgs, and technical development were more focused on current alignment problems.
2Benquo
I generally have a bias towards strong upvote or strong downvote, and I don't exempt my own comments from this.
But in that case I probably have similar biases and I don't see a strong reason to think I'm less affected by them than OpenAI/OpenPhil, so it doesn't seem right to accuse them of that when I'm trying to argue for my own positions.

You're not, as far as I know, promoting an AI safety org raising the kind of funds, or attracting the kind of attention, that OpenAI is. Likewise you're not claiming mainstream media attention or attracting a large donor base the way GiveWell / Open Philanthropy Project is. So there's a pretty strong reason to expect that you haven't been selected for cognitive distortions that make you better at those things anywhere near as strongly as people in those orgs have.

I'm confused about the argument you're trying to make here (I also disagree with some things, but I want to understand the post properly before engaging with that). The main claims seem to be

There are simply not enough excess deaths for these claims to be plausible.

and, after telling us how many preventable deaths there could be,

Either charities like the Gates Foundation and Good Ventures are hoarding money at the price of millions of preventable deaths, or the low cost-per-life-saved numbers are wildly exaggerated.

But I don't understand how these claims interconnect. If there were more people dying from preventable diseases, how would that dissolve the dilemma that the second claim poses?

Also, you say that $125 billion is well within the reach of the GF, but their website says that their present endowment is only $50.7 billion. Is this a mistake, or do you mean something else with "within reach"?

I still have no idea how the total number of people dying is relevant, but my best reading of your argument is:

  • If GiveWell's cost-effectiveness estimates were correct, foundations would spend their money on them.
  • Since the foundations have money that they aren't spending on them, the estimates must be incorrect.

According to this post, OpenPhil intends to spend roughly 10% of their money on "straightforward charity" (rather than their other cause areas). That would be about $1B (though I can't find the exact numbers right now), which is a lot, but hardly unlimited. Their worries about displacing other donors, coupled with the possibility of learning about better opportunities in the future, seem sufficient to justify partial funding to me.

That leaves the Gates Foundation (at least among the foundations that you mentioned; of course there are a lot more). I don't have a good model of when really big foundations do and don't grant money, but I think Carl Shulman makes some interesting points in this old thread.

4Benquo
Right now a major excuse for not checking outcomes is that effect sizes are too small relative to noise. This is plainly incompatible with the belief that there's a large funding gap at cost-per-life-saved numbers close to the current GiveWell estimates, because if you believe the latter, it should be possible to bring excess deaths down to literally zero. Gates and Buffett have stated intent to give a lot more via the Gates Foundation.
1Lukas Finnveden
I don't think anyone has claimed that "there's a large funding gap at cost-per-life-saved numbers close to the current GiveWell estimates", if "large" means $50B. GiveWell seem to think that their present top charities' funding gaps are in the tens of millions.
3Benquo
Gates has stated intent to give more $ away (he still has $100B) and Warren Buffett also promised to give away his fortune (some tens of billions) via GF.

In some sense, EA is arbitrage between the price of a life in rich and poor countries, and those prices will eventually converge.

Another point is that saving a life locally is sometimes possible almost for free, if you happen to be in the right place and thus have unique information. For example, calling 911 if you see a drowning child may be very effective and cost you almost nothing. There have been several occasions in my life when I had to draw a taxi driver's attention to a pedestrian ahead; I'm not sure whether I actually saved a life. But to save a life locally, one needs to pay attention to what is going on around them and know how to react effectively.

4Benquo
That's in the "best-case" scenario where this particular claim, made by parties making other incompatible claims, happens to be the true one. I no longer believe such arbitrage is reliably available and haven't seen a persuasive argument to the contrary.
7jefftk
Do you not believe GiveDirectly represents this kind of arbitrage?

Given the kinds of information gaps and political problems I describe here, it seems to me that while in expectation they’re a good thing to do with your surplus, the expected utility multiplier should be far less than the very large one implied by a straightforward “diminishing marginal utility of money” calculation.

6avturchin
Sure, in the best case; moreover, as poor countries become less poor, the price of saving a life in them grows. (And one may add that a functional market economy is the only known way to eventually make them less poor.) However, I also think that EA could reach even higher efficiency in saving lives in other cause areas, like fighting aging and preventing global risks.
If we assume that all of this is treatable at current cost-per-life-saved numbers - the most generous possible assumption for the claim that there's a funding gap - then at $5,000 per life saved (substantially higher than GiveWell's current estimates), that would cost about $50 billion to avert.
Of course, that’s an annual number, not a total number. But if we think that there is a present, rather than a future, funding gap of that size, that would have to mean that it’s within the power of the Gates Foundation alone to wipe out all fatal communic
... (read more)
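To spell out the arithmetic in the quoted passage, here is a minimal sketch (the only inputs are the $5,000 cost-per-life and $50 billion figures quoted above; the implied death count is derived, not stated here):

    # Back-of-the-envelope check of the quoted figures.
    cost_per_life_saved = 5_000          # dollars per life saved, per the quoted passage
    total_annual_cost = 50_000_000_000   # $50 billion per year, per the quoted passage

    implied_preventable_deaths = total_annual_cost / cost_per_life_saved
    print(f"{implied_preventable_deaths:,.0f} implied preventable deaths per year")  # -> 10,000,000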
0Benquo
Gates and Buffett have pledged a lot more than that.

Hmm, what's your source for that? They have a total net wealth of something like $160 billion, so it can't be more than a factor of 3. And it seems quite likely to me that both of them have at least some values that are not easily captured by "save as many lives as possible", such that I don't expect all of that $160 billion to go towards that goal (e.g. I expect a significant fraction of that money to go to things like education, scientific achievement, and other things that don't have the direct aim of saving lives but pursue other, more nebulous goals).

I have one or more comments I'd like to make, but I'd like to know what sorts of comments you consider to be either 'annoying' or 'counterproductive' before I make them. I agree with some aspects of this article, but I disagree with others. I've checked, and I think my disagreements will be greater in both number and degree than other comments here. I wouldn't expect you to find critical engagement based on some strong disagreement to "be annoying or counterproductive", but I'd like to get a sense if y... (read more)

Did you know that if a comment gets deleted, the author of it is notified via PM and given a copy of the deleted comment? So if you don't mind the experience of having a comment deleted, you can just post your comment here, and repost it as an article later if it does get deleted.

I didn't know that, but neither do I mind the experience of having a comment deleted. I would mind:

  • that Benquo might moderate this thread to a stringent degree according to a standard he might fail to disclose, and thus can use moderation as a means to move the goalposts, while under the social auspices of claiming to delete my comment because he saw it as wilfully belligerent, without substantiating that claim.
  • that Benquo will be more motivated to do this than he otherwise would be on other discussions he moderates on LW, as he has initiated this discussion with an adversarial frame, and it is one that Benquo feels personally quite strongly about (e.g., it is based on a long-lasting public dispute he has had with his former employer, and Benquo is not shy here about his hostility to at least large portions of the EA movement).
  • that were he to delete my comment on such grounds, there would be no record by which anyone reading this discussion could hold Benquo accountable to the standards he used to delete my comments, unduly stacking the deck against an appeal I could make that, in deleting my comment, Benquo had been inconsistent in his moderation.

Were... (read more)

This does seem like something we should telegraph better than we currently do, although I'm not sure how.

In general, I'd very much like a permanent neat-things-to-know-about-LW post or page, which receives edits when there's a significant update (do tell me if there's already something like this). For example, I remember trying to find information about the mapping between karma and voting power a few months ago, and it was very difficult. I think I eventually found an announcement post that had the answer, but I can't know for sure, since there might have been a change since that announcement was made. More recently, I saw that there were footnotes in the sequences, and failed to find any reference whatsoever on how to create footnotes. I didn't learn how to do this until a month or so later, when footnotes came to the EA forum and Aaron wrote a post about it.

5habryka
I agree with this. We are working on an updated About/Welcome page which will have info in this reference class (or at least links to other posts that have all of that info).
2Evan_Gaensbauer
Strongly upvoted.
2Evan_Gaensbauer
Just to add user feedback, I did indeed have no idea this was what happens when comments are deleted.

The only comment I recall deleting here is this one, in which case as you can see I clearly asked for that line of discussion to be discontinued first.

Okay, thanks. Sorry for the paranoia. I just haven't commented on any LW posts with the 'reign of terror' commenting guidelines before, so I didn't know what to expect. That gives me enough context to feel confident my comment won't be like that one you deleted.


I picked Reign of Terror because I wasn’t sure I wanted to commit to the higher deletion thresholds (I think the comment I deleted technically doesn’t meet them), so I wanted to avoid making a false promise.

I do want to hold myself to the standard of welcoming criticism & only deleting stuff that seems like it’s destroying importance-weighted information on net. I don’t want to be held to the standard of having to pretend that everyone being conventionally polite and superficially relevant is really trying.

I have updated (partly in this thread, although it retroactively fits into past observations that were model-less at the time) that it's probably best to have a moderation setting that clearly communicates what you've described here.

6Rob Bensinger
Can't individuals just list 'Reign of Terror' and then specify in their personalized description that they have a high bar for terror?
6Evan_Gaensbauer
As an aside, 'high bar for terror' is the best new phrase I've come across in a long while.
4Raemon
Yes, but the short handle given to the description might radically change how people conceive of it.
Either charities like the Gates Foundation and Good Ventures are hoarding money at the price of millions of preventable deaths

My assumption before reading this has been that this is the case. Given that, does a reason remain to update away from the position that the GiveWell claim is basically correct?

For the rest of this post, let's suppose the true amount of money needed to save a life through GiveWell's top charities is $50,000. I don't think anything about Singer's main point changes.

For one, it's my understanding that decreas... (read more)
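As a minimal sketch of why the point above could survive a much higher cost-per-life figure, assuming purely for illustration a $60,000 income, a 10% pledge, and 40 years of giving (none of these numbers are from the post):

    # Hypothetical illustration: even at a supposed $50,000 per life saved,
    # a sustained 10% pledge corresponds to several lives over a working lifetime.
    cost_per_life = 50_000     # dollars, the supposition above
    annual_income = 60_000     # dollars per year, assumed for illustration only
    pledge_fraction = 0.10     # Giving What We Can-style pledge
    years_of_giving = 40       # assumed working lifetime

    total_donated = annual_income * pledge_fraction * years_of_giving
    lives_saved = total_donated / cost_per_life
    print(f"${total_donated:,.0f} donated -> roughly {lives_saved:.0f} lives")  # ~5 lives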

Was your posting this inspired by another criticism of GiveWell recently published by another former staffer at GiveWell?

2Benquo
It was inspired by this comment.
2Evan_Gaensbauer
That's surprising. Have you been exposed to the other aforementioned critique of GiveWell? I ask because it falls along very similar lines to yours, but it appears to have been written without reference to yours whatsoever.
4Benquo
I don’t think so. Mind linking to it? I also mostly didn’t mean this as a critique of GiveWell so much as a critique of a specific claim being made by many that sometimes references GiveWell numbers. Unfortunately it seems like making straightforward revealed preferences arguments reliably causes people to come up with ad hoc justifications for GiveWell’s behavior which (I claim) are incoherent, and I can’t respond to those without talking about GiveWell’s motives somewhat.
3Evan_Gaensbauer
Here is the link.
2Evan_Gaensbauer
Based on how you write, it is clear you understand that the mistake may be made more by others referencing GiveWell's numbers than by GiveWell itself. Yet the tone of your post seems to hold GiveWell, and not others, culpable for how others use GiveWell's numbers. Making an ethical appeal directly to effective altruists who draw unjustified conclusions from GiveWell's numbers, telling them that what they're doing is misleading, dishonest, or wrong, may be less savoury than a merely rational appeal explaining how or why what they're doing is misleading, dishonest, or wrong, based on the expectation that people are not fully cognizant of their own dishonesty. Yet your approach thus far doesn't appear to be working. You've been trying this for a few years now, so I'd suggest trying some new tactics or strategy. I think at this point it would be fair for you to be somewhat less charitable to those who make wildly exaggerated claims based on GiveWell's numbers, and to write as though you are explaining, to GiveWell and others in your audience, that people who use GiveWell's numbers are being dishonest, rather than explaining to people who aren't acting entirely in good faith that they are being dishonest. The way you write makes it seem as though you believe that, in this whole affair, GiveWell itself is the most dishonest actor, which I think readers find senseless enough that they're less inclined to take the rest of what you're trying to say seriously. I think you should try talking more about the motives of the actors you're referring to other than GiveWell, in addition to GiveWell's motives.
6Benquo
What are a couple examples of how this tone shows up in my writing, and how would you have written them to communicate the proper emphasis?

So, first of all, when you write this:

Either charities like the Gates Foundation and Good Ventures are hoarding money at the price of millions of preventable deaths, or the low cost-per-life-saved numbers are wildly exaggerated.

It seems like what you're trying to accomplish for rhetorical effect, but not irrationally, is to demonstrate that the only alternative to "wildly exaggerated" cost-effectiveness estimates is that foundations like these are doing something even worse, that they are hoarding money. There are a few problems with this.

  • You're not distinguishing where the specific cost-effectiveness estimates you're talking about come from. It's a bit of a nitpick to point out that it's GiveWell rather than Good Ventures that makes the estimates, since the two organizations are so closely connected, and Good Ventures can be held responsible for the grants it makes on the basis of those estimates, if not for the original analysis that informed them.
  • At least in the case of Good Ventures, there is a third alternative: that they are reserving billions of dollars, not at the price of millions of preventable deaths, because, for a variety of reasons, the
... (read more)
2Evan_Gaensbauer
I can't find the link right now, but I've asked others for it, so hopefully it'll come back up again. If I come across it, I'll reply here with it.

Singer's argument is that

1) We have a moral obligation to try to do the most net good we can.

2) Your obligation to do so holds regardless of distance or the neglect of others.

3) This creates an unconventionally rigorous and demanding moral standard.


Benquo's is that

1) Even the best charity impact analysis is too opaque to be believable.

2) The rich have already pledged enough to solve the big problems.

3) Therefore, spend on yourself, spend locally, and on "specific concrete things that might have specific concrete benefits;" also, try to ... (read more)

7Benquo
I'm rejecting the claim that there exists an infinite pit of suffering that can be remedied cheaply per unit of suffering. If I don't make constructive suggestions, people claim that I'm just saying everything is terrible or something. If I do, they seem to think that the whole post is an argument for the constructive suggestions. I'm not sure what to do here.
5DirectedEvolution
This is meant as a constructive suggestion. I find some of your posts here to be ambiguous. For example, in your reply here, I can’t tell whether you’re complaining that I, too, am playing into this catch-22 that you describe, or whether instead you feel that my post is more sympathetic to you and thus a place where you can more safely vent your frustration. As you can see from my first comment in this chain, I was also unsure of how to interpret your original post. Was it an argument for giving up on a moral imperative of altruistic utility-maximization entirely, a re-evaluation of how that imperative is best achieved, or a claim that maximization is good in theory but such opportunities don’t exist in practice? Although everyone should give others a sympathetic and careful reading, if I were in your shoes I might consider whether my writing is clear enough.

Doesn't this only hold if you abdicate all moral judgment to Gates/Good Ventures, such that if Gates Foundation/Good Ventures pass up an opportunity to save lives, it follows necessarily that the offer was fraudulent?

Edit note: Fixed the formatting.

I would not, in fact, save a drowning child.

Or rather, I'd save a central example of a drowning child, but I wouldn't save a drowning child under literally all circumstances, and I think most people wouldn't either. If a child were drowning in a scenario similar to the ones Singer uses the analogy for, it would be something like a scenario where there is an endless series of drowning children in front of me, with an individually small but cumulatively large cost to saving them. Under those circumstances, I would not save every drowning child, or even try to maximize the number of drowning children I do save.

There are two reasons people far away may not try to save these lives.

1) They don't know if others are donating; maybe they don't need to.

2) They can't see the foundation acting on their behalf; maybe it will fail to rescue them, or maybe it is only cashing in.

There's a large feedback issue here.