Comment author: MarkusRamikin 10 May 2012 03:47:35PM *  2 points [-]

Not a big deal, but for me your "more" links don't seem to be doing anything. Firefox 12 here.

EDIT: Yup, it's fixed. :)

Comment author: HoldenKarnofsky 10 May 2012 04:12:28PM 3 points [-]

Thanks for pointing this out. The links now work, though only from the permalink version of the page (not from the list of new posts).

Comment author: HoldenKarnofsky 12 November 2011 09:31:44PM 4 points [-]

A few quick notes:

  • As I wrote in my response to Carl on The GiveWell Blog, the conceptual content of this post does not rely on the assumption that the value of donations (as measured in something like "lives saved" or "DALYs saved") is normally distributed. In particular, a lognormal distribution fits easily into the above framework.

  • I recognize that my model doesn't perfectly describe reality, especially for edge cases. However, I think it is more sophisticated than any model I know of that contradicts its big-picture conceptual conclusions (e.g., by implying "the higher your back-of-the-envelope [extremely error-prone] expected-value calculation, the necessarily higher your posterior expected-value estimate") and that further sophistication would likely leave the big-picture conceptual conclusions in place.

  • JGWeissman is correct that I meant "maximum" when I said "inflection point."

Maximizing Cost-effectiveness via Critical Inquiry

20 HoldenKarnofsky 10 November 2011 07:25PM

 

I am cross-posting this GiveWell Blog post, a followup to an earlier cross-post I made. Here I provide a slightly more fleshed-out model that helps clarify the implications of Bayesian adjustments to cost-effectiveness estimates. It illustrates how it can be rational to take a "threshold" approach to cost-effectiveness, asking that actions/donations meet a minimum bar for estimated cost-effectiveness but otherwise focusing on robustness of evidence rather than magnitude of estimated impact.

 

We've recently been writing about the shortcomings of formal cost-effectiveness estimation (i.e., trying to estimate how much good, as measured in lives saved, DALYs or other units, is accomplished per dollar spent). After conceptually arguing that cost-effectiveness estimates can't be taken literally when they are not robust, we found major problems in one of the most prominent sources of cost-effectiveness estimates for aid, and generalized from these problems to discuss major hurdles to usefulness faced by the endeavor of formal cost-effectiveness estimation.

Despite these misgivings, we would be determined to make cost-effectiveness estimates work, if we thought this were the only way to figure out how to allocate resources for maximal impact. But we don't. This post argues that when information quality is poor, the best way to maximize cost-effectiveness is to examine charities from as many different angles as possible - looking for ways in which their stories can be checked against reality - and support the charities that have a combination of reasonably high estimated cost-effectiveness and maximally robust evidence. This is the approach GiveWell has taken since our inception, and it is more similar to investigative journalism or early-stage research (other domains in which people look for surprising but valid claims in low-information environments) than to formal estimation of numerical quantities.
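The "threshold" rule described above can be sketched in a few lines of code. This is purely illustrative: the charity names, cost-effectiveness numbers, and the single "robustness score" are invented for the example, not GiveWell data, and real evaluation involves many qualitative dimensions that no single score captures.

```python
# Hypothetical sketch of the "threshold" approach: require a minimum bar of
# estimated cost-effectiveness, then rank the remaining charities by
# robustness of evidence rather than by the magnitude of the (error-prone)
# estimate. All names and numbers below are invented for illustration.

charities = [
    # (name, estimated lives saved per $1,000, evidence robustness score 0-1)
    ("A", 2.0, 0.9),
    ("B", 10.0, 0.2),
    ("C", 1.5, 0.8),
    ("D", 0.3, 0.95),
]

THRESHOLD = 1.0  # minimum acceptable estimated cost-effectiveness

def pick(candidates, threshold):
    # Keep only charities that meet the minimum bar...
    eligible = [c for c in candidates if c[1] >= threshold]
    # ...then prefer robustness of evidence, not estimated magnitude.
    return max(eligible, key=lambda c: c[2])

print(pick(charities, THRESHOLD))
```

Note that charity B, with by far the highest estimated cost-effectiveness, is passed over in favor of A: past the threshold, the rule trades estimated magnitude for robustness.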

The rest of this post

  • Conceptually illustrates (using the mathematical framework laid out previously) the value of examining charities from different angles when seeking to maximize cost-effectiveness.
  • Discusses how this conceptual approach matches the approach GiveWell has taken since inception.

continue reading »
Comment author: HoldenKarnofsky 29 August 2011 04:31:00PM *  9 points [-]

Hello all,

Thanks for the thoughtful comments. Without responding to all threads, I'd like to address a few of the themes that came up. FYI, there are also interesting discussions of this post at The GiveWell Blog, Overcoming Bias, and Quomodocumque (the latter includes Terence Tao's thoughts on "Pascal's Mugging").

On what I'm arguing. There seems to be confusion on which of the following I am arguing:

(1) The conceptual idea of maximizing expected value is problematic.

(2) Explicit estimates of expected value are problematic and can't be taken literally.

(3) Explicit estimates of expected value are problematic/can't be taken literally when they don't include a Bayesian adjustment of the kind outlined in my post.

As several have noted, I do not argue (1). I do give with the aim of maximizing expected good accomplished, and in particular I consider myself risk-neutral in giving.

I strongly endorse (3) and there doesn't seem to be disagreement on this point.

I endorse (2) as well, though less strongly than I endorse (3). I am open to the idea of formally performing a Bayesian adjustment, and if this formalization is well done enough, taking the adjusted expected-value estimate literally. However,

  • I have examined a lot of expected-value estimates relevant to giving, including those done by the DCP2, Copenhagen Consensus, and Poverty Action Lab, and have never once seen a formalized adjustment of this kind.

  • I believe that often - particularly in the domains discussed here - formalizing such an adjustment in a reasonable way is simply not feasible and that using intuition is superior. This is argued briefly in this post, and Dario Amodei and Jonah Sinick have an excellent exchange further exploring this idea at the GiveWell Blog.

  • If you disagree with the above point, and feel that such adjustments ought to be done formally, then you do disagree with a substantial part of my post; however, you ought to find the remainder of the post more consequential than I do, since it implies substantial room for improvement in the most prominent cost-effectiveness estimates (and perhaps all cost-effectiveness estimates) in the domains under discussion.

All of the above applies to expected-value calculations that involve relatively large amounts of guesswork, such as in the domain of giving. There are other expected-value estimates that I feel are precise/robust enough to take literally.

Is it reasonable to model existential risk reduction and/or "Pascal's Mugging" using log-/normal distributions? Several have pointed out that existential risk reduction and "Pascal's Mugging" seem to involve "either-or" scenarios that aren't well approximated by log-/normal distributions. I wish to emphasize that I'm focused on the prior over expected value of one's actions and on the distribution of error in one's expected-value estimate. (The latter is a fuzzy concept that may be best formalized with the aid of concepts such as imprecise probability. In the scenarios under discussion, one often must estimate the probability of catastrophe essentially by making a wild guess with a wide confidence interval, leaving wide room for "estimate error" around the expected-value calculation.) Bayesian adjustments to expected-value estimates of actions, in this framework, are smaller (all else equal) for well-modeled and well-understood "either-or" scenarios than for poorly-modeled and poorly-understood "either-or" scenarios.

For both the prior and for the "estimate error," I think the log-/normal distribution can be a reasonable approximation, especially when considering the uncertainty around the impact of one's actions on the probability of catastrophe.

The basic framework of this post still applies, and many of its conclusions may as well, even when other types of probability distributions are assumed.
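The normal-distribution version of this adjustment can be made concrete with a small numeric sketch. This assumes (for illustration only) a normal prior over the expected value of one's actions and normally distributed, unbiased estimate error; the numbers are invented. With prior N(mu0, s0²) and an estimate X carrying error variance se², the posterior mean is the familiar precision-weighted average of prior and estimate.

```python
# Illustrative sketch of the Bayesian adjustment under a normal prior and
# normally distributed estimate error (invented numbers, not from the post).
# With prior N(mu0, s0^2) and an unbiased estimate X with error variance
# se^2, the posterior mean is a precision-weighted average:
#   posterior = (X/se^2 + mu0/s0^2) / (1/se^2 + 1/s0^2)

def adjusted_estimate(estimate, est_sd, prior_mean, prior_sd):
    w_est = 1.0 / est_sd ** 2      # precision (inverse variance) of the estimate
    w_prior = 1.0 / prior_sd ** 2  # precision of the prior
    return (estimate * w_est + prior_mean * w_prior) / (w_est + w_prior)

# A well-understood intervention: modest claim, modest error bars.
# The estimate moves only partway back toward the prior.
print(adjusted_estimate(estimate=5.0, est_sd=1.0, prior_mean=1.0, prior_sd=2.0))

# A "Pascal's Mugging"-style claim: enormous claimed value with comparably
# enormous estimate error. The adjustment pulls the posterior almost all
# the way back to the prior mean.
print(adjusted_estimate(estimate=1e10, est_sd=1e10, prior_mean=1.0, prior_sd=2.0))
```

The second call shows the key qualitative behavior: when the error bars grow as fast as the claimed value, the posterior barely moves from the prior, no matter how large the claim.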

My views on existential risk reduction are outside the scope of this post. The only mention I make of existential risk reduction is to critique the argument that "charities working on reducing the risk of sudden human extinction must be the best ones to support, since the value of saving the human race is so high that 'any imaginable probability of success' would lead to a higher expected value for these charities than for others." Note that Eliezer Yudkowsky and Michael Vassar also appear to disapprove of this argument, so it seems clear that disputing this argument is not the same as arguing against existential risk reduction charities.

For the past few years we have considered catastrophic risk reduction charities to be lower on GiveWell's priority list for investigation than developing-world aid charities, but still relatively high on the list in the scheme of things. I've recently started investigating these causes a bit more, starting with SIAI (see LW posts on my discussion with SIAI representatives and my exchange with Jaan Tallinn). It's plausible to me that asteroid risk reduction is a promising area, but I haven't looked into it enough (yet) to comment more on that.

My informal objections to what I term EEV. Several have criticized the section of my post giving informal objections to what I term the EEV approach (by which I meant explicitly estimating expected value using a rough calculation and not performing a Bayesian adjustment). This section was intended only as a very rough sketch of what unnerves me about EEV; there doesn't seem to be much dispute over the more formal argument I made against EEV; thus, I don't plan on responding to critiques of this section.

Why We Can't Take Expected Value Estimates Literally (Even When They're Unbiased)

75 HoldenKarnofsky 18 August 2011 11:34PM

Note: I am cross-posting this GiveWell Blog post, after consulting a couple of community members, because it is relevant to many topics discussed on Less Wrong, particularly efficient charity/optimal philanthropy and Pascal's Mugging. The post includes a proposed "solution" to the dilemma posed by Pascal's Mugging that has not been proposed before as far as I know. It is longer than usual for a Less Wrong post, so I have put everything but the summary below the fold. Also, note that I use the term "expected value" because it is more generic than "expected utility"; the arguments here pertain to estimating the expected value of any quantity, not just utility.

While some people feel that GiveWell puts too much emphasis on the measurable and quantifiable, there are others who go further than we do in quantification, and justify their giving (or other) decisions based on fully explicit expected-value formulas. The latter group tends to critique us - or at least disagree with us - based on our preference for strong evidence over high apparent "expected value," and based on the heavy role of non-formalized intuition in our decision-making. This post is directed at the latter group.

We believe that people in this group are often making a fundamental mistake, one that we have long had intuitive objections to but have recently developed a more formal (though still fairly rough) critique of. The mistake (we believe) is estimating the "expected value" of a donation (or other action) based solely on a fully explicit, quantified formula, many of whose inputs are guesses or very rough estimates. We believe that any estimate along these lines needs to be adjusted using a "Bayesian prior"; that this adjustment can rarely be made (reasonably) using an explicit, formal calculation; and that most attempts to do the latter, even when they seem to be making very conservative downward adjustments to the expected value of an opportunity, are not making nearly large enough downward adjustments to be consistent with the proper Bayesian approach.

This view of ours illustrates why - while we seek to ground our recommendations in relevant facts, calculations and quantifications to the extent possible - every recommendation we make incorporates many different forms of evidence and involves a strong dose of intuition. And we generally prefer to give where we have strong evidence that donations can do a lot of good rather than where we have weak evidence that donations can do far more good - a preference that I believe is inconsistent with the approach of giving based on explicit expected-value formulas (at least those that (a) have significant room for error and (b) do not incorporate Bayesian adjustments, which are very rare in these analyses and very difficult to do both formally and reasonably).

The rest of this post will:

  • Lay out the "explicit expected value formula" approach to giving, which we oppose, and give examples.
  • Give the intuitive objections we've long had to this approach, i.e., ways in which it seems intuitively problematic.
  • Give a clean example of how a Bayesian adjustment can be done, and can be an improvement on the "explicit expected value formula" approach.
  • Present a versatile formula for making and illustrating Bayesian adjustments that can be applied to charity cost-effectiveness estimates.
  • Show how a Bayesian adjustment avoids the Pascal's Mugging problem that those who rely on explicit expected value calculations seem prone to.
  • Discuss how one can properly apply Bayesian adjustments in other cases, where less information is available.
  • Conclude with the following takeaways:
    • Any approach to decision-making that relies only on rough estimates of expected value - and does not incorporate preferences for better-grounded estimates over shakier estimates - is flawed.
    • When aiming to maximize expected positive impact, it is not advisable to make giving decisions based fully on explicit formulas. Proper Bayesian adjustments are important and are usually overly difficult to formalize.
    • The above point is a general defense of resisting arguments that both (a) seem intuitively problematic and (b) have thin evidential support and/or room for significant error.

continue reading »
Comment author: patrissimo 02 January 2011 07:04:23AM 4 points [-]

Completely agree with your general point on marginal analysis (although I'm a TDT skeptic), and am a fan of GiveWell, but this is trivially wrong:

It is not possible for everyone to behave this way in elections: no voter is able to consider the existing distribution of votes before casting their own.

This seems to assume away information about the size of the electorate as well as any predictive power about the outcome. Surely the marginal benefit of a Presidential vote in a small swing state is massively higher than in a large solidly Democratic state, for example. And in addition to historical results, there is polling data in advance of the election to improve predictions.

Besides this being theoretically true, we can see it empirically from the spending patterns of both Presidential campaigns and political parties on Congressional races. They allocate money to the states / races where they believe it will do the most marginal good, which is often a very unequal distribution. Thus they do, in fact, "consider the existing distribution of votes before casting" their advertising dollars.

Comment author: HoldenKarnofsky 03 January 2011 09:43:55PM 3 points [-]

Patrissimo, fair enough. I was thinking that voters can't vote with the same degree of knowledge of the existing situation that they can have with blood donations. Arguments over TDT certainly seem more relevant to voting than to blood donations. But you are right that voters have lots of relevant information about the likely distribution of votes that can be productively factored into their decisions regardless of the TDT debate. Glad to hear you're a fan of GiveWell.

Comment author: Perplexed 25 December 2010 07:08:34AM *  5 points [-]

I take it that you're suggesting marginal analysis based on the standard correct classical causal decision theory (in which no one is responsible for saving a life by donating blood unless someone would have actually died had that donation not been made) out of either belated humility about the probability of an SIAI-originating decision theory being correct, or because you're planning to actually convince someone and you don't want to invoke Hofstadterian superrationality in place of the standard correct decision theory?

:)

My guess would be that at the margin, a blood donation saves less than 0.00001 lives. (Otherwise, compensation would be increased for the paid donors). But, if you want to use a TDT/UDT style analysis, here are some relevant statistics from the American Red Cross:

  • The number of blood donations collected in the U.S. in a year: 16 million (2006).
  • The number of patients who receive blood in the U.S. in a year: 5 million (2006).

Given these numbers, I would estimate that roughly 0.5 million (US) lives are saved (more accurately, extended) by blood products annually. If you adopt the assumption that all blood comes from voluntary, uncompensated donations, and divide those 0.5 million lives among the 16 million annual donations, you get one life saved for every 32 pints donated - not as much as jsteinhardt hoped, but still significant enough to earn a major warm-and-fuzzy.
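The back-of-the-envelope arithmetic above can be reproduced directly. The donation and patient counts are the Red Cross figures quoted in the comment; the 0.5 million "lives saved" figure is the commenter's own rough estimate, not a Red Cross statistic.

```python
# Reproducing the comment's back-of-the-envelope arithmetic. The donation
# and patient counts are the quoted Red Cross figures (2006); the lives-saved
# figure is the commenter's own rough estimate.

donations_per_year = 16_000_000   # U.S. blood donations collected per year
patients_per_year = 5_000_000     # U.S. patients receiving blood per year
lives_saved_estimate = 500_000    # commenter's rough estimate of lives saved

# Under the (stated) simplifying assumption that all blood comes from
# voluntary, uncompensated donations:
pints_per_life = donations_per_year / lives_saved_estimate
print(pints_per_life)  # 32.0 pints donated per life saved, as in the comment
```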

Comment author: HoldenKarnofsky 29 December 2010 05:01:06PM 15 points [-]

This is Holden Karnofsky, the co-Executive Director of GiveWell, which is referenced in the top-level article and elsewhere on this thread.

I think there is an important difference between discussing the marginal impact of a blood donation and the marginal impact of a vote. When it comes to blood donations, it is possible for everyone to simultaneously follow the rule: "Give blood only when the supply of donations is low enough that an additional donation would have high expected impact", with a reasonable outcome. It is not possible for everyone to behave this way in elections: no voter is able to consider the existing distribution of votes before casting their own.

I am only casually familiar with TDT/UDT, but it seems to me that following the rule "Give blood only when the supply of donations is low enough that an additional donation would have high expected impact" should get about the same amount of credit under TDT/UDT as giving blood, and thus the extra impact of actually giving blood (as opposed to following that rule) is small regardless of which decision theory one is using.

I bring this up because the discussion of marginal blood donations is parallel to analysis GiveWell often does of the marginal impact of donations. We do everything we can to understand the marginal (not average) impact of a donation and recommend organizations on this basis, and we believe this is a very important and unique element of what we offer (more on this issue). We try to push donors to underfunded charities and away from overfunded ones, and I do not think the validity of this depends on any controversial (even controversial-within-Less-Wrong) view on decision theory, though I am open to arguments that it does.
