Apparently, at a recent EA summit Robin Hanson berated the attendees for giving to more than one charity. I think his critique is salient: given our human scope insensitivity, giving all your charity money to one cause feels like helping with only *one* thing, even if that one organization does vastly more good, much more efficiently, than any other group, so that every dollar given to it does more good than anything else that could be done with that dollar. The more rational and more effective approach is to find the most efficient charity and give only to it, until it has achieved its goal so completely that it is no longer the most efficient charity.

That said, I feel there are at least some circumstances under which it is appropriate to divide one's charity dollars: namely, those that involve risky investments.

If a positive singularity were to occur, the impact would be enormous: it would swamp any other good that I could conceivably do. Yet I don't know how likely a positive singularity is; it seems to be a long shot. Furthermore, I don't know how much my charity dollars affect the probability one way or the other. It may be that a positive singularity will either happen or it won't, and there's not much I can do about it. There's a huge pay-off but high uncertainty. In contrast, I could (for instance) buy mosquito nets for third-world countries, which has a lower but much more certain pay-off.

Some people are more risk-seeking than others, and it seems to be a matter of preference whether one takes risky bets or more certain ones. However, there are "irrational" answers, since one can calculate the expected pay-off of a gambit by mere multiplication. It is true that it is imprudent to bet one's life savings on an unlikely chance of unimaginable wealth, but this is because of quirks of human utility calculation: losses are more painful than gains are enjoyable, and diminishing marginal returns are in play. To most of us, a gift of one billion dollars is not emotionally very different from a gift of two billion, so we are not indifferent between a 100% chance of one billion on the one hand, and a 50% chance of two billion with a 50% chance of nothing on the other, even though the expected values are equal. In fact, I would trade a 50/50 chance of a billion for a certain ten million. Still, we would do well to stick to mathematically calculated expected pay-offs for any "games" that are small enough or frequent enough that improbable flukes cancel out in the long run.
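To make that concrete, here is a minimal sketch of the arithmetic (the concave utility function is purely illustrative): expected value treats a sure billion and a 50/50 shot at two billion as equivalent, while diminishing marginal returns make the sure thing strictly better.

```python
import math

# Two gambles with identical expected value in dollars.
sure_thing = [(1.0, 1_000_000_000)]            # 100% chance of $1B
coin_flip  = [(0.5, 2_000_000_000), (0.5, 0)]  # 50% chance of $2B, 50% of nothing

def expected(gamble, f=lambda x: x):
    """Expected value of f(outcome) over (probability, outcome) pairs."""
    return sum(p * f(x) for p, x in gamble)

print(expected(sure_thing) == expected(coin_flip))  # True: equal expected dollars

# A concave "utility" (log of wealth above an arbitrary baseline) stands in for
# diminishing marginal returns: now the sure billion wins.
u = lambda x: math.log(x + 100_000)
print(expected(sure_thing, u) > expected(coin_flip, u))  # True
```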

Let's say you walk into the psychology department and Kahneman and Tversky offer you a trade-off: you can save 50 lives, or you can "sell" some or all of those lives, each for a 0.005% increase in the probability of an outcome in which no one ever dies again, every problem that has ever plagued humanity is solved, and post-humans impregnate the universe with life. That sounds fantastic, but at best you can only increase the probability of such an outcome by a quarter of a percent. Is any ratio of "lives saved" to "incremental increases in the probability of total awesomeness" rational? Is it just a matter of personal preference how much risk you personally decide to take on? Ought you to determine your conversion factor between human lives and increases in the probability of a positive singularity, and go all in based on whether the ratio you are offered is above or below your own (i.e., you're getting a "good deal")?
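As a sketch of that question (the valuation below is a made-up illustrative number, not a claim about what a positive singularity is worth), pure expected-value reasoning does reduce it to comparing the offered ratio against your own conversion factor:

```python
# Hypothetical worked example of the "conversion factor" question.
lives_on_offer = 50
prob_increase_per_life = 0.00005           # the 0.005% per life "sold"
singularity_in_life_equivalents = 10**7    # invented valuation, for illustration only

# Expected life-equivalents gained for each life sold:
ev_per_life_sold = prob_increase_per_life * singularity_in_life_equivalents

# Expected-value reasoning goes all in one way or the other.
if ev_per_life_sold > 1:
    print(f"Sell all {lives_on_offer}: {ev_per_life_sold:.0f} expected "
          f"life-equivalents per life sold.")
else:
    print(f"Keep the certain {lives_on_offer} lives.")
```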

I feel like there's a good chance that we'll screw it all up and be extinct in the next 200 years. I want to stop that, but I also want to hedge my bets. If it does all go boom, I want to have spent at least some of my resources making the time we have better for as many people as possible. It even seems selfish to not help those in need so that I can push up the probability of an awesome but highly uncertain future. That feels almost like making reckless investments with other people's money. But maybe I just haven't gotten myself out of the cognitive trap that Robin accused us of.

 

43 comments

Robin is correct. Here is an accessible explanation. Suppose you first give $1 to MIRI because you believe MIRI is the charity with the highest marginal utility in donations right now. The only reason you would then give the next $1 in your charity budget to anyone other than MIRI would be that MIRI is no longer the highest marginal utility charity. In other words, you'd have to believe that your first donation made a dent into the FAI problem, and hence lowered the marginal utility of a MIRI dollar by enough to make another charity come out on top. But your individual contributions can't make any such dent.
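A toy version of that argument, with invented marginal-utility numbers (a sketch, not a model of any real charity): because one donor's budget is far too small to change which charity has the highest marginal utility, a greedy dollar-by-dollar allocation puts everything in one place.

```python
# Toy greedy allocation: each dollar goes to whichever charity currently has
# the highest marginal utility. The curves below are invented for illustration.
def marginal_utility(charity, dollars_so_far):
    base = {"charity_X": 5.0, "charity_Y": 3.0}[charity]
    # Diminishing returns only bite at scales far beyond one donor's budget.
    return base / (1 + dollars_so_far / 1e8)

totals = {"charity_X": 0, "charity_Y": 0}
for _ in range(5_000):  # a $5,000 budget, allocated dollar by dollar
    best = max(totals, key=lambda c: marginal_utility(c, totals[c]))
    totals[best] += 1

print(totals)  # {'charity_X': 5000, 'charity_Y': 0}: every dollar to one charity
```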

Some sensible reasons for splitting donations involve donations at different times (changes in room for more funding, etc.) and donations that are highly correlated with many other people's donations (e.g. the people giving to GiveWell top charities) and might therefore actually make dents.

> Suppose you first give $1 to MIRI because you believe MIRI is the charity with the highest marginal utility in donations right now. The only reason you would then give the next $1 in your charity budget to anyone other than MIRI would be that MIRI is no longer the highest marginal utility charity.

You're assuming you're certain about your estimates of the charities' marginal utility. If you're uncertain about them, things change.

Compare this to investing in financial markets. Why don't you invest all your money in a single asset with the highest return? Because you're uncertain about returns and diversification is a useful thing to manage your risk.
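A quick illustration of that point with invented numbers: mixing two assets with the same expected return but independent noise keeps the return and cuts the volatility, which is the whole appeal of diversification for a risk-averse investor.

```python
import random

random.seed(0)
n = 100_000

# Two hypothetical assets, each ~5% expected return with 20% volatility,
# with independent noise. Numbers are purely illustrative.
a = [0.05 + random.gauss(0, 0.20) for _ in range(n)]
b = [0.05 + random.gauss(0, 0.20) for _ in range(n)]
mix = [(x + y) / 2 for x, y in zip(a, b)]  # a 50/50 portfolio

def mean_and_vol(xs):
    m = sum(xs) / len(xs)
    return m, (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

print(mean_and_vol(a))    # roughly (0.05, 0.20)
print(mean_and_vol(mix))  # roughly (0.05, 0.14): same mean, lower volatility
```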

> diversification is a useful thing to manage your risk

But presumably you're risk-neutral with respect to altruism, while not risk-neutral with respect to your own personal finances.

I don't see being risk-neutral with respect to altruism as obvious. If it turns out that you misallocated your charity dollars, you have incurred opportunity costs. In general, people are not risk-neutral with respect to things they care about.

Well, you're probably less risk-averse with regard to altruism. I imagine most people would still be upset to see the charity they've been donating to for years go under.

No, I'm not relying on that assumption, though I admit I was not clear about this. The argument goes through perfectly well if we consider expected marginal utilities.

Investors are risk-averse because not-too-unlikely scenarios can affect your wealth enough to make the concavity of your utility function over wealth matter. For FAI or world poverty, none of your donations at a given time will make enough of a dent.

I think the countervailing intuition comes from two sources: 1) Even when instructed about the definition of utility, certainty equivalents of gambles, and so on, people have a persistent intuition that utility has declining marginal utility. 2) We care not only about poor people being made better off (where our donations can't make a dent) but also about creating a feeling of moral satisfaction within ourselves (where donations to a particular cause can satiate that feeling, leading us to want to help some other folks, or cute puppies).

> Investors are risk-averse because not-too-unlikely scenarios can affect your wealth enough to make the concavity of your utility function over wealth matter.

You are wrong about this. See e.g. here or in a slightly longer version here.

But let's see how your intuition works. Charity A is an established organization with well-known business practices, and for years it has steadily been generating about 1 QALY for $1. Charity B is a newcomer that no one really knows much about. As far as you can tell, $1 given to it has a 1.1% chance to generate 100 QALYs and is wasted otherwise, but you're not sure about these numbers; they are just a low-credence guess. To whom do you donate?

Adding imprecise probability (a 1.1% credence that I'm not sure of) takes us a bit afield, I think. Imprecise probability doesn't have an established decision theory in the way probability has expected utility theory. But that aside, assuming that I'm calibrated in the 1% range and good at introspection, and my introspection really tells me that my expected QALY/$ for charity B is 1.1, I'll donate to charity B. I don't know how else to make this decision. I'm curious to hear how much meta-confidence/precision you need for that 1.1% chance for you to switch from A to B (or go "all in" on B). If not even full precision (e.g. the outcome being tied to an RNG) is enough for you, then you're maximizing something other than expected QALYs.

(I agree with Gelman that risk-aversion estimates from undergraduates don't make any financial sense. Neither do estimates of their time preference. That just means that people compartmentalize or outsource financial decisions where the stakes are actually high.)

> takes us a bit afield, I think

If you're truly risk-neutral, you discount all uncertainty to zero; the expected value is all you'd care about.

> my introspection really tells me that my expected QALY/$ for charity B is 1.1

Your introspection tells you that you're uncertain. Your best guess is 1.1 but it's just a guess. The uncertainty is very high.

> I don't know how else to make this decision.

Oh, there are plenty of ways; just look at finance. Here's a possible starting point.

> I agree with Gelman that risk-aversion estimates from undergraduates don't make any financial sense.

Gelman's point has nothing to do with whether undergrads have any financial sense or not. Gelman's point is that treating risk aversion as solely a function of the curvature of the utility function makes no sense whatsoever -- for all humans.

Let me try to refocus a bit. You seem to want to describe a situation where I have uncertainty about probabilities, and hence uncertainty about expected values. If this is not so, your points are plainly inconsistent with expected utility maximization, assuming that your utility is roughly linear in QALYs in the range you can affect. If you are appealing to imprecise probability, what I alluded to by "I have no idea" is that there are no generally accepted theories (certainly not "plenty") for decision making with imprecise credence. It is very misleading to invoke diversification, risk premia, etc. as analogous or applicable to this discussion. None of these concepts make any essential use of imprecise probability in the way your example does.

> You seem to want to describe a situation where I have uncertainty about probabilities, and hence uncertainty about expected values.

Correct.

> there are no generally accepted theories (certainly not "plenty") for decision making with imprecise credence.

Really? Keep in mind that in reality people make decisions on the basis of "imprecise probabilities" all the time. In fact, outside of controlled experiments, it's quite unusual to know the precise probability because real-life processes are, generally speaking, not that stable.

> It is very misleading to invoke diversification, risk premia, etc. as analogous or applicable to this discussion.

On the contrary, I believe it's very illuminating to apply these concepts to the topic under discussion.

I did mention finance which is a useful example because it's a field where people deal with imprecise probabilities all the time and the outcomes of their decisions are both very clear and very motivating. You don't imagine that when someone, say, characterizes a financial asset as having the expected return of 5% with 20% volatility, these probabilities are precise, do you?

There are two very different sorts of scenarios with something like "imprecise probabilities".

The first sort of case involves uncertainty about a probability-like parameter of a physical system such as a biased coin. In a sense, you're uncertain about "the probability that the coin will come up heads" because you have uncertainty about the bias parameter. But when you consider your subjective credence about the event "the next toss will come up heads", and integrate the conditional probabilities over the range of parameter values, what you end up with is a constant. No uncertainty.
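A small sketch of that first case (the discretized uniform prior is just an example): whatever your uncertainty about the bias parameter, integrating it out leaves a single predictive probability for the next toss.

```python
# Uncertainty about a coin's bias, with an example prior over bias values.
biases = [i / 100 for i in range(101)]     # candidate bias parameters 0.00 .. 1.00
prior = [1 / len(biases)] * len(biases)    # uniform prior, purely illustrative

# P(next toss is heads) = sum over parameter values of P(heads | bias) * P(bias)
p_heads = sum(pb * b for pb, b in zip(prior, biases))
print(p_heads)  # 0.5: one constant credence, despite uncertainty about the parameter
```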

In the second sort of case, your very subjective credences are uncertain. On the usual definition of subjective probabilities in terms of betting odds this is nonsense, but maybe it makes some sense for boundedly introspective humans. Approximately none of the decision theory corpus applies to this case, because it all assumes that credences and expected values are constants known to the agent. Some decision rules for imprecise credence have been proposed, but my understanding is that they're all problematic (this paper surveys some of the problems). So decision theory with imprecise credence is currently unsolved.
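For concreteness, here is a sketch of one such proposed rule, Gamma-maximin (rank options by their worst expected value over a set of admissible credences), applied to the earlier charity A/B example with an invented credal set; it is one of the rules the surveyed literature finds problematic, not a settled method.

```python
# Gamma-maximin over an imprecise credence for charity B's success probability.
# The credal set is invented for illustration.
credal_set_for_B = [0.005, 0.011, 0.020]

def expected_qalys_B(p):
    return p * 100          # $1 to charity B: probability p of 100 QALYs, else 0

qalys_A = 1.0               # $1 to charity A: a certain 1 QALY

worst_case_B = min(expected_qalys_B(p) for p in credal_set_for_B)
print(worst_case_B)         # 0.5 < 1.0, so Gamma-maximin picks A, whereas plain
                            # expected value at the best-guess p = 0.011 picks B.
```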

Examples of the first sort are what gives talk about "uncertain probabilities" its air of reasonableness, but only the second case might justify deviations from expected utility maximization. I shall have to write a post about the distinction.

> But when you consider your subjective credence about the event "the next toss will come up heads", and integrate the conditional probabilities over the range of parameter values, what you end up with is a constant. No uncertainty.

Really? You can estimate your subjective credence without any uncertainty at all? Your integration of the conditional probabilities over the range of parameter values involves only numbers you are fully certain about?

I don't believe you.

> Approximately none of the decision theory corpus applies to this case

So this decision theory corpus is crippled and not very useful. Why should we care much about it?

> So decision theory with imprecise credence is currently unsolved.

Yes, of course, but life in general is "unsolved" and you need to make decisions on a daily basis, not waiting for a proper decision theory to mature.

I think you overestimate the degree to which abstractions are useful when applied to reality.

The fact that the assumptions of an incredibly useful theory of rational decisionmaking turn out not to be perfectly satisfied does not imply that we get to ignore the theory. If we want to do seemingly crazy things like diversifying charitable donations, we need an actual positive reason, such as the prescriptions of a better model of decisionmaking that can handle the complications. Just going with our intuition that we should "diversify" to "reduce risk", when we know that those intuitions are influenced by well-documented cognitive biases, is crazy.

This has been incredibly unproductive I can't believe I'm still talking to you kthxbai

> I can't believe I'm still talking to you kthxbai

Ah.

Thank you for clarity.

I'm not sure what I should take away from that exchange.

Ignore the last sentence and take the rest for what it's worth :) I did the equivalent of somewhat tactlessly throwing up my hands after concluding that the exchange stopped being productive (for me at least, if not for spectators) a while ago.

Anything in particular you are wondering about? :-)

Just my original question. I'm not sure if diversification to mitigate charitable risk is a matter of preference or numeric objectivity.

Try making up your own mind..? :-)

Someone told me not to.

(this is a joke)

At this point you're supposed to fry your circuits and 'splode.

> You don't imagine that when someone, say, characterizes a financial asset as having the expected return of 5% with 20% volatility, these probabilities are precise, do you?

Those are not even probabilities at all.

Such an expression usually implies a normal probability distribution with the given mean and standard deviation. How do you understand probabilities as applied to continuous variables?
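For example (a sketch assuming exactly that normal model), the "5% expected return, 20% volatility" shorthand does assign probabilities to events about the continuous return:

```python
from statistics import NormalDist

# Read "expected return 5%, volatility 20%" as a normal distribution of returns.
returns = NormalDist(mu=0.05, sigma=0.20)

print(f"P(return < 0)    = {returns.cdf(0.0):.2f}")    # ~0.40
print(f"P(return < -20%) = {returns.cdf(-0.20):.2f}")  # ~0.11
```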

The idea that efficient charity is about "doing good" and that "doing good" equates with "lives saved" is one giant availability bias. And of course there is absolutely no sense in going all in on any cause in a risky world with agents exhibiting time preference.

I'll argue the first and second claims separately. For the first, take as a premise that efficient charity is about maximising good. It is not obvious that "doing good" equates to preventing a preventable, premature death, my interpretation of "saving a life", since everyone has to die at some point, cryonics notwithstanding. A much better metric, which GiveWell in fact does use, is quality-adjusted life years (QALYs), independent of the exact choice of quality adjustments. It measures how many more "good" life years a person is expected to gain from the intervention. And still it is difficult to see how this is absolutely equal to "good". We see people donating to their local high school sports team, the Catholic Church, WWF, UNICEF, Wikipedia, the Linux Foundation and many more. All these people are said to "do good". Is "doing good" then not more about solving problems involving public goods? Of course, if I can extend a life by 20 QALYs for $1000 it is difficult to see how I could get the same amount of good from donating to the local high school sports team, but a couple of hours more Wikipedia uptime, especially in the developing world, could measure up.

Why then do people go on and on about the QALY metric? Simple: it is a number that can be relatively easily calculated, instead of having to map all public goods to some kind of measure of good. It is more available, thus it is used.

Now to the second claim. We live in a risky world and we exhibit time preference. Good now is more valuable than the same amount of good later; we have a discount function. So I have a very good reason to think about whether I want to donate to GiveWell to purchase some QALYs now, or donate to MIRI/FHI to purchase a lot of QALYs in a hundred years. But what is worse, any intervention, any donation, is inherently risky, since the conversion from donated money to actual good can fail, if only through failures of the organisation receiving the money. Take GiveWell's clear fund to be a certain conversion, and take their second-highest-rated charity to be risky, with a 50% chance that a donated $1000 is turned into double the QALYs of the clear fund plus one QALY more, and a 50% chance of nothing happening. (Whether this is per donation or for all donations at once is irrelevant here.) Should I go all in on either? Basic betting theory says no. I'll have to mix. And I'll have to mix according to my values: how much I care about the world in any given future and how much risk I am willing to take.
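The comment doesn't name a specific betting theory, but a Kelly-style log-utility objective is one standard reading; a sketch with the clear fund normalized to 1 QALY per $1000 (so the risky charity pays 3 QALYs per $1000 half the time, and nothing otherwise) does recommend a mix, whereas plain expected QALYs would go all in on the risky option.

```python
import math

budget = 10  # units of $1000, purely illustrative

def expected_log_qalys(f):
    """Expected log of total QALYs when fraction f of the budget goes to the risky charity."""
    good = (1 - f) * budget * 1 + f * budget * 3   # risky pay-off realized
    bad = (1 - f) * budget * 1                     # risky pay-off wasted
    if bad <= 0:
        return float("-inf")                       # log utility abhors a total wipe-out
    return 0.5 * math.log(good) + 0.5 * math.log(bad)

best_f = max((f / 100 for f in range(100)), key=expected_log_qalys)
print(best_f)  # 0.25: a mixed allocation, not all-in

# Plain expected QALYs (0.5 * 3 = 1.5 per unit vs. 1.0) are maximized by going
# all in on the risky charity -- which is the other side of this disagreement.
```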

Also, I am human. I am donating not because I am altruistic, but because it gives me a good feeling. Moreover, I can only decide for myself how to donate, not for all the world at once, so there's that.

Edit: I'm happy you linked Yvain's article, because there is a point in it about investing that makes my argument even more complicated. But I'm even happier, because the point about investing the money is something I had thought about before and have to research more, again.

I don't actually value lives saved very much. Death's just not that big a deal. I'm more interested in producing states of wonder and joy. I want to bring as many people as I can to the level of education and self-awareness where they can appreciate the incredibleness of the world. The saddest thing that I can think of is that there are hundreds of thousands of people who are just as smart as I am who were never given the opportunities or encouragement that I was to come to love the world. I'm in the business of poverty alleviation and disease eradication because removing those constraints allows us to maximize the number of fully flourishing human lives. Similarly, I care about a singularity because the amount of insight to which the species has access will go through the roof.

I use "lives saved" since, as you point out, it's a sort of hot-word to which people react and which they associate with "doing good."

> Should I go all in on either? Basic betting theory says no.

This is my issue. I'm not sure what justification we have for ignoring the theory, assuming we actually want to be maximally helpful. Can you elaborate?

There is absolutely no justification for ignoring betting theory. It was formulated for turning money into more money, but it applies equally well to turning any one cardinal quantity into another cardinal quantity. Some time ago there was an absurdly long article on here about why one should not diversify one's donations, assuming there is no risk, which made the point moot.

And even if there was no risk, my utility is marginal. I'll donate some to one cause until that desire is satisfied, I'll then donate to another cause until that desire is satisfied and so on. This has the dual benefit of benefiting multiple causes I care about and of hedging against potentially bad metrics like QALY.

I don't understand.

> There is absolutely no justification for ignoring betting theory.

and

> And even if there was no risk, my utility is marginal. I'll donate some to one cause until that desire is satisfied, I'll then donate to another cause until that desire is satisfied and so on. This has the dual benefit of benefiting multiple causes I care about and of hedging against potentially bad metrics like QALY.

Aren't these mutually exclusive statements or am I misunderstanding? What is your position?

> What is your position?

Diversify, that is my position.

> Aren't these mutually exclusive statements or am I misunderstanding?

Misunderstanding. Assuming risk, we have to diversify. But even when we assume no risk, we exhibit diminishing marginal utility from any cause, so we should diversify there too, just as you don't put all of your money above subsistence into any one good.

But the reason I don't put all my money into one good (that said, I'm pretty close: after food and rent, it's just books, travel, and charity) is that my utility function has built-in diminishing marginal returns. I don't get as much enjoyment out of doing something that I've already been doing a lot. If I am sincerely concerned about the well-being of others and effective charity, then there is no significant change in marginal impact per dollar I spend. While it is a fair critique that I may not actually care, I want to care, meaning I have a second-order term on my utility function that is not satisfied unless I am being effective with my altruism.

> If I am sincerely concerned about the well-being of others and effective charity, then there is no significant change in marginal impact per dollar I spend.

Oh, you are sincerely concerned? Then of course any contribution you make to any efficient cause like world poverty will be virtually zero relative to the problem, so spend away. But personally I can see people go "ten lives saved is good enough, let's spend the rest on booze". Further arguments could be made that it is unfair that only people in Africa get donations but not people in India, or similar.

But that is only the marginal argument knocked down. The risk argument still stands and is way stronger anyway.

> While it is a fair critique that I may not actually care, I want to care, meaning I have a second-order term on my utility function that is not satisfied unless I am being effective with my altruism.

Signaling, signaling, signaling all the way down.

Okay, fine, maybe it's signaling. I'm okay with that, since the part of me that does really care thinks "if my desire to signal leads me to help effectively, then it's fine in my book", but then I'm fascinated, because that part of me may actually be motivated by my desire to signal my kindness. It may be signaling "all the way down", but it seems to be alternating levels of signaling motivated by altruism motivated by signaling. Maybe it eventually stabilizes at one or the other.

I don't care. Whether I'm doing it out of altruism or doing it for signaling (or, as I personally think, neither, but rather something more complex, involving my choice of personal identity, which I suspect uses the neural architecture that was developed for playing status games but has been generalized to compare against an abstract ideal instead of other agents), I do want to be maximally effective.

If I know what my goals are, what motivates them is not of great consequence.

I think the idea that you are deciding for sufficiently similar minds as well as your own may help in some way. If you and everyone who thinks like you is trading not-saved humans now for a slightly increased chance of saving everyone in the future, what would you decide?

(Note: if there are 400 people who think like you and you're using multiplicative increases, you've just increased the chance of success by four orders of magnitude. If you're using additive increases, you've gone over unity. Stick to odds for things like this, maybe!)
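A sketch of why the parenthetical recommends odds (the baseline probability and the per-person odds factor are invented for illustration; the 0.25% additive increment comes from the thought experiment above):

```python
# Combining many like-minded donors' "increases in the probability of success".
baseline_p = 0.01              # invented prior probability of a positive singularity
per_person_increase = 0.0025   # the 0.25% from the thought experiment
people = 400

# Adding probability increments blows past 1, which is not a probability.
print(baseline_p + people * per_person_increase)   # 1.01 -- "over unity"

# Multiplying the odds by a fixed factor per person always stays below 1.
odds = baseline_p / (1 - baseline_p)
odds_factor = 1.01             # hypothetical strength of each person's contribution
odds *= odds_factor ** people
print(odds / (1 + odds))       # ~0.35: a coherent probability
```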

Well, the marginal impact of a life-not-saved on the probability of a p-sing (can I call it that? What I really want is a convenient short-hand for "tiny incremental increase in the probability of a positive singularity.") probably goes down as we put more effort into achieving a p-sing, but not significantly at the scale of the problem. The law of diminishing marginal returns gets you every time.

Let's not get too caught up in the numbers (which I do think are useful for considering a real trade-off). I don't know how likely a p-sing is, nor how much my efforts can contribute to one. I am interested in analysis of this question, but I don't think we can have high confidence in a prediction that goes out 20 years or more, especially if the situation requires the introduction of such world-shaping technologies as would lead up to a singularity. If everyone acts as I do, but we're massively wrong about how much impact our efforts have (which is likely), then we all waste enormous effort on nothing.

Given that you are only one individual, the increase in the chance of a p-sing for each unit of money you give is roughly linear, so diminishing marginal returns shouldn't be an issue.

> Also, I am human. I am donating not because I am altruistic, but because it gives me a good feeling.

and

> And even if there was no risk, my utility is marginal. I'll donate some to one cause until that desire is satisfied, I'll then donate to another cause until that desire is satisfied and so on.

Both of these points are founded on the idea of a philanthropist giving solely for the fuzzies, which ignores the whole concept of effective altruism. (Technically, it's true that my utility function depends on how my actions affect me, but I have higher-order desires which modulate my desire for fuzzies. Now that I know that most charities are ineffective, I won't feel good about my giving unless I'm giving effectively.)

By the way, can anyone tell me why my syntax in the first line isn't working?

If I recall correctly, articles don't support markdown syntax, which you used. They only support actual HTML tags.

Anyone care to give me the proper syntax? The things (from Google) I tried didn't work.

If my hypothesis is correct, the syntax would be

Robin Hanson berated the attendees

nope. no dice.

fixed it.