Steven Landsburg argued, in an oft-quoted article, that the rational way to donate to charity is to give everything to the charity you consider most effective, rather than diversify; and that this is always true when your contribution is much smaller than the charities' endowments. Besides an informal argument, he provided a mathematical addendum for people who aren't intimidated by partial derivatives. This post will bank on your familiarity with both.

I submit that the math is sloppy and the words don't match the math. This isn't to say that the entire thing must be rejected; on the contrary, an improved set of assumptions will fix the math and make the argument whole. Yet it is useful to understand the assumptions better, whether you want to adopt or reject them. 

And so, consider the math. We assume that our desire is to maximize some utility function U(X, Y, Z), where X, Y and Z are the total endowments of three different charities. It's reasonable to assume U is smooth enough that we can take derivatives and apply basic calculus with impunity. We consider our own contributions Δx, Δy and Δz, and form a linear approximation to the updated value U(X+Δx, Y+Δy, Z+Δz). If this approximation is close enough to the true value, the rest of the argument goes through: given that the sum Δx+Δy+Δz is fixed, it's best to put everything into the charity with the largest partial derivative at (X,Y,Z).
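Here is that rule as a minimal code sketch. Everything specific in it is my own illustration, not Landsburg's: the log-utility U and the endowment figures are made-up placeholders.

```python
import numpy as np

def best_single_charity(U, endowments, budget, eps=1e-4):
    """Landsburg's rule as code: estimate each partial derivative of U at
    the current endowments by a finite difference, then put the entire
    budget on the charity with the largest one."""
    endowments = np.asarray(endowments, dtype=float)
    grad = np.empty_like(endowments)
    for i in range(len(endowments)):
        bumped = endowments.copy()
        bumped[i] += eps
        grad[i] = (U(bumped) - U(endowments)) / eps   # ~ dU/dx_i
    allocation = np.zeros_like(endowments)
    allocation[np.argmax(grad)] = budget              # all eggs in one basket
    return allocation

# Toy log-utility, purely for illustration:
U = lambda e: float(np.sum(np.log(e)))
print(best_single_charity(U, endowments=[1e6, 2e6, 5e5], budget=100.0))
# -> [  0.   0. 100.]: under log-utility the smallest endowment has the
#    largest marginal utility, so it gets everything.
```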

The approximation, Landsburg says, is good "assuming that your contributions are small relative to the initial endowments". Here's the thing: why? Suppose Δx/X, Δy/Y and Δz/Z are indeed very small - what then? Why does it follow that the linear approximation works? There's no explanation, and if you think this is because it's immediately obvious - well, it isn't. It may sound plausible, but the math isn't there. We need to go deeper.

We don't need to go all that deep, actually. The tool which allows us to estimate our error is Taylor's theorem for several variables. If you stare into that for a bit, taking n=1, you'll see that the leftovers from U(X+Δx, Y+Δy, Z+Δz), after we take out U(X,Y,Z) and the linear terms with the partial derivatives, are a bunch of terms that are basically second derivatives and mixed derivatives of U times quadratic contributions. In other, scarier words, things like ∂²U/∂x²·(Δx)² and ∂²U/∂x∂y·(Δx·Δy). To bound the error, the second/mixed derivatives are taken at their maximal values over the whole segment from (X,Y,Z) to (X+Δx, Y+Δy, Z+Δz). And I've also ignored some small constant factors there, to keep things simple.
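For the record, here is the expansion being invoked, with the constant factors restored; M_ij denotes the maximum of |∂²U/∂i∂j| over the segment between the two points (a standard bound, nothing extra assumed):

```latex
U(X+\Delta x,\, Y+\Delta y,\, Z+\Delta z)
  = U(X,Y,Z)
  + \frac{\partial U}{\partial x}\Delta x
  + \frac{\partial U}{\partial y}\Delta y
  + \frac{\partial U}{\partial z}\Delta z
  + R_2,
\qquad
|R_2| \;\le\; \frac{1}{2} \sum_{i,j \in \{x,y,z\}} M_{ij}\, |\Delta i|\,|\Delta j|
```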

These leftovers are to be compared with the linear terms like ∂U(X, Y, Z)/∂x*(Δx). If the leftovers are much smaller than the linear terms, then the approximation is pretty good. When will that happen?

Let's look at the mixed derivatives like ∂²U/∂x∂y first. A partial derivative like ∂U/∂x measures the effectiveness of charity X - how much utility it brings per dollar, currently. A mixed derivative measures how much the effectiveness of X changes as we donate money to Y. If charities X and Y operate in different domains, this is likely to be very close to zero, and the mixed derivative terms can be ignored. If X and Y work on something related, this isn't likely to be zero at all, and the overall argument fails. So, we need a new assumption: X and Y operate in different domains, or more generally do not affect each other's effectiveness.

(Here's an artificial example: say you have X working on buying food for hungry Eurafsicans, and Y working on delivering food aid to Eurafsica. If things are perfectly balanced between them, then your $100 contribution to either is not of much use, but a $50 contribution to both is helpful. This is because X's effectiveness rises if Y gets a little more money, and vice versa. Note that it may also happen that the interference between X and Y hinders rather than helps their common goal, and in that case it may still be better to give money to only one of them, but not because of Landsburg's argument).
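To see the numbers, here's a tiny version of that example. The min() form of U is my own stand-in for "perfectly balanced, perfectly complementary", not something taken from the post:

```python
# Utility is bottlenecked by whichever of "buying food" (X) and "delivering
# food" (Y) has less funding: U = min(X, Y). Any U with a large positive
# mixed derivative behaves qualitatively the same way.
U = lambda x, y: min(x, y)

X = Y = 1000.0                    # perfectly balanced endowments
base = U(X, Y)
print(U(X + 100, Y) - base)       # $100 to X alone  -> 0.0 extra utility
print(U(X, Y + 100) - base)       # $100 to Y alone  -> 0.0 extra utility
print(U(X + 50, Y + 50) - base)   # $50/$50 split    -> 50.0 extra utility
```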

Now that the mixed derivative terms are gone, let's look at the second derivative terms like ∂²U/∂x²·(Δx)² and compare them to the linear terms like ∂U(X, Y, Z)/∂x·Δx. Cancelling out the common factor Δx, we see that for the linear approximation to be good, the effectiveness of the charity (the first derivative) must be much greater than the maximum rate of effectiveness change (over the given interval) times the contribution. This gives us a practical criterion to follow. If our contribution is very large, or if it is liable to influence the charity's effectiveness considerably - and "large" and "considerably" here are words that can in principle be measured and made precise - then the linear approximation isn't so good, and we can't automatically infer that banking everything on one charity is the right thing to do. It may still be, but that requires a deeper analysis, one that may or may not be feasible given our amount of uncertainty.

(Here's an artificial example: say X is a non-profit that has a program to match your contribution dollar-for-dollar until they reach a set goal of donations, and they're $50 short of that goal. Say Y is another non-profit or charity that you normally consider to be 1.5 times as effective as X. Then, if your budget is $100, it's optimal to give $50 to X and $50 to Y, rather than everything to one of them. This is because X will undergo a radical change in effectiveness after the first $50, so the second derivative will be large over that range).
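Numerically, with my stand-in assumptions (a matched dollar to X counts double until the $50 goal is met, and a dollar to Y always buys 1.5 utilons):

```python
def utility(to_x, to_y):
    matched = min(to_x, 50.0)               # the portion X's sponsor doubles
    return 2.0 * matched + (to_x - matched) + 1.5 * to_y

budget = 100
best = max(range(budget + 1), key=lambda a: utility(a, budget - a))
print(best, utility(best, budget - best))   # -> 50 175.0
# All to X: 2*50 + 50 = 150; all to Y: 1.5*100 = 150; the $50/$50 split wins.
```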

Where is the original criterion "keep Δx/X small", then? It failed to materialize; instead, the math tells us that we need to keep Δx small and ∂²U/∂x² small. The endowment size, as expected, is irrelevant; but let's try to make a connection with it anyhow. We can do it with a heuristic argument that goes something like this. The second derivative measures the way our donations influence the effectiveness of the charity, rather than its utility. If the charity is large and well-established, the way your money splits into administrative costs and actual good has probably stabilized and is unlikely to change; whereas if the charity is small, and especially if it's just starting out, your money may help it set up or change its infrastructure, which will change its effectiveness. So a large endowment probably correlates well with a small second derivative. And then the recipe becomes "keep Δx small and X large", which can be rephrased as the original "keep Δx/X small".

Does this re-establish the correctness of the original argument, then? To some extent, yet to my mind not a large one. For one thing, the correlation is not ideal; it's easy to think of exceptions, and in fact, if you're dealing with real charities that tell you something about how they operate, it may be easier for you to estimate whether the rate of effectiveness change is close to zero than to look at endowment size. But more importantly, the heuristic jump through the correlative hoop strips the argument of the numeric force that the correct version does have. Since we don't know how exactly X and ∂²U/∂x² are related - e.g. what the correlation factor is - we can't give any specific meaning to "keep Δx/X small". We can't say how small, even approximately: 1/10? 1/1000? 1/10⁶? This makes me suspect that the heuristic argument is not a good way to approach the truth, but may be a good way to convince someone to put everything into one charity, because normally Δx/X will appear to be rather small.

Once we've worked out the math, some deficiencies of the original informal article become clear. For instance, the talk about "making a dent" in the problem is a little off the mark:

So why is charity different? Here's the reason: An investment in Microsoft can make a serious dent in the problem of adding some high-tech stocks to your portfolio; now it's time to move on to other investment goals. Two hours on the golf course makes a serious dent in the problem of getting some exercise; maybe it's time to see what else in life is worthy of attention. But no matter how much you give to CARE, you will never make a serious dent in the problem of starving children. The problem is just too big; behind every starving child is another equally deserving child.

But it isn't making a dent in the problem that's the issue; it's making a dent in effectiveness. It's conceivable that my donation doesn't make a noticeable dent in the problem, but changes the rate of dent-making enough that the argument falls through. The words don't match the math. Similarly, consider the analogy with investment. Why doesn't it in fact work - why doesn't the math argument apply to investment portfolios? If you want to maximize profit, your small stock purchase is unlikely on its own to influence the stock (change its effectiveness) much. Landsburg's answer - putting it in terms of "making a dent in the problem of adding high-tech stocks" - is flawed: it presupposes that diversification is good by cordoning off different investment areas from each other - "high-tech stocks". The real reason is, of course, risk aversion - our utility function for investment is likely to be risk-averse. You may want to apply risk aversion to your charity donation as well, but in that case, Eliezer's advice to purchase fuzzies and utilons separately is persuasive.


To sum up, these assumptions will let you conclude that the rational thing to do is to donate all your charity budget to the single charity you currently consider most effective:

  1. Assume that a unified single-currency utility function describes your idea of these charities' utilities. The crucial thing you're assuming here is that X, Y and Z can be compared in terms of the good they're doing; that their amounts of good can be computed in the same "utilons". This is a strong assumption that may work for some and not others. Certainly most people reject it when all goals are at stake, and not just charity-giving; Complexity of Value discusses this. Why are things different when we restrict to helping others, and are they necessarily different? Yvain has presented an excellent argument for the latter position in Efficient Charity, while Phil Goetz provided an excellent argument against it in the comments. 
  2. Assume that the charities you're choosing between operate in different domains, or more generally speaking do not affect each other's effectiveness. If they do, putting all into one may still be the right thing to do, but finer analysis is needed.
  3. Assume that the effectiveness of the charities is influenced very little or not at all by your donation, and that the donation itself is not too large. In case of doubt, apply more precise math: rate of effectiveness change times donation must be much smaller - say, a few percent of - effectiveness itself (see the sketch below).
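As a quick sanity check of assumption 3, here's a minimal sketch; every number and name below is a placeholder, and you would have to supply your own estimates of a charity's effectiveness (∂U/∂x) and of how fast that effectiveness can change over the range of your donation (∂²U/∂x²):

```python
def linear_approx_ok(effectiveness, effectiveness_change_rate, donation,
                     tolerance=0.05):
    """True when the second-order term is at most `tolerance` (a few percent)
    of the first-order term: |d2U/dx2| * dx <= tolerance * dU/dx."""
    return abs(effectiveness_change_rate) * donation <= tolerance * effectiveness

# Hypothetical numbers: 10 utilons/$, effectiveness drifting by at most
# 0.001 utilons/$ per donated dollar, a $100 donation.
print(linear_approx_ok(10.0, 0.001, 100.0))   # True: 0.1 vs the 0.5 cutoff
```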

Complexity of value doesn't necessarily imply that people have more than one kind of utilon, it just says that people can value 1 unit of food + 1 unit of sex higher than 2 units of either. In other words, it says the second derivative terms are significant :-)

Thanks! Until I saw your comment, the third-from-last paragraph (combined with the impression of credibility created by the rest of the post) was making me very sad because of what it implies about the effectiveness of decision theory and the usefulness of formal goal systems. My probability of the success of the FAI project dropped sharply in a few minutes, then recovered when I read your comment.

[anonymous]

Do "finite charitable causes" require special consideration here?

For example, Bill Gates is pushing hard for polio eradication. Polio is a special kind of problem: once solved, it will stay solved forever. I see two special features of finite problems: first, solving them earlier is better than solving them later. The eradications of smallpox and rinderpest are delivering benefits forever while consuming zero additional resources. Second, the marginal utility of donation is pretty unusual - for disease eradication, donating more helps to suppress it more (which is good in its own right), donating lots delivers the huge benefit you're looking for, and donating even more than that would do nothing. Other finite problems, like research, will feature different curves (notably without the property of "slipping away" if insufficient resources are devoted to them for a time) but still seem unusual.

Note: By "finite", I mean that the solution is within sight. World hunger is a finite problem in the sense that bringing the world up to a post-scarcity economy, or even to the level of the industrialized economies (without changing the climate from original recipe to extra crispy), would solve it - but nobody has achieved the former yet, and even the latter is very far away. Commercial nuclear fusion is on the threshold of what I consider finite: we know it's possible, we know solving it is a matter of engineering and not discovering new physics, but at the same time it's very difficult and very expensive and will take a long time.

Let's look at some relevant quotes I noticed when I read that a few days ago and was posting excerpts into #lesswrong:

By contrast, the 14-year drive to wipe out smallpox, according to Dr. Donald A. Henderson, the former World Health Organization officer who began it, cost only $500 million in today’s dollars.

...

Right now, there are fewer than 2,000. The skeptics acknowledge that they are arguing for accepting more paralysis and death as the price of shifting that $1 billion to vaccines and other measures that prevent millions of deaths from pneumonia, diarrhea, measles, meningitis and malaria.

“And think of all the money that would be saved,” Mr. Gates went on, turning sarcastic. “It’d be like 5 percent of the dog food market in the United States.”

(I believe there is a fine Eliezer post on exactly the fallacious argument Mr. Gates is using there.)

One injection stops smallpox, but in countries with open sewers, children need polio drops up to 10 times.

Only one victim in every 200 shows symptoms, so when there are 500 paralysis cases, as in the recent Congo Republic outbreak, there are 100,000 more silent carriers.

Other causes of paralysis, from food poisoning to Epstein-Barr virus, complicate surveillance.

Also, in roughly one of every two million vaccinations, the live vaccine strain can mutate and paralyze the child getting it. And many poor families whose children are dying of other diseases are fed up with polio drives.

Smallpox has no natural reservoirs; its vaccine is made from cowpox, not weakened or dead smallpox, while polio vaccines are made from weakened or dead polio viruses and may themselves undo eradication. No one speaks of eradicating anthrax because it's impossible to reach all the natural sources of anthrax spores in deserts in Mongolia or wherever. Nor do we speak of eradicating Ebola because we don't want to extinguish various primate species.

Does squeezing the polio jell-o offer the best marginal returns? Are we appealing to sunk costs here? Eradicating polio may offer permanent benefits (though this is dubious for previously mentioned reasons), but still be a bad investment - similar to how one rarely invests in perpetual bonds.

I see no reason why the usual apparatus of highest marginal return and discount rates do not cover your finite charity distinction.

[anonymous]

Correcting factual errors:

Smallpox has no natural reservoirs

Neither does polio. "Some diseases have no non-human reservoir: poliomyelitis and smallpox are prominent examples."

polio vaccines are made from weakened or dead polio viruses and may themselves undo eradication.

First, the inactivated/killed polio vaccine cannot come back to life. Second, that risk for the attenuated vaccine is well known.

Eradicating polio may offer permanent benefits

Will.

You can disagree as to whether the benefits are worth the costs, of course. And perhaps finite problems can be analyzed within the usual framework - but I wanted to bring them up.

[gwern]

No, may. Even if you have zero known cases, you have not eradicated polio for sure because of the carriers and natural reservoirs and obscure little hidden villages. So you still need vaccines. And your own links point out that the attenuated vaccine can still be infectious!

This is believed to be a rare event, but outbreaks of vaccine-associated paralytic poliomyelitis (VAPP) have been reported, and tend to occur in areas of low coverage by OPV, presumably because the OPV is itself protective against the related outbreak strain.

To quote from one of the references:

The rate and pattern of VP1 divergence among the circulating vaccine-derived poliovirus (cVDPV) isolates suggested that all lineages were derived from a single OPV infection that occurred around 1983 and that progeny from the initiating infection circulated for approximately a decade within Egypt along several independent chains of transmission.

[anonymous]

What is a "contaminated natural source"? I am genuinely curious.

[gwern]

A variation on natural reservoir.

Your entire point seems to be that it's better to give to multiple charities when the joint utility of giving to those charities exceeds the benefit of giving all the money to one charity.

This circumstance exists in the real world for most individuals so infrequently as to be properly ignored. It is extremely unlikely that there is some combination of charities such that giving $5,000 to each of them will generate substantially better returns than giving $10,000 to the best available charity. Unless I'm ignoring important evidence, charities just don't work together that comprehensively, and non-huge sums of money do not have dramatic enough effects that it would be efficient to split them up.

Also, you chose an incredibly dense and inefficient way to make what seems like a very simple point.

Also, you chose an incredibly dense and inefficient way to make what seems like a very simple point

In general, I would caution against criticisms of this form for several reasons:

  • different thinking styles: what seems unnecessarily convoluted to one person may seem utterly natural to another;
  • hindsight bias: something may appear simple after you've worked it out, but that doesn't mean that working it out was easy while you were doing it;
  • incentives: one should think very carefully before writing any comment that sounds like "this post really ought not to have been written".

incentives: one should think very carefully before writing any comment that sounds like "this post really ought not to have been written".

While I don't necessarily hold that opinion of this particular post, it's a defensible position. I think that posts that use relatively complicated math where simple English would suffice substantially and negatively affect the quality of discourse. If someone has a OK point to make, it is arguably better that they not make it at all than that they make it in a convoluted manner, because that suggests to other people that it's OK to make such posts. It's certainly better that they start it out in the discussion section so that it turns into a more comprehensible post on the main page. Of course, the more original or interesting their actual idea, the more the benefits outweigh the costs.

Your different thinking styles criticism is absolutely on point though, I admit, assuming it actually applies.

Off the top of my head, one charity might be worse overall but might need a small amount of funding to attempt an experimental strategy aimed at improving it. If the likelihood of finding a better way than the most efficient charity pursues is high enough, then small funding to the experimental charity and the rest of your donation to the other could be optimal.

In general I am uneasy with suggestions that one should focus all their charitable energy in one direction, because people are far too prone to finding a local maximum and then ceasing to explore for higher maxima.

one charity might be worse overall but might need a small amount of funding to attempt an experimental strategy aimed at improving it.

If this were true, that would mean that that charity had extremely high but rapidly diminishing marginal returns, in which case you should give it money until those diminishing returns bring it below your next best option. I'm pretty confident that diverse investment is only proper where charities exhibit interactive returns (which is probably extremely rare at the scale of most people's charitable contributions) or where you are trying to maximize something other than effective charity.

I agree; I'd be quite surprised if it were at all common for separate charities working in the same field to be so well-balanced in scale that proportional contributions to both outweigh contributions to just one. Since there's no clear feedback mechanism to help people maximize expected utility in giving, there's no reason to expect the MUs to be anywhere close. Therefore we should strongly expect that a "bullet" strategy will outperform diversification. I dub this the Inefficient Charity Markets Hypothesis.

On the other hand, I wonder what percentage of charity contributions are given by the top 1% of donors, people who really can make a dent in these problems? Their impact probably dwarfs anything the vast majority of small donors in the audience would do. But I'd bet they're smart enough to realize this stuff doesn't apply to them.

But I'd bet they're smart enough to realize this stuff doesn't apply to them.

Or they've never heard Landsburg's argument anyway!

I don't think there's much need for heuristics like "rate of effectiveness change times donation must be much smaller - say, a few percent of - effectiveness itself."

If you're really using a Landsburg-style calculation to decide where to donate to, you've already estimated the effectiveness of the second-most effective charity, so you can just say that effectiveness drop must be no greater than the corresponding difference.

That's an excellent point that I managed to completely miss. Thank you. I'll try to add an endnote to that effect.

I wondered for a while how the math would change if you assumed that a number of other agents had the same decision function as you. Even if your individual contribution is small, n rational agents seeing that charity X is optimal and giving money to it might change the utility per dollar significantly.

I haven't worked through the math though.

Yes, but that only poses a problem if a large number of agents make large contributions at the same time. If they make individually large contributions at different times or if they spread their contributions out over a period of time, they will see the utility per dollar change and be able to adjust accordingly. Presumably some sort of equilibrium will eventually emerge.

Anyway, this is probably pretty irrelevant to the real world, though I agree that the math is interesting.

Yes, but that only poses a problem if a large number of agents make large contributions at the same time.

You mean, like donating to a funding drive with a specific aim?

Point taken.

With perfect information and infinitely flexible charities (ones that could borrow against future giving if they weren't optimal in a given time period), then yep.

I'd agree it is irrelevant to the real world because most people aren't following the "giving everything to one charity" strategy. If everyone followed GiveWell then things might get hairy for charities as they became, and then stopped being, the flavour of the time period.

I'm not sure it's settled how to even do that math.

There is a variety of math that could be done. It is relatively easy to show that certain strategies may not be optimal, which is what I was thinking about.

I wasn't touching how to make optimal decisions, which would very much be in the TDT realm I think.

[anonymous]

This should only matter to the extent that the agents have to act simultaneously or near-simultaneously. Otherwise, whoever goes second maximizes utility conditioned on the choices of the first, and so on, so it's no worse than if a single person sought the local maximum for their giving.

Of course, the difference between local and global maxima is important, but that has nothing to do with the OP, and everything to do with TDT.

Once everyone who gives is acting on the principle of "give each marginal charitable dollar so it does the most good", then you can worry about diversification, and only then if you're sure that the total contributions by everyone are actually over-concentrated.

You may want to apply risk aversion to your charity donation as well, but in that case, Eliezer's advice to purchase fuzzies and utilons separately is persuasive.

That link presents a curious argument. I figure the reason most people give to charity is to affect their image - for signalling reasons. So, for example, we have Bill Gates - ex-boss of one of the most unpopular IT companies ever (widely known as the evil empire) - trying to use his money to clean up his image by donating some of that money to charity. That such things result in good being done in the world is due to the entanglement of "fuzzies" and "utilons" - in the terminology of that post. If these become disentangled, surely most people would just buy the "fuzzies" - and fewer good deeds would be performed overall.

I disagree. I think that individuals such as Gates have adopted making the world a better place as a terminal or near-terminal value. I see no evidence that he is acting in anything but the best of faith. I think he is sincerely trying to direct his money wherever it will gain the most utilons for the world, not the most utilons for him.

Status-seeking charitable works look considerably different to me. They exhibit all the normal biases of people's emotional moral compass: they're not forward looking enough, they're too local, they focus on things the endower and their friends enjoy or make use of, such as the arts.

You might say that the adoption of the value of doing good in the world is a status seeking behaviour. Maybe, but this is irrelevant as long as the value is to do good, rather than seem to do good. So long as the effort is in good faith, the advice to seek utilons and fuzzies separately applies.

Speaking from a physical perspective, assuming that "Δx is small" is a meaningless statement. Whenever we state that something is large or small, unless it's a nondimensionalised number, there is something against which we are comparing it.

Simple example, which isn't the best example but is fast to construct: comparing $1 to the mean GDP per capita from country to country.

  • $1 is a small amount of money in the USA. Even homeless people can scrape together a dollar, and it's not even enough to buy a cup of coffee from Starbucks. It's almost worthless.

  • $1 is a large amount of money in Nigeria. The GNI is around $930 per capita per year[1], so if you're lucky enough to make the mean income, you'd better not be frittering away that $1; it's vital if you want to pay your rent and buy food.

So we can't say Δx is a "small amount of money" without qualification; it seems that when you conclude that, we are actually concluding that Δx/X is small - the original proposal. A better measure might be Δx/(XYZ)^(1/3), so that the scale in each direction doesn't change (but that's just choosing a different coordinate system, so not that relevant).

Your argument seeks to confirm the original proposal, not refute it, and you've pointed out that sometimes higher derivatives can be important.

(Incidentally, your second example - about nonmixed second derivatives - became clear to me only after some thought. You might want to include a clause like "Because after the first $50, the second derivatives represent a sudden jump down in net utility as we get less bang for our individual buck".)

[1] http://hivinsite.ucsf.edu/global?page=cr09-ni-00&post=19&cid=NI

I take your point about the meaninglessness of sizing up dimensional quantities without a referent. But sometimes the referent is inherently specified in different units. If you want to travel, with constant speed, no more than 10 miles - less is OK - then the time of your travel must be small - how small? - well, its product with your speed shouldn't exceed 10 miles. You could say, just divide 10 miles by the speed and use that as the upper bound, but that only works if the speed is fixed. If you're choosing between traveling on foot, on a bicycle, and in a car, you really are choosing on two different axes that are jointly constrained. So it is in my post: the second derivative times the donation is constrained, and the units work out. You can say "this works when the donation is small enough and the 2nd derivative is small enough" without comparing them to something in their own units, because the meaning of "small enough" is in that dimensional equation.

Besides, consider the following: why is it X that you're comparing Δx to? Sure, it's in the same units, but how is it relevant? In your analogy, GNI per capita is relevant to $1 because it represents the mean income I could expect to generate over the year. But note that you're not comparing $1 to the total GNI of the country, even though it's in the same unit, dollars, because the total population size, which drives that number, is not very relevant to the effect of $1 on one single person. With charities, how is the current endowment relevant to the contribution I hope to make with my own donation? It is not, after all, as if my goal was to maximize my donation's utility relative to other donors' in the same charity - because we stipulated that I'm only caring about the total absolute good I contribute...

Thanks for the suggestion about my wording - I'll try to make that example a bit clearer along the lines you propose.

Consider those charities that expect their mission to take years rather than months. These charities will rationally want to spread their spending out over time. Particularly for charities with large endowments, they will attempt to use the interest on their money rather than depleting the principal, although if they expect to receive more donations over time they can be more liberal.

This means that a single donation slightly increases the rate at which such a charity does good, rather than enabling it to do things which it could not otherwise do. So the scaling factor of the endowment is restored: donating $1000 to a charity with a $10m endowment increases the rate at which it can sustainably spend by 1000/10^7 = 0.01%.

This does not mean that a charity will say, look, if our sustainable spending rate were 0.01% higher we'd have enough available this year to fund the 'save a million kids from starvation' project, oh well. They'll save the million kids and spend a bit less next year, all other things being equal. In other words, the charity, by maximising the good it does with the money it has, smooths out the change in its utility for small differences in spending relative to the size of its endowment, i.e. the higher order derivatives are low. So long as the utility you get from a charity comes from it fulfilling its stated mission, your utility will also vary smoothly with small spending differences.

Likewise, with rational collaborating charities, they will each adjust their spending to increase any mutually beneficial effects. So mixed derivatives are low, too.

The upshot is that unless your donation is of a size that it can permanently and significantly raise the spending power of such a charity, you won't be leaving the approximately linear neighbourhood in utility-space. So if you're looking for counterexamples, you'll need to find one of:

  • charities with both low endowments and low donation rates, which nevertheless can produce massive positive effects with a smallish amount of money
  • charities which must fulfil their mission in a short time and are just short of having the money to do so.

The approximation, Landsburg says, is good "assuming that your contributions are small relative to the initial endowments". Here's the thing: why? Suppose Δx/X, Δy/Y and Δz/Z are indeed very small - what then? Why does it follow that the linear approximation works? There's no explanation, and if you think this is because it's immediately obvious - well, it isn't. It may sound plausible, but the math isn't there. We need to go deeper.

What shapes for U(X,Y,Z) could make the linear approximation not work? It would have to be a curve that had sudden local changes. It would be kinked or fractal. That would be surprising. If U(X,Y,Z) is continuous, smooth, monotonic, and its first and second derivatives are monotonic, I can't imagine how the linear approximation could fail.

If U(X,Y,Z) is continuous, smooth, monotonic, and its first and second derivatives are monotonic, I can't imagine how the linear approximation could fail.

There's an example later in the post, with mixed derivatives. Everything could be smooth and monotonic including all derivatives. Basically think of U(X,Y,Z) as containing a 100XY component.

If U(X) follows a power law, then (dU/dx)/(d²U/dx²) is proportional to X. (With U = cXᵃ, U′ = caXᵃ⁻¹ and U″ = ca(a−1)Xᵃ⁻², so U′/U″ = X/(a−1).)

If everyone was to take Landsburg's argument seriously, which would imply that all humans were rational, then everyone would solely donate to the SIAI. If everyone only donated to the SIAI, would something like Wikipedia even exist? I suppose the SIAI would have created Wikipedia if it was necessary. I'm just wondering how much important stuff out there was spawned by irrational contributions, and what the world would look like if such contributions had never been made. I'm also not sure how venture-capital growth funding differs from the idea of diversifying one's contributions to charity.

Note that I do not doubt the correctness of Landsburg's math. I'm just not sure if it would have worked out given human shortcomings (even if everyone was maximally rational). If nobody was to diversify, contributing to what seems to be the most rational option given the current data, then being wrong would be a catastrophe. Even maximally rational humans can fail after all. This wouldn't likely be a problem if everyone contributed to a goal that could be verified rather quickly, but something like the SIAI could eat up the resources of the planet and still turn out to be not even wrong in the end. Since everyone would have concentrated on that one goal (no doubt being the most rational choice at the moment), might such a counterfactual world have been better off diversifying its contributions or would the SIAI have turned into some kind of financial management allocating those contributions and subsequently become itself a venture capitalist?

People don't make their decisions simultaneously and instantaneously; once SIAI suffers diminishing returns to the extent that it's no longer the best option, people can observe this and donate elsewhere.

...once SIAI suffers diminishing returns to the extent that it's no longer the best option, people can observe this and donate elsewhere.

How would you observe that, what are the expected indications?

It's consistent with Landsburg's analysis that everyone has their own utility function that emphasizes what that particular person considers important. So if everyone were a Landsburgian and donated only to a single charity, they would still donate all over the map to different charities - because, even if they all knew about SIAI, they either wouldn't care as much about SIAI's goals as other goals, or they would estimate SIAI's effectiveness in reaching those goals as very low. There probably would still be adverse impact to many charities which are second-choice for most their donors - and I'm sure there are many such - but not as catastrophic as you're outlining, I think.

Personally, I believe that if everyone was presented with Landsburg's argument, most people would fail to be Landsburgians not because they couldn't stomach the math, or because they'd be wary of the more technical assumptions I wrote about in my post, but simply because they wouldn't agree to characterize their charitable utility in unified single-currency utilons.

You shouldn't take it as an axiom that the SIAI is the most-beneficial charity in the world. You imply that anyone who thinks otherwise is irrational.

[XiXiDu]

I know, the Karma system made me overcompensate. I noticed that questions are often voted down so I tried to counter that by making it sound more agreeable. It was something that bothered me so I thought LW and this post would be the best place to get some feedback. I was unable to read the OP or Landsburg's proof but was still seeking answers before learning enough to come up with my own answers. I'm often trying to get feedback from experts without first studying that field myself. If I have an astronomy question I ask on an astronomy forum. Luckily most of the time people are kind enough to not demand that you first become an expert before you can ask questions. It would be pretty daunting if you would have to become a heart surgeon before you could ask about your heart surgery. But that's how it is on LW, I have to learn that the price you pay for any uninformed questions are downvotes. I acknowledge that the Karma system and general attitude here makes me dishonest in what I write and apologize for that, I know that it is wrong.

You do have over 2000 karma.

At this point, I figure you have earned the right to say more-or-less whatever you like, for quite a while, without bothering too much about keeping score.

When I'm reading comments, I often skip over the ones that have low or negative score. I imagine other people do the same thing. So if you think your point is important enough to be read by more than a few people, you do want to try to have it voted up (but of course you shouldn't significantly compromise your other values/interests to do so).

I'm curious why Tim's comment got downvoted 3 times.

Karma isn't a license to act like a dick, make bad arguments, be sloppy, or commit sins of laziness.

/checks karma; ~3469, good.

Which should be obvious, you purblind bescumbered fen-sucked measle.

Right - but the context was "the Karma system and general attitude here makes me dishonest".

If you are not short of Karma, sugar-coating for the audience at the expense of the truth seems to be largely unnecessary.

I looked at the context, but it seemed to me that Xi was just being sloppy. (Of course Landsburg's argument implies rational agents should donate solely to SIAI, if SIAI offers the greatest marginal return. A~>B, A, Q.E.D., B.)

If Xi is being sloppy or stupid, then he should pay attention to what his karma is saying. That's what it's for! If you want to burn karma, it ought to be for something difficult that you're very sure about, where the community is wrong and you're right.

Phil's:

You shouldn't take it as an axiom that the SIAI is the most-beneficial charity in the world. You imply that anyone who thinks otherwise is irrational.

...was questioning XiXiDu's:

If everyone was to take Landsburg's argument seriously, which would imply that all humans were rational, then everyone would solely donate to the SIAI.

...but it isn't clear that the SIAI is the best charity in the world!!! They are in an interesting space - but maybe they are attacking the problem all wrong, lacking in the required skills, occupying the niche of better players - or failing in other ways.

XiXiDu justified making this highly-dubious claim by saying he was trying to avoid getting down-voted - and so wrote something which made his post "sound more agreeable".

[FAWS]

SIAI would probably be at least in competition for best charity in the world even if their chance of direct success were zero and their only actual success were raising awareness of the problem.

I did a wildly guessing back of the envelope type calculation on that a while ago and even with very conservative estimations of the chance of a negative singularity and completely discounting any effect on the far future as well as any possibility of a positive singularity SIAI scored about 1 saved life per $1000.

[gwern]

Accepting the logical validity of an argument, and flatly denying its soundness, is not an interesting or worthwhile or even good contribution.

What? Where are you suggesting that someone is doing that?

If you are talking about me and your logical argument, that is just not what was being discussed.

The correctness of the axiom concerning charity quality was what was in dispute from the beginning - not any associated logical reasoning.

Downvoted.

For games where there are multiple agents interacting, the optimal strategy will usually involve some degree of weighted randomness. If there are noncommunicating rational agents A, B, C each with (an unsplittable) $1, and charities 1 and 2 - both of which fulfil a vital function, but 1 requires $2 to function and 2 requires $1 to function - I would expect the agents to donate to 1 with p = 2/3 (see the check below).

A rational agent is aware that other rational agents exist, and will take account of their actions.
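For what it's worth, the claimed p = 2/3 does check out under one reading of the setup (each charity counts only if fully funded, and the agents maximize the chance that both function): both get funded exactly when two of the three agents pick charity 1, which happens with probability 3p²(1−p), and that is maximized at p = 2/3. A quick numerical confirmation:

```python
import numpy as np

# Each of three agents sends their $1 to charity 1 (needs $2) with
# probability p, otherwise to charity 2 (needs $1). Both charities function
# exactly when two of the three pick charity 1: probability 3 * p^2 * (1-p).
p = np.linspace(0.0, 1.0, 100_001)
both_funded = 3 * p**2 * (1 - p)
print(p[np.argmax(both_funded)])   # -> 0.6667..., i.e. p = 2/3
```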

The entire resources of the world are somewhat large compared to a single person's donation. I expect the argument wouldn't apply in that situation (but you need TDT-like reasoning to realize that's relevant, or for the donations to be spread out in time so each person can condition on what donations all previous people made).

Using linear approximations is obviously a quick hack, and a better analysis could be done if we could make better approximations. That much is clear. You give examples where a better approximation might show that giving to a single charity is less optimal, but it is also possible that a better approximation would enhance the optimality of giving to a single charity. How can we be sure which way a better approximation will take us?

I have no useful information here, so a uniform prior seems reasonable, in which case the analysis from a linear model holds. This seems especially true when the differences in first derivatives between different charities are large, such that a second order correction would have to also be very large in order to sway the analysis.

I have no useful information here, so a uniform prior seems reasonable, in which case the analysis from a linear model holds.

I'm not sure what the uniform prior means in this case and how the conclusion follows - can you expand?

But anyway, granting this for the moment, in an actual real-life situation when you contemplate actual charities, you do have all sorts of useful information about them, for example the information that allows you to estimate their effectiveness. This information will probably also throw some light on how the effectiveness changes over time, and so let you determine whether the linear approximation is good.

This seems especially true when the differences in first derivatives between different charities are large, such that a second order correction would have to also be very large in order to sway the analysis.

I agree that when first derivatives are wildly different according to your utility function, it's a no-brainer (barring situations with huge second order effects that'll show up as very weird features of the landscape) to put all your budget into one of them. What I object to is slam-dunk arguing along the lines of "Landsburg has a solid math proof that the rational thing to do is to take first derivatives, compare them, and act on the result. If you don't agree, you're an obscurantist or you fail to grok the math".

But anyway, granting this for the moment, in an actual real-life situation when you contemplate actual charities, you do have all sorts of useful information about them, for example the information that allows you to estimate their effectiveness. This information will probably also throw some light on how the effectiveness changes over time, and so let you determine whether the linear approximation is good.

If you have additional information beyond the first derivatives then by all means use it. Use all the information you have. However, in general you need more information to get an equally good approximation to higher order derivatives. Cross terms especially seem like they would be very difficult to gauge empirically. In light of that I would be very skeptical of high confidence estimates for higher order terms, especially if they conveniently twist the math to allow for a desirable outcome.

Consider the simpler case with only two charities and total utility U(X,Y). For simplicity assume the second order derivatives are constant, and that the probability that

(∂²U/∂X², ∂²U/∂X∂Y, ∂²U/∂Y²) = (z₀, z₁, z₂)

is given by φ(z), where z = (z₀, z₁, z₂). Then the second order contribution to

U(X+ΔX, Y+ΔY)

is given by the integral over all possible second derivatives

∫_{R³} ( ½(ΔX)²·z₀ + ΔXΔY·z₁ + ½(ΔY)²·z₂ ) φ(z) dz,

which equals

½(ΔX)² ∫_{R³} z₀ φ(z) dz + ΔXΔY ∫_{R³} z₁ φ(z) dz + ½(ΔY)² ∫_{R³} z₂ φ(z) dz

where R is some finite interval symmetric about 0. We can actually take R to be the whole real line, but the math becomes hairier. Now, each of these integrals is 0, because the uniform distribution is symmetric about each axis. The symmetry is all that is needed actually, not uniformity, so you could weaken the assumptions.