
RichardKennaway comments on A Mathematical Explanation of Why Charity Donations Shouldn't Be Diversified - Less Wrong Discussion

Post author: Vladimir_Nesov | 20 September 2012 11:03AM | 2 points




Comment author: RichardKennaway 20 September 2012 11:22:10AM 10 points [-]

Does the analysis change if one is uncertain about the effectiveness of the charities, or can any uncertainty just be rolled up into a calculation of expected effectiveness?

To take an extreme example: given one charity that I am sure is producing one QALY per dollar given (I consider QALYs a better measure than "lives saved", since all lives are lost in the end), and another which I think might be creating 3 QALY/$ but might equally likely be a completely useless effort, which should I donate to? Assume I've already taken all reasonable steps to collect evidence and this is the best assessment I can make.
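If any uncertainty really can be rolled up into expected effectiveness, the comparison reduces to a one-line expected-value calculation. A minimal sketch using the illustrative numbers from the comment (the probabilities and QALY/$ figures are the comment's hypotheticals, not real charity data):

```python
# Expected QALYs per dollar, given a list of (probability, qalys_per_dollar)
# outcomes. Numbers below are the comment's hypotheticals.
def expected_qalys_per_dollar(outcomes):
    return sum(p * q for p, q in outcomes)

sure_charity = [(1.0, 1.0)]               # certainly 1 QALY/$
risky_charity = [(0.5, 3.0), (0.5, 0.0)]  # 3 QALY/$ or useless, equally likely

print(expected_qalys_per_dollar(sure_charity))   # 1.0
print(expected_qalys_per_dollar(risky_charity))  # 1.5
```

On expectation alone, the risky charity wins; the rest of the thread is about whether expectation alone is the right criterion.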

Comment author: RichardKennaway 20 September 2012 11:33:41AM 5 points [-]

Thinking further about my own question, it would depend on whether one values not just QALYs, but confidence that one had indeed bought some number of QALYs -- perhaps parameterised by the mean and standard deviation respectively of the effectiveness estimates. But that leads to an argument for diversification, for the same reasons as in investing: uncorrelated uncertainties tend to cancel out.
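The investing analogy can be made concrete: splitting a donation between independent charities with the same per-dollar mean and standard deviation leaves the expected QALYs unchanged but cuts the variance. A sketch with illustrative numbers (mu and sigma are assumed values, not estimates of any real charity):

```python
# Mean and variance of total QALYs bought, for donations to independent
# charities sharing per-dollar mean mu and standard deviation sigma.
def donation_stats(amounts, mu, sigma):
    mean = sum(a * mu for a in amounts)
    var = sum((a * sigma) ** 2 for a in amounts)  # independent, so variances add
    return mean, var

all_in = donation_stats([100.0], mu=1.0, sigma=0.5)      # everything to one
split = donation_stats([50.0, 50.0], mu=1.0, sigma=0.5)  # half and half

print(all_in)  # (100.0, 2500.0)
print(split)   # (100.0, 1250.0) -- same mean, half the variance
```

This is exactly the sense in which uncorrelated uncertainties tend to cancel out: n-way splitting divides the variance by n while leaving the expectation alone.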

Comment author: benelliott 20 September 2012 03:14:18PM 7 points [-]

Thinking further about my own question, it would depend on whether one values not just QALYs, but confidence that one had indeed bought some number of QALYs

In other words, it depends whether you donate to help people, or to make yourself feel good.

Comment author: Decius 20 September 2012 04:56:50PM 3 points [-]

The function U need not be based on what a third party thinks it should be. Donating to make oneself feel good is a perfectly rational reason, provided one values the warm fuzzy feelings more than the money.

Comment author: benelliott 20 September 2012 11:09:46PM 0 points [-]

Fair enough, the argument does not hold in that case. If you are donating to make yourself feel good then you should diversify.

However, if you are donating to make yourself feel good, i.e. if you value confidence as well as QALYs, then your preference relation is no longer given by U. Valuing that confidence implies that you care differently depending on whether you bought the QALYs or someone else did, so your preferences are not a function solely of the number of antelope and the number of babies.

Comment author: Decius 20 September 2012 11:37:31PM 0 points [-]

The only qualification of U is that its values map to my preferences and that it is transitive, such that if U(a1,b1)>U(a2,b2)>U(a3,b3), then U(a3,b3)<U(a1,b1). There is no requirement that the arguments of U be measured in terms of dollars; the arguments could easily be the non-real sum of the monies provided by others and the monies provided by me.

Comment author: benelliott 21 September 2012 12:51:11AM 0 points [-]

U is a function of the number of antelope and the number of babies. By referential transparency, it doesn't care whether there are 100 antelope because you saved them or because someone else did. If you do care, then your preference function cannot be described as a function on this domain.

Comment author: Decius 21 September 2012 03:03:19PM 0 points [-]

As defined in the original post, U is a function of the total amount of money given to charities A and B. There is no restriction that more money results in more antelope or babies saved, nor that the domain of the function is limited to positive real numbers.

Or are you saying that if I care about whether I help do something important, then my preferences must be non-transitive?

Comment author: benelliott 21 September 2012 04:58:35PM 1 point [-]

He writes U(A, B), where A is the number of antelope saved and B is the number of babies saved. If you care about anything other than the number of antelope saved or the number of babies saved, then U does not completely describe your preferences. Caring about whether you save the antelope or someone else does counts as caring about something other than the number of antelope saved. Unless you can exhibit a negative baby or a complex antelope, you must accept that this domain is limited to positive numbers.

He later derives from U a function of the amount of money given. Strictly speaking this is a completely different function; it is only denoted by U for convenience. However, the fact that U was initially defined in the previous way means it may have constraints other than transitivity.

To give an example, let f be any function on the real numbers; f currently has no constraints. We can lift f to a function of vectors by defining g(x) = f(|x|), but g is not a fully general function of vectors: it must satisfy the constraint of being constant on the surface of any sphere centred at the origin.
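The constraint can be seen directly: any two vectors with the same norm must get the same value under such a lifted function. A small sketch (the particular f is an arbitrary illustrative choice):

```python
import math

def f(r):
    # Arbitrary function of a single real number.
    return r ** 2 + 1.0

def g(vector):
    # Lift f to vectors via the norm: g depends only on |x|.
    return f(math.hypot(*vector))

# Two different vectors on the same sphere (both have norm 5):
print(g((3.0, 4.0)))  # 26.0
print(g((5.0, 0.0)))  # 26.0 -- g cannot distinguish them
```

The analogous point in the thread: a U defined on (antelope, babies) cannot distinguish states that agree on those counts, such as "I saved them" versus "someone else did".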

Comment author: Decius 22 September 2012 08:26:14PM 1 point [-]

Fair cop: I was mistaken about the definition of U.

If there is no function U(a,b) which maps to my preferences across the region over which I have control, then the entire position of the original post is void.

Comment author: jimmy 20 September 2012 08:13:52PM 0 points [-]

Only if you assume that you can't easily self modify.

If you're trying to optimize how you feel instead of something out there in the territory, then you're wireheading. If you're going to wirehead, then do it right and feel good without donating.

If you aren't going to wirehead, then realize that you aren't actually being effective, and self-modify so that you feel good when you maximize expected QALYs instead.

Comment author: Decius 20 September 2012 11:44:45PM 1 point [-]

How I feel IS real. The judgments about the value of my feelings are mostly consistent and transitive, and I choose not to change how my perceptions affect my feelings except for good reasons.

Comment author: DanielLC 20 September 2012 06:30:42PM 2 points [-]

If you care about confidence that you bought the QALYs, you should diversify. If you only care about confidence that the QALYs exist, you should not. This is because, given the already high background uncertainty, the utility of confidence changes linearly with the amount of money donated.

If you only care about the uncertainty of what you did, then that portion of utility would change with the square of the amount donated, since whether you donated to or stole from the charity, you would increase uncertainty by the same amount. If you care about total uncertainty, then the amount of uncertainty changes linearly with the donation, and since it's already high, your utility function changes linearly with it.
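The quadratic-versus-linear distinction rests on a standard identity: if the per-dollar effect X has variance sigma^2, the effect of moving d dollars is d*X, with variance d^2 * sigma^2. A sketch with an assumed illustrative sigma:

```python
# Var(d * X) = d**2 * Var(X): the variance you personally add grows with
# the square of the amount moved, and is the same whether d is positive
# (donating) or negative (stealing). sigma2 is an illustrative assumption.
sigma2 = 0.25  # variance of the per-dollar effect X

def contribution_variance(d):
    return d ** 2 * sigma2

print(contribution_variance(10.0))   # 25.0
print(contribution_variance(-10.0))  # 25.0 -- sign of d doesn't matter
print(contribution_variance(20.0))   # 100.0 -- doubling d quadruples it
```

This is why the "uncertainty of what you did" term scales quadratically in the donation, while a term that is simply proportional to dollars moved scales linearly.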

Of course, if you really care about all the uncertainty you cause, you have to take into account the butterfly effect. It seems unlikely that saving a few lives or equivalent would compare with completely changing the future of the world.