Mathematician and climate activist John Baez has finally commented on charitable giving. I think the opinions of highly educated experts who are not closely associated with LessWrong or the Singularity Institute for Artificial Intelligence (SIAI), but who have read most of the available material, are important for estimating the public and academic perception of risks from AI, and how effectively LessWrong and the SIAI communicate those risks.

Desertopa asked:

[...] if I asked what you would do with $100,000 if it were given to you on the condition that you donate it to a charity of your choice?

John Baez replied:

[...] it’s good that you added the clause “on the condition that you donate it to a charity of your own choice”, because I was all ready with the answer in case you left that out: I’d have said “I’ll save the money for my retirement”. Given the shaky state of California’s economy, I don’t trust the U.C. pension system very much anymore.

Since I haven’t ever been in the position to donate lots of money to a charity, I haven’t thought much about your question. I want to tackle it when I rewrite my will, but I haven’t yet. So, I don’t have an answer ready.

If you held a gun against my head and forced me to answer without further thought, I’d probably say Médecins Sans Frontières, because I’m pretty risk-averse. They seem to accomplish what they set out to accomplish, they seem financially transparent, and I think it’s pretty easy to argue that they’re doing something good (as opposed to squandering money, or doing something actively bad).

Of course, anyone associated with Less Wrong would ask if I’m really maximizing expected utility. Couldn’t a contribution to some place like the Singularity Institute for Artificial Intelligence, despite a lower chance of doing good, actually have a chance to do so much more good that it’d pay to send the cash there instead?

And I’d have to say:

1) Yes, there probably are such places, but it would take me a while to find the one that I trusted, and I haven’t put in the work. When you’re risk-averse and limited in the time you have to make decisions, you tend to put off weighing options that have a very low chance of success but a very high return if they succeed. This is sensible so I don’t feel bad about it.

2) Just to amplify point 1) a bit: you shouldn’t always maximize expected utility if you only live once. Expected values — in other words, averages — are very important when you make the same small bet over and over again. When the stakes get higher and you aren’t in a position to repeat the bet over and over, it may be wise to be risk averse.

3) If you let me put the $100,000 into my retirement account instead of a charity, that’s what I’d do, and I wouldn’t even feel guilty about it. I actually think that the increased security would free me up to do more risky but potentially very good things!

Hmm, here’s a better idea:

Could I get someone to create an institute, register it as a charity, and get the institute to hire me?

What can one learn from this?

  • That people value financial transparency.
  • That people value openness and trustworthiness.
  • That one needs to explain that openness isn't necessarily good.
  • That one needs to address the good reasons for the SIAI not to publish its progress toward AGI.
  • That one needs to deal with risk aversion.
  • That one needs to explain why someone would decide to contribute to the SIAI under uncertainty.
  • That one needs to explain why it is important to consider charitable giving in the first place.

We get some evidence that people value openness and financial transparency. This is vaguely useful for SIAI (they get evidence that the derivative of donations with respect to openness is probably slightly higher than they previously thought), but useless to anyone who's considering donating. What donors need to know is how good openness is, not how much other people value it. It also does nothing to address the good reasons for SIAI not to publish AGI progress; reasons which don’t apply to normal charities.

Also, he says you shouldn't try to maximise expected utility. This could be a nit-pick, but your risk aversion should be factored into your utility function; you shouldn't be risk averse in utility.

I added your suggestions, thanks.

Larks

=)

This could be a nit-pick, but your risk aversion should be factored into your utility function

Not according to Against Discount Rates ...and I agree - though this may be a tangent.

It is better if risk aversion is dynamically generated.

ata

This could be a nit-pick, but your risk aversion should be factored into your utility function

Not according to Against Discount Rates ...and I agree - though this may be a tangent.

I think these are two different things. I'd agree in opposing temporal discounting, but I'm not sure there's anything (normatively) problematic about being risk-averse with money.

(Also, I believe Larks's statement "your risk aversion should be factored into your utility function" didn't mean to imply necessarily "you should be risk averse"; I just read it to mean "if you're risk-averse, you don't need to put that outside the framework of expected utility maximization, and decide not to always maximize expected utility; rather, your utility function itself can represent however much risk aversion you have".)
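A minimal sketch of that last point, with purely illustrative numbers and a log utility function chosen as an assumption (nothing here is from the thread): an expected-utility maximizer whose utility function is concave, e.g. u(x) = log(x), will take a favorable bet when the stakes are small relative to its wealth, but decline the same kind of bet when the stakes are a large share of that wealth. The risk aversion comes from the curvature of the utility function; there is no need to step outside expected-utility maximization.

```python
import math

def expected_log_utility(wealth, stake, p_win=0.5, payout_ratio=1.2):
    """Expected log-utility of a coin-flip bet that pays payout_ratio * stake
    on a win and costs stake on a loss (positive expected dollar value)."""
    win = math.log(wealth + payout_ratio * stake)
    lose = math.log(wealth - stake)
    return p_win * win + (1 - p_win) * lose

wealth = 100_000  # illustrative wealth level, not from the discussion
for stake in (100, 90_000):
    take_bet = expected_log_utility(wealth, stake) > math.log(wealth)
    print(f"stake ${stake:,}: take the bet? {take_bet}")

# A log-utility maximizer takes the $100 bet but declines the $90,000 one,
# even though both bets are favorable in expected dollars.
```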

[anonymous]

you shouldn’t always maximize expected utility if you only live once. Expected values — in other words, averages — are very important when you make the same small bet over and over again. When the stakes get higher and you aren’t in a position to repeat the bet over and over, it may be wise to be risk averse.

Isn't that the same as maximizing expected utility, but with a different utility function?

I like this series of discussion articles you're doing. Good job!

[anonymous]

you shouldn’t always maximize expected utility if you only live once

I thought that, by the von Neumann–Morgenstern expected utility theorem, a rational person has to maximize his expected utility.