[Added 02/24/14: After writing this post, I discovered that I had miscommunicated owing to not spelling out my thinking in sufficient detail, and also realized that it carried unnecessary negative connotations (despite conscious effort on my part to avoid them). See Reflections on a Personal Public Relations Failure: A Lesson in Communication. SIAI (now MIRI) has evolved substantially since 2010 when I wrote this post, and the criticisms made in the post don't apply to MIRI as presently constituted.]
Follow-up to: Existential Risk and Public Relations, Other Existential Risks, The Importance of Self-Doubt
Over the last few days I've made a string of posts levying strong criticisms against SIAI. This activity is not one that comes naturally to me. In The Trouble With Physics Lee Smolin writes
...it took me a long time to decide to write this book. I personally dislike conflict and confrontation [...] I kept hoping someone in the center of string-theory research would write an objective and detailed critique of exactly what has and has not been achieved by the theory. That hasn't happened.
My feelings about and criticisms of SIAI are very much analogous to Smolin's feelings about and criticisms of string theory. Criticism hurts feelings and I feel squeamish about hurting feelings. I've found the process of presenting my criticisms of SIAI emotionally taxing and exhausting. I fear that if I persist for too long I'll move into the region of negative returns. For this reason I've decided to cut my planned sequence of posts short and explain what my goal has been in posting in the way that I have.
Edit: Removed irrelevant references to VillageReach and StopTB, modifying post accordingly.
As Robin Hanson never ceases to emphasize, there's a disconnect between what humans say they're trying to do and what their revealed goals are. Yvain wrote about this topic recently in his post Conflicts Between Mental Subagents: Expanding Wei Dai's Master-Slave Model. This problem becomes especially acute in the domain of philanthropy. Three quotes on this point:
(1) In Public Choice and the Altruist's Burden Roko says:
The reason that we live in good times is that markets give people a selfish incentive to seek to perform actions that maximize total utility across all humans in the relevant economy: namely, they get paid for their efforts. Without this incentive, people would gravitate to choosing actions that maximized their own individual utility, finding local optima that are not globally optimal. Capitalism makes us all into efficient little utilitarians, which we all benefit enormously from.
The problem with charity, and especially efficient charity, is that the incentives for people to contribute to it are all messed up, because we don't have something analogous to the financial system for charities to channel incentives for efficient production of utility back to the producer.
(2) In My Donation for 2009 (guest post from Dario Amodei) Dario says:
I take Murphy’s Law very seriously, and think it’s best to view complex undertakings as going wrong by default, while requiring extremely careful management to go right. This problem is especially severe in charity, where recipients have no direct way of telling donors whether an intervention is working.
(3) In private correspondence about career choice, Holden Karnofsky said:
For me, though, the biggest reason to avoid a job with low accountability is that you shouldn't trust yourself too much. I think people respond to incentives and feedback systems, and that includes myself. At GiveWell I've taken some steps to increase the pressure on me and the costs if I behave poorly or fail to add value. In some jobs (particularly in nonprofit/govt) I feel that there is no system in place to help you figure out when you're adding value and incent you to do so. That matters a lot no matter how altruistic you think you are.
I believe that the points that Robin, Yvain, Roko, Dario and Holden have made provide a compelling case for the idea that charities should strive toward transparency and accountability. As Richard Feynman has said:
The first principle is that you must not fool yourself – and you are the easiest person to fool.
Because it's harder to fool others than it is to fool oneself, I think that the case for making charities transparent and accountable is very strong.
SIAI does not presently exhibit high levels of transparency and accountability. I agree with what I interpret to be Dario's point above: that in evaluating charities which are not transparent and accountable, we should assume the worst. For this reason, together with the concerns I expressed in Existential Risk and Public Relations, I believe that saving money in a donor-advised fund with a view toward donating to a transparent and accountable future existential risk organization has higher expected value than donating to SIAI now does.
Because I take astronomical waste seriously and believe in shutting up and multiplying, I believe that reducing existential risk is ultimately more important than developing world aid. I would very much like it if there were a highly credible existential risk charity. At present, I do not feel that SIAI is a credible existential risk charity. One LW poster sent me a private message saying:
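The "shut up and multiply" reasoning above can be made concrete with a toy expected-value comparison between donating now and saving in a donor-advised fund for a hypothetical future transparent organization. Every number here is an illustrative placeholder I have chosen for the sketch, not an estimate made anywhere in this post:

```python
# Toy expected-value comparison. All probabilities and stakes below are
# hypothetical placeholders for illustration, not estimates from the post.

ASTRONOMICAL_STAKES = 1e28  # hypothetical future lives at stake


def expected_lives_saved(p_org_effective: float, p_donation_helps: float) -> float:
    """Shut up and multiply: chain of probabilities times the stakes."""
    return p_org_effective * p_donation_helps * ASTRONOMICAL_STAKES


# Donating now: assume (for illustration) low confidence that an opaque,
# unaccountable organization is effective.
donate_now = expected_lives_saved(p_org_effective=1e-9, p_donation_helps=1e-6)

# Waiting: assume a transparent, accountable organization would warrant
# higher confidence, discounted by the chance that none ever appears.
p_future_org_exists = 0.5
wait_and_donate = p_future_org_exists * expected_lives_saved(
    p_org_effective=1e-7, p_donation_helps=1e-6
)

print(f"donate now:  {donate_now:.3g} expected lives")
print(f"wait:        {wait_and_donate:.3g} expected lives")
print(f"waiting wins: {wait_and_donate > donate_now}")
```

The point of the sketch is only structural: with astronomical stakes, the comparison is driven entirely by one's confidence in the organization, which is exactly why transparency and accountability matter so much.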
I've suspected for a long time that the movement around EY might be a sophisticated scam to live off donations of nonconformists
I do not believe that Eliezer is consciously attempting to run a scam to live off of donations, but I believe that (like all humans) he is subject to subconscious influences which may lead him to act as though he were. In light of Hanson's points, it would not be surprising if this were the case. The very fact that I received such a message is a sign that SIAI has public relations problems.
I encourage LW posters who find this post compelling to visit GiveWell and read the materials available there; as far as I know, it is the only charity evaluator that places high emphasis on impact, transparency and accountability. I encourage LW posters who are interested in existential risk to contact GiveWell and express interest in having it evaluate existential risk charities. I would also note that LW posters who want to signal seriousness to the GiveWell staff may find it useful to donate to GiveWell's recommended charities.
I encourage SIAI to strive toward greater transparency and accountability. For starters, I would encourage SIAI to follow the example set by GiveWell and put a page on its website called "Mistakes" publicly acknowledging its past errors. I'll also note that GiveWell incentivizes charities to disclose failures by granting them a star rating. As Elie Hassenfeld explains:
As usual, we’re not looking for marketing materials, and we won’t accept “weaknesses that are really strengths” (or reports that blame failure entirely on insufficient funding/support from others). But if you share open, honest, unadulterated evidence of failure, you’ll join a select group of organizations that have a GiveWell star.
I believe that the fate of humanity depends on the existence of transparent and accountable organizations. This is both because I believe that transparent and accountable organizations are more effective and because I believe that people are more willing to give to them. As Holden says:
I must say that, in fact, much of the nonprofit sector fits incredibly better into Prof. Hanson’s view of charity as “wasteful signaling” than into the traditional view of charity as helping.
[...]
Perhaps ironically, if you want a good response to Prof. Hanson’s view, I can’t think of a better place to turn than GiveWell’s top-rated charities. We have done the legwork to identify charities that can convincingly demonstrate positive impact. No matter what one thinks of the sector as a whole, they can’t argue that there are no good charitable options - charities that really will use your money to help people - except by engaging with the specifics of these charities’ strong evidence.
Valid observations that the sector is broken - or not designed around helping people - are no longer an excuse not to give.
Because our Bayesian prior is so skeptical, we end up with charities that you can be confident in, almost no matter where you’re coming from.
I believe that at present the most effective way to reduce existential risk is to work toward the existence of a transparent and accountable existential risk organization.
Added 08/23:
- See my responses to Yvain's comment for my answer to Yvain's question "...why the feeling that they're not transparent and accountable?"
- See my response to Jordan's comment for an explanation of why I did not discuss FHI in the present posting.