[Added 02/24/14: After writing this post, I discovered that I had miscommunicated owing to not spelling out my thinking in sufficient detail, and also realized that it carried unnecessary negative connotations (despite conscious effort on my part to avoid them). See Reflections on a Personal Public Relations Failure: A Lesson in Communication. SIAI (now MIRI) has evolved substantially since 2010 when I wrote this post, and the criticisms made in the post don't apply to MIRI as presently constituted.]
Follow-up to: Existential Risk and Public Relations, Other Existential Risks, The Importance of Self-Doubt
Over the last few days I've made a string of posts levying strong criticisms against SIAI. This activity is not one that comes naturally to me. In The Trouble With Physics Lee Smolin writes
...it took me a long time to decide to write this book. I personally dislike conflict and confrontation [...] I kept hoping someone in the center of string-theory research would write an objective and detailed critique of exactly what has and has not been achieved by the theory. That hasn't happened.
My feelings about and criticisms of SIAI are very much analogous to Smolin's feelings about and criticisms of string theory. Criticism hurts feelings and I feel squeamish about hurting feelings. I've found the process of presenting my criticisms of SIAI emotionally taxing and exhausting. I fear that if I persist for too long I'll move into the region of negative returns. For this reason I've decided to cut my planned sequence of posts short and explain what my goal has been in posting in the way that I have.
Edit: Removed irrelevant references to VillageReach and StopTB, modifying post accordingly.
As Robin Hanson never ceases to emphasize, there's a disconnect between what humans say they're trying to do and what their revealed goals are. Yvain has written about this topic recently in his post Conflicts Between Mental Subagents: Expanding Wei Dai's Master-Slave Model. This problem becomes especially acute in the domain of philanthropy. Three quotes on this point:
(1) In Public Choice and the Altruist's Burden Roko says:
The reason that we live in good times is that markets give people a selfish incentive to seek to perform actions that maximize total utility across all humans in the relevant economy: namely, they get paid for their efforts. Without this incentive, people would gravitate to choosing actions that maximized their own individual utility, finding local optima that are not globally optimal. Capitalism makes us all into efficient little utilitarians, which we all benefit enormously from.
The problem with charity, and especially efficient charity, is that the incentives for people to contribute to it are all messed up, because we don't have something analogous to the financial system for charities to channel incentives for efficient production of utility back to the producer.
(2) In My Donation for 2009 (guest post from Dario Amodei) Dario says:
I take Murphy’s Law very seriously, and think it’s best to view complex undertakings as going wrong by default, while requiring extremely careful management to go right. This problem is especially severe in charity, where recipients have no direct way of telling donors whether an intervention is working.
(3) In private correspondence about career choice, Holden Karnofsky said:
For me, though, the biggest reason to avoid a job with low accountability is that you shouldn't trust yourself too much. I think people respond to incentives and feedback systems, and that includes myself. At GiveWell I've taken some steps to increase the pressure on me and the costs if I behave poorly or fail to add value. In some jobs (particularly in nonprofit/govt) I feel that there is no system in place to help you figure out when you're adding value and incent you to do so. That matters a lot no matter how altruistic you think you are.
I believe that the points that Robin, Yvain, Roko, Dario and Holden have made provide a compelling case for the idea that charities should strive toward transparency and accountability. As Richard Feynman has said:
The first principle is that you must not fool yourself – and you are the easiest person to fool.
Because it's harder to fool others than it is to fool oneself, I think that the case for making charities transparent and accountable is very strong.
SIAI does not presently exhibit high levels of transparency and accountability. I agree with what I interpret to be Dario's point above: that in evaluating charities which are not transparent and accountable, we should assume the worst. For this reason, together with the concerns which I express in Existential Risk and Public Relations, I believe that saving money in a donor-advised fund with a view toward donating to a transparent and accountable future existential risk organization has higher expected value than donating to SIAI now does.
Because I take astronomical waste seriously and believe in shutting up and multiplying, I believe that reducing existential risk is ultimately more important than developing world aid. I would very much like it if there were a highly credible existential risk charity. At present, I do not feel that SIAI is a credible existential risk charity. One LW poster sent me a private message saying:
I've suspected for a long time that the movement around EY might be a sophisticated scam to live off donations of nonconformists
I do not believe that Eliezer is consciously attempting to engage in a scam to live off of donations, but I believe that (like all humans) he is subject to subconscious influences which may lead him to act as though he were consciously running a scam to live off of the donations of nonconformists. In light of Hanson's points, it would not be surprising if this were the case. The very fact that I received such a message is a sign that SIAI has public relations problems.
I encourage LW posters who find this post compelling to visit and read the materials available at GiveWell, which is, as far as I know, the only charity evaluator which places high emphasis on impact, transparency and accountability. I encourage LW posters who are interested in existential risk to contact GiveWell expressing interest in GiveWell evaluating existential risk charities. I would also note that for LW posters who are interested in finding transparent and accountable organizations, donating to GiveWell's recommended charities may be a useful way to signal seriousness to the GiveWell staff.
I encourage SIAI to strive toward greater transparency and accountability. For starters, I would encourage SIAI to follow the example set by GiveWell and put a page on its website called "Mistakes" publicly acknowledging its past errors. I'll also note that GiveWell incentivizes charities to disclose failures by granting them a 1-star rating. As Elie Hassenfeld explains:
As usual, we’re not looking for marketing materials, and we won’t accept “weaknesses that are really strengths” (or reports that blame failure entirely on insufficient funding/support from others). But if you share open, honest, unadulterated evidence of failure, you’ll join a select group of organizations that have a GiveWell star.
I believe that the fate of humanity depends on the existence of transparent and accountable organizations. This is both because I believe that transparent and accountable organizations are more effective and because I believe that people are more willing to give to them. As Holden says:
I must say that, in fact, much of the nonprofit sector fits incredibly better into Prof. Hanson’s view of charity as “wasteful signaling” than into the traditional view of charity as helping.
[...]
Perhaps ironically, if you want a good response to Prof. Hanson’s view, I can’t think of a better place to turn than GiveWell’s top-rated charities. We have done the legwork to identify charities that can convincingly demonstrate positive impact. No matter what one thinks of the sector as a whole, they can’t argue that there are no good charitable options - charities that really will use your money to help people - except by engaging with the specifics of these charities’ strong evidence.
Valid observations that the sector is broken - or not designed around helping people - are no longer an excuse not to give.
Because our Bayesian prior is so skeptical, we end up with charities that you can be confident in, almost no matter where you’re coming from.
I believe that at present the most effective way to reduce existential risk is to work toward the existence of a transparent and accountable existential risk organization.
Added 08/23:
- See my responses to Yvain's comment for my answer to Yvain's question "...why the feeling that they're not transparent and accountable?"
- See my response to Jordan's comment for an explanation of why I did not discuss FHI in the present post.
Jonah,
Thanks for expressing an interest in donating to SIAI.
I assure you that we are very interested in getting the GiveWell stamp of approval. Michael Vassar and Anna Salamon have corresponded with Holden Karnofsky on the matter and we're trying to figure out the best way to proceed.
If it were just a matter of SIAI becoming more transparent and producing a larger number of clear outputs I would say that it is only a matter of time. As it stands, GiveWell does not know how to objectively evaluate activities focused on existential risk reduction. For that matter, neither do we, at least not directly. We don't know of any way to tell what percentage of worlds that branch off from this one go on to flourish and how many go on to die. If GiveWell decides not to endorse charities focused on existential risk reduction as a general policy, there is little we can do about it. Would you consider an alternative set of criteria if this turns out to be the case?
We think that UFAI is the largest known existential risk and that the most complete solution - FAI - addresses all other known risks (as well as the goals of every other charitable cause) as a special case. I don't mean to imply that AI is the only risk worth addressing at the moment, but it certainly seems to us to be the best value on the margin. We are working to make the construction of UFAI less likely through outreach (conferences like the Summit, academic publications, blog posts like The Sequences, popular books and personal communication) and make the construction of FAI more likely through direct work on FAI theory and the identification and recruitment of more people capable of working on FAI. We've met and worked with several promising candidates in the past few months. We'll be informing interested folk about our specific accomplishments in our new monthly newsletter, the June/July issue of which was sent out a few weeks ago. You can sign up here.
It would have been a good idea for you to watch the videos yourself before assuming that XiXiDu's summaries (not actual quotes, despite the quotation marks that surrounded them) were accurate. Eliezer makes it very clear, over and over, that he is speaking about the value of contributions at the margin. As others have already pointed out, it should not be surprising that we think the best way to "help save the human race" is to contribute to FAI being built before UFAI. If we thought there was another higher-value project then we would be working on that. Really, we would. Everyone at SIAI is an aspiring rationalist first and a singularitarian second.
Yes, I would consider an alternative set of criteria if this turns out to be the case.
I have long felt that GiveWell places too much emphasis on demonstrated impact and believe that in doing so GiveWell may be missing some of the highest expected value opportunities for donors.