Your posts on SIAI have had a veneer of evenhandedness and fairness, and that continues here. But given what you don’t say in your posts, I cannot avoid the impression that you started out with the belief that SIAI was not a credible charity and rather than investigating the evidence both for and against that belief, you have marshaled the strongest arguments against donating to SIAI and ignored any evidence in favor of donating to SIAI. I almost hesitate to link to EY lest you dismiss me as one of his acolytes, but see, for example, A Rational Argument.
In your top-level posts you have eschewed references to any of the publicly visible work that SIAI does such as the Summit and the presentation and publication of academic papers. Some of this work is described at this link to SIAI’s description of its 2009 achievements. The 2010 Summit is described here. As for Eliezer’s current project, at the 2009 achievements link, SIAI has publicized the fact that he is working on a book on rationality:
...Yudkowsky is now converting his blog sequences into the planned rationality book, which he hopes will significantly assist in attracting and inspiring talented individuals to effectively
I do believe that at the margin the issue worthy of greatest consideration is transparency and accountability and I believe that this justifies giving to VillageReach over SIAI.
What about everything else that isn't the margin? What is your expected value of SIAI's public accomplishments, to date, in human lives saved? What is that figure for VillageReach? Use pessimistic figures for SIAI and optimistic ones for VillageReach if you must, but come up with numbers and then multiply them. Your arguments are not consistent with expected utility maximization.
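To make the multiplication concrete, here is a minimal sketch of the comparison being asked for, in Python. Every number in it is a hypothetical placeholder chosen for illustration, not an estimate anyone in this thread has endorsed:

```python
# Expected-value comparison with deliberately made-up numbers:
# pessimistic for SIAI, optimistic for VillageReach.

def expected_lives_saved(p_impact: float, lives_if_impact: float) -> float:
    """Expected lives saved = P(charity makes the key difference) * lives at stake."""
    return p_impact * lives_if_impact

# Pessimistic placeholder for SIAI: a one-in-ten-million chance of averting
# a catastrophe costing ~10^10 lives (present population only; an
# astronomical-waste accounting would use a vastly larger figure).
siai = expected_lives_saved(p_impact=1e-7, lives_if_impact=1e10)

# Optimistic placeholder for VillageReach: near-certain logistics improvements
# that save ~1,000 lives per comparable donation.
villagereach = expected_lives_saved(p_impact=0.9, lives_if_impact=1e3)

print(f"SIAI (pessimistic):        {siai:,.0f} expected lives")         # 1,000
print(f"VillageReach (optimistic): {villagereach:,.0f} expected lives")  # 900
```

The point is not these particular outputs but that the comparison only becomes an argument once both sides are actually multiplied out.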
You would be much better off if you were directly offering SIAI financial incentives to improve the expected value of its work. Donating to VillageReach is not the optimal use of money for maximizing what you value.
The bulk of this is about a vague impression that SIAI isn't transparent and accountable. You gave one concrete example of something they could improve: having a list of their mistakes on their website. This isn't a bad idea, but AFAIK GiveWell is about the only charity that currently does this, so it doesn't seem like a specific failure on SIAI's part not to include this. So why the feeling that they're not transparent and accountable?
SIAI's always done a good job of letting people know exactly how it's raising awareness - you can watch the Summit videos yourself if you want. They could probably do a bit more to publish appropriate financial records, but I don't think that's your real objection. Besides that, what? Anti-TB charities can measure how much less TB there is per dollar invested; SIAI can't measure what percentage safer the world is, since the world-saving is still in the basic-research phase. You can't measure the value of the Manhattan Project in "cities destroyed per year" while it's still going on.
By the Outside View, charities that can easily measure their progress with statistics like "cases of TB prevented" are better than those that can't. By the...
I believe that at present GiveWell's top ranked charities VillageReach and StopTB are better choices than SIAI, even for donors like utilitymonster who take astronomical waste seriously and believe in the ideas expressed in the cluster of blog posts linked under Shut Up and Multiply.
The invocation of VillageReach in addressing those aggregative utilitarians concerned about astronomical waste here seems baffling to me. Consider these three possibilities:
1) SIAI at the margin has a negative expected impact on our chances of avoiding existential risks, so shouldn't be donated to. VillageReach is irrelevant and adds nothing to the argument; you could have said "aggregative utilitarians would do better to burn their cash." Why even distract would-be efficient philanthropists with this rather than some actual existential-risk-focused endeavour, e.g. FHI, or a donor-advised fund for existential risk, or funding a GiveWell existential risk program, or conditioning donations based on some transparency milestones?
2) SIAI at the margin has significant positive expected impact on our chances of avoiding existential risks. VillageReach may very slightly and indirectly reduce existe...
Your points are fair; I have edited the top level post accordingly to eliminate reference to VillageReach and StopTB.
SIAI does not presently exhibit high levels of transparency and accountability... For this reason together with the concerns which I express about Existential Risk and Public Relations, I believe that at present GiveWell's top ranked charities VillageReach and StopTB are better choices than SIAI
I have difficulty taking this seriously. Someone else can respond to it.
I agree with what I interpret to be Dario's point above: that in evaluating charities which are not transparent and accountable, we should assume the worst.
Assuming that much of the worst isn't rational. It would be a convenient soldier for your argument, but it's not the odds to bet at. Also, you don't make clear what constitutes a sufficient level of transparency and accountability, though of course you will now carefully look over all of SIAI's activities directed at transparency and accountability, and decide that the needed level is somewhere above that.
You say you assume the worst, and that other people should act accordingly. Would you care to state "the worst", your betting odds on it, how much you're willing to bet, and what neutral third party you would accept as providing the verdict if they...
Normally I refrain from commenting about the tone of a comment or post, but the discussion here revolves partly around the public appearance of SIAI, so I'll say:
this comment has done more to persuade me to stop being a monthly donor to SIAI than anything else I've read or seen.
This isn't a comment about the content of your response, which I think has valid points (and which multifoliaterose has at least partly responded to).
this comment has done more to persuade me to stop being a monthly donor to SIAI than anything else I've read or seen.
It is certainly Eliezer's responses and not multi's challenges which are the powerful influence here. Multi has effectively given Eliezer a platform from which to advertise the merits of SIAI as well as demonstrate that, contrary to suspicions, Eliezer is in fact able to handle situations in accordance with his own high standards of rationality despite the influences of his ego. This is not what I've seen recently. He has focussed on retaliation against multi at whatever weak points he can find and largely neglected to do what will win. Winning in this case would be demonstrating exactly why people ought to trust him to be able to achieve what he hopes to achieve (by which I mean 'influence', not 'guarantee', FAI protection of humanity).
I want to see more of this:
With Michael Vassar in charge, SIAI has become more transparent, and will keep on doing things meant to make it more transparent
and less of this:
...I have to say that my overall impression here is of someone who manages to talk mostly LW language most of the time, but when his argument requires a step that
This sort of conversation just makes me feel tired. I've had debates before about my personal psychology and feel like I've talked myself out about all of them. They never produced anything positive, and I feel that they were a bad sign for the whole mailing list they appeared on - I would be horrified to see LW go the way of SL4. The war is lost as soon as it starts - there is no winning move. I feel like I'm being held to an absurdly high standard, being judged as though I were trying to be the sort of person that people accuse me of thinking I am, that I'm somehow supposed to produce exactly the right mix of charming modesty while still arguing my way into enough funding for SIAI... it just makes me feel tired, and like I'm being held to a ridiculously high standard, and that it's impossible to satisfy people because the standard will keep going up, and like I'm being asked to solve PR problems that I never signed up for. I'll solve your math problems if I can, I'll build Friendly AI for you if I can, if you think SIAI needs some kind of amazing PR person, give us enough money to hire one, or better yet, why don't you try being perfect and see whether it's as easy as it so...
I really appreciate this response. In fact, to mirror Jordan's pattern I'll say that this comment has done more to raise my confidence in SIAI than anything else in the recent context.
I'll solve your math problems if I can, I'll build Friendly AI for you if I can, if you think SIAI needs some kind of amazing PR person, give us enough money to hire one
I'm working on it, to within the limits of my own entrepreneurial ability and the costs of serving my own personal mission. Not that I would allocate such funds to a PR person. I would prefer to allocate it to research of the 'publish in traditional journals' kind. If I were in the business of giving advice, I would give the same advice you have no doubt heard 1,000 times: the best thing that you personally could do for PR isn't to talk about SIAI but to get peer-reviewed papers published. Even though academia is far from perfect, riddled with biases, and perhaps inclined to have a certain resistance to your impingement, it is still important.
or better yet, why don't you try being perfect and see whether it's as easy as it sounds while you're handing out advice?
Now, now, I think the 'give us the cash' helps you out rather a lot mor...
Yes, the standards will keep going up.
And, if you draw closer to your goal, the standards you're held to will dwarf what you see here. You're trying to build god, for christ's sake. As the end game approaches more and more people are going to be taking that prospect more and more seriously, and there will be a shit storm. You'd better believe that every last word you've written is going to be analysed and misconstrued, often by people with much less sympathy than us here.
It's a lot of pressure. I don't envy you. All I can offer you is: =(
Well said.
In the past I've seen Eliezer respond to criticism very well. His responses seemed to be in good faith, even when abrasive. I use this signal as a heuristic for evaluating experts in fields I know little about. I'm versed in the area of existential risk reduction well enough not to need this heuristic, but I'm not versed in the area of the effectiveness of SIAI.
Eliezer's recent responses have reduced my faith in SIAI, which, after all, is rooted almost solely in my impression of its members. This is a double stroke: my faith in Eliezer himself is reduced, and the reason for this is a public appearance which will likely prevent others from supporting SIAI, which is more evidence to me that SIAI won't succeed.
SIAI (and Eliezer) still has a lot of credibility in my eyes, but I will be moving away from heuristics and looking for more concrete evidence as I debate whether to continue to be a supporter.
BTW, when people start saying, not, "You offended me, personally" but "I'm worried about how other people will react", I usually take that as a cue to give up.
Yes. Your first line
I have difficulty taking this seriously. Someone else can respond to it.
was devoid of explicit information. It was purely negative.
Implicitly, I assume you meant that existential risk reduction is so important that no other 'normal' charity can compare in cost effectiveness (utility bought per dollar). While I agree that existential risk reduction is insanely important, it doesn't follow that SIAI is a good charity to donate to. SIAI may actually be hurting the cause (in one way, by hurting public opinion), and this is one of multi's points. Your implicit statement seems to me to be a rebuke of this point sans evidence, amounting to simply saying nuh-uh.
You say
you don't make clear what constitutes a sufficient level of transparency and accountability
which is a good point. But you then go on to say
though of course you will now carefully look over all of SIAI's activities directed at transparency and accountability, and decide that the needed level is somewhere above that.
essentially accusing multi of a crime in rationality before he commits it. On a site devoted to rationality, that is a serious accusation. It's understood on this site that there are a mil...
You would have done that by putting the money in a DAF and announcing your policy, rather than providing incentives in what you say is the wrong field. You're signaling that you will use money wastefully (in its direct effects) by your own standards rather than withhold it until a good enough recipient emerges by your standards.
Your objections focus on EY. SIAI > EY. Therefore, SIAI's transparency ≠ EY's transparency. SIAI's openness to criticism > EY's openness to criticism.
Go to SIAI's website, and you can find a list of their recent accomplishments. You can also get a detailed breakdown of funding by grant, and make an earmarked donation to a specific grant. That's pretty transparent.
OTOH, EY is non-transparent. Deliberately so; and he appears to intend to continue to be so. And you can't find a breakdown on SIAI's website of what fraction of their donations go into the "Eliezer Yudkowsky black ops" part of their budget. It would be nice to know that.
If you're worried about what EY is doing and don't want to give him money, you can earmark money for other purposes. Of course, money is fungible, so that capability has limited value.
This recentish page was a step towards transparency:
http://singinst.org/grants/challenge
Transparency is generally desirable - but it has some costs.
Okay, thinking it over for the last hour, I now have a concrete statement to make about my willingness to donate to SIAI. I promise to donate $2000 to SIAI in a year's time if by that time SIAI has secured a 2-star rating from GiveWell for donors who are interested in existential risk.
I will urge GiveWell to evaluate existential risk charities with a view toward making this condition a fair one. If after a year's time GiveWell has not yet evaluated SIAI, my offer will still stand.
[Edit: Slightly rephrased, removed condition involving quotes which had been taken out of context.]
Jonah,
Thanks for expressing an interest in donating to SIAI.
(a) SIAI has secured a 2-star rating from GiveWell for donors who are interested in existential risk.
I assure you that we are very interested in getting the GiveWell stamp of approval. Michael Vassar and Anna Salamon have corresponded with Holden Karnofsky on the matter and we're trying to figure out the best way to proceed.
If it were just a matter of SIAI becoming more transparent and producing a larger number of clear outputs I would say that it is only a matter of time. As it stands, GiveWell does not know how to objectively evaluate activities focused on existential risk reduction. For that matter, neither do we, at least not directly. We don't know of any way to tell what percentage of worlds that branch off from this one go on to flourish and how many go on to die. If GiveWell decides not to endorse charities focused on existential risk reduction as a general policy, there is little we can do about it. Would you consider an alternative set of criteria if this turns out to be the case?
We think that UFAI is the largest known existential risk and that the most complete solution - FAI - addresses all othe...
If GiveWell decides not to endorse charities focused on existential risk reduction as a general policy, there is little we can do about it. Would you consider an alternative set of criteria if this turns out to be the case?
Yes, I would consider an alternative set of criteria if this turns out to be the case.
I have long felt that GiveWell places too much emphasis on demonstrated impact and believe that in doing so GiveWell may be missing some of the highest expected value opportunities for donors.
It would have been a good idea for you to watch the videos yourself before assuming that XiXiDu's summaries (not actual quotes, despite the quotation marks that surrounded them) were accurate.
I was not sure that XiXiDu's summaries were accurate, which is why I added a disclaimer to my original remark. I have edited my original comment accordingly.
I apologize to Eliezer for inadvertently publicizing a misinterpretation of his views.
(b) You publicly apologize for and qualify your statements quoted by XiXiDu here. I believe that these statements are very bad for public relations. Even if true, they are only true at the margin and so at the very least need to be qualified in that way.
(Note: I have not had the chance to verify that XiXiDu is quoting you correctly because I have not had access to online video for the past few weeks - condition (b) is given on the assumption that the quotes are accurate)
It is always wrong to demand the retraction of a quote which you have not seen in context.
I've suspected for a long time that the movement around EY might be a sophisticated scam to live off donations of nonconformists...
Before someone brings it up: I further said, "but that's just another crazy conspiracy theory". I wouldn't say something like this and actually mean it without being reasonably confident. And I'm not; I'm just being a troll sometimes. Although other people will inevitably come to this conclusion. You just have to read this post, see the massive amount of writings about rationality and the little cherry on the cake ...
I must say that, in fact, much of the nonprofit sector fits incredibly better into Prof. Hanson’s view of charity as “wasteful signaling” than into the traditional view of charity as helping.
Charity is largely about signalling. Often, signalling has to be expensive to be credible. The "traditional view of charity" seems like naive stupidity. That is not necessarily something to lament. It is great that people want to signal their goodness, wealth, and other virtues via making the world a better place! Surely it is best if everyone involved understands what is going on in this area - rather than living in denial of human nature.
SIAI does not presently exhibit high levels of transparency and accountability... For this reason together with the concerns which I express about Existential Risk and Public Relations, I believe that at present GiveWell's top ranked charities VillageReach and StopTB are better choices than SIAI
A problem with Pascal's Mugging arguments is that once you commit yourself to taking seriously very unlikely events (because they are multiplied by huge potential utilities), if you want to be consistent, you must take into account all potentially relevant unlikely events, not just the ones that point in your desired direction.
To be sure, you can come up with a story in which SIAI with probability epsilon makes a key positive difference, for bignum expected lives saved. But by the same token you can come up with stories in which SIAI with probability epsilon makes a key negative difference (e.g. by convincing people to abandon fruitful lines of research for fruitless ones), for bignum expected lives lost. Similarly, you can come up with stories in which even a small amount of resources spent elsewhere, with probability epsilon makes a key positive difference (e.g. a child saved from death by potentially curable disease, may grow up to make a critical scientific breakthrough or play a role in preserving world peace), for bignum expected lives saved.
Intuition would have us reject Pascal's Mugging, but when you think it through in full detail, the logical conclusion is that we should... reject Pascal's Mugging. It does actually reduce to normality.
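A toy rendering of that cancellation, under the strong simplifying assumption that the far-fetched positive and negative stories are exactly symmetric (every number below is an arbitrary placeholder, not an estimate):

```python
# Toy model of the Pascal's Mugging symmetry argument.
# eps and bignum are arbitrary placeholders, not estimates.

eps = 1e-12      # probability of any single far-fetched story
bignum = 1e30    # lives at stake in each such story

# Far-fetched stories point in both directions; absent evidence favoring
# one direction, they arrive in offsetting pairs.
ev_key_positive_difference = eps * bignum    # SIAI tips the balance well
ev_key_negative_difference = -eps * bignum   # SIAI diverts fruitful research

# The ordinary, measurable term for a conventional donation.
ev_measurable = 0.9 * 1e3   # e.g. ~900 expected lives from proven interventions

total = ev_key_positive_difference + ev_key_negative_difference + ev_measurable
print(total)  # 900.0 -- the epsilon terms cancel and normality remains
```

In reality the offsetting terms won't be exactly equal, but unless you have evidence that they differ, treating them as symmetric is the consistent default, and the calculation is then dominated by the ordinary terms.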
Wow. Do you really think this sort of argument can turn people to SIAI, rather than against any cause that uses tiny probabilities of vast utilities to justify itself?
I really want to put up a post that is highly relevant to this topic. I've been working with a couple of friends on an idea to alter personal incentives to solve the kinds of public good provision problems that charities and other organizations face, and I want to get some feedback from this community. Is someone with enough points to post willing to read over it and post for me? Or can I get some upvotes? (I know that this might be a bit rude, but I really want to get this out there ASAP).
Thanks a bunch, Allen Wang
[Added 02/24/14: After writing this post, I discovered that I had miscommunicated owing to not spelling out my thinking in sufficient detail, and also realized that it carried unnecessary negative connotations (despite conscious effort on my part to avoid them). See Reflections on a Personal Public Relations Failure: A Lesson in Communication. SIAI (now MIRI) has evolved substantially since 2010 when I wrote this post, and the criticisms made in the post don't apply to MIRI as presently constituted.]
Follow-up to: Existential Risk and Public Relations, Other Existential Risks, The Importance of Self-Doubt
Over the last few days I've made a string of posts levying strong criticisms against SIAI. This activity is not one that comes naturally to me. In The Trouble With Physics, Lee Smolin writes:
My feelings about and criticisms of SIAI are very much analogous to Smolin's feelings about and criticisms of string theory. Criticism hurts feelings and I feel squeamish about hurting feelings. I've found the process of presenting my criticisms of SIAI emotionally taxing and exhausting. I fear that if I persist for too long I'll move into the region of negative returns. For this reason I've decided to cut my planned sequence of posts short and explain what my goal has been in posting in the way that I have.
Edit: Removed irrelevant references to VillageReach and StopTB, modifying post accordingly.
As Robin Hanson never ceases to emphasize, there's a disconnect between what humans say they're trying to do and what their revealed goals are. Yvain has written about this topic recently in his post Conflicts Between Mental Subagents: Expanding Wei Dai's Master-Slave Model. This problem becomes especially acute in the domain of philanthropy. Three quotes on this point:
(1) In Public Choice and the Altruist's Burden Roko says:
(2) In My Donation for 2009 (guest post from Dario Amodei) Dario says:
(3) In private correspondence about career choice, Holden Karnofsky said:
I believe that the points that Robin, Yvain, Roko, Dario and Holden have made provide a compelling case for the idea that charities should strive toward transparency and accountability. As Richard Feynman has said:
Because it's harder to fool others than it is to fool oneself, I think that the case for making charities transparent and accountable is very strong.
SIAI does not presently exhibit high levels of transparency and accountability. I agree with what I interpret to be Dario's point above: that in evaluating charities which are not transparent and accountable, we should assume the worst. For this reason together with the concerns which I express about Existential Risk and Public Relations, I believe that saving money in a donor-advised-fund with a view toward donating to a transparent and accountable future existential risk organization has higher expected value than donating to SIAI now does.
Because I take astronomical waste seriously and believe in shutting up and multiplying, I believe that reducing existential risk is ultimately more important than developing world aid. I would very much like it if there were a highly credible existential risk charity. At present, I do not feel that SIAI is a credible existential risk charity. One LW poster sent me a private message saying:
I do not believe that Eliezer is consciously attempting to engage in a scam to live off of the donations but I believe that (like all humans) he is subject to subconscious influences which may lead him to act as though he were consciously running a scam to live off of the donations of nonconformists. In light of Hanson's points, it would not be surprising if this were the case. The very fact that I received such a message is a sign that SIAI has public relations problems.
I encourage LW posters who find this post compelling to visit and read the materials available at GiveWell, which is, as far as I know, the only charity evaluator that places high emphasis on impact, transparency, and accountability. I encourage LW posters who are interested in existential risk to contact GiveWell expressing interest in GiveWell evaluating existential risk charities. I would note that it may be useful for LW posters who are interested in finding transparent and accountable organizations to donate to GiveWell's recommended charities, as a way to signal seriousness to the GiveWell staff.
I encourage SIAI to strive toward greater transparency and accountability. For starters, I would encourage SIAI to follow the example set by GiveWell and put a page on its website called "Mistakes" publicly acknowledging its past errors. I'll also note that GiveWell incentivizes charities to disclose failures by granting them a 1-star rating. As Elie Hassenfeld explains:
I believe that the fate of humanity depends on the existence of transparent and accountable organizations. This is both because I believe that transparent and accountable organizations are more effective and because I believe that people are more willing to give to them. As Holden says:
I believe that at present the most effective way to reduce existential risk is to work toward the existence of a transparent and accountable existential risk organization.
Added 08/23: