Michael Anissimov posted the following on the SIAI blog:
Thanks to the generosity of two major donors: Jaan Tallinn, a founder of Skype and Ambient Sound Investments, and Edwin Evans, CEO of the mobile applications startup Quinly, every contribution to the Singularity Institute up until January 20, 2011 will be matched dollar-for-dollar, up to a total of $125,000.
Interested in optimal philanthropy — that is, maximizing the future expected benefit to humanity per charitable dollar spent? The technological creation of greater-than-human intelligence has the potential to unleash an “intelligence explosion” as intelligent systems design still more sophisticated successors. This dynamic could transform our world as greatly as the advent of human intelligence has already transformed the Earth, for better or for worse. Thinking rationally about these prospects and working to encourage a favorable outcome offers an extraordinary chance to make a difference. The Singularity Institute exists to do so through its research, the Singularity Summit, and public education.
We support both direct engagement with the issues and the improvements in methodology and rationality needed to make better progress. Through our Visiting Fellows program, researchers from undergraduates to Ph.D.s pursue questions on the foundations of Artificial Intelligence and related topics in two-to-three-month stints. Our Resident Faculty, now up to four researchers from three last year, pursues long-term projects, including AI research, a literature review, and a book on rationality, the first draft of which was just completed. Singularity Institute researchers and representatives gave over a dozen presentations at half a dozen conferences in 2010. Our Singularity Summit conference in San Francisco was a great success, bringing together over 600 attendees and 22 top scientists and other speakers to explore cutting-edge issues in technology and science.
We are pleased to receive donation matching support this year from Edwin Evans of the United States, a long-time Singularity Institute donor, and Jaan Tallinn of Estonia, a more recent donor and supporter. Jaan recently gave a talk on the Singularity and his life at an entrepreneurial group in Finland. Here’s what Jaan has to say about us:
“We became the dominant species on this planet by being the most intelligent species around. This century we are going to cede that crown to machines. After we do that, it will be them steering history rather than us. Since we have only one shot at getting the transition right, the importance of SIAI’s work cannot be overestimated. Not finding any organisation to take up this challenge as seriously as SIAI on my side of the planet, I conclude that it’s worth following them across 10 time zones.”
– Jaan Tallinn, Singularity Institute donor
Make a lasting impact on the long-term future of humanity today — make a donation to the Singularity Institute and help us reach our $125,000 goal. For more detailed information on our projects and work, contact us at institute@intelligence.org or read our new organizational overview.
-----
Kaj's commentary: if you haven't done so recently, do check out the SIAI publications page. There are several new papers and presentations, of which I thought Carl Shulman's Whole Brain Emulations and the Evolution of Superorganisms made for particularly fascinating (and scary) reading. SIAI's finally starting to get its paper-writing machinery into gear, so let's give them money to make that possible. There's also a static page about this challenge; if you're on Facebook, please take the time to "like" it there.
(Full disclosure: I was an SIAI Visiting Fellow in April-July 2010.)
As long as you presume that the SIAI saves a potential galactic civilization from extinction (i.e. from never being created), and assign a high enough probability to that outcome, nobody is going to be able to inform you of a charity with a higher payoff. At least as long as no other organization makes similar claims (implicitly or explicitly).
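To make the structure of that argument explicit, here is a minimal expected-value sketch; all of the numbers in it are illustrative assumptions of mine, not figures from this thread or from SIAI:

E[\text{lives per dollar}] = \frac{p \cdot V}{C} \approx \frac{10^{-15} \times 10^{38}\ \text{lives}}{10^{8}\ \text{dollars}} = 10^{15}\ \text{lives per dollar}

With a payoff V on the order of a galactic civilization, even a success probability p as small as 10^-15 swamps a conventional charity that saves a life for roughly $1,000 (about 10^-3 lives per dollar), which is exactly why no such charity can "outbid" the claim once the astronomical payoff is granted.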
If you don't mind, I would like you to state some numerical probability estimates:
I'd also like you to tackle some problems I see regarding the SIAI in its current form:
Transparency
How do you know that they are trying to deliver what they are selling? If you believe the premise of AI going FOOM, and that the SIAI is trying to implement a binding policy under which the first AGI will FOOM, then you believe that the SIAI is an organisation involved in shaping the future of the universe. If the stakes are this high, there is a strong incentive for deception. Can you conclude, just because someone writes a lot of ethically correct articles and papers, that this output is reflective of their true goals?
Agenda and Progress
The current agenda seems very broad and vague. Can the SIAI make effective progress on such an agenda, compared to specialized charities and workshops that focus on narrower sub-goals?
As multifoliaterose implied here, the task of getting an AI to recognize humans as distinguished beings already seems too broad a problem to tackle directly at the moment. Might it be more effective, at this point, to concentrate on supporting other causes leading towards the general goal of AI-associated existential risk mitigation?
Third Party Review
Without being an expert and without any peer review, how sure can you be about the given premises (AI going FOOM etc.) and the effectiveness of their current agenda?
Also, what conclusion should one draw from the fact that at least three people who have worked for the SIAI, or have been in close contact with it, disagree with some of its stronger claims? Robin Hanson does not seem to be convinced that donating to the SIAI is an effective way to mitigate risks from AI. Ben Goertzel does not believe in the "Scary Idea". And Katja Grace thinks AI is no big threat.
More
My own estimations
Therefore, my probability that a donation to the SIAI pays off: 0.0000003%
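(Plugging that figure into the same illustrative sketch as above, with the same assumed payoff of 10^38 lives and cost of 10^8 dollars:

E[\text{lives per dollar}] = \frac{3 \times 10^{-9} \times 10^{38}}{10^{8}} = 3 \times 10^{21}

So even this very low estimate still yields an astronomical expected value under the logic of the first paragraph; the real disagreement is over the probabilities themselves, and over whether raw expected-value maximization should be trusted at such extremes.)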
This part is the one that seems the most different from my own probabilities:
So, do you think the default case is a friendly AI? Or at least an innocuous AI? Or that friendly AI is easy enough that whoever first makes a fooming AI will get the friendliness part right with no influence from the SIAI?