** cross-posted from http://singinst.org/2011winterfundraiser/ **
Contains detailed info about accomplishments and plans at SI. Thanks for supporting our work! -Louie
ARTIFICIAL INTELLIGENCE MORE RELEVANT THAN EVER
Recent books like Machine Ethics from Cambridge University Press and Robot Ethics from MIT Press, along with the U.S. military-funded research that resulted in Governing Lethal Behavior in Autonomous Robots, show that the world is waking up to the challenges of building safe and ethical AI. But these projects focus on limited AI applications and fail to address the most important concern: how to ensure that smarter-than-human AI benefits humanity. The Singularity Institute has been working on that problem longer than anyone, beginning a full decade before the Singularity landed on the cover of TIME magazine.
ACCOMPLISHMENTS IN 2011
2011 was our biggest year yet. Since the year began, we have:
- Held our annual Singularity Summit in New York City, with more than 900 in attendance. Speakers included inventor and futurist Ray Kurzweil, economist Tyler Cowen, PayPal co-founder Peter Thiel, Skeptic publisher Michael Shermer, Mathematica and WolframAlpha creator Stephen Wolfram, neuroscientist Christof Koch, MIT physicist Max Tegmark, and famed Jeopardy! contestant Ken Jennings.
- Held a smaller Singularity Summit in Salt Lake City.
- Held a one-week Rationality Minicamp and a ten-week Rationality Boot Camp.
- Created the Research Associates program, which currently has 7 researchers coordinating with the Singularity Institute.
- Published our Singularity FAQ, IntelligenceExplosion.com, and Friendly-AI.com.
- Wrote three chapters for Springer's upcoming volume The Singularity Hypothesis, along with four other research papers.
- Began work on a new, clearer website design with lots of new content, which should go live in Q1 2012.
- Began outlining open problems in Singularity research to help outside collaborators better understand our research priorities.
FUTURE PLANS YOU CAN HELP SUPPORT
In the coming year, we plan to do the following:
- Hold our annual Singularity Summit, in San Francisco this year.
- Improve organizational transparency by creating a simpler, easier-to-use website that includes Singularity Institute planning and policy documents.
- Publish a document of open research problems in Singularity research, to clarify the research space and encourage other researchers to contribute to our mission.
- Add additional skilled researchers to our Research Associates program.
- Publish a well-researched document making the case for existential risk reduction as optimal philanthropy.
- Diversify our funding sources by applying for targeted grants and advertising our affinity credit card program.
We appreciate your support for our high-impact work. As Skype co-founder Jaan Tallinn said:
"We became the dominant species on this planet by being the most intelligent species around. This century we are going to cede that crown to machines… Since we have only one shot at getting this transition right, the importance of Singularity Institute's work cannot be overestimated."
Now is your last chance to make a tax-deductible donation in 2011.
If you'd like to support our work, please donate now!
Thanks for the helpful comments! I wasn't aware of all those details above.
One of the posts has the sub-heading "The GiveWell approach," and all of the analysis in both posts uses examples of charities you're comparing. I agree you weren't just talking about the GiveWell process... you were talking about a larger philosophy of science you have that informs things like the GiveWell process.
I recognize that you're making sophisticated arguments for your points -- especially for the assumptions you claim simply must be true to satisfy your intuition that charities should be rewarded for transparency and punished otherwise. Those assumptions seem wise from a "getting things done" point of view for an organization like GiveWell, even though there is no mathematical reason they should be true -- only a human-level tit-for-tat shame/enforcement mechanism that you hope eventually makes them circularly "true" through repeated application. Seems fair enough.
But adding regression adjustments to cancel out the effectiveness of any charity that looks too effective to be believed (based on the common sense of the evaluator) seems like a pretty big finger on the scale. Why do so much analysis in the beginning if the last step of the algorithm is just "re-adjust effectiveness and expected value to equal what feels right"? Your adjustment factor amounts to a kind of Egalitarian Effectiveness Assumption: we are all created equal at turning money into goodness. Or perhaps it's more of a negative statement, like "none of us is any better than the best of us at turning money into goodness" -- where the upper limit on the best is something like 1000x, or whatever the evaluator has encountered in the past. Any claim made above that limit gets adjusted back down -- those guys were trying to Pascal's Mug us! That's the way in which there's a blinding effect: you disbelieve the claims of any group that claims to be more effective per capita than you think is possible.
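To make concrete the kind of adjustment I'm objecting to, here is a minimal sketch of the standard normal-normal Bayesian update that, as I understand it, underlies the regression argument. This is my own illustration with hypothetical numbers and function names, not GiveWell's actual model:

```python
# Minimal sketch of the "regression adjustment" under discussion.
# All priors and numbers here are hypothetical illustrations.

def posterior_effectiveness(claimed, claim_sd, prior_mean=1.0, prior_sd=2.0):
    """Shrink a claimed cost-effectiveness multiplier toward a prior.

    Standard normal-normal update: the posterior mean is a
    precision-weighted average of the prior mean and the claim,
    so noisier claims (larger claim_sd) get shrunk harder.
    """
    prior_prec = 1.0 / prior_sd ** 2
    claim_prec = 1.0 / claim_sd ** 2
    return (prior_prec * prior_mean + claim_prec * claimed) / (prior_prec + claim_prec)

# A modest claim backed by strong evidence moves the estimate a lot:
print(posterior_effectiveness(claimed=5.0, claim_sd=1.0))       # ~4.2

# An extraordinary claim backed by weak evidence barely moves it,
# no matter how large the claimed multiplier is:
print(posterior_effectiveness(claimed=1000.0, claim_sd=500.0))  # ~1.02
```

Under this model, a weakly evidenced 1000x claim ends up adjusted back to roughly the prior mean, which is the "blinding" effect I'm describing; the natural counter-argument is that shrinking claim_sd -- producing extraordinary evidence -- is what lets the estimate move.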
Louie, I think you're mischaracterizing these posts and their implications. The argument is much closer to "extraordinary claims require extraordinary evidence" than it is to "extraordinary claims should simply be disregarded." And I have outlined (in the conversation with SIAI) ways in which I believe SIAI could generate the evidence needed for me to put greater weight on its claims.
I wrote more in my comment follow-up on the first post about why an aversion to arguments that seem similar to "Pascal's Mugging" does not entail ...