** cross-posted from http://singinst.org/2011winterfundraiser/ **
Contains detailed info about accomplishments and plans at SI. Thanks for supporting our work! -Louie
ARTIFICIAL INTELLIGENCE MORE RELEVANT THAN EVER
Recent books like Machine Ethics from Cambridge University Press and Robot Ethics from MIT Press, along with the U.S. military-funded research that resulted in Governing Lethal Behavior in Autonomous Robots show that the world is waking up to the challenges of building safe and ethical AI. But these projects focus on limited AI applications and fail to address the most important concern: how to ensure that smarter-than-human AI benefits humanity. The Singularity Institute has been working on that problem longer than anybody, a full decade before the Singularity landed on the cover of TIME magazine.
ACCOMPLISHMENTS IN 2011
2011 was our biggest year yet. Since the year began, we have:
- Held our annual Singularity Summit in New York City, with more than 900 in attendance. Speakers included inventor and futurist Ray Kurzweil, economist Tyler Cowen, PayPal co-founder Peter Thiel, Skeptic publisher Michael Shermer, Mathematica and WolframAlpha creator Stephen Wolfram, neuroscientist Christof Koch, MIT physicist Max Tegmark, and famed Jeopardy! contestant Ken Jennings.
- Held a smaller Singularity Summit in Salt Lake City.
- Held a one-week Rationality Minicamp and a ten-week Rationality Boot Camp.
- Created the Research Associates program, which currently has seven researchers coordinating with the Singularity Institute.
- Published our Singularity FAQ, IntelligenceExplosion.com, and Friendly-AI.com.
- Wrote three chapters for Springer's forthcoming volume The Singularity Hypothesis, along with four other research papers.
- Began work on a new, clearer website design with lots of new content, which should go live in Q1 2012.
- Began outlining open problems in Singularity research to help outside collaborators better understand our research priorities.
FUTURE PLANS YOU CAN HELP SUPPORT
In the coming year, we plan to do the following:
- Hold our annual Singularity Summit, in San Francisco this year.
- Improve organizational transparency by creating a simpler, easier-to-use website that includes Singularity Institute planning and policy documents.
- Publish a document of open research problems in Singularity research, to clarify the research space and encourage other researchers to contribute to our mission.
- Add additional skilled researchers to our Research Associates program.
- Publish a well-researched document making the case for existential risk reduction as optimal philanthropy.
- Diversify our funding sources by applying for targeted grants and advertising our affinity credit card program.
We appreciate your support for our high-impact work. As Skype co-founder Jaan Tallinn said:
We became the dominant species on this planet by being the most intelligent species around. This century we are going to cede that crown to machines… Since we have only one shot at getting this transition right, the importance of Singularity Institute's work cannot be overestimated.
Now is your last chance to make a tax-deductible donation in 2011.
If you'd like to support our work: please donate now!
Louie, I think you're mischaracterizing these posts and their implications. The argument is much closer to "extraordinary claims require extraordinary evidence" than it is to "extraordinary claims should simply be disregarded." And I have outlined (in the conversation with SIAI) ways in which I believe SIAI could generate the evidence needed for me to put greater weight on its claims.
I wrote more in my comment followup on the first post about why an aversion to arguments that seem similar to "Pascal's Mugging" does not entail an aversion to supporting x-risk charities. (As mentioned in that comment, it appears that important SIAI staff share such an aversion, whether or not they agree with my formal defense of it.)
I also think the message of these posts is consistent with the best available models of how the world works - it isn't just about trying to set incentives. That's probably a conversation for another time. There seems to be a lot of confusion around these posts (especially the second), and I will probably post some clarification at a later date.