** cross-posted from http://singinst.org/2011winterfundraiser/ **
Contains detailed info about accomplishments and plans at SI. Thanks for supporting our work! -Louie
ARTIFICIAL INTELLIGENCE MORE RELEVANT THAN EVER
Recent books like Machine Ethics from Cambridge University Press and Robot Ethics from MIT Press, along with the U.S. military-funded research that resulted in Governing Lethal Behavior in Autonomous Robots, show that the world is waking up to the challenges of building safe and ethical AI. But these projects focus on limited AI applications and fail to address the most important concern: how to ensure that smarter-than-human AI benefits humanity. The Singularity Institute has been working on that problem longer than anyone else, starting a full decade before the Singularity landed on the cover of TIME magazine.
ACCOMPLISHMENTS IN 2011
2011 was our biggest year yet. Since the year began, we have:
- Held our annual Singularity Summit in New York City, with more than 900 in attendance. Speakers included inventor and futurist Ray Kurzweil, economist Tyler Cowen, PayPal co-founder Peter Thiel, Skeptic publisher Michael Shermer, Mathematica and WolframAlpha creator Stephen Wolfram, neuroscientist Christof Koch, MIT physicist Max Tegmark, and famed Jeopardy! contestant Ken Jennings.
- Held a smaller Singularity Summit in Salt Lake City.
- Held a one-week Rationality Minicamp and a ten-week Rationality Boot Camp.
- Created the Research Associates program, which currently has seven researchers coordinating with the Singularity Institute.
- Published our Singularity FAQ, IntelligenceExplosion.com, and Friendly-AI.com.
- Written three chapters for Springer's upcoming volume The Singularity Hypothesis, along with four other research papers.
- Begun work on a new, clearer website design with lots of new content, which should go live in Q1 2012.
- Begun outlining open problems in Singularity research to help outside collaborators better understand our research priorities.
FUTURE PLANS YOU CAN HELP SUPPORT
In the coming year, we plan to do the following:
- Hold our annual Singularity Summit, this year in San Francisco.
- Improve organizational transparency by creating a simpler, easier-to-use website that includes Singularity Institute planning and policy documents.
- Publish a document of open problems in Singularity research, to clarify the research space and encourage other researchers to contribute to our mission.
- Add additional skilled researchers to our Research Associates program.
- Publish a well-researched document making the case for existential risk reduction as optimal philanthropy.
- Diversify our funding sources by applying for targeted grants and advertising our affinity credit card program.
We appreciate your support for our high-impact work. As Skype co-founder Jaan Tallinn said:
We became the dominant species on this planet by being the most intelligent species around. This century we are going to cede that crown to machines… Since we have only one shot at getting this transition right, the importance of Singularity Institute's work cannot be overestimated.
Now is your last chance to make a tax-deductible donation in 2011.
If you'd like to support our work, please donate now!
FWIW, the "Future Plans" list seems to me to somewhat understate the value of a donation. I realize it's fairly accurate in that it reflects SI's actual activities, yet it seems like it could be presented better.
For example, the first item is "hold the Summit." But I happen to know that the Summit generally breaks even or makes a little money, so my marginal dollar will not make or break the Summit. Similarly, a website redesign, while probably important, isn't exciting enough to be listed as the second item. The third item, publishing the open problems document, is a good one, though you should make it sound more exciting.
I think the donation drive page should thoroughly make the case that SI is the best use of someone's charity dollars -- that it's got a great team, great leadership, and is executing a plan with the highest probability of working at every step. That page should probably exist on its own, assuming the reader hasn't read any of the rest of the site, with arguments for why working explicitly on rationality is worthwhile; why transparency matters; why outreach to other researchers matters; what the researchers are currently spending time on and why those are the correct things for them to be working on; and so on. It can be long: long-form copy is known to work, and this seems like a correct application for it.
In fact, since you probably have other things to do, I'll do a little bit of copywriting myself to try to discover if this is really a good idea. I'll post some stuff here tomorrow after I've worked on it a bit.
I shall not complain. :)