** cross-posted from http://singinst.org/2011winterfundraiser/ **
Contains detailed info about accomplishments and plans at SI. Thanks for supporting our work! -Louie
ARTIFICIAL INTELLIGENCE MORE RELEVANT THAN EVER
Recent books like Machine Ethics from Cambridge University Press and Robot Ethics from MIT Press, along with the U.S. military-funded research that resulted in Governing Lethal Behavior in Autonomous Robots, show that the world is waking up to the challenges of building safe and ethical AI. But these projects focus on limited AI applications and fail to address the most important concern: how to ensure that smarter-than-human AI benefits humanity. The Singularity Institute has been working on that problem longer than anyone, beginning a full decade before the Singularity landed on the cover of TIME magazine.
ACCOMPLISHMENTS IN 2011
2011 was our biggest year yet. Since the year began, we have:
- Held our annual Singularity Summit in New York City, with more than 900 in attendance. Speakers included inventor and futurist Ray Kurzweil, economist Tyler Cowen, PayPal co-founder Peter Thiel, Skeptic publisher Michael Shermer, Mathematica and WolframAlpha creator Stephen Wolfram, neuroscientist Christof Koch, MIT physicist Max Tegmark, and famed Jeopardy! contestant Ken Jennings.
- Held a smaller Singularity Summit in Salt Lake City.
- Held a one-week Rationality Minicamp and a ten-week Rationality Boot Camp.
- Created the Research Associates program, which currently has 7 researchers coordinating with the Singularity Institute.
- Published our Singularity FAQ, IntelligenceExplosion.com, and Friendly-AI.com.
- Wrote three chapters for Springer's upcoming volume The Singularity Hypothesis, along with four other research papers.
- Began work on a new, clearer website design with lots of new content, which should go live in Q1 2012.
- Began outlining open problems in Singularity research to help outside collaborators better understand our research priorities.
FUTURE PLANS YOU CAN HELP SUPPORT
In the coming year, we plan to do the following:
- Hold our annual Singularity Summit, in San Francisco this year.
- Improve organizational transparency by creating a simpler, easier-to-use website that includes Singularity Institute planning and policy documents.
- Publish a document of open research problems in Singularity research, to clarify the research space and encourage other researchers to contribute to our mission.
- Add additional skilled researchers to our Research Associates program.
- Publish a well-researched document making the case for existential risk reduction as optimal philanthropy.
- Diversify our funding sources by applying for targeted grants and advertising our affinity credit card program.
We appreciate your support for our high-impact work. As Skype co-founder Jaan Tallinn said:
We became the dominant species on this planet by being the most intelligent species around. This century we are going to cede that crown to machines… Since we have only one shot at getting this transition right, the importance of Singularity Institute's work cannot be overestimated.
Now is your last chance to make a tax-deductible donation in 2011.
If you'd like to support our work, please donate now!
Hi, here are the details of whom I spoke with and why:
A couple of other comments:
And a tangential comment/question for Louie: I do not understand why you link to my two LW posts using the anchor text you use. These posts are not about GiveWell's process. They both argue that standard Bayesian inference indicates against the literal use of non-robust expected value estimates, particularly in "Pascal's Mugging" type scenarios. Michael Vassar's response to the first of these was that I was attacking a straw man. There are unresolved disagreements about some of the specific modeling assumptions and implications of these posts, but I don't see any way in which they imply a "limited process" or "blinding to the possibility of SIAI's being a good giving opportunity." I do agree that SIAI hasn't been a fit for our standard process (and is more suited to GiveWell Labs) but I don't see anything in these posts that illustrates that - what do you have in mind here?
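For concreteness, the core technical claim in those posts can be sketched as a standard normal-normal Bayesian update: the posterior mean is a precision-weighted average of the prior and the estimate, so a high-variance ("non-robust") estimate gets shrunk almost entirely back toward the prior. Below is a minimal sketch of that adjustment; the function name and all numbers are illustrative, not taken from the posts themselves.

```python
# Minimal sketch of the Bayesian shrinkage argument: combine a normal
# prior over an intervention's true value with a noisy, unbiased
# estimate. All numbers here are made up for illustration.

def posterior_mean(prior_mean, prior_var, estimate, estimate_var):
    """Posterior mean after updating a normal prior on a normal
    estimate: a precision-weighted average of prior and estimate."""
    w = prior_var / (prior_var + estimate_var)  # weight on the estimate
    return prior_mean + w * (estimate - prior_mean)

# A robust estimate (low variance) moves the posterior substantially...
print(posterior_mean(prior_mean=1.0, prior_var=1.0,
                     estimate=1000.0, estimate_var=1.0))   # ~500.5

# ...but a non-robust estimate (huge variance) is shrunk almost
# entirely back toward the prior, so an enormous claimed expected
# value barely moves the conclusion: the "Pascal's Mugging" point.
print(posterior_mean(prior_mean=1.0, prior_var=1.0,
                     estimate=1000.0, estimate_var=1e6))   # ~1.001
```

Under this model, the larger the variance attached to an expected value estimate, the less weight it gets, which is why the posts argue against taking non-robust estimates literally.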
Hi Holden,
I just read this thread today. I made a clarification upthread, under Louie's comment, about the description of my comment above. Also, I'd like to register that I thought your characterization of that interview was fine, even without the clarifications you make here.
As a technical point, I don't think these posts address "Pascal's Mugging" scenarios in ...