Cross-posted here.
(The Singularity Institute maintains Less Wrong, with generous help from Trike Apps, and much of the core content is written by salaried SI staff members.)
Thanks to the generosity of several major donors,† every donation made to the Singularity Institute from now until January 20th (deadline extended from the 5th) will be matched dollar-for-dollar, up to a total of $115,000! So please, donate now!
Now is your chance to double your impact while helping us raise up to $230,000 to fund our research program.
(If you're unfamiliar with our mission, please see our press kit and read our short research summary: Reducing Long-Term Catastrophic Risks from Artificial Intelligence.)
Now that Singularity University has acquired the Singularity Summit, and SI's interests in rationality training are being developed by the now-separate CFAR, the Singularity Institute is making a major transition. Most of the money from the Summit acquisition is being placed in a separate fund for a Friendly AI team, and therefore does not support our daily operations or other programs.
For 12 years we've largely focused on movement-building — through the Singularity Summit, Less Wrong, and other programs. This work was needed to build up a community of support for our mission and a pool of potential researchers for our unique interdisciplinary work.
Now, the time has come to say "Mission Accomplished Well Enough to Pivot to Research." Our community of supporters is now large enough that qualified researchers are available for us to hire, if we can afford to hire them. Having published 30+ research papers and dozens more original research articles on Less Wrong, we certainly haven't neglected research. But in 2013 we plan to pivot so that a much larger share of the funds we raise is spent on research.
Accomplishments in 2012
- Held a one-week research workshop on one of the open problems in Friendly AI research, and made progress that participants estimate would be equivalent to 1-3 papers if published. (Details forthcoming. The workshop participants were Eliezer Yudkowsky, Paul Christiano, Marcello Herreshoff, and Mihaly Barasz.)
- Produced our annual Singularity Summit in San Francisco. Speakers included Ray Kurzweil, Steven Pinker, Daniel Kahneman, Temple Grandin, Peter Norvig, and many others.
- Launched the new Center for Applied Rationality, which ran 5 workshops in 2012, including Rationality for Entrepreneurs and SPARC (for young math geniuses), and also published one (early-version) smartphone app, The Credence Game.
- Launched the redesigned, updated, and reorganized Singularity.org website.
- Achieved most of the goals from our August 2011 strategic plan.
- Published 11 new research papers.
- Eliezer published the first 12 posts in his sequence Highly Advanced Epistemology 101 for Beginners, the precursor to his forthcoming sequence, Open Problems in Friendly AI.
- SI staff members published many other substantive articles on Less Wrong, including How to Purchase AI Risk Reduction, How to Run a Successful Less Wrong Meetup, a Solomonoff Induction tutorial, The Human's Hidden Utility Function (Maybe), How can I reduce existential risk from AI?, AI Risk and Opportunity: A Strategic Analysis, and Checklist of Rationality Habits.
- Launched our new volunteers platform, SingularityVolunteers.org.
- Hired two new researchers, Kaj Sotala and Alex Altair.
- Published our press kit to make journalists' lives easier.
- And of course much more.
Future Plans You Can Help Support
In the coming months, we plan to do the following:
- As part of Singularity University's acquisition of the Singularity Summit, we will be changing our name and launching a new website.
- Eliezer will publish his sequence Open Problems in Friendly AI.
- We will publish nicely-edited ebooks (Kindle, iBooks, and PDF) for many of our core materials, to make them more accessible: The Sequences, 2006-2009, Facing the Singularity, and The Hanson-Yudkowsky AI Foom Debate.
- We will publish several more research papers, including "Responses to Catastrophic AGI Risk: A Survey" and a short, technical introduction to timeless decision theory.
- We will set up the infrastructure required to host a productive Friendly AI team and try hard to recruit enough top-level math talent to launch it.
(Other projects are still being surveyed for likely cost and strategic impact.)
We appreciate your support for our high-impact work! Donate now, and seize a better-than-usual chance to move our work forward. Credit card transactions are securely processed using either PayPal or Google Checkout. If you have questions about donating, please contact Louie Helm at (510) 717-1477 or louie@intelligence.org.
† $115,000 of total matching funds has been provided by Edwin Evans, Mihaly Barasz, Rob Zahra, Alexei Andreev, Jeff Bone, Michael Blume, Guy Srinivasan, and Kevin Fischer.
I will mostly be traveling (for AGI-12) for the next 25 hours, but I will try to answer questions after that.
Just donated 400 €.
My New Year's resolution is tithing, split roughly half and half between "serious" causes and things like supporting my favorite webcomics/fansubbers/whatever. As part of the former, I decided to add 1000 € to the above donation.