Cross-posted here.
(The Singularity Institute maintains Less Wrong, with generous help from Trike Apps, and much of the core content is written by salaried SI staff members.)
Thanks to the generosity of several major donors,† every donation to the Singularity Institute made from now until January 20th (deadline extended from the 5th) will be matched dollar-for-dollar, up to a total of $115,000! So please, donate now!
Now is your chance to double your impact while helping us raise up to $230,000 to help fund our research program.
(If you're unfamiliar with our mission, please see our press kit and read our short research summary: Reducing Long-Term Catastrophic Risks from Artificial Intelligence.)
Now that Singularity University has acquired the Singularity Summit, and SI's interests in rationality training are being developed by the now-separate CFAR, the Singularity Institute is making a major transition. Most of the money from the Summit acquisition is being placed in a separate fund for a Friendly AI team, and therefore does not support our daily operations or other programs.
For 12 years we've largely focused on movement-building — through the Singularity Summit, Less Wrong, and other programs. This work was needed to build up a community of support for our mission and a pool of potential researchers for our unique interdisciplinary work.
Now, the time has come to say "Mission Accomplished Well Enough to Pivot to Research." Our community of supporters is now large enough that qualified researchers are available for us to hire, if we can afford to hire them. Having published 30+ research papers and dozens more original research articles on Less Wrong, we certainly haven't neglected research. But in 2013 we plan to pivot so that a much larger share of the funds we raise is spent on research.
Accomplishments in 2012
- Held a one-week research workshop on one of the open problems in Friendly AI research, and made progress that participants estimate would be equivalent to 1-3 papers if published. (Details forthcoming. The workshop participants were Eliezer Yudkowsky, Paul Christiano, Marcello Herreshoff, and Mihaly Barasz.)
- Produced our annual Singularity Summit in San Francisco. Speakers included Ray Kurzweil, Steven Pinker, Daniel Kahneman, Temple Grandin, Peter Norvig, and many others.
- Launched the new Center for Applied Rationality, which ran 5 workshops in 2012, including Rationality for Entrepreneurs and SPARC (for young math geniuses), and also published one (early-version) smartphone app, The Credence Game.
- Launched the redesigned, updated, and reorganized Singularity.org website.
- Achieved most of the goals from our August 2011 strategic plan.
- 11 new research publications.
- Eliezer published the first 12 posts in his sequence Highly Advanced Epistemology 101 for Beginners, the precursor to his forthcoming sequence, Open Problems in Friendly AI.
- SI staff members published many other substantive articles on Less Wrong, including How to Purchase AI Risk Reduction, How to Run a Successful Less Wrong Meetup, a Solomonoff Induction tutorial, The Human's Hidden Utility Function (Maybe), How can I reduce existential risk from AI?, AI Risk and Opportunity: A Strategic Analysis, and Checklist of Rationality Habits.
- Launched our new volunteers platform, SingularityVolunteers.org.
- Hired two new researchers, Kaj Sotala and Alex Altair.
- Published our press kit to make journalists' lives easier.
- And of course much more.
Future Plans You Can Help Support
In the coming months, we plan to do the following:
- As part of Singularity University's acquisition of the Singularity Summit, we will be changing our name and launching a new website.
- Eliezer will publish his sequence Open Problems in Friendly AI.
- We will publish nicely-edited ebooks (Kindle, iBooks, and PDF) for many of our core materials, to make them more accessible: The Sequences, 2006-2009, Facing the Singularity, and The Hanson-Yudkowsky AI Foom Debate.
- We will publish several more research papers, including "Responses to Catastrophic AGI Risk: A Survey" and a short, technical introduction to timeless decision theory.
- We will set up the infrastructure required to host a productive Friendly AI team and try hard to recruit enough top-level math talent to launch it.
(Other projects are still being surveyed for likely cost and strategic impact.)
We appreciate your support for our high-impact work! Donate now, and seize a better-than-usual chance to move our work forward. Credit card transactions are securely processed using either PayPal or Google Checkout. If you have questions about donating, please contact Louie Helm at (510) 717-1477 or louie@intelligence.org.
† $115,000 of total matching funds has been provided by Edwin Evans, Mihaly Barasz, Rob Zahra, Alexei Andreev, Jeff Bone, Michael Blume, Guy Srinivasan, and Kevin Fischer.
I will mostly be traveling (for AGI-12) for the next 25 hours, but I will try to answer questions after that.
1) In the long run, for CFAR to succeed, it has to be supported by a CFAR donor base that doesn't funge against SIAI money. I expect/hope that CFAR will have a substantially larger budget in the long run than SIAI. In the long run, then, marginal x-risk minimizers should be donating to SIAI.
2) But since CFAR is at a very young and very vital stage in its development and has very little funding, it needs money right now. And CFAR really, really needs to succeed for SIAI to be viable in the long term.
So my guess is that a given dollar is probably more valuable at CFAR right this instant, and we hope this changes very soon (due to CFAR having its own support base)...
...but...
...SIAI has previously supported CFAR, is probably going to make a loan to CFAR in the future, and therefore it doesn't matter as much exactly which organization you give to right now, except that if one maxes out its matching funds you probably want to donate to the other until it also maxes out...
...and...
...even the judgment about exactly where a marginal dollar is more valuably spent is, necessarily, extremely uncertain to me. My own judgment favors CFAR at the current margins, but it's a very tough decision. Obviously so: SIAI has already given money to CFAR, and if it had been obvious that this amount should have been shifted in direction A or direction B to minimize x-risk, then we would necessarily have been organizationally irrational, or organizationally selfish, about the exact amount. SIAI has been giving CFAR amounts on the lower side of our error bounds because of the hope (uncertainty) that future-CFAR will prove effective at fundraising. Which rationally implies, and does actually imply, that an added dollar of marginal spending is more valuable at CFAR (in my estimate).
The upshot is that you should donate to whichever organization gets you more excited, like Luke said. SIAI is donating/loaning round-number amounts to CFAR, so where you donate $2K does change marginal spending at both organizations; we're not going to fine-tune the dollar amounts flowing from SIAI to CFAR based on donations of that magnitude. It's a genuine decision on your part, and has a genuine effect. But from my own standpoint, "flip a coin to decide which one" is pretty close to my current stance. For this to be false would imply that SIAI and I had a substantive x-risk-estimate disagreement which resulted in too much or too little funding (from my perspective) flowing to CFAR. Which is not the case, except insofar as we've been giving too little to CFAR in the uncertain hope that it can scale up fundraising faster than SIAI later. Taking this uncertainty into account, the margins balance. Leaving it out, a marginal absolute dollar of spending at CFAR does somewhat more good (in my estimation).
Is this still your view?