(MIRI maintains Less Wrong, with generous help from Trike Apps, and much of the core content is written by salaried MIRI staff members.)
Update 09-15-2013: The fundraising drive has been completed! My thanks to everyone who contributed.
The original post follows below...
Thanks to the generosity of several major donors,† every donation to the Machine Intelligence Research Institute made from now until (the end of) August 15th, 2013 will be matched dollar-for-dollar, up to a total of $200,000!
Now is your chance to double your impact while helping us raise up to $400,000 (with matching) to fund our research program.
This post is also a good place to ask your questions about our activities and plans — just post a comment!
If you have questions about what your dollars will do at MIRI, you can also schedule a quick call with MIRI Deputy Director Louie Helm: louie@intelligence.org (email), 510-717-1477 (phone), louiehelm (Skype).
Early this year we made a transition from movement-building to research, and we've hit the ground running with six major new research papers, six new strategic analyses on our blog, and much more. Give now to support our ongoing work on the future's most important problem.
Accomplishments in 2013 so far
- Released six new research papers: (1) Definability of Truth in Probabilistic Logic, (2) Intelligence Explosion Microeconomics, (3) Tiling Agents for Self-Modifying AI, (4) Robust Cooperation in the Prisoner's Dilemma, (5) A Comparison of Decision Algorithms on Newcomblike Problems, and (6) Responses to Catastrophic AGI Risk: A Survey.
- Held our 2nd and 3rd research workshops.
- Published six new analyses on our blog: The Lean Nonprofit, AGI Impact Experts and Friendly AI Experts, Five Theses..., When Will AI Be Created?, Friendly AI Research as Effective Altruism, and What is Intelligence?
- Published the Facing the Intelligence Explosion ebook.
- Published several other substantial articles: Recommended Courses for MIRI Researchers, Decision Theory FAQ, A brief history of ethically concerned scientists, Bayesian Adjustment Does Not Defeat Existential Risk Charity, and others.
- Published our first three expert interviews, with James Miller, Roman Yampolskiy, and Nick Beckstead.
- Launched our new website at intelligence.org as part of changing our name to MIRI.
- Relocated to new offices two blocks from UC Berkeley, which is ranked 5th in the world in mathematics and 1st in the world in mathematical logic.
- And of course much more.
Future Plans You Can Help Support
- We will host many more research workshops, including one in September and one in December (with John Baez attending), both in Berkeley, and one in Oxford, UK (dates TBD).
- Eliezer will continue to publish about open problems in Friendly AI. (Here are #1 and #2.)
- We will continue to publish strategic analyses and expert interviews, mostly via our blog.
- We will publish nicely edited ebooks (Kindle, iBooks, and PDF) for more of our materials, to make them more accessible: The Sequences, 2006-2009 and The Hanson-Yudkowsky AI Foom Debate.
- We will continue to set up the infrastructure (e.g. new offices, researcher endowments) required to host a productive Friendly AI research team, and (over several years) recruit enough top-level math talent to launch it.
- We hope to hire an experienced development director (job ad not yet posted), so that the contributions of our current supporters can be multiplied even further by a professional fundraiser.
(Other projects are still being surveyed for likely cost and strategic impact.)
We appreciate your support for our high-impact work! Donate now, and seize a better-than-usual chance to move our work forward.
If you have questions about donating, please contact Louie Helm at (510) 717-1477 or louie@intelligence.org.
† $200,000 of total matching funds has been provided by Jaan Tallinn, Loren Merritt, Rick Schwall, and Alexei Andreev.
Just curious, whatever happened to EY's rationality books? You invested months of effort into them. Did you pull a sunk cost reversal? Or is publishing them not on the schedule till next year?
The drafts came out unexciting according to reader reports. I suspect that magical writing energy ['magic' = not understood] was diverted from the rationality book into the first 63 chapters of HPMOR, which I was doing in my 'off time' while writing the book, and which does have Yudkowskian magic according to readers. HPMOR and CFAR between them used up a lot of the marginal utility I thought we would get from the book, which diminishes the marginal utility of completing it.