Why CFAR?
Summary: We outline the case for CFAR, including:
- CFAR is in the middle of our annual matching fundraiser right now. If you've been thinking of donating to CFAR, now is probably the best time to decide that you'll have for at least half a year. Donations up to $150,000 will be matched until January 31st; and Matt Wage, who is matching the last $50,000 of donations, has vowed not to donate unless matched.[1]
- Our workshops are cash-flow positive, and subsidize our basic operations (you are not subsidizing workshop attendees). But we can't yet run workshops often enough to fully cover our core operations. We also need to do more formal experiments, and we want to create free and low-cost curricula with far broader reach than the current workshops. Donations are needed to keep the lights on at CFAR, fund free programs like the Summer Program on Applied Rationality and Cognition, and let us do new and interesting things in 2014 (see below, at length).[2]
MIRI's Winter 2013 Matching Challenge
Update: The fundraiser has been completed! Details here. The original post follows...
(Cross-posted from MIRI's blog. MIRI maintains Less Wrong, with generous help from Trike Apps, and much of the core content is written by salaried MIRI staff members.)
Thanks to Peter Thiel, every donation made to MIRI between now and January 15th, 2014 will be matched dollar-for-dollar!
Also, gifts from "new large donors" will be matched 3x! That is, if you've given less than $5k to SIAI/MIRI in total, and you now give or pledge $5k or more, Thiel will donate $3 for every dollar you give or pledge.
We don't know whether we'll be able to offer the 3:1 matching ever again, so if you're capable of giving $5k or more, we encourage you to take advantage of the opportunity while you can. Remember that:
- If you prefer to give monthly, no problem! If you pledge 6 months of monthly donations, your full 6-month pledge will be the donation amount to be matched. So if you give monthly, you can get 3:1 matching for only $834/mo (or $417/mo if you get matching from your employer).
- We accept Bitcoin (BTC) and Ripple (XRP), both of which have recently jumped in value. If the market value of your Bitcoin or Ripple is $5k or more on the day you make the donation, this will count for matching.
- If your employer matches your donations at 1:1 (check here), then you can take advantage of Thiel's 3:1 matching by giving as little as $2,500 (because it's $5k after corporate matching).
Please email malo@intelligence.org if you intend to leverage corporate matching or would like to pledge 6 months of monthly donations, so that we can properly account for your contributions toward the fundraiser.
Thiel's total match is capped at $250,000. The total amount raised will depend on how many people take advantage of 3:1 matching. We don't anticipate being able to hit the $250k cap without substantial use of 3:1 matching — so if you haven't given $5k thus far, please consider giving/pledging $5k or more during this drive. (If you'd like to know the total amount of your past donations to MIRI, just ask malo@intelligence.org.)
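To make the matching arithmetic above concrete, here is a minimal sketch in Python. The function `total_raised` is purely illustrative (not anything MIRI provides), and it assumes, based on the $2,500 example above, that Thiel's match applies to your gift after any employer matching:

```python
def total_raised(donation, new_large_donor=False, employer_match=False):
    """Rough estimate of total funds raised by one gift under this drive.

    donation:        the amount you give (or pledge over 6 months)
    new_large_donor: you've given < $5k to SIAI/MIRI before, and this gift
                     reaches $5k or more after any employer match
    employer_match:  your employer matches donations 1:1
    """
    gift = donation * 2 if employer_match else donation   # employer doubles your gift
    ratio = 3 if (new_large_donor and gift >= 5000) else 1  # Thiel's match ratio
    return gift + ratio * gift

# $2,500 with 1:1 employer matching reaches the $5k threshold, so it is
# matched 3:1: $2,500 + $2,500 (employer) + $15,000 (Thiel) = $20,000.
print(total_raised(2500, new_large_donor=True, employer_match=True))  # 20000

# A 6-month pledge of $834/mo (about $5k total) without employer matching:
print(total_raised(834 * 6, new_large_donor=True))  # 20016, i.e. ~$20k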

Now is your chance to double or quadruple your impact in funding our research program.

Accomplishments Since Our July 2013 Fundraiser Launched:
- Held three research workshops, including our first European workshop.
- Gave talks at MIT and Harvard (Eliezer Yudkowsky and Paul Christiano).
- Yudkowsky is blogging more Open Problems in Friendly AI... on Facebook! (They're also being written up in a more conventional format.)
- New papers: (1) Algorithmic Progress in Six Domains; (2) Embryo Selection for Cognitive Enhancement; (3) Racing to the Precipice; (4) Predicting AGI: What can we say when we know so little?
- New ebook: The Hanson-Yudkowsky AI-Foom Debate.
- New analyses: (1) From Philosophy to Math to Engineering; (2) How well will policy-makers handle AGI? (3) How effectively can we plan for future decades? (4) Transparency in Safety-Critical Systems; (5) Mathematical Proofs Improve But Don’t Guarantee Security, Safety, and Friendliness; (6) What is AGI? (7) AI Risk and the Security Mindset; (8) Richard Posner on AI Dangers; (9) Russell and Norvig on Friendly AI.
- New expert interviews: Greg Morrisett (Harvard), Robin Hanson (GMU), Paul Rosenbloom (USC), Stephen Hsu (MSU), Markus Schmidt (Biofaction), Laurent Orseau (AgroParisTech), Holden Karnofsky (GiveWell), Bas Steunebrink (IDSIA), Hadi Esmaeilzadeh (GIT), Nick Beckstead (Oxford), Benja Fallenstein (Bristol), Roman Yampolskiy (U Louisville), Ben Goertzel (Novamente), and James Miller (Smith College).
- With Leverage Research, we held a San Francisco book launch party for James Barratt's Our Final Invention, which discusses MIRI's work at length. (If you live in the Bay Area and would like to be notified of local events, please tell malo@intelligence.org!)
How Will Marginal Funds Be Used?
- Hiring Friendly AI researchers, identified through our workshops, as they become available for full-time work at MIRI.
- Running more workshops (next one begins Dec. 14th), to make concrete Friendly AI research progress, to introduce new researchers to open problems in Friendly AI, and to identify candidates for MIRI to hire.
- Describing more open problems in Friendly AI. Our current strategy is for Yudkowsky to explain them as quickly as possible via Facebook discussion, followed by more structured explanations written by others in collaboration with Yudkowsky.
- Improving humanity's strategic understanding of what to do about superintelligence. In the coming months this will include (1) additional interviews and analyses on our blog, (2) a reader's guide for Nick Bostrom's forthcoming Superintelligence book, and (3) an introductory ebook currently titled Smarter Than Us.
Other projects are still being surveyed for likely cost and impact.
We appreciate your support for our work! Donate now, and seize a better-than-usual chance to move our work forward. If you have questions about donating, please contact Louie Helm at (510) 717-1477 or louie@intelligence.org.
MIRI's 2013 Summer Matching Challenge
(MIRI maintains Less Wrong, with generous help from Trike Apps, and much of the core content is written by salaried MIRI staff members.)
Update 09-15-2013: The fundraising drive has been completed! My thanks to everyone who contributed.
The original post follows below...
Thanks to the generosity of several major donors,† every donation to the Machine Intelligence Research Institute made from now until (the end of) August 15th, 2013 will be matched dollar-for-dollar, up to a total of $200,000!
Now is your chance to double your impact while helping us raise up to $400,000 (with matching) to fund our research program.
This post is also a good place to ask your questions about our activities and plans — just post a comment!
If you have questions about what your dollars will do at MIRI, you can also schedule a quick call with MIRI Deputy Director Louie Helm: louie@intelligence.org (email), 510-717-1477 (phone), louiehelm (Skype).
Early this year we made a transition from movement-building to research, and we've hit the ground running with six major new research papers, six new strategic analyses on our blog, and much more. Give now to support our ongoing work on the future's most important problem.
Accomplishments in 2013 so far
- Released six new research papers: (1) Definability of Truth in Probabilistic Logic, (2) Intelligence Explosion Microeconomics, (3) Tiling Agents for Self-Modifying AI, (4) Robust Cooperation in the Prisoner's Dilemma, (5) A Comparison of Decision Algorithms on Newcomblike Problems, and (6) Responses to Catastrophic AGI Risk: A Survey.
- Held our 2nd and 3rd research workshops.
- Published six new analyses to our blog: The Lean Nonprofit, AGI Impact Experts and Friendly AI Experts, Five Theses..., When Will AI Be Created?, Friendly AI Research as Effective Altruism, and What is Intelligence?
- Published the Facing the Intelligence Explosion ebook.
- Published several other substantial articles: Recommended Courses for MIRI Researchers, Decision Theory FAQ, A brief history of ethically concerned scientists, Bayesian Adjustment Does Not Defeat Existential Risk Charity, and others.
- Published our first three expert interviews, with James Miller, Roman Yampolskiy, and Nick Beckstead.
- Launched our new website at intelligence.org as part of changing our name to MIRI.
- Relocated to new offices... 2 blocks from UC Berkeley, which is ranked 5th in the world in mathematics, and 1st in the world in mathematical logic.
- And of course much more.
Future Plans You Can Help Support
- We will host many more research workshops, including one in September in Berkeley, one in December in Berkeley (with John Baez attending), and one in Oxford, UK (dates TBD).
- Eliezer will continue to publish about open problems in Friendly AI. (Here are #1 and #2.)
- We will continue to publish strategic analyses and expert interviews, mostly via our blog.
- We will publish nicely edited ebooks (Kindle, iBooks, and PDF) for more of our materials, to make them more accessible: The Sequences, 2006-2009 and The Hanson-Yudkowsky AI-Foom Debate.
- We will continue to set up the infrastructure (e.g. new offices, researcher endowments) required to host a productive Friendly AI research team, and (over several years) recruit enough top-level math talent to launch it.
- We hope to hire an experienced development director (job ad not yet posted), so that the contributions of our current supporters can be multiplied even further by a professional fundraiser.
(Other projects are still being surveyed for likely cost and strategic impact.)
We appreciate your support for our high-impact work! Donate now, and seize a better-than-usual chance to move our work forward.
If you have questions about donating, please contact Louie Helm at (510) 717-1477 or louie@intelligence.org.
† $200,000 of total matching funds has been provided by Jaan Tallinn, Loren Merritt, Rick Schwall, and Alexei Andreev.