
MIRI's 2016 Fundraiser

Post author: So8res 25 September 2016 04:55PM

Update December 22: Our donors came together during the fundraiser to get us most of the way to our $750,000 goal. In all, 251 donors contributed $589,248, making this our second-biggest fundraiser to date. Although we fell short of our target by $160,000, we have since made up this shortfall thanks to November/December donors. I’m extremely grateful for this support, and will plan accordingly for more staff growth over the coming year.

As described in our post-fundraiser update, we are still fairly funding-constrained. Donations at this time will have an especially large effect on our 2017–2018 hiring plans and strategy, as we try to assess our future prospects. For some external endorsements of MIRI as a good place to give this winter, see recent evaluations by Daniel Dewey, Nick Beckstead, Owen Cotton-Barratt, and Ben Hoskin.



Our 2016 fundraiser is underway! Unlike in past years, we'll only be running one fundraiser in 2016, from Sep. 16 to Oct. 31. Our progress so far (updated live):  


Donate Now

Employer matching and pledges to give later this year also count towards the total. Click here to learn more.


MIRI is a nonprofit research group based in Berkeley, California. We do foundational research in mathematics and computer science that’s aimed at ensuring that smarter-than-human AI systems have a positive impact on the world. 2016 has been a big year for MIRI, and for the wider field of AI alignment research. Our 2016 strategic update in early August reviewed a number of recent developments:

We also published new results in decision theory and logical uncertainty, including “Parametric bounded Löb’s theorem and robust cooperation of bounded agents” and “A formal solution to the grain of truth problem.” For a survey of our research progress and other updates from last year, see our 2015 review. In the last three weeks, there have been three more major developments:

  • We released a new paper, “Logical induction,” describing a method for learning to assign reasonable probabilities to mathematical conjectures and computational facts in a way that outpaces deduction.
  • The Open Philanthropy Project awarded MIRI a one-year $500,000 grant to scale up our research program, with a strong chance of renewal next year.
  • The Open Philanthropy Project is supporting the launch of the new UC Berkeley Center for Human-Compatible AI, headed by Stuart Russell.

Things have been moving fast over the last nine months. If we can replicate last year’s fundraising successes, we’ll be in an excellent position to move forward on our plans to grow our team and scale our research activities.

The strategic landscape

Humans are far better than other species at altering our environment to suit our preferences. This is primarily due not to our strength or speed, but to our intelligence, broadly construed -- our ability to reason, plan, accumulate scientific knowledge, and invent new technologies. AI is a technology that appears likely to have a uniquely large impact on the world because it has the potential to automate these abilities, and to eventually decisively surpass humans on the relevant cognitive metrics.

Separate from the task of building intelligent computer systems is the task of ensuring that these systems are aligned with our values. Aligning an AI system requires surmounting a number of serious technical challenges, most of which have received relatively little scholarly attention to date. MIRI's role as a nonprofit in this space, from our perspective, is to help solve parts of the problem that are a poor fit for mainstream industry and academic groups.

Our long-term plans are contingent on future developments in the field of AI. Because these developments are highly uncertain, we currently focus mostly on work that we expect to be useful in a wide variety of possible scenarios. The more optimistic scenarios we consider often look something like this:

  • In the short term, a research community coalesces, develops a good in-principle understanding of what the relevant problems are, and produces formal tools for tackling these problems. AI researchers move toward a minimal consensus about best practices, normalizing discussions of AI’s long-term social impact, a risk-conscious security mindset, and work on error tolerance and value specification.
  • In the medium term, researchers build on these foundations and develop a more mature understanding. As we move toward a clearer sense of what smarter-than-human AI systems are likely to look like — something closer to a credible roadmap — we imagine the research community moving toward increased coordination and cooperation in order to discourage race dynamics.
  • In the long term, we would like to see AI-empowered projects (as described by Dewey [2015]) used to avert major AI mishaps. For this purpose, we’d want to solve a weak version of the alignment problem for limited AI systems — systems just capable enough to serve as useful levers for preventing AI accidents and misuse.
  • In the very long term, we can hope to solve the “full” alignment problem for highly capable, highly autonomous AI systems. Ideally, we want to reach a position where we can afford to wait until we reach scientific and institutional maturity -- take our time to dot every i and cross every t before we risk "locking in" design choices.

The above is a vague sketch, and we prioritize research we think would be useful in less optimistic scenarios as well. Additionally, “short term” and “long term” here are relative, and different timeline forecasts can have very different policy implications. Still, the sketch may help clarify the directions we’d like to see the research community move in. For more on our research focus and methodology, see our research page and MIRI’s Approach.  

Our organizational plans

We currently employ seven technical research staff (six research fellows and one assistant research fellow), plus two researchers signed on to join in the coming months and an additional six research associates and research interns.1 Our budget this year is about $1.75M, up from $1.65M in 2015 and $950k in 2014.2 Our eventual goal (subject to revision) is to grow until we have between 13 and 17 technical research staff, at which point our budget would likely be in the $3–4M range. If we reach that point successfully while maintaining a two-year runway, we’re likely to shift out of growth mode.

Our budget estimate for 2017 is roughly $2–2.2M, which means that we’re entering this fundraiser with about 14 months’ runway. We’re uncertain about how many donations we'll receive between November and next September,3 but projecting from current trends, we expect about 4/5ths of our total donations to come from the fundraiser and 1/5th to come in off-fundraiser.4 Based on this, we have the following fundraiser goals:
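The runway figures above are simple division: reserves divided by average monthly spending. As an illustrative sketch only -- the reserve figure below is a hypothetical number back-calculated from the stated ~14 months of runway and the midpoint of the $2–2.2M budget estimate, not an actual figure from MIRI's books:

```python
def runway_months(reserves: float, annual_budget: float) -> float:
    """Months of runway: cash reserves divided by average monthly spending."""
    return reserves / (annual_budget / 12)

# Hypothetical inputs: ~$2.1M annual budget (midpoint of the $2-2.2M
# estimate) and ~$2.45M in reserves reproduce roughly 14 months of runway.
print(round(runway_months(2_450_000, 2_100_000)))
```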

Basic target - $750,000. We feel good about our ability to execute our growth plans at this funding level. We’ll be able to move forward comfortably, albeit with somewhat more caution than at the higher targets.

Growth target - $1,000,000. This would amount to about half a year’s runway. At this level, we can afford to make more uncertain but high-expected-value bets in our growth plans. There’s a risk that we’ll dip below a year’s runway in 2017 if we make more hires than expected, but the growing support of our donor base would make us feel comfortable about taking such risks.

Stretch target - $1,250,000. At this level, even if we exceed my growth expectations, we’d be able to grow without real risk of dipping below a year’s runway. Past $1.25M we would not expect additional donations to affect our 2017 plans much, assuming moderate off-fundraiser support.5

If we hit our growth and stretch targets, we’ll be able to execute several additional programs we’re considering with more confidence. These include contracting a larger pool of researchers to do early work with us on logical induction and on our machine learning agenda, and generally spending more time on academic outreach, field-growing, and training or trialing potential collaborators and hires. As always, you're invited to get in touch if you have questions about our upcoming plans and recent activities. I’m very much looking forward to seeing what new milestones the growing alignment research community will hit in the coming year, and I’m very grateful for the thoughtful engagement and support that’s helped us get to this point.  

Donate Now


Pledge to Give


1 This excludes Katja Grace, who heads the AI Impacts project using a separate pool of funds earmarked for strategy/forecasting research. It also excludes me: I contribute to our technical research, but my primary role is administrative. (back)

2 We expect to be slightly under the $1.825M budget we previously projected for 2016, due to taking on fewer new researchers than expected this year. (back)

3 We're imagining continuing to run one fundraiser per year in future years, possibly in September. (back)

4 Separately, the Open Philanthropy Project is likely to renew our $500,000 grant next year, and we expect to receive the final ($80,000) installment from the Future of Life Institute's three-year grants. For comparison, our revenue was about $1.6 million in 2015: $167k in grants, $960k in fundraiser contributions, and $467k in off-fundraiser (non-grant) contributions. Our situation in 2015 was somewhat different, however: we ran two 2015 fundraisers, whereas we’re skipping our winter fundraiser this year and advising December donors to pledge early or give off-fundraiser. (back)

5 At significantly higher funding levels, we’d consider running other useful programs, such as a prize fund. Shoot me an e-mail if you’d like to talk about the details. (back)

Comments (13)

Comment author: Furcas 24 September 2016 03:39:19PM 18 points

Donated $500!

Comment author: So8res 26 September 2016 06:39:53PM 3 points


Comment author: blogospheroid 02 November 2016 09:29:27AM 4 points

Ouch! I donated $135 (and asked my employer to match as well) on Nov 2, India time. I had been on a brief vacation and just returned. Now I re-read and found it is too late for the fundraiser. Anyway, please take this as positive reinforcement for what it is worth. You're doing a good job. Take the money as part of fundraiser or off-fundraiser donations, whichever is appropriate.

Comment author: So8res 03 November 2016 10:24:06PM 0 points

Thanks :-)

Comment author: Gyrodiot 31 October 2016 09:52:37PM 2 points

Donated $20. Just in time!

Comment author: So8res 01 November 2016 11:30:26PM 0 points


Comment author: Gunnar_Zarncke 14 October 2016 10:56:15PM 2 points

I'm not sure this has the best visibility here in Main. I just noted it right now because I haven't looked in Main for ages. And it wasn't featured in discussions, or was it?

Comment author: RobbBB 24 October 2016 02:36:31AM 2 points

There's a discussion post that mentions the fundraiser here, along with other news: http://lesswrong.com/r/discussion/lw/o0d/miri_ama_plus_updates/

Comment author: Plasmon 04 October 2016 04:21:31PM 1 point

Clicking the "Donate now" button under "PayPal or Credit Card" does not seem to do anything other than refresh the page.

(browser: Firefox 48.0, OS: Ubuntu)

Comment author: So8res 04 October 2016 08:41:49PM 2 points

Huh, thanks for the heads up. If you use an ad-blocker, try pausing that and refreshing. Meanwhile, I'll have someone look into it.

Comment author: Plasmon 05 October 2016 05:17:56AM 1 point

Ah yes, pausing Ghostery seems to fix it.

Comment author: PhilGoetz 19 October 2016 02:51:06AM 0 points

Please use a page break when you post an article, so we can easily scroll past it and see the previous articles.

Comment author: So8res 19 October 2016 11:21:01PM 1 point

Fixed, thanks.