The Open Philanthropy Project recently bought a seat on the board of the billion-dollar nonprofit AI research organization OpenAI for $30 million. Some people have said that this was surprisingly cheap, because the price in dollars was such a low share of OpenAI's eventual endowment: 3%.

To the contrary, this seat on OpenAI's board is very expensive, not because the nominal price is high, but precisely because it is so low.

If OpenAI hasn’t extracted a meaningful-to-it amount of money, then it follows that it is getting something other than money out of the deal. The obvious thing it is getting is buy-in for OpenAI as an AI safety and capacity venture. In exchange for a board seat, the Open Philanthropy Project is aligning itself socially with OpenAI, by taking the position of a material supporter of the project. The important thing is mutual validation, and a nominal donation just large enough to neg the other AI safety organizations supported by the Open Philanthropy Project is simply a customary part of the ritual.

By my count, the grant is larger than all the Open Philanthropy Project's other AI safety grants combined.

(Cross-posted at my personal blog.)

16 comments

because the price in dollars was such a low share of OpenAI's eventual endowment: 3%.

Jam tomorrow.

For others who didn't get the reference: https://en.wikipedia.org/wiki/Jam_tomorrow

a nominal donation just large enough to neg the other AI safety organizations supported by the Open Philanthropy Project is simply a customary part of the ritual.

Why would negging them be useful?

Otherwise OpenAI's status would be reduced toward their level: accepting a similarly-sized grant from OpenPhil would make OpenAI look like just another supplicant.

Nice to see that jockeying for status is still the most important thing evah :-/

But is this really even a "neg" to begin with? My understanding is that MIRI's approach to AI safety is substantially different: it primarily performs pure mathematical research, as opposed to being built around software development and actual AI implementation. This would mean that its overhead costs are substantially lower than OpenAI's. Additionally, OpenAI might have a shot at attracting more of the big-shot AI researchers whose market values are extremely high at the moment, and to do this it would need a great deal of money to offer the appropriate financial incentives. For MIRI, by contrast, recruiting mathematicians depends more on whether it can find or convince someone that working on its problems is both important and interesting, and my guess is that this is a lot cheaper, since mathematicians are in general paid quite a bit less than ML researchers. So a $30 million grant might be able to accomplish a lot more at OpenAI than at MIRI, at least in the short term.

The grant writeup says that the main benefit of the grant is to buy influence, not to scale up OpenAI. I'm ready to believe OpenAI thinks it can do more with more money. I'm sure MIRI thinks it has uses for more money too (at least freeing up staff time from fundraising). If money's not especially scarce, and AI risk is so important, why not just give MIRI as much as it thinks it can use?

Hmm. I'm reading OPP's grant writeup for MIRI from August 2016, and I think in that context I can see why it seems a little odd. For one thing, they say:

this research agenda has little potential to decrease potential risks from advanced AI in comparison with other research directions that we would consider supporting.

This in particular strikes me as strange, because 1) if MIRI's approach can be summarized as "finding method(s) to ensure guaranteed safe AI and proving them rigorously", then technically speaking that approach should have nearly unlimited "potential", although I suppose it could be argued that progress would be made slowly compared to the speed at which practical AI improves; and 2) "other research directions" is quite vague. Can they point to where these other directions are outlined, summarize what has been accomplished in them, and explain why they feel those directions have more potential?

My feeling is that, given that all current approaches to AI safety are fairly speculative and that there is no general consensus on how the problem should specifically be approached, in order to conclude that MIRI's overall approach lacks potential, the technical advisors at OPP must have a very specific approach to AI safety they are pushing very hard to get support for, but are unwilling or unable to articulate why they prefer theirs so strongly. I also speculate that they might have unstated reasons for being skeptical of MIRI's approach.

All of what I've said above is highly speculative and is based on my current, fairly uninformed outsider view.

then the technical advisors at OPP must have a very specific approach to AI safety they are pushing very hard to get support for, but are unwilling or unable to articulate why they prefer theirs so strongly.

I don't think there is consensus among technical advisors on what directions are most promising. Also, Paul has written substantially about his preferred approach (see here for instance), and I've started to do the same, although so far I've been mostly talking about obstacles rather than positive approaches. But you can see some of my writing here and here. Also my thoughts in slide form here, although those slides are aimed at ML experts.

I haven't seen that your approach or Paul's necessarily conflicts with MIRI's. There may be some difference of opinion on which is more likely to be feasible, but seeing as Paul works closely with MIRI researchers and they seem to have a favorable opinion of him, I would be surprised if OpenPhil's technical advisors were really that pessimistic about MIRI's prospects. If they aren't that pessimistic, then it would imply that Holden is acting somewhat against the advice of his advisors, or that he has strong priors against MIRI that were not overcome by the information he was receiving from them.

I also speculate that they might have unstated reasons for being skeptical of MIRI's approach.

Holden spent a lot of effort stating reasons in http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/

"We think MIRI is literally useless" is a decent reason not to fund MIRI at all, and is broadly consistent with Holden's early thoughts on the matter. But it's a weird reason to give MIRI $500K but OpenAI $30M. It's possible that no one has the capacity to do direct work on the long-run AI alignment problem right now. In that case, backwards-chaining to how to build the capacity seems really important.

While I disagree with Holden that MIRI is near-useless, I think his stated reasons for giving MIRI $500k are pretty good ones, and I would do the same myself if I had that money and thought MIRI was near-useless.

(Namely, that MIRI has so far had a lot of positive impact in terms of general community building, regardless of the quality of its research, and that this should be rewarded so that other organizations are incentivized to do things like that.)

True, but 2012 is long enough ago that many of the concerns he had then may no longer apply. In addition, based on my understanding of MIRI's current approach and its arguments for that approach, I feel that many of his concerns either represent fundamental misunderstandings or are based on viewpoints within MIRI that have significantly changed since then. For example, I have a hard time wrapping my head around this objection:

Objection 1: it seems to me that any AGI that was set to maximize a "Friendly" utility function would be extraordinarily dangerous.

This seems to be precisely the same concern expressed by MIRI, and one of the fundamental arguments that their Agent Foundations approach is based on, in particular what they call the Value Specification problem. And I believe Yudkowsky has used this as a primary argument for AI safety in general for quite a while, very likely since before 2012.

There is also the "tool/agent" distinction cited as Objection 2, which I think is well addressed in MIRI's publications as well as in Bostrom's Superintelligence, where it is made pretty clear that the distinction is not that clear-cut (and gets even blurrier the more intelligent the "tool AI" becomes).

Given that MIRI has had quite some time to refine its views and arguments, and has gone through a restructuring and hired quite a few new researchers since that time, what is the likelihood that Holden still holds the same objections he stated in the 2012 review?

I feel a bit cynical about this OpenAI thing.