In the past, people like Eliezer Yudkowsky (see 1, 2, 3, 4, and 5) have argued that MIRI has a medium probability of success. What is this probability estimate based on and how is success defined?
I've read standard MIRI literature (like "Evidence and Import" and "Five Theses"), but I may have missed something.
-
(Meta: I don't think this deserves a discussion thread, but I posted this on the open thread and no one responded, and I think it's important enough to merit a response.)
I'm eager to see Eliezer's planned reply to your "ETA2", but in the meantime, here are a few of my own thoughts on this...
My guess is that movement-building and learning are still the best things to do right now for AI risk reduction. CEA, CFAR, and GiveWell are doing good movement-building, though the GiveWell crowd tends to be less interested in x-risk mitigation. GiveWell is doing a large share of the EA-learning, and might eventually (via GiveWell Labs) do some of the x-risk learning (right now GiveWell has a lot of catching up to do on x-risk).
The largest share of the "explicit" x-risk learning is happening at or near FHI & MIRI, including e.g. Christiano. Lots of "implicit" x-risk learning is happening at e.g. NASA where it's not clear that EA-sourced funding can have much marginal effect relative to the effect it could have on tiny organizations like MIRI and FHI.
My impression, which could be wrong, is that GiveWell's ability to hire more researchers is not funding-limited but rather limited by management's preference to offer below-market salaries in order to ensure cause loyalty. (I would prefer GiveWell raise salaries and grow its research staff faster.) AMF could be fully funded relatively easily by Good Ventures or the Gates Foundation, but maybe they're holding back because this would be discouraging to the EA movement: small-scale donors requiring the high-evidence threshold met by GiveWell's top charities would say "Well, I guess there's nothing for little ol' me to do here." (There are other reasons they may be holding back, too.)
I think accelerating learning is more important right now than a donor-advised fund (DAF). Getting high-quality evidence about which x-risk mitigation efforts are worthwhile requires lots of work, but one thing we've learned in the past decade is that causes with high-quality evidence for their effectiveness tend to get funded, and this trend is probably strengthening. The sooner we do enough learning to have high-quality evidence for the goodness of particular x-risk mitigation efforts, the sooner large funders will fund those efforts. Or, as Christiano writes:
And:
However, Paul thinks there are serious room-for-more-funding (RFMF) problems here:
In contrast, I think there is plenty of room for more funding here, even without resorting to "paying market wages for non-EAs to do EA strategy research":
MIRI could run more workshops and hire some able and willing FAI researchers, which I think would be quite valuable for x-risk mitigation strategy learning, quite apart from the object-level FAI progress it might produce. But even excluding this...
With more cash, FHI and CSER could host strategy-relevant conferences and workshops, and get people like Stuart Russell and Richard Posner to participate.
There are plenty of EAs capable of doing the labor-intensive data-gathering work needed for much of the strategy work, e.g. collecting data on how fast different parts of AI are progressing, how much money has gone into AI R&D each decade since the 1960s, how ripple effects have worked historically, more IEM-relevant data like Katja's tech report, etc. I just don't have the money to pay them to do it.
FHI has lots more researcher-hours it could purchase if it had more cash.
Finally, a clarification: If I think movement-building and learning are most important right now, why is MIRI focused on math research this year? My views on this have shifted even since our 2013 strategy post, and I should note that Eliezer's reasons for focusing on math research are probably somewhat different from mine.
In my estimation, MIRI's focus on math research offers the following benefits to movement-building and learning:
Math research gains better traction with the world's top cognitive talent than strategic research does. And once top talent is engaged by the math research, some of these top thinkers turn their attention to the strategic issues, too. (Historically true, not just speculation.)
Without an object-level research program on the most important problem (beneficent superintelligence), many of the best people just "bounce off" because there's nothing for them to engage with directly. (Historically true, not just speculation.)
And of course, FAI research tells us some things about how hard FAI research is, which lines of inquiry are tractable now, etc.
Your reasons for focusing on math research at MIRI seem sound, but I take it you've noticed the warning sign of finding that what you'd already decided to do turns out to be a good idea for different reasons than you originally thought?