In the past, people like Eliezer Yudkowsky (see 1, 2, 3, 4, and 5) have argued that MIRI has a medium probability of success. What is this probability estimate based on and how is success defined?
I've read standard MIRI literature (like "Evidence and Import" and "Five Theses"), but I may have missed something.
-
(Meta: I don't think this deserves a discussion thread, but I posted this on the open thread and no-one responded, and I think it's important enough to merit a response.)
Estimates differ among people within MIRI. Eliezer has not published a detailed explanation of his estimate, although he has published many of the arguments behind it.
For myself, I think the cause of AI risk reduction, in total and over time, has a worthwhile small-to-medium probability of making an astronomical difference to our civilization's future (and a high probability that the future will be very powerfully shaped by artificial intelligence in a way that can be affected by initial conditions). But the expected impact of MIRI in particular must be a far smaller subset of the expected impact of the cause as a whole, in light of: its limited scale and capabilities relative to the relevant universes (total AI research, governments, etc.); the probability that AI is not close enough for MIRI to be very relevant; the probability that MIRI's approach turns out to be irrelevant; uncertainty over the sign of its effects, due to contributions to AI progress; future AI risk efforts and replaceability; and various other drag factors.
ETA: To be clear, I think that MIRI's existence, relative to the counterfactual in which it never existed, has been a good thing and has reduced x-risk in my opinion, despite not averting a "medium probability" (e.g., 10%) of x-risk.
ETA2: Probabilities matter because there are alternative uses of donations and human capital.
I have just spent a month in England interacting extensively with the EA movement here. Donors concerned with impact on the long-run future are considering donations to all of the following (all of these are from talks with actual people making concrete short-term choices; in addition to donations, people are also considering career choices post-university):
This Paul Christiano post discusses the virtues of the donor-advised fund/"Fund for the Future" approach. Giving What We Can has already set up a charitable trust to act as a donor-advised fund in the UK, with one coming soon in the US, and Fidelity already offers a standardized donor-advised fund in America (DAFs allow one to claim the tax benefits of a donation immediately and then let the donation compound). There was much discussion this month about the details of setting up a DAF dedicated to far-future causes; the main logistical difficulties are establishing the decision criteria, credibility, and maximum protection from taxation and disruption.
Eliezer wrote this in 1999: