The full sentence reads: "MIRI exists to ensure that the creation of smarter-than-human intelligence has a positive impact." (emphasis added) Clearly, if smarter-than-human intelligence ends up having a positive impact independently of (or in spite of) MIRI's efforts, that would count as a success only in a Pickwickian sort of sense. To succeed in the sense obviously intended by the authors of the mission statement, MIRI would have to be at least partially causally implicated in the process leading to the creation of FAI.
So the question remains: on what grounds do you believe that, if smarter-than-human intelligence ends up having a positive impact, this will necessarily be at least partly due to MIRI's efforts? I find that view implausible, and instead agree with Carl Shulman that "the impact of MIRI in particular has to be far smaller subset of the expected impact of the cause as a whole," for the reasons he mentions.
I subscribe to the view that AGI is bad by default, and don't see anyone else working on the friendliness problem.
In the past, people like Eliezer Yudkowsky (see 1, 2, 3, 4, and 5) have argued that MIRI has a medium probability of success. What is this probability estimate based on and how is success defined?
I've read the standard MIRI literature (like "Evidence and Import" and "Five Theses"), but I may have missed something.
-
(Meta: I don't think this deserves a discussion thread, but I posted this on the open thread and no-one responded, and I think it's important enough to merit a response.)