In the past, people like Eliezer Yudkowsky (see 1, 2, 3, 4, and 5) have argued that MIRI has a medium probability of success. What is this probability estimate based on and how is success defined?
I've read standard MIRI literature (like "Evidence and Import" and "Five Theses"), but I may have missed something.
-
(Meta: I don't think this deserves a discussion thread, but I posted this on the open thread and no one responded, and I think it's important enough to merit a response.)
Most fundamentally, it's based on taking at face value a world in which nobody else appears to be doing similar work, or to care enough to do so. In the world taken at face value, MIRI is the only organization running MIRI's workshops, trying to figure out problems like tiling self-modifying agents, and getting work started early on what is probably a highly serial, time-sensitive task.
Success is defined most obviously as actually constructing an FAI, and it would be very dangerous to have any organizational model in which we were not trying to do this. (Someone who conceives of themselves as an ethicist whose duty it is to lecture others, and who does not intend to solve the problem themselves, is exceedingly unlikely to confront the hardest problems.) But of course, if our work were picked up elsewhere and reused after MIRI itself died as an organization for whatever reason, or if in any general sense the true history as written in the further future says that MIRI mattered, I should not count my life wasted, nor feel that we had let down MIRI's donors.
This is astonishingly good evidence that MIRI's efforts will not be wasted via redundancy: a de facto "failure" that happens only because someone else independently succeeds first.
But it's actually (very weak) evidence against the proposition that MIRI's efforts will not be wasted via overestimation of the problem: if the problem really were as serious as claimed, you would expect at least a few others to be working on it, so their absence shifts a little weight toward overestimation. And it isn't evidence either way concerning the proposition that you haven't overestimated the problem but that nobody will succeed at solving it.
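To make the evidential structure explicit, here is a rough Bayesian sketch; the symbols and the qualitative likelihood comparison below are illustrative assumptions of mine, not anything quantified in the original exchange. Let $E$ be the observation "nobody else appears to be doing this work," $H_{\mathrm{over}}$ the hypothesis that the problem is overestimated, and $H_{\mathrm{real}}$ the hypothesis that it is as serious as MIRI believes. Bayes' theorem gives the posterior odds as

$$
\frac{P(H_{\mathrm{over}} \mid E)}{P(H_{\mathrm{real}} \mid E)}
= \frac{P(E \mid H_{\mathrm{over}})}{P(E \mid H_{\mathrm{real}})}
\cdot \frac{P(H_{\mathrm{over}})}{P(H_{\mathrm{real}})}.
$$

The "very weak evidence" claim then amounts to saying that $P(E \mid H_{\mathrm{over}})$ is only slightly larger than $P(E \mid H_{\mathrm{real}})$ (genuinely serious problems are also often neglected), so the likelihood ratio sits barely above 1 and the update toward overestimation is small; and since $E$ carries no information about whether a real problem would ultimately be solved, it leaves the "real but never solved" possibility exactly where it was.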