In the past, people like Eliezer Yudkowsky (see 1, 2, 3, 4, and 5) have argued that MIRI has a medium probability of success. What is this probability estimate based on, and how is success defined?
I've read standard MIRI literature (like "Evidence and Import" and "Five Theses"), but I may have missed something.
-
(Meta: I don't think this deserves a discussion thread, but I posted this on the open thread and no one responded, and I think it's important enough to merit a response.)
Ok, but that doesn't raise the probability to 'medium' from the very low initial probability that MIRI (or another organization benefiting from MIRI's work) solves the extremely hard problem of Friendly AI before anyone else screws it up.
I've read all your posts in the threads linked by the OP, and if multiplying the high beneficial impact of Friendly AI by the low probability of success isn't allowed, I honestly don't see why I should donate to MIRI.
If this were a regular math problem that wasn't world-shakingly important, why wouldn't you expect funding workshops and then researchers to produce progress on it?
Assigning a very low probability to progress rests on a sort of backwards reasoning wherein you expect things to be difficult because they are important. The universe contains no such rule. They're just things.
It's hard to add a significant marginal fractional pull to a rope that many other people are pulling on. But this is not a well-tugged rope!