In the past, people like Eliezer Yudkowsky (see 1, 2, 3, 4, and 5) have argued that MIRI has a medium probability of success. What is this probability estimate based on and how is success defined?
I've read standard MIRI literature (like "Evidence and Import" and "Five Theses"), but I may have missed something.
-
(Meta: I don't think this deserves a discussion thread, but I posted this on the open thread and no one responded, and I think it's important enough to merit a response.)
If we are talking about an AI that evaluates a goal definition (and Paul was probably thinking in the context of some sort of indirect normativity), "control" seems like a reasonable fit. The primary philosophical issue for that part of the problem is decision theory.
(I agree that it's a bad term for referring to FAI itself, if we don't presuppose a method of solution that isn't Friendliness-specific.)