In the past, people like Eliezer Yudkowsky (see 1, 2, 3, 4, and 5) have argued that MIRI has a medium probability of success. What is this probability estimate based on and how is success defined?
I've read standard MIRI literature (like "Evidence and Import" and "Five Theses"), but I may have missed something.
-
(Meta: I don't think this deserves a discussion thread, but I posted this on the open thread and no-one responded, and I think it's important enough to merit a response.)
Could you give a more precise statement of what this is supposed to entail?
Not easily. Antiantiheroic epistemology might be a better term, i.e., I think that a merely accurate epistemology doesn't have a built-in mechanism which prevents people from thinking they can do things because the outside view says it's nonvirtuous to try to distinguish yourself within reference class blah. Antiantiheroic epistemology doesn't say that it's possible to distinguish yourself within reference class blah so much as it thinks that the whole issue is asking the wrong question and you should mostly be worrying about staying engaged with the obj...