In the past, people like Eliezer Yudkowsky (see 1, 2, 3, 4, and 5) have argued that MIRI has a medium probability of success. What is this probability estimate based on, and how is success defined?
I've read standard MIRI literature (like "Evidence and Import" and "Five Theses"), but I may have missed something.
-
(Meta: I don't think this deserves a discussion thread, but I posted this on the open thread and no one responded, and I think it's important enough to merit a response.)
To clear up the ambiguity, does this mean you agree that I can do anything short of what von Neumann did, or that you don't think it's possible to get as far as independent judges favorably evaluating MIRI output, or is there some other standard you have in mind? I'm trying to get something clearly falsifiable, but right now I can't figure out the intended event due to sheer linguistic ambiguity.
I also think that evaluation by academics is a terrible test for things that don't come with blatant, overwhelming, unmistakable, undeniable-even-to-humans evidence - e.g. this standard would fail MWI, molecular nanotechnology, cryonics, and would have recently failed 'high-carb diets are not necessarily good for you'. I don't particularly expect this standard to be met before the end of the world, and it wouldn't be necessary to meet it either.
As I said in my other comment, I would be quite surprised if your individual mathematical and AI contributions reached the level of the best in their fields, as you are stronger verbally than mathematically; I discuss in more detail there what I would and would not find surprising.