In the past, people like Eliezer Yudkowsky (see 1, 2, 3, 4, and 5) have argued that MIRI has a medium probability of success. What is this probability estimate based on and how is success defined?
I've read standard MIRI literature (like "Evidence and Import" and "Five Theses"), but I may have missed something.
-
(Meta: I don't think this deserves a discussion thread, but I posted this on the open thread and no-one responded, and I think it's important enough to merit a response.)
Thanks for those thoughts.
Nick Bostrom uses the term in his book, and it's convenient for separating pre-existing problems ("we don't know what to do with our society long term, nor is it engineered to achieve that") from the particular issues raised by AI.
In the situation I mentioned, the AI is not vastly superintelligent initially (and capabilities can vary along multiple dimensions; e.g. one can have many compartmentalized copies of an AI system that collectively deliver a huge number of worker-years without any one of them possessing extraordinary capabilities).
What is your take on the strategy-swallowing point: if humans can do it, then not-very-superintelligent AIs can do it too?
There is an ambiguity there; I'll mention it to Nick. But "Friendliness," for example, just sounds silly. I use "safe" too, but safety can be achieved just by limiting capabilities, which doesn't reflect the desire to realize the benefits.
I don't think that separation is a good idea. Not knowing what to do with our society long term is a relatively tolerable problem until an upcoming change raises a significant prospect of locking in some particular vision of society's future. (Wei Dai raises similar points in your exchange of replies, but I thought this framing might still be helpful.)