In the past, people like Eliezer Yudkowsky (see 1, 2, 3, 4, and 5) have argued that MIRI has a medium probability of success. What is this probability estimate based on and how is success defined?
I've read standard MIRI literature (like "Evidence and Import" and "Five Theses"), but I may have missed something.
-
(Meta: I don't think this deserves a discussion thread, but I posted this on the open thread and no one responded, and I think it's important enough to merit a response.)
Multiplying small probabilities seems fine to me, whereas I really don't get "heroic epistemology".
You seem to be suggesting that "heroic epistemology" and "multiplying small probabilities" both lead to the same conclusion: support MIRI's work on FAI. But this is the case only if working on FAI has no negative consequences. In that case, "small chance of success" plus "multiplying small probabilities" warrants working on FAI, just as "medium probability of success" plus "not multiplying small probabilities" does. But working on FAI does have negative consequences, namely shortening AI timelines and (in the later stages) possibly directly causing the creation of a UFAI, so merely allowing multiplication by small probabilities is not sufficient to warrant working on FAI if the probability of success is low.
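To make the asymmetry concrete (the numbers and symbols here are made up purely for illustration): say success yields benefit B with probability p, while the work also risks harm C (shortened timelines, a UFAI) with probability q. The expected value is then roughly p·B − q·C. If C = 0, multiplying small probabilities means even a tiny p can justify the work, since p·B is always positive. But once q·C is nonzero, a sufficiently low p makes the whole expression negative, so "we're allowed to multiply small probabilities" no longer settles the question by itself.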
I am really worried that you are justifying your current course of action through a novel epistemology of your own invention, which has not been widely vetted (or even widely understood). Most new ideas are wrong, and I think you ought to treat your own new ideas with deeper suspicion.
I'm a reactionary, not an innovator, dammit! Reacting against this newfangled antiheroic 'reference class' claim that says we ought to let the world burn because we don't have enough of a hero license!
Ahem.
I'm also really unconvinced by the claim that this work could reasonably be expected to have net negative consequences. I'm worried about the dynamics and evidence of GiveDirectly. But I don't think GD has negative consequences; that would be a huge stretch. It's possible, maybe, but it's certainly not the arithmetic expectation. With that said, I worry t...