CarlShulman comments on How does MIRI Know it Has a Medium Probability of Success? - Less Wrong
If the year were 1960, which would you rather have?
At any given time there are many problems whose solutions would be very important, but where the time isn't yet right to act on the problems directly, as opposed to building the capability to act on them later and to handle the individually unexpected problems that come along so regularly. Investment-driven and movement-building-driven discount rates are relevant even for existential risk.
GiveWell has grown in influence much faster than the x-risk community while working on global health, and is now investigating and pivoting toward higher-leverage causes, with global catastrophic risk among the top three under consideration.
I'd rather have both; hence diverting some marginal resources to CFAR until it was launched, then switching back to MIRI. Is there a third thing MIRI should divert marginal resources to right now?
I have just spent a month in England interacting extensively with the EA movement here (maybe your impressions from the California EA summit differ; I'd be curious to hear). Donors interested in the far future are also considering donations to the following (all of these come from talks with actual people making concrete short-term choices; in addition to donations, people are also considering post-college career choices):
That's why Peter Hurford posted the OP: he's an EA considering all these options and wants to compare them to MIRI.
That is the sort of discussion my brain puts in a completely different category. Peter and Carl, please always give me a concrete alternative policy option that (allegedly) depends on a debate, if one is available; my brain is then far less likely to label the conversation "annoying useless meta objections that I want to get over with as fast as possible".
Can we have a new top-level comment on this?
I edited my top-level comment to include the list and explanation.
Cool. If MIRI keeps going, it might be able to show, with adequate evidence, that FAI is the top focus by the time all of this comes together.
Well, in collaboration with FHI. As soon as Bostrom's Superintelligence is released, we'll probably be building on and around that to make whatever cases we think are reasonable to make.