Eliezer_Yudkowsky comments on Existential Risk and Public Relations - Less Wrong

36 Post author: multifoliaterose 15 August 2010 07:16AM




Comment author: orthonormal 15 August 2010 03:21:51PM 13 points

whpearson mentioned this already, but if you think that the most important thing we can be doing right now is publicizing an academically respectable account of existential risk, then you should be funding the Future of Humanity Institute.

Funding SIAI is optimal only if you think that the pursuit of Friendly AI is by far the most important component of existential risk reduction, and indeed they're focusing on persuading more people of this particular claim. As you say, by focusing on something specific, radical and absurd, they run more of a risk of being dismissed entirely than does FHI, but their strategy is still correct given the premise.

Comment author: Eliezer_Yudkowsky 18 August 2010 02:46:17PM 6 points

> if you think that the most important thing we can be doing right now is publicizing an academically respectable account of existential risk, then you should be funding the Future of Humanity Institute. Funding SIAI is optimal only if you think that the pursuit of Friendly AI is by far the most important component of existential risk reduction

Agreed. (Modulo a caveat about marginal ROI eventually balancing if FHI got large enough or SIAI got small enough.)