ciphergoth comments on How can I reduce existential risk from AI? - Less Wrong

46 points · Post author: lukeprog, 13 November 2012 09:56PM




Comment author: Pablo_Stafforini, 12 November 2012 02:25:59PM, 6 points

> Suppose you think that reducing the risk of human extinction is the highest-value thing you can do. Or maybe you want to reduce "x-risk" because you're already a comfortable First-Worlder like me and so you might as well do something epic and cool, or because you like the community of people who are doing it already, or whatever.

I think this post is great: important, informative, concise, and well-referenced. My impression, however, is that the opening paragraph trivializes the topic. If you were listing things we could do to reduce or eliminate global poverty, would you preface your article by saying that "reducing global poverty is cool"? You probably wouldn't. Then why write that kind of preface when the subject is existential risk reduction, which is even more important?

Comment author: ciphergoth, 15 November 2012 02:59:07PM, 4 points

I took that as anticipating a counter of "Hah, you think your donors really believe in your cause, when really loads of them are just trying to be cool!" with, in effect, "That's fine; I've noticed their money works just as well."