Pablo_Stafforini comments on How can I reduce existential risk from AI? - Less Wrong

46 Post author: lukeprog 13 November 2012 09:56PM



Comment author: Pablo_Stafforini 12 November 2012 02:25:59PM 6 points

Suppose you think that reducing the risk of human extinction is the highest-value thing you can do. Or maybe you want to reduce "x-risk" because you're already a comfortable First-Worlder like me and so you might as well do something epic and cool, or because you like the community of people who are doing it already, or whatever.

I think this post is great: important, informative, concise, and well-referenced. However, my impression is that the opening paragraph trivializes the topic. If you were listing the things we could do to reduce or eliminate global poverty, would you preface your article by saying that "reducing global poverty is cool"? You probably wouldn't. Then why write that kind of preface when the subject is existential risk reduction, which is even more important?

Comment author: Dorikka 13 November 2012 06:51:55PM 6 points

Hm. It's possible that I don't have a good model of people's motivations in cases like this, but it seems likely that at least some of the people contributing to x-risk reduction do it for one of these reasons, and this paragraph makes it abundantly clear that the author isn't going to be a jerk about people not supporting his cause for the right reasons. I liked it.

Comment author: ciphergoth 15 November 2012 02:59:07PM 4 points

I took that as anticipating a counter of "Hah, you think your donors really believe in your cause, when really lots of them are just trying to be cool!" with "That's fine; I've noticed their money works just as well."

Comment author: [deleted] 13 November 2012 12:50:53PM 0 points

I took that to be slightly tongue-in-cheek.