Kaj_Sotala comments on How can I reduce existential risk from AI? - Less Wrong

46 Post author: lukeprog 13 November 2012 09:56PM


Comment author: Kaj_Sotala 12 November 2012 10:31:05AM 7 points

The average person has zero interest in fighting existential risk. It's very easy to do better than average, if the average is zero. Even if you've only spent fifty hours (say) familiarizing yourself with the topic, that's already much better than most.

Comment author: [deleted] 12 November 2012 11:32:02AM * 1 point

The average person has zero interest in fighting existential risk.

This strikes me as, ahem, an inflationary use of the term zero. Try negligible instead. :-)

EDIT: Turns out it was an inflationary use of the term average instead. :-) Retracted.

Comment author: Kaj_Sotala 12 November 2012 12:16:27PM 1 point

Well, if we measure interest by what they actually do, then I stand by the "zero".

Comment author: [deleted] 12 November 2012 12:57:30PM * 5 points

EY's interest in fighting existential risks is strictly greater than 0 as far as I can tell; is someone else cancelling that out in the average? (Or by average did you actually mean 'median'?) The number of arms the average human has is strictly less than 2.
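The arms example above can be made concrete with a small sketch (the population figures are made up for illustration): a single person with fewer than two arms pulls the mean strictly below 2, while the median stays at exactly 2.

```python
# Illustration of mean vs. median with a skewed distribution
# (hypothetical population: 999 people with two arms, one with one arm).
arms = [2] * 999 + [1]

mean_arms = sum(arms) / len(arms)

# Median of an even-length list: average of the two middle values.
sorted_arms = sorted(arms)
n = len(sorted_arms)
median_arms = (sorted_arms[n // 2 - 1] + sorted_arms[n // 2]) / 2

print(mean_arms)    # 1.999 -- strictly less than 2
print(median_arms)  # 2.0   -- the "typical" person still has two arms
```

The same asymmetry drives the original claim: a handful of people with strong interest in existential risk raise the mean above zero, but the median person's interest can still be zero.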

Comment author: Kaj_Sotala 12 November 2012 02:18:26PM * 5 points

I meant median.