RobbBB comments on The Future of Humanity Institute could make use of your money - Less Wrong

52 Post author: danieldewey 26 September 2014 10:53PM




Comment author: RobbBB 05 October 2014 10:02:54AM 1 point

A big part of the purpose of the Sequences is to kill likely mistakes and missteps from smart people trying to think about AI. 'Friendly AI' is a sufficiently difficult problem that it may be more urgent to raise the sanity waterline, filter for technical and philosophical insight, and amplify that insight (e.g., through CFAR), than to merely inform academia that AI is risky. Given people's tendencies to leap on the first solution that pops into their head, indulge in anthropomorphism and optimism, and become inoculated to arguments that don't fully persuade them on the first go, there's a case to be made for improving people's epistemic rationality, and honing the MIRI arguments more carefully, before diving into outreach.