
gwillen comments on CFAR’s new focus, and AI Safety

Post author: AnnaSalamon, 03 December 2016 06:09PM




Comment author: gwillen, 06 December 2016 05:06:20AM, 8 points

Feedback from someone who really enjoyed your May workshop (and I gave this same feedback then, too): Part of the reason I was willing to go to CFAR was that it's separate (or at least pretends to be separate, even though the two share personnel and office space) from MIRI. I am 100% behind rationality as a project, but super skeptical of a lot of the AI stuff that MIRI does (although I still follow it, because I do find it interesting, and a lot of smart people clearly believe strongly in it, so I'm prepared to be convinced). I doubt I'm the only one in this boat.

Also, I'm super uncomfortable being associated with AI safety stuff on a social level, because it has a huge image problem. I'm barely comfortable being associated with "rationality" at all, given how closely linked it is (in my social group, at least) with AI safety's image problem. (I don't exaggerate when I say that my most-feared reaction to telling people I'm associated with "rationalists" is "oh, the basilisk people?")