Alexei comments on Further discussion of CFAR’s focus on AI safety, and the good things folks wanted from “cause neutrality” - Less Wrong

35 Post author: AnnaSalamon 12 December 2016 07:39PM

Comment author: Alexei 10 December 2016 08:13:54AM 9 points [-]

It's likely you'll address this in future posts, but I'm curious now. To me it seems like CFAR played a very important role in attracting people to the bay. "Come for the rationality, stay for the x-risk." I have a feeling that with this pivot it'll be harder to attract people to the community. What are your thoughts on that?

Comment author: Qiaochu_Yuan 13 December 2016 04:17:49AM 2 points [-]

That seems fine to me as long as the people who do get attracted are selected harder for being relevant to AI safety; arguably this would be an improvement.

Comment author: wubbles 14 December 2016 03:46:18PM 0 points [-]

I'm not sure how much of this was CFAR and x-risk vs. programming and autism. Certainly a lot of the people at the SF meetup were not CFARniks, based on my completely unscientific examination of my memory. The community's survival and growth are secondary to solving x-risk now, even if the goal before was to build a community devoted to these arts.