
Qiaochu_Yuan comments on Further discussion of CFAR’s focus on AI safety, and the good things folks wanted from “cause neutrality”

Post author: AnnaSalamon, 12 December 2016 07:39PM (35 points)




Comment author: Qiaochu_Yuan, 14 December 2016 07:23:23PM, 0 points

> That said, I honestly believe that there is an anti-synergy between (a) cultivating rationality and (b) teaching AI researchers. I think each of those worthy goals is best pursued separately.

That seems fine to me. At some point someone might be sufficiently worried about the lack of a cause-neutral rationality organization to start a new one themselves, and that would probably be fine; CFAR would probably try to help them out. (I don't have a good sense of CFAR's internal position on whether they should spin off such an organization themselves.)

Comment author: username2, 14 December 2016 11:12:10PM, 1 point

> At some point someone might be sufficiently worried about the lack of a cause-neutral rationality organization to start a new one themselves, and that would probably be fine

Incidentally, if someone decides to do this, please advertise it here. This change in focus has made me stop my (modest) donations to CFAR. If someone started a cause-neutral rationality-building institute, I'd fund it at a higher(*) level than I funded CFAR.

(*) One of the things that limited my donations to CFAR over the last few years, apart from a lack of money until recently, was uncertainty about their cause neutrality. They seemed biased in the causes they pushed for, and that made me hesitant to fund them further. Now that they've come out of the closet on the issue, I'm against giving them even one cent.