
alex_zag_al comments on Further discussion of CFAR's focus on AI safety, and the good things folks wanted from "cause neutrality"

Post author: AnnaSalamon, 12 December 2016 07:39PM

Comment author: alex_zag_al, 23 February 2017 06:09:55AM

iii. Emphasize all rationality use cases evenly. Cause all people to be evenly targeted by CFAR workshops.

We can’t do this one either; we are too small to pursue all opportunities without horrible dilution and failure to capitalize on the most useful opportunities.

This surprised me, since I think of rationality as the general principles of truth-finding.

What have you found about the degree to which rationality instruction needs to be tailored to a use case?