
Raemon comments on Further discussion of CFAR’s focus on AI safety, and the good things folks wanted from “cause neutrality”

Post author: AnnaSalamon · 12 December 2016 07:39PM · 36 points




Comment author: Raemon · 13 December 2016 12:14:21PM · 9 points

I think a lot of this is a fair concern. (I care about AI, but am currently neutral/undecided on whether this change was a good one.)

But I also note that "a couple research institutions" sweeps a lot of work into deliberately innocuous-sounding words.

First, we have lots of startups that aren't AI-related but that I think were in some fashion facilitated by the overall rationality-community project (with CFAR playing a major role in pushing that project forward).

We also have Effective Altruism Global, and many wings of the EA community that have benefited from CFAR and Eliezer's original writings; these have had huge benefits for plenty of cause areas other than AI. We have your aforementioned young, awkward engineers with their 20% increase in productivity, often earning to give (often to non-AI causes), or embarking on startups of their own.

Second, very credible progress on AI safety has happened as a result of the institutions working on it. Elon Musk pledged $10 million to AI safety, and he did that because FLI held a conference bringing him and top AI people together. FLI was able to do that because of a sizeable base of CFAR-inspired volunteers, as well as because the FLI leadership had attended CFAR.

Even if everything MIRI does turns out to be worthless (which I also think is unlikely), FLI has demonstrably changed the landscape of AI safety.