
Mass_Driver comments on Further discussion of CFAR’s focus on AI safety, and the good things folks wanted from “cause neutrality” - Less Wrong

36 Post author: AnnaSalamon 12 December 2016 07:39PM




Comment author: Mass_Driver 13 December 2016 08:25:10AM 4 points

Yeah, that pretty much sums it up: do you think it's more important for rationalists to focus even more heavily on AI research so that their example will sway others to prioritize FAI, or do you think it's more important for rationalists to broaden their network so that rationalists have more examples to learn from?

Shockingly, as a lawyer who's working on homelessness and donating to universal income experiments, I prefer a more general focus. Just as shockingly, the mathematicians and engineers who have been focusing on AI for the last several years prefer a more specialized focus. I don't see a good way for us to resolve our disagreement, because the disagreement is rooted primarily in differences in personal identity.

I think the evidence is undeniable that rationality memes can help young, awkward engineers build a satisfying social life and increase their productivity by 10% to 20%. As an alum of one of CFAR's first minicamps back in 2011, I'd hoped that rationality would amount to much more than that. I was looking forward to seeing rationalist tycoons, rationalist Olympians, rationalist professors, rationalist mayors, rationalist DJs. I assumed that learning how to think clearly and act accordingly would fuel a wave of conspicuous success, which would in turn attract more resources for the project of learning how to think clearly, in a rapidly expanding virtuous cycle.

Instead, five years later, we've got a handful of reasonably happy rationalist families, an annual holiday party, and a couple of research institutes dedicated to pursuing problems that, by definition, will provide no reliable indicia of their success until it is too late. I feel very disappointed.

Comment author: Raemon 13 December 2016 12:14:21PM 9 points

I think a lot of this is a fair concern. (I care about AI, but am currently neutral/undecided on whether this change was a good one.)

But I also note that "a couple of research institutes" sweeps a lot of work into deliberately innocuous-sounding words.

First, we have lots of startups that aren't AI-related but that I think were in some fashion facilitated by the overall rationality community project (with CFAR playing a major role in pushing that project forward).

We also have Effective Altruism Global, and many wings of the EA community that have benefited from CFAR and Eliezer's original writings, which have had huge benefits for plenty of cause areas other than AI. We have your aforementioned young, awkward engineers with their 20% increase in productivity, often earning to give (often to non-AI causes), or embarking on startups of their own.

Second, very credible progress has happened on AI as a result of the institutions working on AI. Elon Musk pledged $10 million to AI safety, and he did that because FLI held a conference bringing him and top AI people together; FLI was able to do that because of a sizeable base of CFAR-inspired volunteers, as well as the FLI leadership having attended CFAR.

Even if everything MIRI does turns out to be worthless (which I also think is unlikely), FLI has demonstrably changed the landscape of AI safety.

Comment author: Qiaochu_Yuan 13 December 2016 10:04:46PM 4 points

do you think it's more important for rationalists to focus even more heavily on AI research so that their example will sway others to prioritize FAI, or do you think it's more important for rationalists to broaden their network so that rationalists have more examples to learn from?

I think this question implicitly assumes as a premise that CFAR is the main vehicle by which the rationality community grows. That may be more or less true now, and it may plausibly become less true in the future; but most interestingly, it suggests that you already understand the value of CFAR as a coordination point (for rationality in general). That's the kind of value I think CFAR is trying to generate in the future as a coordination point for AI safety in particular, because AI safety might in fact turn out to be that important.

I sympathize with your concerns, and I would love for the rationality community to be more diverse along all sorts of axes. But I worry they're predicated on a view of existential-risk-like topics as luxuries that maybe deserve a little of our time but aren't particularly urgent. If you had a stronger sense of urgency about them as a group (not necessarily about any of them individually), you might be able to have more sympathy for people (such as the CFAR staff) who really, really just want to focus on them, even though they're highly uncertain and even though there are no obvious feedback loops, because they're important enough to work on anyway.

Comment author: Mass_Driver 14 December 2016 01:28:31AM 1 point

I am always trying to cultivate a little more sympathy for people who work hard and have good intentions! CFAR staff definitely fit in that basket. If your heart's calling is reducing AI risk, then work on that! Despite my disappointment, I would not urge anyone who's longing to work on reducing AI risk to put that dream aside and teach general-purpose rationality classes.

That said, I honestly believe there is an anti-synergy between (a) cultivating rationality and (b) teaching AI researchers. Each of those worthy goals is best pursued separately.

Comment author: Qiaochu_Yuan 14 December 2016 07:23:23PM 0 points

That said, I honestly believe that there is an anti-synergy between (a) cultivating rationality and (b) teaching AI researchers. I think each of those worthy goals is best pursued separately.

That seems fine to me. At some point someone might be sufficiently worried about the lack of a cause-neutral rationality organization to start a new one themselves, and that would probably be fine; CFAR would probably try to help them out. (I don't have a good sense of CFAR's internal position on whether they should themselves spin off such an organization.)

Comment author: username2 14 December 2016 11:12:10PM 1 point

At some point someone might be sufficiently worried about the lack of a cause-neutral rationality organization to start a new one themselves, and that would probably be fine

Incidentally, if someone decides to do this, please advertise here. This change in focus has made me stop my (modest) donations to CFAR. If someone started a cause-neutral rationality-building institute, I'd fund it at a higher(*) level than I funded CFAR.

(*) One of the things that restrained my CFAR charity in the last few years, other than lack of money until recently, was uncertainty over their cause neutrality. They seemed to be biased in the causes they pushed for, and that made me hesitant to fund them further. Now that they've come out of the closet on the issue, I'm against giving them even 1 cent.