
[Epistemic Effort: noticed that I was referring to "CFAR's thought Process", which was sort of obfuscating details. I think I have a good model of Anna's thought process. I don't have a good model of most other CFAR staff nor how CFAR as a unit makes decisions. I got more specific.]

The problem with deworming as an example is that it's really hard for me to imagine that cause a) being the most important cause, or b) being urgent in the way that existential risk from AI is urgent.

I don't think Anna's thought process was to found CFAR, then decide "We should use our program to help the most important cause", and then conclude that AI Risk was the most important cause. And I think an ideological-turing-test approach to examining the CFAR decision needs to include a deeper level of understanding/empathy before one can judge it.

My understanding of Anna (based on some conversations with her that I'm pretty sure are considered public, but hope she'll correct me if I misconstrue anything), is that, from day 1, her process was something like:

[Note: periodically I switch from 'things I'm fairly confident about the thought process' to 'speculation on my part', and I try to distinguish that where I notice it]

1) (long time ago) - "I want to help the world. What are the ways I might do that?" (tries lots of things, including fairly mainstream altruism things)

2) Ends up thinking about existential risk and AI Risk in particular, crunches numbers, comes to believe this is the most important thing to work on. But not just "this seems like the most important issue among several possible issues". It's "holy shit, this is mind bogglingly important, and nobody is working on this at all. This problem is incredibly confusing. And it looks like the default course of history includes a frighteningly high chance that humanity will just be extinguished in the next century."

3) Starts examining the problem [note: some of my own speculation here], and notices that a) the problem is extremely challenging to think about. It is difficult in ways that less_wrong_2016 has mostly gotten over (scope insensitivity, weirdness, etc.), but it continues to be difficult in ways that present_day_cfar/less_wrong still find challenging, because building an AI is really hard.

We have very little idea what the architecture of an AI will look like, very little idea of how to design an AI to be "rational" in a way that doesn't get us killed, and (relatively) little idea of how to interact politically with the various people/orgs who may be relevant to AI safety. At every step in the journey, we have very limited evidence, and our present-day rationality is not good enough to solve the problem.

4) Eliezer founds Less Wrong, largely in an attempt to solve the above problems. Importantly, he has a clearly defined ulterior motive (solving AI Risk), while also earnestly believing in the general cause of rationality and hoping it benefits the world in other ways. (http://lesswrong.com/lw/d6/the_end_of_sequences/)

5) Less Wrong isn't sufficient to solve the above problems. MIRI (then SingInst) begins running various projects to improve rationality.

6) Those projects aren't sufficient / take up too much organizational focus from MIRI. CFAR is spun off, headed by Anna. As with Less Wrong, she has a clear ulterior motive in mind while also earnestly believing in rationality as a generally valuable thing for the world. Her express purpose in creating CFAR is to build a tool that is necessary to solve a problem, because the future is on fire and it needs putting out. (My understanding is that, possibly due to failure to communicate, plausibly due to some monkey-brain-subconscious-or-semi-conscious slytherining, other founders like Julia and Val join CFAR with the expectation that it is cause neutral.)

7) CFAR makes significant improvements in its ability to help people improve their lives - but continues to struggle to build its epistemic rationality curriculum, making decisions based on limited information.

[More speculation on my part] - I think part of the reason CFAR struggled to develop an epistemic rationality curriculum is that epistemic rationality isn't that relevant to most people. To develop it, you need concrete projects to work on that actually benefit from probability theory and sifting through mediocre evidence.

So, CFAR is failing to achieve both Anna's original motivation for it AND one of its more overt goals (one that folks like Julia wholeheartedly share). So, it begins attempting AI-focused workshops. I do not currently know the results of those workshops.

8) Here, I stop having a clear understanding of the situation, but something to the effect of "AI workshops were not sufficient to push the development of rationality to a level that'd be sufficient to succeed at AI safety - it required some overall shifts in organizational focus." (As Satvik noted somewhere, I think unfortunately on facebook, organizational focus gives you clearer guidelines for when to pursue new opportunities and when to say no to things.)

...

So... I think it's reasonable to look at all that and disagree with the outcome. I applaud Ozy for trying to think through the situation in an ideological-turing-test sort of way. But I think to really fairly critique this, it's not enough to use something like "GiveWell spins off a rationality org, which then ends up deciding it makes most sense to focus that rationality on deworming." I'm not sure I can think of a good example that really captures it.

My understanding/guesses of Anna's cruxes (perhaps more honestly: my own cruxes, informed by things I've heard both her and Eliezer say) are:

a) AI-grade rationality is urgent
b) while an important aspect of rationality is managing your filter bubble (and, consequently, your public image, so that new people can be attracted to your filter bubble to cross-pollinate), it is also an aspect of rationality that you can make more progress on an idea by specializing in it and getting into the nitty-gritty details as much as possible.
c) AI-grade rationality will benefit more from gritty-details work than from preserving a broader filter bubble.

There is also the point Ozy addresses, which is that to the extent this was CFAR's existing goal, it is better for them to be honest about it.

(I do think it'd be an incredibly good thing if there were also a truly neutral organization that helps people pursue their own goals, that develops a generally applicable art of rationality, and that eventually raises the overall sanity waterline. I think it is deeply sad that CFAR is not that, but that is just one in a large number of deeply sad things the world lacks, and hopefully we'll eventually have the resources to do all of them.)

I think Ozy conflates acting as if X were true with believing X (you can devote your career to AI safety while believing there's only a 10% chance it ends up mattering, or whatever), and lists some potential costs of the new focus without attempting to compare them to the potential benefits.

The case for deworming is about deworming tablets being a very cheap intervention that provides a lot of value. It's not centrally about the total size of the harm that worms cause.

A cause area where we have a potentially very cheap intervention is structurally different from AI risk, where we don't know of an intervention to solve it and therefore need to up our game as rationalists.

Though I also want to point out that MIRI-style research seems like a very cheap intervention relative to global warming. And here I'm talking about the research they should be doing, if they had $10 million.

What is missing from this post is a realistic consideration of scope. If CFAR is correct, then their pivot is both extremely important and pretty urgent. The utility difference is so large that it would be defensible to endorse AI safety as the primary goal, even if they expect it to probably not be the biggest problem.
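To make the scope argument concrete, here is a rough expected-value sketch. The symbols are mine, not anything CFAR or Ozy stated: p is the probability that AI risk "ends up mattering", U_AI is the payoff of the AI-focused pivot if it does, and U_other is the payoff of the best alternative focus; the factor-of-10 figure is purely illustrative.

```latex
% Illustrative expected-value comparison; p, U_AI, U_other are hypothetical.
% The pivot wins in expectation whenever the payoff ratio exceeds 1/p:
\[
  p \cdot U_{\text{AI}} \;>\; U_{\text{other}}
  \quad\Longleftrightarrow\quad
  \frac{U_{\text{AI}}}{U_{\text{other}}} \;>\; \frac{1}{p}
\]
% e.g. with p = 0.1, the AI-focused payoff only needs to exceed the
% alternative's by a factor of 10 for the bet to be defensible.
```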

One comment of mine, cross-posted from Ozy's Blog

Things worth noting people may not know:

At EA Global 2014, and (I think) at other EA Globals, CFAR has a) been present, and b) specifically talked about a goal/plan, broken down as follows:

– The world has big problems, and needs people who are smart, capable, rational, and altruistic (or at least motivated to solve those problems for other reasons)
– CFAR has a limited number of people they can teach
– People tend to rub off on each other when they hang out with each other
– People vary in how rational, altruistic, and capable they are
– So, CFAR seeks out people who have SOME combination of high rationality, altruism, and competence. They run workshops with all those people, and one of their hopes is that the rationality/altruism/competence will rub off on each other.

So it is not new that CFAR has (at least) a subgoal of “create people capable of solving the world’s problems, with the motivation to do so.” This may not have been well publicized either, for good or for ill.

I think this was a worthy goal, and the correct one for them to focus on given their limited resources.

So the new AI announcement is basically them saying “we are refining this a step further, to optimize for AI Risk in particular.”

(Whether you think that is good or bad depends on a lot of things)

-

[Epistemic Effort: I noticed myself making a vague statement about CFAR saying this every year, and then realized I only actually had one distinct memory of it, and updated the statement to be more-accurate-given-my-memories]

Now imagine that knowing how to read a study is, in fact, a very important rationality skill. [...] I think it’s very possible that hypothetical Deworming CFAR would flinch away from the idea “maybe we should teach people to read studies”. Or they’d run a pilot program, notice that teaching people to read studies tends to make them less enthusiastic about deworming, and say “well, guess that learning how to read studies actually makes people more irrational.”

My impression is that CFAR specifically intends to focus on rationality skills relevant to evaluating AI risk. Obviously this puts it at elevated risk of the second failure mode you mention (mistaking correct updating away from AI risk for AI risk not being a thing), but it seems to me that if CFAR can't pass that elementary a test of rationality, it fails even without this sort of conflict-of-interest-adjacent problem.

tl;dr: "Tying the project of developing and promoting an art of rationality to the potentially false claim that AI risk is probably the most important cause area risks distorting this project." I agree that this is a consideration. I'd like to point out the possibility that acting as if rationality implies less about AI risk being the most important cause area than it actually does also risks distorting (or has already distorted) rationality. My guess is it's a risk worth taking regardless.