Follow-up to:
- CFAR's new focus, and AI safety
- CFAR's new mission statement (link post; links to our website).
In the days since we published our previous post, a number of people have come up to me and expressed concerns about our new mission. Several of these had the form “I, too, think that AI safety is incredibly important — and that is why I think CFAR should remain cause-neutral, so it can bring in more varied participants who might be made wary by an explicit focus on AI.”
I would here like to reply to these people and others, and to clarify what is and isn’t entailed by our new focus on AI safety.
First: In what ways are CFAR’s activities affected by the cause(s) it chooses to prioritize?
Some components that people may be hoping for from “cause neutral”, that we can do, and that we intend to do:
- We can be careful to include all information that participants, from their vantage point, would want to know -- even if, in our judgment, some of that information is misleading or irrelevant, or might pull them to the “wrong” conclusions.
- Similarly, we can attempt to expose people to skilled thinkers they would want to talk with, regardless of those thinkers’ viewpoints; and we can be careful to allow their own thoughts, values, and arguments to develop, regardless of which “side” this may lead them to support.
- More generally, we can and should attempt to cooperate with each student’s extrapolated volition, and to treat the student as they (from their initial epistemic vantage point, and with their initial values) would wish to be treated. Which is to say that we should not do anything that would work less well if the algorithm behind it were known, and that we should attempt to run such workshops (and to have such conversations, and so on) as would cause good people of varied initial views to stably, on reflection, want to participate in them.
Some components that people may be hoping for from “cause neutral”, that we can’t or won’t do:
- CFAR’s history around our mission: How did we come to change?
[1] In my opinion, I goofed this up historically in several instances, most notably with respect to Val and Julia, who joined CFAR in 2012 with the intention to create a cause-neutral rationality organization. Most integrity-gaps are caused by lack of planning rather than strategic deviousness; someone tells their friend they’ll have a project done by Tuesday and then just… doesn’t. My mistakes here seem to me to be mostly of this form. In any case, I expect the task to be much easier, and for me and CFAR to do better, now that we have a simpler and clearer mission.
Have you heard the anecdote about Kahneman and the planning fallacy? It's from Thinking, Fast and Slow, and deals with him creating a curriculum to teach judgment and decision-making in high school. He puts together a team of experts, they meet for a year, and they have a solid outline. While they're discussing how to estimate uncertain quantities, he gets the bright idea of having everyone estimate how long it will take until they submit a finished draft to the Ministry of Education. He solicits everyone's probabilities using one of the approved-by-research methods they're including in the curriculum, and the guesses are tightly centered around two years (ranging from about 1.5 to 2.5).
Then he decides to employ the outside view, and asks the curriculum expert how long it took similar teams in the past. That expert realizes that, in the past, about 40% of similar teams gave up and never finished; of those who did finish, none took less than seven years. (Kahneman tries to rescue them by asking about skills and resources, and it turns out that this team is below average, but not by much.)
It seems to me that if the person who discovered the planning fallacy is unable to make basic use of the planning fallacy when plotting out his own projects, then a general sense that experts know what they're doing and are able to apply their symbolic manipulation skills to their actual lives is dangerously misplaced. If it is a bad idea to publish things about decision theory in academia (because the costs outweigh the benefits, say), then it will only be bad decision-makers who publish on decision theory!
Wow, I've read the story, but I didn't quite realize the irony of it being a textbook (not a curriculum, a textbook, right?) about judgment and decision-making.