Follow-up to:
- CFAR's new focus, and AI safety
- CFAR's new mission statement (link post; links to our website).
In the days since we published our previous post, a number of people have come up to me and expressed concerns about our new mission. Several of these had the form “I, too, think that AI safety is incredibly important — and that is why I think CFAR should remain cause-neutral, so it can bring in more varied participants who might be made wary by an explicit focus on AI.”
I would here like to reply to these people and others, and to clarify what is and isn’t entailed by our new focus on AI safety.
First: Where are CFAR’s activities affected by the cause(s) it chooses to prioritize?
Some components that people may be hoping for from “cause neutral”, that we can do, and that we intend to do:
- We can be careful to include all information that they (our participants), from their vantage point, would want to know -- even if, on our judgment, some of the information is misleading or irrelevant, or might pull them to the “wrong” conclusions.
- Similarly, we can attempt to expose people to skilled thinkers they would want to talk with, regardless of those thinkers’ viewpoints; and we can be careful to allow their own thoughts, values, and arguments to develop, regardless of which “side” this may lead them to support.
- More generally, we can and should attempt to cooperate with each student’s extrapolated volition, and to treat the student as they (from their initial epistemic vantage point, and with their initial values) would wish to be treated. Which is to say: we should not do anything that would work less well if the algorithm behind it were known, and we should attempt to run such workshops (and to have such conversations, and so on) as would cause good people of varied initial views to stably, on reflection, want to participate in them.
Some components that people may be hoping for from “cause neutral”, that we can’t or won’t do:
CFAR’s history around our mission: How did we come to change?
[1] In my opinion, I have goofed this up in several instances in the past, most notably with respect to Val and Julia, who joined CFAR in 2012 with the intention of creating a cause-neutral rationality organization. Most integrity gaps are caused by lack of planning rather than strategic deviousness; someone tells their friend they’ll have a project done by Tuesday and then just… doesn’t. My mistakes here seem to me to be mostly of this form. In any case, I expect the task to be much easier, and for me and CFAR to do better, now that we have a simpler and clearer mission.
I like your (A)-(C), particularly (A). This seems important, and something that isn't always found by default in the world at large.
Because it's somewhat unusual, I think it's helpful to give strong signals that this is important to you. For example, I'd feel happy about it being a core part of the CFAR identity, appearing even in short statements of organisational mission. (I also think this can help organisation insiders to take it even more seriously.)
On (i), it seems clearly a bad idea for staff to pretend they have no viewpoints. And if the organisation has viewpoints, it's a bad idea to hide them. I think there is a case for keeping organisational identity small -- not taking views on things it doesn't need views on. Among other things, this helps to make sure that it actually delivers on (A). But I thought the start of your post (points (1)-(4)) did a good job of explaining why there are in fact substantive benefits to having an organisational view on AI, and I'm more supportive of this than before. I still think it is worth trying to keep organisational identity relatively small, and I'm still not certain whether it would be better to have separate organisations.