Follow-up to:
- CFAR's new focus, and AI safety
- CFAR's new mission statement (link post; links to our website).
In the days since we published our previous post, a number of people have come up to me and expressed concerns about our new mission. Several of these had the form “I, too, think that AI safety is incredibly important — and that is why I think CFAR should remain cause-neutral, so it can bring in more varied participants who might be made wary by an explicit focus on AI.”
I would here like to reply to these people and others, and to clarify what is and isn’t entailed by our new focus on AI safety.
First: Where are CFAR’s activities affected by the cause(s) it chooses to prioritize?
Some components that people may be hoping for from “cause neutral”, that we can do, and that we intend to do:
- We can be careful to include all information that participants, from their vantage point, would want to know -- even if, on our judgment, some of that information is misleading or irrelevant, or might pull them to the "wrong" conclusions.
- Similarly, we can attempt to expose people to skilled thinkers they would want to talk with, regardless of those thinkers' viewpoints; and we can be careful to allow their own thoughts, values, and arguments to develop, regardless of which "side" this may lead them to support.
- More generally, we can and should attempt to cooperate with each student's extrapolated volition, and to treat the student as they (from their initial epistemic vantage point, and with their initial values) would wish to be treated. Which is to say that we should not do anything that would work less well if the algorithm behind it were known, and that we should attempt to run such workshops (and to have such conversations, and so on) as would cause good people of varied initial views, stably and on reflection, to want to participate in them.
Some components that people may be hoping for from “cause neutral”, that we can’t or won’t do:
- CFAR’s history around our mission: How did we come to change?
[1] In my opinion, I goofed this up historically in several instances, most notably with respect to Val and Julia, who joined CFAR in 2012 with the intention to create a cause-neutral rationality organization. Most integrity-gaps are caused by lack of planning rather than strategic deviousness; someone tells their friend they’ll have a project done by Tuesday and then just… doesn’t. My mistakes here seem to me to be mostly of this form. In any case, I expect the task to be much easier, and for me and CFAR to do better, now that we have a simpler and clearer mission.
I did not understand this part.
I don't know how it plays out in the CFAR context specifically, but the sort of situation being described is this:
Alice is a social democrat and believes in redistributive taxation, a strong social safety net, and heavy government regulation. Bob is a libertarian and believes taxes should be as low as possible and "flat", safety nets should be provided by the community, and regulation should be light or entirely absent. Bob asks Alice[1] what she knows about some topic related to government policy. Should Alice (1) provide Bob with all the evidence…