Follow-up to:
- CFAR's new focus, and AI safety
- CFAR's new mission statement (link post; links to our website).
In the days since we published our previous post, a number of people have come up to me and expressed concerns about our new mission. Several of these had the form “I, too, think that AI safety is incredibly important — and that is why I think CFAR should remain cause-neutral, so it can bring in more varied participants who might be made wary by an explicit focus on AI.”
I would here like to reply to these people and others, and to clarify what is and isn’t entailed by our new focus on AI safety.
First: In what ways are CFAR’s activities affected by the cause(s) it chooses to prioritize?
Some components that people may be hoping for from “cause neutral”, that we can do, and that we intend to do:
- We can be careful to include all information that participants, from their vantage point, would want to know -- even if on our judgment, some of the information is misleading or irrelevant, or might pull them to the “wrong” conclusions.
- Similarly, we can attempt to expose people to skilled thinkers they would want to talk with, regardless of those thinkers’ viewpoints; and we can be careful to allow their own thoughts, values, and arguments to develop, regardless of which “side” this may lead to them supporting.
- More generally, we can and should attempt to cooperate with each student’s extrapolated volition, and to treat the student as they (from their initial epistemic vantage point, and with their initial values) would wish to be treated. Which is to say that we should not do anything that would work less well if the algorithm behind it were known, and that we should attempt to run such workshops (and to have such conversations, and so on) as would cause good people of varied initial views to stably, on reflection, want to participate in them.
Some components that people may be hoping for from “cause neutral”, that we can’t or won’t do:
CFAR’s history around our mission: How did we come to change?
[1] In my opinion, I goofed this up historically in several instances, most notably with respect to Val and Julia, who joined CFAR in 2012 with the intention of creating a cause-neutral rationality organization. Most integrity gaps are caused by lack of planning rather than strategic deviousness; someone tells their friend they’ll have a project done by Tuesday and then just… doesn’t. My mistakes here seem to me to be mostly of this form. In any case, I expect the task to be much easier, and for me and CFAR to do better, now that we have a simpler and clearer mission.
Sure. I think selecting for knowing a lot about AI mostly selects for raw intelligence and a particular kind of curiosity, and that neither of these is all that correlated with what one might call "street rationality," except insofar as street rationality requires enough raw intelligence to reliably do metacognition. There are plenty of very intelligent people who do almost no metacognition.
Elon Musk, Peter Thiel, people who work or might work at DeepMind and similar groups...
The two things you mention add up, minimally, to wanting to know about AI.
There is a third component to actually knowing a lot about AI: having succeeded in learning about it, which is to say, having "won" in a certain sense. If rationality is winning, or knowing how to use raw intelligence effectively, then a baseline level of rationality is indicated.