I don't know who the intended audience for this is, but I think it's worth flagging that it seemed extremely jargon-heavy to me. I expect this to be off-putting to at least some people you actually want to attract (if it were one of my first interactions with CFAR I would be less inclined to engage again). In several cases you link to explanations of the jargon. This helps, but doesn't really solve the problem that you're asking the reader to do a large amount of work.
Some examples from the first few paragraphs:
Thanks for posting this; I think it's good to make these things explicit even if it requires effort. One piece of feedback: I think a reader who doesn't already know what "existential risk" and "AI safety" are will be confused; they suddenly show up in the second bullet point without being defined (though it's possible I'm missing some context here).
I found this document kind of interesting, but it felt less like what I normally understand as a mission statement, and more like "Anna's thoughts on CFAR's identity". I think there's a place for the latter, but I'd be really interested in seeing (a concise version of) the former, too.
If I had to guess right now I'd expect it to say something like:
We want to develop a community with high epistemic standards and good rationality tools, at least part of which is devoted to reducing existential risk from AI.
... but I kind of expect you to think I have the emphasis there wrong in some way.
I like this and the overall website redesign.
A few notes on design (slightly off-topic but potentially valuable):
The pale gray palette, the text-heavy but readable layout, and the new, more angular "brain" images suggest seriousness and mental incisiveness, which I think is in keeping with the new mission.
I like the touches of orange. It's a nice change from the overly blue themes of tech-related images, it's cheerful and high-contrast, and it has nice Whiggish connotations. It suggests a certain healthy fighting spirit.
Maybe you'll cover this in a future post, but I'm curious about the outcomes of CFAR's past AI-specific workshops, especially "CFAR for ML Researchers" and the "Workshop on AI Safety Strategy".
In case there are folks following Discussion but not Main: this mission statement was released along with:
The mission says: "we provide high-quality training to a small number of people we think are in an unusually good position to help the world".
Is there evidence that the people you are training actually ARE in an unusually good position to help the world? Compared to the baseline of same-IQ people from the first world, of course.
Physics is established, so one can defer to existing authorities and get right answers about physics. Starting a well-run laundromat is also established, so ditto. Physics and laundromat-running both have well-established feedback loops that have validated their basic processes in ways third parties can see are valid.
Depending on which parts of physics one has in mind, this seems possibly almost exactly backwards (!!). Quoting from Vladimir_M's post "Some Heuristics for Evaluating the Soundness of the Academic Mainstream in Unfamiliar Fields":
If a research area has reached a dead end and further progress is impossible except perhaps if some extraordinary path-breaking genius shows the way, or in an area that has never even had a viable and sound approach to begin with, it’s unrealistic to expect that members of the academic establishment will openly admit this situation and decide it’s time for a career change. What will likely happen instead is that they’ll continue producing output that will have all the superficial trappings of science and sound scholarship, but will in fact be increasingly pointless and detached from reality.
Arguably, some areas of theoretical physics have reached this state, if we are to trust the critics like Lee Smolin. I am not a physicist, and I cannot judge directly if Smolin and the other similar critics are right, but some powerful evidence for this came several years ago in the form of the Bogdanoff affair, which demonstrated that highly credentialed physicists in some areas can find it difficult, perhaps even impossible, to distinguish sound work from a well-contrived nonsensical imitation.
The reference to Smolin is presumably to The Trouble With Physics: The Rise of String Theory, the Fall of a Science, and What Comes Next. Penrose's recent book Fashion, Faith, and Fantasy in the New Physics of the Universe also seems relevant.
This is fair; I had in mind basic high school / Newtonian physics of everyday objects. (E.g., "If I drop this penny off this building, how long will it take to hit the ground?", or, more messily, "If I drive twice as fast, what impact would that have on the kinetic energy with which I would crash into a tree, and on how badly deformed my car and I would be?").
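For concreteness, a minimal sketch of both back-of-the-envelope answers (the 50 m building height and 20 m/s driving speed are assumed, illustrative numbers, not from the original):

```python
import math

G = 9.8   # gravitational acceleration, m/s^2
h = 50.0  # assumed building height, m (illustrative)

# Free fall, ignoring air: h = (1/2) g t^2  =>  t = sqrt(2h/g)
t = math.sqrt(2 * h / G)
print(f"drop time from {h} m: {t:.2f} s")  # ~3.19 s

# Kinetic energy scales with v^2, so doubling speed quadruples it:
# KE(2v) / KE(v) = (2v)^2 / v^2 = 4, independent of the speed chosen.
v = 20.0  # assumed driving speed, m/s
print((2 * v) ** 2 / v ** 2)  # 4.0
```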
basic high school / Newtonian physics of everyday objects. (E.g., "If I drop this penny off this building, how long will it take to hit the ground?"
This is tricky: basic high school physics lies to you all the time. Example: it says that a penny and a large paper airplane weighing the same as the penny will hit the ground at the same time.
In general, getting right answers from physics involves knowing the assumptions of the models used and the points at which they break down. Physics will tell you, but not at the high school level, and you have to remember to ask.
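A rough sketch of that breakdown, using a crude Euler integration of quadratic drag, m·dv/dt = m·g − ½ρ·Cd·A·v². All drag parameters below are assumed, ballpark values for illustration only:

```python
import math

G, RHO = 9.8, 1.225  # gravity (m/s^2), air density (kg/m^3)

def fall_time(mass, cd, area, height, dt=1e-3):
    """Time to fall `height` meters with quadratic air drag (Euler steps)."""
    v = y = t = 0.0
    while y < height:
        a = G - 0.5 * RHO * cd * area * v * v / mass
        v += a * dt
        y += v * dt
        t += dt
    return t

h = 50.0  # assumed building height, m

# The vacuum formula treats both objects identically: ~3.2 s.
print(math.sqrt(2 * h / G))

# With drag, equal-mass objects part ways: the penny lands in ~5 s,
# the large paper airplane takes tens of seconds.
print(fall_time(mass=0.0025, cd=1.0, area=2.8e-4, height=h))  # penny
print(fall_time(mass=0.0025, cd=1.2, area=0.02, height=h))    # airplane
```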
I don't believe that you actually have any intention of "reducing existential risk". Or rather, if you do, you don't seem to be placing much focus on it.
But the correct response to uncertainty is not half-speed
This statement demonstrates a really poor understanding of basic (random) processes and analogies. You are absolutely right that a person driving a car, having decided to drive a certain distance before turning around, should not let uncertainty about direction lead them to reduce speed. You are absolutely wrong in suggesting that the analogy has any place here.
The conclusion works in the car scenario because the driver cannot take multiple options simultaneously. If he could, say by going at half speed in both directions, that would almost certainly be the best option. CFAR can go in at least nine directions at once if it wants to.
In fact, there's math behind this.
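A toy sketch of the kind of math this gestures at, under an assumed square-root (diminishing-returns) payoff that is not from the original comment: when returns on effort are concave, spreading effort across candidate directions beats betting everything on one guess.

```python
import math

# Toy model: n candidate directions, exactly one is "right", each with
# equal probability 1/n. Payoff of effort x spent on the right direction
# is sqrt(x) -- an assumed diminishing-returns curve.
def expected_payoff(allocation):
    n = len(allocation)
    return sum(math.sqrt(x) for x in allocation) / n

print(expected_payoff([1.0, 0.0]))  # full commitment: 0.5
print(expected_payoff([0.5, 0.5]))  # even split: ~0.707
# With concave payoffs the even split wins. With linear or convex payoffs,
# full commitment wins instead -- that is the regime where the car analogy's
# "don't go half-speed" answer is correct.
```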
That's not the only point I take issue with, but your statement is so poorly grounded and so adamant that I don't think it would be worthwhile to poke at it piecemeal. If you think I'm wrong, you can start by telling us the model (or models) within which your mission statement helps reduce existential risk.