A bit about our last few months:
- We’ve been working toward a simple, clear mission and an organization that actually works. We think of our goal as analogous to the transition that the old Singularity Institute underwent under Lukeprog (during which chaos was replaced by a simple, intelligible structure that made it easier to turn effort into forward motion).
- As part of that, we’ll need to find a way to make our mission and activities intelligible to outsiders.
- This is the first of several blog posts aimed at making our new form visible from the outside. (If you're in the Bay Area, you can also come meet us at tonight's open house.) (We'll say more about the causes of this mission-change, and the extent to which it is in fact a change, in an upcoming post.)
- We care a lot about AI Safety efforts in particular, and about otherwise increasing the odds that humanity reaches the stars.
- Also, we[1] believe such efforts are bottlenecked more by our collective epistemology than by the number of people who verbally endorse or act on "AI Safety", or any other "spreadable viewpoint" disconnected from its derivation.
- Our aim is therefore to find ways of improving both individual thinking skill and the modes of thinking and social fabric that allow people to think together, and to do this among the relatively small sets of people tackling existential risk.
Existential wins and AI safety
Who we’re focusing on, and why
- AI and machine learning graduate students, researchers, project managers, etc. who care; who can think; and who are interested in thinking better;
- Students and others affiliated with the “Effective Altruism” movement, who are looking to direct their careers in ways that can do the most good;
- Rationality geeks, who are interested in seriously working to understand how the heck thinking works when it works, and how to make it work even in domains as confusing as AI safety.
Brier-boosting, not signal-boosting
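For readers who haven't met the term: the Brier score is a standard measure of forecasting accuracy, the mean squared error between stated probabilities and what actually happened, where lower is better. "Brier-boosting" therefore means improving how well people's probability estimates track reality, as opposed to "signal-boosting" a particular message. Here is a minimal sketch (not from the original post, with made-up numbers purely for illustration):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between stated probabilities and 0/1 outcomes.
    Lower is better: 0.0 is a perfect forecaster, and always answering
    0.5 yields exactly 0.25."""
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Two hypothetical forecasters predicting the same five events
# (1 = the event happened, 0 = it didn't). All numbers are invented.
outcomes = [1, 0, 1, 1, 0]
overconfident = [0.9, 0.8, 0.6, 0.5, 0.4]
calibrated = [0.8, 0.2, 0.7, 0.9, 0.3]

print(brier_score(overconfident, outcomes))  # 0.244
print(brier_score(calibrated, outcomes))     # 0.054
```

"Brier-boosting" a community, on this framing, means driving that number down across its members, rather than increasing how many of them repeat a given conclusion.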
- Further discussion of CFAR’s focus on AI safety, and the good things folks wanted from “cause neutrality”
- CFAR's mission statement (link post, linking to our website).
But academics write for other academics, and journalists neither do nor can write well for a general audience. (They've tried. They can't. Remember Vox?)
AFAIK, there isn't a good outlet for compilations of facts intended for, and easily accessible by, a general audience, for reviews of books that weren't published just recently, and so on. Since LW isn't run for profit, and is run as outreach for, among other things, CFAR, whose target demographic would be interested in such an outlet, this could be a valuable direction for either LW or a spinoff site. But given the reputational risk (both personal and institutional) inherent in the process of generating new ideas, we may be better served by pivoting LW toward the niche I'm thinking of (a cross between a review journal, SSC, and, I don't know, maybe the CIA's World Factbook or RAND) and moving the generation and refinement of ideas into a separate container, maybe an anonymous blog or forum.
Academics write textbooks, popular books, and articles that are intended for a lay audience.
Nevertheless, I think it's great if LW users want to compile & present facts that are well understood. I just don't think we have a strong comparative advantage.
LW already has a reputation for exploring non-mainstream ideas. That attracts some and repels others. If we tried to sanitize ourselves, we probably would not get back the people who have been repelled, and we might lose the interest of some of the people we've attracted.