A bit about our last few months:
- We’ve been working on getting a simple clear mission and an organization that actually works. We think of our goal as analogous to the transition that the old Singularity Institute underwent under Lukeprog (during which chaos was replaced by a simple, intelligible structure that made it easier to turn effort into forward motion).
- As part of that, we’ll need to find a way to be intelligible.
- This is the first of several blog posts aimed at making our new form visible from the outside. (If you're in the Bay Area, you can also come meet us at tonight's open house.) (We'll talk more about the causes of this mission change, the extent to which it is in fact a change, etc., in an upcoming post.)
- We care a lot about AI Safety efforts in particular, and about otherwise increasing the odds that humanity reaches the stars.
- Also, we[1] believe such efforts are bottlenecked more by our collective epistemology than by the number of people who verbally endorse or act on "AI Safety", or any other "spreadable viewpoint" disconnected from its derivation.
- Our aim is therefore to find ways of improving both individual thinking skill and the modes of thinking and social fabric that allow people to think together, and to do this among the relatively small sets of people tackling existential risk.
Existential wins and AI safety
Who we’re focusing on, why
- AI and machine learning graduate students, researchers, project managers, etc., who care; who can think; and who are interested in thinking better;
- Students and others affiliated with the “Effective Altruism” movement, who are looking to direct their careers in ways that can do the most good;
- Rationality geeks, who are interested in seriously working to understand how the heck thinking works when it works, and how to make it work even in domains as confusing as AI safety.
Brier-boosting, not Signal-boosting
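For readers unfamiliar with the term: the Brier score is a standard measure of forecasting accuracy, the mean squared error between the probabilities someone states and what actually happens, where lower is better. Below is a minimal sketch in Python, with made-up forecasts and outcomes, of the quantity that "Brier-boosting" aims to drive down.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and binary outcomes.

    forecasts: probabilities in [0, 1] assigned to "event happens"
    outcomes:  0/1 values recording whether each event actually happened
    Lower scores mean more accurate, better-calibrated forecasting.
    """
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Made-up example: three forecasts and what actually happened.
print(brier_score([0.9, 0.2, 0.6], [1, 0, 1]))  # ≈ 0.07 -- fairly accurate
print(brier_score([0.9, 0.2, 0.6], [0, 1, 0]))  # ≈ 0.60 -- badly miscalibrated
```

The contrast the heading gestures at: "signal-boosting" amplifies a particular conclusion, while "Brier-boosting" tries to improve scores like this one, i.e. the accuracy of the thinking that produces conclusions.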
- Further discussion of CFAR’s focus on AI safety, and the good things folks wanted from “cause neutrality”
- CFAR's mission statement (link post, linking to our website).
Speaking for myself (one of the upvoters), I think that having a single leader is bad, but having a relatively small group of leaders is good.
With one leader, anything they do or say (or did or said years or decades ago) gets interpreted as "this is what the whole rationalist community is about". Also, I feel like focusing too much on one person could make others feel like followers, instead of striving to become stronger themselves.
But if we have a small team of people who are highly respected by the community, who publicly acknowledge each other, and who can cooperate with each other... then all we need for coordination is for them to meet in the same room once in a while and publish a common statement afterwards.
I don't want to choose between Eliezer Yudkowsky, Peter Thiel, and Scott Alexander (and other possible candidates, e.g. Anna Salamon and Julia Galef). Each of these people is really impressive in some areas, but none of them is impressive at everything. Choosing one of them feels like deciding which aspects we should sacrifice. Also, some competition is good, and a person who is great today may become less great tomorrow.
Or maybe the leader does not have to be great at everything, as long as they are great at "being a great rationalist leader", whatever that means. But maybe we actually don't have this kind of person yet. (Weak evidence: if a person with such skills existed, they would probably already be informally accepted as the leader of rationalists; they wouldn't wait until a comment on LW told them to step forward.) Peter Thiel doesn't seem to communicate with the rationalist community. Eliezer Yudkowsky is hiding on Facebook. Scott Alexander has an unrelated full-time job. Maybe none of them actually has enough time and energy to do the job of the "rationalist leader", whatever that might be.
Also, I feel like asking for a "leader" is the instinctive, un-narrow, halo-effect approach typically generated by the corrupted human hardware. What specific problem are we trying to solve? Lack of communication and coordination in the rationalist community? I suggest Community Coordinator as a job title, and it doesn't have to be any of these high-status people, as long as it is a person with good people skills who cooperates with them (uhm, maybe Cat Lavigne?). Maybe even a Media Speaker who would, once a week or once a month, collect information about "what's new in the rationalist community" and compose an official article.
tl;dr -- we don't need a "leader", but we need people who will do a few specific things which are missing; coordination of the community being one of them
Part of the advantage of having a leader is that he/she could specialize in leading us, and we could pay him/her a full-time salary.

"Also, I feel like asking for a 'leader' is the instinctive, un-narrow, halo-effect approach typically generated by the corrupted human hardware."

Yes, but this is what works.