The ten up-votes on this post are a signal that either we shouldn't have a leader, or, if we should, it would be difficult for him or her to overcome the rationality movement's opposition to having one.
Speaking for myself (one of the upvotes), I think that having a single leader is bad, but having a relatively small group of leaders is good.
With one leader, anything they do or say (or did or said years or decades ago) gets interpreted as "this is what the whole rationalist community is about". Also, I feel that focusing too much on one person could make others feel like followers, instead of striving to become stronger.
But if we have a small team of people who are highly respected by the community, and publicly acknowledge each other...
A bit about our last few months:
We care a lot about AI Safety efforts in particular, and about otherwise increasing the odds that humanity reaches the stars.
Also, we[1] believe such efforts are bottlenecked more by our collective epistemology than by the number of people who verbally endorse or act on "AI Safety", or any other "spreadable viewpoint" disconnected from its derivation.
Our aim is therefore to find ways of improving both individual thinking skill and the modes of thinking and social fabric that allow people to think together, and to do this among the relatively small set of people tackling existential risk.
Existential wins and AI safety
Who we’re focusing on, why
Brier-boosting, not Signal-boosting