Good point!
(And thanks for explaining clearly and noting where you learned about logarithmic scoring.)
I would suggest that "helping people think more clearly so that they'll find truth better, instead of telling them what to believe" already has a name, and it's "the Socratic method." It's unfortunate that this has the connotation of "do everything in a Q&A format", though.
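As an aside, since logarithmic scoring comes up here (and "Brier-boosting" below): both are proper scoring rules for probabilistic forecasts, and the difference is easy to see in a few lines. This is just an illustrative sketch, not anything from the original discussion:

```python
import math

def brier_score(p, outcome):
    """Squared error between forecast probability p and a binary
    outcome (0 or 1). Lower is better; 0 is a perfect forecast."""
    return (p - outcome) ** 2

def log_score(p, outcome):
    """Negative log-likelihood of the outcome under forecast p.
    Lower is better; it penalizes confident wrong forecasts
    much more harshly than the Brier score does."""
    return -math.log(p if outcome == 1 else 1 - p)

# A confident wrong forecast: Brier caps the penalty near 1,
# while the log score grows without bound as p -> 1.
print(brier_score(0.99, 0))  # close to 1
print(log_score(0.99, 0))    # ~4.6, and unbounded as p -> 1
```

Both rules reward calibration rather than merely endorsing the "right" conclusion, which is the spirit of the "Brier-boosting, not signal-boosting" framing.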
A bit about our last few months:
We care a lot about AI Safety efforts in particular, and about otherwise increasing the odds that humanity reaches the stars.
Also, we[1] believe such efforts are bottlenecked more by our collective epistemology than by the number of people who verbally endorse or act on "AI Safety", or any other "spreadable viewpoint" disconnected from its derivation.
Our aim is therefore to find ways of improving both individual thinking skill and the modes of thinking and social fabric that allow people to think together, and to do this among the relatively small set of people tackling existential risk.