Alex_Altair comments on Why CFAR? - Less Wrong
On reflection, this is an opportunity for me to be curious. The relevant community-builders I'm aware of are GiveWell, 80,000 Hours, CFAR, and Leverage.
Whom am I leaving out?
My model for what they're doing is this:
GiveWell isn't trying to change much about people at all directly, except by helping them find efficient charities to give to. It's selecting people by whether they're already interested in this exact thing.
80,000 Hours is trying to intervene in certain specific high-impact life decisions like career choice as well as charity choice, effectively by administering a temporary "rationality infusion," but isn't trying to alter anyone's underlying character in a lasting way beyond that.
CFAR has the very ambitious goal of creating guardians of humanity with hero-level competence, altruism, and epistemic rationality, but has so far mainly succeeded in some improvements in personal effectiveness for solving one's own life problems.
Leverage has tried to directly approach the problem of creating a hero-level community, but doesn't seem to have a track record of concrete specific successes, replicable methods for making people awesome, or a measure of effectiveness.
Do any of these descriptions seem off? If so, how?
PS: Before the recent CFAR workshop I attended, I don't think I would have stuck my neck out and made these guesses in order to find out whether I was right.
MIRI has been a huge community-builder, through LessWrong, HPMOR, et cetera.
Those predate the founding of CFAR; at that time MIRI (then SI) was doing double duty as a rationality organisation. It has since explicitly pivoted away from that role and from community building.