Alex_Altair comments on Why CFAR? - Less Wrong

71 Post author: AnnaSalamon 28 December 2013 11:25PM


Comment author: CarlShulman 29 December 2013 12:37:44AM 19 points

CEA and GiveWell are both building communities; GiveWell has more than doubled its community (by measures such as number of donors and money moved, with web traffic growing slightly more slowly) every year, year after year. Giving What We Can's growth has been more linear, but 80,000 Hours has also had good growth (albeit somewhat less, and over a shorter time).

That makes the bar for something like CFAR much, much higher than your model suggests, although there is merit in experimenting with a number of different models (and the Effective Altruism movement needs to cultivate the "E" element as well as the "A", which something along the lines of CFAR may be especially helpful for).

ETA: I went through more GiveWell growth numbers in this post. Absolute growth excluding Good Ventures (a big foundation that has firmly backed GiveWell) was fairly steady for the 2010-2011 and 2011-2012 comparisons, although growth has looked more exponential in other years.

Comment author: Benquo 29 December 2013 02:44:20AM 13 points

On reflection, this is an opportunity for me to be curious. The relevant community-builders I'm aware of are:

  • CFAR
  • 80,000 Hours / CEA
  • GiveWell
  • Leverage Research

Whom am I leaving out?

My model for what they're doing is this:

GiveWell isn't trying to change much about people at all directly, except by helping them find efficient charities to give to. It's selecting people by whether they're already interested in this exact thing.

80,000 Hours is trying to intervene in certain specific high-impact life decisions like career choice as well as charity choice, effectively by administering a temporary "rationality infusion," but isn't trying to alter anyone's underlying character in a lasting way beyond that.

CFAR has the very ambitious goal of creating guardians of humanity with hero-level competence, altruism, and epistemic rationality, but has so far mainly succeeded in some improvements in personal effectiveness for solving one's own life problems.

Leverage has tried to directly approach the problem of creating a hero-level community, but doesn't seem to have a track record of concrete specific successes, replicable methods for making people awesome, or a measure of effectiveness.

Do any of these descriptions seem off? If so, how?

PS: Before the recent CFAR workshop I attended, I don't think I would have stuck my neck out and made these guesses in order to find out whether I was right.

Comment author: Alex_Altair 29 December 2013 03:31:18AM 9 points

MIRI has been a huge community-builder, through LessWrong, HPMOR, et cetera.

Comment author: ciphergoth 29 December 2013 08:59:37AM 9 points

Those predate the founding of CFAR; at that time MIRI (then SI) was doing double duty as a rationality organisation. It has since explicitly pivoted away from that role and from community building.