Whispers have been going around on the internet. People have been talking, using words like "defunct" or "inactive" (not yet "dead").

The last update to the website was December 2020 (the copyright notice on the website reads "© Copyright 2011-2021 Center for Applied Rationality. All rights reserved."), and the last large-scale public communication was at the end of 2019 (that I know of).

If CFAR is now "defunct", it might be useful for the rest of the world to know that, because the problem of making humans and groups more rational hasn't disappeared, and some people might want to pick up the challenge (and perhaps talk to people who were involved to rescue some of the conclusions and insights).

Additionally, it would be interesting to hear why the endeavour was ultimately abandoned, so that others can avoid going on the same wild-goose chases (or, in the very boring case, to discover that they simply ran out of funding, though that seems unlikely to me).

If CFAR isn't "defunct", I can see a few possibilities:

  • It's working on some super-secret projects, perhaps in conjunction with MIRI (which sounds reasonable enough, but there would still be value left on the table in distributing rationality training and raising civilizational sanity)
  • They are going about their regular business, but the social network they operate in is large enough that they don't need to advertise on their website (I think this is unlikely; it contradicts most of the evidence in the comments linked above)

So, what is going on?


3 Answers

AnnaSalamon

CFAR is not defunct.  I am at a workshop right now with Jack and Vaniver (who also work with CFAR), and 7 other people, who are mostly adjunct CFAR instructors (aka, people who are skilled enough to run CFAR classes if they want, and who've trained with us some, but who usually do only very few hours of CFAR-esque work in a typical year), trying things out.  Dan Keys is the fourth person who is also on CFAR's "core staff" (though we are all part-time hourly); there's also a number of other adjunct instructors.

Like some at MIRI, I've been taking something of a sabbatical year, loosely speaking.  That is, I've been looking at the world, trying to understand its joints.  For me this sometimes involves running small experimental workshops, e.g. to try to see a thing with a group of people helping and geeking out about it together.  It doesn't so far involve trying to do anything at scale.

There are no super-secret projects at CFAR.  I suppose I might not say if there were, but I at least wouldn't say this.

We haven't run mainlines in a while.  Irena (who is an adjunct CFAR instructor, and also upstairs in the venue asleep right now) keeps saying she may run one in Prague; if so, I and some others may fly in to help.  Davis keeps saying he may co-organize one in Berkeley, in which case I'll probably help also, but it's not quite clear.  If someone wants one and wants to co-organize, I may be down in general, but I don't quite have the fire to generate those on my own right now.  There may be people who would like to attend a CFAR workshop (okay, I know there are at least some), but running one isn't quite the thing that feels like it'll help me unravel the things I'm trying to understand, and I also don't quite have the stomach for it in some ways, although I'm glad that the people who attended got to attend, so this is a bit of a muddy thing to express well.  It's possible the workshop we're currently experimenting with upstairs may lead to a revised mainline at some point, one my heart could be more solidly in, but that "scale this up to a mainline" outcome is not a thing we're driving at especially hard.

CFAR's internal structure lately involves a "telos committee" that authorizes the allocation of funds internally by checking that the individual (from within CFAR's "core team" or CFAR's adjunct instructors / outside collaborators) has "telos" for the thing they want to do (and tries to get out of the way and let people do things they have "telos" for, without them needing to persuade others much).  I like how this has been going.  It is pretty low-key, though.  It is plausible to me that we had to wind down before we could build afresh.  We wound things down by ceasing to do anything that nobody had "telos" for (even things that were traditional, such as mainlines).  Sort of on a theory that things would be easier to see, or at minimum that we'd have more slack with which to see, if there wasn't any of that sort of clutter around.

I would not advise anyone wishing to solve human rationality, or to do anything else awesome, to refrain from attempting said awesome thing on the theory that we or anyone else has that covered.  Such thinking always seemed insane to me; if it is more transparently insane now, that seems good.  In hindsight, I wish we had chosen a more local name than the "Center for Applied Rationality" (such as "A Center for Applied Rationality" or "some random group in Berkeley who are taking a go at some applied rationality stuff while also having some more local goals about supporting MIRI's staffing needs, and should totally not be occupying the entire namespace here").  We do not have a super secret rationality sauce such that people attempting such a thing from outside CFAR are at a bad disadvantage, or anything like that.  If you want to try and want to make sure we don't know something you're missing first, I'm probably happy to talk.  Others might be too but I can't speak for them.

In terms of whether there is some interesting thing we discovered that caused us to abandon e.g. the mainline: I can't speak for more than myself here either.  But for my own take, I think we ran to some extent into the same problem that something-like-every self-help / hippy / human potential movement since the '60s or so has run into, which e.g. the documentary (read: 4-hour somewhat intense propaganda film) Century of the Self is a pretty good introduction to.  I separately or also think the old mainline workshops provided a pretty good amount of real value to a lot of people, both directly (via the way folks encountered the workshop) and via networks (by introducing a bunch of people to each other who then hit it off and had a good time and good collaborations later).  But there's a thing near "self-help" that I'll be trying to dodge in later iterations of mainline-esque workshops, if there are later iterations.  I think.  If you like, you can think with some accuracy of the small workshop we're running this week, and its predecessor workshop a couple months ago, as experiments toward having a workshop where people stay outward-directed (stay focused on inquiring into outside things, or building stuff, or otherwise staring at the world outside their own heads) rather than focusing on e.g. acquiring "rationality habits" that involve a conforming of one's own habits/internal mental states with some premade plan.

The above is somewhat scattered; feel free to ask more questions.

gjm

You refer to "the same problem that something like every self-help / hippy / human potential movement since the 60s has run into", but then don't say what that problem is (beyond gesturing to a "4-hour-long propaganda film").

I can think of a number of possible problems that all such movements might have run into (or might credibly be thought to have run into) but it's not obvious to me which of them, if any, you're referring to.

Could you either clarify or be explicit that you intended not to say explicitly what you meant? Thanks!

[EDITED to fix a misquotation that made it look like Anna wrote something ungrammatical; sorry]

Sorry.  I don't have a good short description of the problem, and so did not try to say explicitly what I meant.  Instead I tried to refer to a 4-hour film, "Century of the Self," as trying to describe the same problem.

I may come back later with an attempted description, probably not a good one.

gjm

Thanks. I am, realistically, not going to watch four hours of propaganda (assuming your description of it is accurate!) in the hope of figuring out what you meant, so in the hope that you will come back and have at least a sketchy try at it I'll list my leading hypotheses so you have something concrete to point at and say "no, not that" about.

  • It turns out that actually it's incredibly difficult to improve any of the things that actually stop people fulfilling what it seems should be their potential; whatever is getting in the way isn't very fixable by training.
  • "Every cause wants to be a cult", and self-help-y causes are particularly vulnerable to this and tend to get dangerously culty dangerously quickly.
  • Regardless of what's happening to the cause as a whole, there are dangerously many opportunities for individuals to behave badly and ruin things for everyone.
  • In this space it is difficult to distinguish effective organizations from ineffective ones, and/or responsible ones from cultish/abusive ones, which means that if you're trying to run an effective, responsible one you're liable to find that your potential clients get seduced by the ineffective irresponsible ones that put more of their efforts into marketing.
  • In this space it is difficult to distinguish effective from ineffective interventions, which means that individuals and organizations are at risk of drifting into unfalsifiable woo.
clone of saturn
As someone who has watched "Century of the Self" I'd guess it's more along the lines of:

  • What people want is not what they need. People don't need much help to self-improve in ways which are already consonant with their natural desires and self-image. So any safe and effective self-improvement program would be a nonstarter in the free market, because it would immediately repel the very people who could benefit from it.
AnnaSalamon
Fair enough.  FWIW, I found the movie good / full of useful anecdata for piecing together a puzzle that I personally care a lot about, and so found it rewarded my four hours; but our interests are probably pretty different, and I know plenty who would find it empty and annoying.

On reflection, I shouldn't have written my paragraph the way I did in my parent comment; I am not sure what trouble something-like-every self-help thingy has run into, I just suspect there're threads in common based on how things look.  I might be wrong about it.

Still, I wrote up my take on some of the hypotheses you listed (I appreciate that you took the trouble to list them; thanks!), and my take in general as to why we didn't get a more formidable art of rationality.  Many of the factors I list remind me of my guesses at a bunch of stuff that also happened to other self-help groups and the "human potential movement" and so on, but I haven't researched those well and might be wrong.  My take is long-winded, so I posted it blog-post style.  I'd love your/others' thoughts if you have them: My take / my reply to your comment.

Seconding gjm's reply, and wondering what can possibly be so difficult to talk about that even a 4-hour film can only be an introduction? I watched a few 20-second snippets scattered over its whole length (since this is an Adam Curtis film, that is all that is needed), and I am sceptical that the line that he draws through a century of history corresponds to a load-bearing rope in reality.

I suspect you should update the website with some of this? At the very least copying the above comment into a 2022 updates blog post.

The message 'CFAR did some awesome things that we're really proud of, now we're considering pivoting to something else, more details to follow' would be a lot better than the implicit message you may currently be sending: 'nobody is updating this website, the CFAR team lost interest, and it's not clear what the plan is or who's in charge anymore.'

Afterthoughts / later additions:

I used to be in-practice orienting to trying to help MIRI with recruiting.  (Not, mostly, to trying to develop an art of human rationality, though there was some of that.)

MIRI is mostly not recruiting, or at least not in the way it used to be for the research programs it discontinued, so that is no longer a viable model for impact.  If you like, you could reasonably accurately see this as a cause of why I personally have been primarily trying to understand the world and to look for joints, rather than primarily trying to run mainlines at scale.

I do not think I've given up in any important sense, and I do not personally think CFAR has given up in any important sense either, although one of the strengths of our community has always been its disagreeableness, and the amount of scaling down and changing activities and such is enough that I will not think someone necessarily uninformed if they say the opposite.

My guess is actually that we'll be less focused on AI or other narrow interventions, and more focused on something sort of like "human rationality broadly" (without "rationality" necessarily being quite the central thing -- maybe more like: "san... (read more)

And a second afterthought:

I think for a long time CFAR was trying, though maybe not in a very smart/calibrated/wise/accurate way, to have a public relationship with "the rationality community" along the lines of "we will attempt this project that you guys care about; and you guys may want to collaborate with us on that."  (Details varied by year; I think at the beginning something like this was more intended, accurate, and sincere, but after a while it was more like accumulated branding we didn't mean but didn't update.)

I think at the moment we are not trying to take on any public mantles, including not that one.

This is probably also part of what's up with us not prioritizing more public communication about CFAR.  I, and I think others, are happy to discuss what's going on, but it's not "here is a thing we're doing, please believe in its potential."

I honestly don't really get why the "telos committee" is an overall good idea (though there may be some value in experimenting with that sort of thing)—intuitively, a large portion of extremely valuable projects are going to be boring, and the sort of thing that people are going to feel "burnt out" on a large portion of the time. Shutting down projects that don't feel like saving the world probably doesn't select well for projects that are maximally effective. I might just be misunderstanding what you mean here, of course.

I really like this part:

I would not advise anyone wishing to solve human rationality, or to do anything else awesome, to refrain from attempting said awesome thing on the theory that we or anyone else has that covered.

As someone who worked for CFAR for a couple years and then quit at the beginning of 2021: In addition to this advice, I would also advise that anyone wishing to gain basic skill in rationality, teaching, and workshop running, because they do not yet feel ready to solve human rationality or do anything else awesome, should pursue some strategy... (read more)

If you want to try and want to make sure we don't know something you're missing first, I'm probably happy to talk. 

This is great to hear.
The Guild of the ROSE is striving to teach rationality to the layperson, and we are excited to carry on the torch you folks lit.
We will be reaching out shortly.

Ben Pace

Don't know if I have all the info, but I believe Anna Salamon continues to run small experimental workshops. For instance there's one happening right now.

But both CFAR and MIRI exist much less than they used to, having wound down, let go of lots of staff, and not done much to wind back up again.

I'll be excited for either of them to start new projects. That said, I'm not betting on CFAR to solve rationality or MIRI to solve alignment, and I'd encourage anyone who would like to see those problems solved to work on them directly.

Context: used to work for CFAR, currently work for MIRI.

"CFAR exists much less than it used to" feels true.  "MIRI exists much less than it used to" feels true, but false-by-implication when lumped in with the comment about CFAR, because that makes them feel like similar reductions and that's not at all the case.

CFAR is essentially nonexistent/non-recognizable from the perspective of someone who attended a workshop 2014 - 2019.  There are (if I understand correctly) 2-3 employees at the moment, and projects are spun up one at a time.  There might be more ambitious or enduring things in the future, but I think that's all up in the air?

Whereas MIRI still has multiple functioning research groups that have been pursuing their research directions more-or-less uninterrupted the whole time.  We did wind down a large project that had caused us to hire a bunch of engineers, and those engineers don't work for us anymore, and also it's quite true that Nate and Eliezer (our two seniormost researchers) do not have a concrete angle of attack on the acute risk problem.

But if CFAR is something like 10% of its 2018 self, MIRI is more like 70-90% of its 2018 self.  It swelle... (read more)

[This comment is no longer endorsed by its author]
Ben Pace
(For the record, these comments by Duncan seemed to me both helpful and closer to the ground than mine; I don't know why Duncan retracted them.)
Duncan Sabien (Deactivated)
Oh, Anna chimed in and they seemed to have been in-many-places directly contradicted by Anna's statements.

Duncan Sabien here, worked at CFAR 2015 to 2018 (currently work at MIRI).

My understanding of the state of CFAR:

"Defunct" is a reasonable description, relative to the org that ran 5-15 workshops per year and popularized TAPs and Goal Factoring and Double Crux and so forth.

Currently, it is not-quite-accurate but closer to true than false to say "CFAR is currently just two people."  The org still has the venue, as far as I know, and it's still occasionally being used by the EA/rationalist/longtermist communities.  The org also still has some amount of funding, which it is using for various individual projects (e.g. I don't know if former CFAR employees like Dan Keys or Elizabeth Garrett are getting CFAR grants to run their own individual investigations and experiments, but I would not be surprised and also to be clear this would be a pretty appropriate and good thing, according to me).

There are some smaller, quieter, workshop-esque things happening from time to time, but they are more geared toward specific audiences or accomplishing narrow goals, and they are not the generalized "develop the art of human rationality and deliver that art to high-impact people" goal that CFAR used to somewhat fill.  As far as I can tell, there's a decent chance that new ambitious projects might rise from the ashes, so to speak, but they'll likely be AI-xrisk oriented.

I personally have been wishing for more clarity on all of this, for precisely the reason that I remain interested in people furthering human rationality and would like people to not be thinking that "CFAR is on it" when afaik it has not been on it since some time in 2019.  

I'm part of a small group of people who might plausibly launch new projects in that direction, and I myself am currently running one-off workshops-for-hire (am typing this from Day 3 of a workshop in the Bay Area, as it happens) for groups that are capable of pulling together participants, venue, and ops and just need the content/leadership.

Additionally, it would be interesting to hear why the endeavour was abandoned in the end, to avoid going on wild goose-chases oneself (or, in the very boring case, to discover that they ran out of funding (though that appears unlikely to me)).

I certainly cannot speak with any authority or finality for CFAR, having not been there since late 2018/early 2019.  But my sense is more like "it was always more about the AI fight than about general rationality, and the org in its evolved state was not serving the AI fight goals particularly well, so it just kinda fizzled and each of its individual members struck out on their own new path."

[This comment is no longer endorsed by its author]
16 comments
mdt

the problem of making humans and groups more rational hasn't disappeared, and some people might want to pick up the challenge (and perhaps talk to people who were involved in it to rescue some of the conclusions and insights).

I'd think (and encourage the idea) that anyone is welcome to pick up the challenge, regardless of the state of CFAR. More people working on the challenge seems like a great thing to have.

Yep, agreed. Especially if you can be clear about how your approach differs from CFAR's.

Gentle reminder: I offered a method for creating Beisutsukai that I'm pretty darn sure would work. Anyone is welcome to pick it up and make it work. I'm almost certainly not going to; I'm busy doing something related but quite different. But I'd still love to see someone do this. And I'm pretty darn sure it's never ever going to be CFAR — not unless it dies and something with a completely different soul inherits its name.

I'd like to upvote reading Val's linked post, if someone's wondering whether to bother reading it and likes my opinions on things.

Also agreed.

That sounds mostly true in general (unless an area is really saturated).

I got a lot of benefit from attending one of their workshops.  I hope they survive.

gjm

I thought CFAR had already largely pivoted away from trying to solve "the problem of making humans and groups more rational" to (optimistic version) "the problem of making people working on AI safety more rational" or (cynical version) "the problem of persuading smart people to work on AI safety through rationality training". So someone wanting to prioritize solving the original problem already shouldn't have been relying on CFAR to do it.

This rhymes with some comments by Duncan Sabien and Anna Salamon I just found:

In the last two years, CFAR hasn't done much outward-facing work at all, due to COVID, and so has neither been a MIRI funnel nor definitively not a MIRI funnel.

from here and

Yes, but I would predict that we won't be the same sort of MIRI funnel going forward. This is because MIRI used to have specific research programs that it needed to hire for, and it was sponsoring AIRCS (covering direct expenses plus loaning us some researchers to help run the thing) in order to recruit for that, and those research programs have been discontinued and so AIRCS won't be so much of a thing anymore.

from here.

The clearest recent statement of CFAR's vision is

Also, more broadly, CFAR has adopted different structures for organizing ourselves internally, and we are bigger now into "if you work for CFAR, or are a graduate of our instructor training program, and you have a 'telos' that you're on fire to do, you can probably do it with CFAR's venue/dollars/collaborations of some sorts" (we're calling this "platform CFAR," Elizabeth Garrett invented it and set it up maybe about a year ago, can't remember), and also into doing hourly rather than salaried work in general (so we don't feel an obligation to fill time with some imagined 'supposed to do CFAR-like activity' vagueness, so that we can be mentally free) and are also into taking more care not to have me or anyone speak for others at CFAR or organize people into a common imagined narrative one must pretend to believe, but rather into letting people do what we each believe in, and try to engage each other where sensible. Which makes it a bit harder to know what CFAR will be doing going forward, and also leaves me thinking it'll have a bit more variety in it. Probably.

from the same comment.

This constrains my projection of what CFAR does well enough that my curiosity is not as ravenously hungry anymore, but if anyone "in the know" still wants to chime in with recent info I'd still be happy about it.

I think the conclusion I take from it is ~"There's a bunch of individual people who were involved with CFAR still doing interesting stuff, but there is no such public organisation anymore in a meaningful sense (although shards of the organisation still help with AIRCS workshops); so you have to follow these individual people to find out what they're up to. Also, there is no concentration of force working towards a publicly accessible rationality curriculum anymore."

Maybe that comes out as too pessimistic? But I don't know how to better express it. And these decisions probably make more sense if you assume short AI timelines.

Also, there is no concentration of force working towards a publicly accessible rationality curriculum anymore.

This is true but also there is a growing cluster of people who are considering various collaborative possibilities here (more than six that I'm occasionally in contact with and surely nonzero that I am not).

I think the conclusion I take from it is ~"There's a bunch of individual people who were involved with CFAR still doing interesting stuff, but there is no such public organisation anymore in a meaningful sense (although shards of the organisation still help with AIRCS workshops); so you have to follow these individual people to find out what they're up to. Also, there is no concentration of force working towards a publicly accessible rationality curriculum anymore."


This seems about right to me personally, although as noted there is some network / concentration of force working toward... things individuals affiliated with us see as worth working toward, in loose coordination.  (But "a publicly accessible rationality curriculum" is not a thing we've been concentrating force toward, so far.)

I'm a person who has lived in the Bay area almost the whole time CFAR has existed, and am also moderately (though not intensely) intertwined with that part of the rationalist social network. I was going to write up my own answer but I think you pretty much nailed it with your conclusion here, especially with the part about distinguishing individual people from the institution.

If this is the case, it would be really nice to have confirmation from someone working there.

Agreed, although as I noted a bit above, my guess is that we're a bit more focused on human rationality/similar overall now (vs in previous years when gjm's comment was more true), but not that people ought to count on "CFAR" as having that covered in any way.

A different point, which others in this comment thread have also mentioned, is that the people who used to work at CFAR are still around, many are doing things, and a number of CFAR-esque workshops are being run by various ex-instructors and other groups.  So if you go with the "CFAR dissolved" narrative (which fits for me more than "CFAR is defunct" actually, for whatever reason, I guess because when I hear "CFAR is" I think of the people currently upstairs in the venue, who seem pretty real, but when I hear "CFAR dissolved" I think more of the people and curriculum-bits and such that were here in 2018 or 2014), it's worth noting that the people and memories and units and such are still floating around in a lot of cases, as opposed to the memories and capacities having been mostly lost.

lc

Man, if they are defunct that's sad. I didn't even know.

using words like "defunct" or "inactive" (not yet "dead").

Those words seem plausible. If they were actually dead though, I'd expect that to be something that they'd announce, for reasons you mention, so I don't think they're actually dead.

Maybe they're struggling and considering shutting down but taking a long time to decide because it requires a lot of thought and/or because it is hard to pull the trigger.

Obviously a different market and more mainstream, but the Alliance for Decision Education has funding, big names on its board, and plans in motion to scale rationality training (under a different name): https://alliancefordecisioneducation.org/