In response to Why CFAR's Mission?
Comment author: ChristianKl 02 January 2016 12:30:42PM *  12 points [-]

Why is CFAR's main venue for teaching those skills a 4-day workshop?

Why not weekly classes of 2 to 3 hours?
Why not a focus on written material as the original sequences had?
Why not a focus on creating videos that teach rationality skills?
Why not focus on creating software that trains the skills?

Comment author: AnnaSalamon 11 January 2016 02:59:41AM 14 points [-]

The short answer: because we're trying to teach a kind of thinking rather than a pile of information, and this kind of thinking seems to be more easily acquired in an immersive multi-day context -- especially a context in which participants have set aside their ordinary commitments, and are free to question their normal modes of working/socializing/etc. without needing to answer their emails meanwhile.

Why I think this: CFAR experimented quite a bit with short classes (1 hour, 3 hours, etc.), daylong commuter events, multi-day commuter events, and workshops of varying numbers of days. We ran our first immersive workshop 6 months into our existence, after much experimentation with short formats; and we continued to experiment extensively with varied formats thereafter.

We found that participants were far more likely to fill in high scores to "0 to 10, are you glad you came?" at multi-day residential events. We found also that they seemed to us to engage with the material more fully and acquire the "mindset" of applied rationality more easily and more deeply, and that conversations relaxed, opened up, and became more honest/engaged as each workshop progressed, with participants feeling free to e.g. question whether their apparently insoluble problems were in fact insoluble, whether they in fact wanted to stay in the careers they felt "already stuck" in, whether they could "become a math person after all" or "learn social skills after all" or come to care about the world even if they hadn't been born that way, etc.

We also find we learn more from participants with whom we have more extensive contact, and the residential setting provides that well per unit staff time -- we can really get in the mode of hanging out with a given set of participants, trying to understand where they're at, forming hypotheses that might help, trying those hypotheses real-time in a really data-rich setting, seeing why that didn't quite work, and trying again... And developing better curricula is perhaps CFAR's main focus.

That said, as discussed in our year-end review & fundraiser post, we are planning to attempt more writing, both for the sake of scalable reading and for the sake of more explicitly formulating some of what we think we know. It'll be interesting to see how that goes.

(You might also check out Critch's recent post on why CFAR has focused so much on residential workshops.)

Comment author: TheMajor 05 January 2016 06:40:44PM 13 points [-]

How very deep. But if I'm not mistaken, the original argument around Chesterton's fence is that somebody had gone through great effort to put a fence somewhere, and presumably would not have wasted that time if it were useless anyway. In your example, "the common practice of taking down Chesterton fences", this is not the case. The general principle is to not undo that which others have worked hard to create, unless you are certain that it is useless/counterproductive. Nobody worked hard on making sure people could remove fences without understanding them (or at the very least I'm willing to claim that this is counterproductive), so this practice is not protected by the principle.

Comment author: AnnaSalamon 09 January 2016 06:42:02AM *  2 points [-]

Nobody worked hard on making sure people could remove fences without understanding them ..., so this principle is not protected.

This seems false to me. I agree with Stuart's opening suggestion that democracy, free markets, and the Enlightenment more generally are in part designed to make it easy to dismantle historical patterns (e.g. religion, guilds, aristocracy, traditions; one can see this discussion explicitly in e.g. Adam Smith, Locke, Tocqueville, Bacon). Bostrom's "status quo bias" also comes to mind.

Comment author: Academian 19 December 2015 04:12:34AM 27 points [-]

Just donated $500 and pledged $6500 more in matching funds (10% of my salary).

Comment author: AnnaSalamon 21 December 2015 11:02:24AM 8 points [-]

Thank you! We appreciate this enormously.

Comment author: Gleb_Tsipursky 19 December 2015 05:53:36PM 9 points [-]

Great progress, and I just donated! As a nonprofit director myself, I am especially happy to see your progress on systematization going forward. That's what will help pave the path to long-term success. Great job!

Comment author: AnnaSalamon 21 December 2015 11:01:59AM 3 points [-]

Thanks!

Comment author: lukeprog 20 December 2015 07:57:54PM 18 points [-]

Just donated!

Comment author: AnnaSalamon 21 December 2015 11:01:55AM 6 points [-]

Thanks!

Comment author: AnnaSalamon 21 December 2015 08:52:43AM 3 points [-]

We revised the text some after posting; apologies to anyone who replied to original text that has now been changed.

Comment author: gjm 20 December 2015 10:13:58PM 6 points [-]

I find this bit quite alarming:

We hit some of our concrete goals for 2015, and pivoted away from others.

We created a metric for strategic usefulness, hitting the first goal; we started tracking that metric, hitting the second goal.

We chose to change focus from boosting alumni scores on these components, however. [...] Focusing on boosting those components no longer made sense, and we transitioned away from that target.

because it seems to me to amount to this: "Our goals for the year included putting in place metrics by which we could tell whether we were actually achieving what we want. So we did that. And then we decided we didn't want to track those, so we threw them away again."

... which is, to be sure, a reasonable course of action if you discover that you were measuring the wrong thing -- but is also exactly what you'd see if CFAR had found (or guessed) that it wasn't making progress according to those metrics, and didn't want that fact to be too visible.

Combined with an apparent shift from "hold workshops that enhance people's instrumental rationality" in the direction of "hold workshops that funnel people into MIRI", and the discovery that real rationality apparently necessarily involves "deep caring" ... I dunno, maybe it's all absolutely fine, but it looks just a little too much like a transition from "rationality enhancer" to "cult recruitment vehicle".

Comment author: AnnaSalamon 21 December 2015 08:51:25AM *  4 points [-]

Sorry. Original phrasing around how we were now going to measure was pretty bad, I agree. I just edited it. I had been bothered by the very text you quoted, and we had an internal thread where we all discussed that and agreed that the phrases were wrong... but we were slow about that, and you commented while we were discussing! The new text more closely reflects the actual structure of how we've been thinking about it all.

It's a bit tricky to publish a long post with many co-editors without letting something inaccurate through (at least in a sleep-deprived marathon like the one we very rationally used before publishing this one...; there were a bunch of us working collaboratively on the text). But we should probably in fact have edited a bit more before posting; anyhow, my apologies for editing this text out from under you after you commented.

Comment author: AnnaSalamon 20 December 2015 02:48:29AM *  14 points [-]

I would be extremely glad to talk to anyone about CFAR, the impact of marginal CFAR donations on the world's talent bottlenecks, or any related things. (If you like, we can try the double crux game.) You can book time with me here: http://www.meetme.so/cfar-anna

In response to comment by gwillen on LessWrong 2.0
Comment author: Kaj_Sotala 03 December 2015 12:56:52PM 18 points [-]

I agree with this. I might not get as much value from LW as I used to, but having it up and running still has net positive value for me.

In response to comment by Kaj_Sotala on LessWrong 2.0
Comment author: AnnaSalamon 14 December 2015 04:49:06AM 5 points [-]

Me, too.

In response to LessWrong 2.0
Comment author: AnnaSalamon 14 December 2015 04:45:15AM 26 points [-]

I need to think about this more, but my present impression is in favor of keeping LW and making it good. It seems to me that we gain quite a bit from having a Schelling place to post, such that evidence and arguments posted to that Schelling location become common knowledge.

I agree that LW has been doing fairly badly lately, but I am fairly seriously attempting to craft a post sequence designed to combat that; if not LW, I'd favor some other method that has a single secure Schelling spot.

My impression is that we know a fair bit of applied rationality that was not successfully conveyed by Eliezer's original Sequences (although a fair chunk of it seems to me to be implicit in those Sequences), and that we are now in a position to make a more serious attempt to convey it in writing.
