Not a guide, but I think the vocab you use matters a lot. Try tabooing 'rationality', the word itself mindkills some people straight to straw vulcan etc. Do the same with any other words that have the same effect.
Revisiting past conversations I think this is exactly what has been happening. When I mention rationality, reason, logic it becomes a logic v. emotion discussion. I'll taboo in future, thanks!
I have large PR problems when talking about rationality with others unfamiliar with it, with the Straw Vulcan being the most common trap conversation will fall into.
Are there any guides out there in the vein of the EA Pitch Wiki that could help someone avoid these traps and portray rationality in a more positive light? If not, would it be worth creating one?
So far I've found: how rationality can make your life more awesome, rationality for curiosity's sake, rationality as winning, PR problems, and the contrary view that rationality isn't all that great.
[CFAR's newest instructor, here; longtime educator and transhumanist-in-theory with practical confusions]
ScottL—I'm just coming out of the third workshop in six weeks, and flying to Boston to give some talks, so I'm exhausted and haven't had a chance to read through your compilation yet. I will, soon (+1 for the effort you've put forth), but in the meantime I wanted to pop in and give some thoughts on the comments thus far.
Benito, Rainbow, and Crux—+1 for all three perspectives.
Can CFAR content be learned from a compilation or writeup? Yes. After all, it's not magic—it was developed by careful thinkers looking at research and at their own cognition, iterated over 20+ formal attempts (and literally hundreds of informal ones) to share those same insights with others. It's complex, but it's also fundamentally discoverable.
However, there are three large problems (as I see it, speaking as the least experienced staff member). The first is the most obvious—it's hard. It's hard like learning karate from text descriptions is hard. If you go about this properly, without being sloppy or taking shortcuts or making dangerous assumptions, then you're in for a LONG, difficult haul. Speaking as someone who pieced together the discipline of parkour back in 2003, from scattered terrible videos (pre-YouTube) and a few internet comment boards—pulling together a cohesive and working practice from even the best writeups is a tremendously difficult task. It's better on almost every axis with instructors, mentors, friends, companions—people to help you avoid the biggest pitfalls, help you understand the subtle points, tease apart the interesting implications, shore up your motivation, assist you in seeing your own mistakes and weaknesses. None of that is impossible on your own, but it's somewhere between one and two orders of magnitude more efficient and more efficacious with guidance.
The second is corruption. As Benito points out, a large part of the problem of rationality instruction is finding things that actually work—if mere knowledge of the flaws were sufficient to protect us from the flaws, then everybody who cared enough could just slog through Heuristics and Biases and be something like 70% of the way there. We've already put several thousand thought-hours and 20+ iterations into tinkering with content, scaffolding, presentation, and practice. What we've got works pretty well, but progress has been incremental and cumulative. What we had before worked less well, and what we had before that worked less well still.
Picture throwing out a complete text version of our current best practices, exposing it to the forces of memetic selection and evolution. Fragments would get seized upon, and quoted out of context; bits of it would get mixed up with this and that; things would be presented out of order and read out of order; people would skip and skim and possibly completely ignore sections they THOUGHT they already knew because the title or the first paragraph seemed mundane or familiar. And there wouldn't be the strong selection pressure toward clarity and cohesion that we've been providing, top-down—instead, there would be selection pressures for what's memorable, pithy, or easily crystallized, none of which would be likely to drive the art forward and make the content BETTER. Each step away from our current best practices is much more likely to be a decrease in quality rather than an increase, and though you and others here on LW are likely to have the necessary curiosity and diligence to "do it right," that doesn't mean that the majority of people exposed to the memes in this way share your autodidactic rigor.
The third problem (related to the second) is idea inoculation. Having seen crappy, distorted versions of the CFAR curriculum (or having attempted to absorb it from text, and failed), a typical human would then be much, much less receptive to other, better explanations in the future. This is why, even within the context of the workshop, we often ask that participants not read the relevant sections of their workbooks until AFTER a given lecture or activity. I'm going to assume this is a familiar concept, and not spend too many words on it, but suffice it to say that I believe an uncanny valley version of our curriculum trending on the internet for one day could produce enough anti-rationality in the general population to counterbalance all of our efforts so far.
None of these problems are absolute in nature. The Sequences exist, and are known to be helpful. And clearly, Rainbow and Benito have gotten at least some value out of the writeups they've gleaned and assembled themselves. Again, there's nothing to stop others from having the same insights we've had, and there's nothing to stop a diligent autodidact from connecting scattered dots.
But they are statistical. They are real. They become quite scary, once you start talking big numbers of people and the free exchange of content-sans-context. And that's without even talking about other concerns like framing, signaling, inferential distance, etc. Lots of worms in this can.
So the question then becomes—what to do?
Thus far, CFAR hasn't had the cycles to spend time creating the (let's say) 80-20 version of their content. Remember that it's a fledgling startup with fewer than ten full-time staff members (when Pete and I were hired, it only had six). They were pouring every 60- and 70- and 80-hour week into trying to squeeze an extra percentage point of comprehension or efficacy out of every activity, every explanation. In other words, the objection wasn't fundamental (to the best of my understanding, which may be wrong) ... it was pragmatic. Creating packaged material fit for the general public wasn't anywhere near the top of the list, which was headed by "create material that's actually epistemically sound and demonstrably effective."
For my own part, I think this belongs in our near future. I think it's an area to be approached cautiously, in incremental steps with lots of data collection, but yes—I'd like to see some of our simpler, core techniques made broadly available. I'd like to see scalability in the things we think we can actually explain on paper. And if it goes well, I'd like to see more and more of it. I'm personally taking steps in this direction (tackling and improving our written content is one of my primary tasks, and I've started with simple things like drafting a glossary and tracking which definitions leave the reader confused (or worse, confident but wrong)).
But we have to a) find the time and manpower to actually run the experiment, and b) find content that genuinely works. Those are both non-trivially difficult, and they're both trading off against the continued expansion and improvement of our version of the art of rationality. I've only just now taken on enough responsibility myself to free up a few of the core staff's hours—and that's mostly gone into reducing their workload from insane to merely crazy. It hasn't actually created sufficient surplus to allow online tutorials to meet the threshold for worth-the-risks.
In short, despite Crux's entirely appropriate and reasonable skepticism, the answer has to be (for the immediate future)—either you find us trustworthy, or you don't (and if you don't, maybe you don't want our material anyway?). I, for one, don't think published material threatens workshop revenue, any more than online tutorials threaten martial arts dojos. There will always be obvious benefits to attending an intensive, collaborative workshop with instructors who know what they're doing, and there will always be people who recognize that the value is worth the cost, particularly given our track record. Our reasons for having refrained from publication thus far aren't monetary (or, to be more precise, money isn't in the top five on what's actually a fairly long and considered list).
Instead, it's that we actually care about getting it right. We don't want to poison the well, we don't want to break the very thing we're trying to protect, and as a member of a group with something that at least resembles expertise (if you don't want to credit us as actual experts), I think that requires a lot more work on our end, first.
That being said, if you have questions about the content above, or about what CFAR is doing this week and this month and this year, or if you're struggling with creating the art of rationality yourself and you've had novel and interesting insights—
Well. You know where to find us, and we don't know where to find you, or we'd have already reached out.
Hope this helps,
- Duncan
Problems one and two (hard and imperfect) would suggest that people will get less value out of ScottL's post than a workshop. OK, fine. Don't let the perfect be the enemy of the good. Scale ScottL's post up through easy online access and the many, many people getting a smaller somewhat unreliable benefit turns into something very significant. But problem 3,
Having seen crappy, distorted versions of the CFAR curriculum (or having attempted to absorb it from text, and failed), a typical human would then be much, much less receptive to other, better explanations in the future.
We don't want to poison the well, we don't want to break the very thing we're trying to protect, and as a member of a group with something that at least resembles expertise (if you don't want to credit us as actual experts), I think that requires a lot more work on our end, first.
That's reason enough to not release your own material. But specifically, do you think ScottL's compilation above, or my sharing the guide I've written (if I were to post it here for anyone to use), would have the same effect? Do you think our compilations will have a net negative effect on rationality?
Thus far, CFAR hasn't had the cycles to spend time creating the (let's say) 80-20 version of their content.
For my own part, I think this belongs in our near future.
Do you have an estimate on this? I won't hold you to it, I'd just like to know what kind of time frame 'near' is.
CFAR has all of this material readily available likely in a much more comprehensive and accurate format. CFAR are altruists. Smart altruists. The lack of anything like this canon suggests that they don't think having this publicly available is a good idea. Not yet anyway. Even the workbook handed out at the workshops isn't available.
Rather than deferring to the judgment of the Smart Altruists and assuming that within their secret backroom discussions they've determined with logic, rigor, and a plethora of academic citations that it's crucial to the mission of raising the sanity waterline to not release a comprehensive exposition of their body of rationality techniques, perhaps we need only consider your second point except in less reverential light:
I highly value CFAR as an organisation. I want them to be highly funded and want as many people to attend their workshops as possible. It would upset me to learn that someone had read my compilation and not attended a workshop thinking they had gotten most of the value they could.
So much for the Internet-era model of "free information to be disseminated to all".
Without a deferential attitude toward the Great Rationalists of CFAR, Occam's Razor suggests that perhaps they're simply trying to keep the money flowing. Would it upset you if thousands of people without the resources or time to make it to a CFAR workshop had access to a self-study version of the CFAR curriculum?
Rather than deferring to the judgment of the Smart Altruists and assuming that within their secret backroom discussions they've determined with logic, rigor, and a plethora of academic citations that it's crucial to the mission of raising the sanity waterline to not release a comprehensive exposition of their body of rationality techniques, perhaps we need only consider your second point except in less reverential light.
Given the ease with which CFAR could publish all their material online it seems worth considering why they haven't done so. If spreading rationality wide is indeed their goal, then why haven't they picked this low hanging fruit yet? I'd rather not have to make any assumptions so if someone from CFAR is reading this perhaps they can answer that.
So much for the Internet-era model of "free information to be disseminated to all". Without a deferential attitude toward the Great Rationalists of CFAR, Occam's Razor suggests that perhaps they're simply trying to keep the money flowing. Would it upset you if thousands of people without the resources or time to make it to a CFAR workshop had access to a self-study version of the CFAR curriculum?
Of course that would not upset me. If the CFAR curriculum remained forever available only to the few who attended their workshops that would be sad indeed. But CFAR Labs is currently working on new rationality sequences, and I don't think the curriculum will be as inaccessible for much longer.
I want the world to be a more rational place. I want as many people as possible to have the opportunity to become more rational in the most effective way available. More than any other individual or group it seems to me that CFAR is best positioned to achieve that goal. Even if the reason is money - if that money goes towards increasing the speed at which effective rationality techniques are developed and spread worldwide then all the better.
I had a very similar thought to this post. So similar, in fact, that I went ahead and wrote a kind of user guide for each of CFAR's techniques (though it has changed a great deal even in the last 4 months since I finished writing). I also have never been to a CFAR workshop and drew on many of the same online sources that you have. It took about a month of working in my spare time to compile. My motivation for doing so was that the costs of attending a workshop (both financial and time) were simply too high for someone in my position overseas.
I've printed it and only use it personally. I've never shared it other than with one close friend. I'm concerned about you posting this now, for the same reasons that stopped me from sharing my compilation even though I could see a great deal of benefit in it.
My reasons for not sharing it are:
CFAR has all of this material readily available likely in a much more comprehensive and accurate format. CFAR are altruists. Smart altruists. The lack of anything like this canon suggests that they don't think having this publicly available is a good idea. Not yet anyway. Even the workbook handed out at the workshops isn't available.
I highly value CFAR as an organisation. I want them to be highly funded and want as many people to attend their workshops as possible. It would upset me to learn that someone had read my compilation and not attended a workshop thinking they had gotten most of the value they could.
C.S. Lewis addressed the issue of faith in Mere Christianity as follows:
In one sense Faith means simply Belief—accepting or regarding as true the doctrines of Christianity. That is fairly simple. But what does puzzle people—at least it used to puzzle me—is the fact that Christians regard faith in this sense as a virtue. I used to ask how on earth it can be a virtue—what is there moral or immoral about believing or not believing a set of statements? Obviously, I used to say, a sane man accepts or rejects any statement, not because he wants or does not want to, but because the evidence seems to him good or bad. Well, I think I still take that view. But what I did not see then—and a good many people do not see still—was this. I was assuming that if the human mind once accepts a thing as true it will automatically go on regarding it as true, until some real reason for reconsidering it turns up. In fact, I was assuming that the human mind is completely ruled by reason. But that is not so. For example, my reason is perfectly convinced by good evidence that anaesthetics do not smother me and that properly trained surgeons do not start operating until I am unconscious. But that does not alter the fact that when they have me down on the table and clap their horrible mask over my face, a mere childish panic begins inside me. In other words, I lose my faith in anaesthetics. It is not reason that is taking away my faith: on the contrary, my faith is based on reason. It is my imagination and emotions. The battle is between faith and reason on one side and emotion and imagination on the other. When you think of it you will see lots of instances of this. A man knows, on perfectly good evidence, that a pretty girl of his acquaintance is a liar and cannot keep a secret and ought not to be trusted; but when he finds himself with her his mind loses its faith in that bit of knowledge and he starts thinking, “Perhaps she’ll be different this time,” and once more makes a fool of himself and tells her something he ought not to have told her.
His senses and emotions have destroyed his faith in what he really knows to be true. Or take a boy learning to swim. His reason knows perfectly well that an unsupported human body will not necessarily sink in water: he has seen dozens of people float and swim. But the whole question is whether he will be able to go on believing this when the instructor takes away his hand and leaves him unsupported in the water—or whether he will suddenly cease to believe it and get in a fright and go down. Now just the same thing happens about Christianity. I am not asking anyone to accept Christianity if his best reasoning tells him that the weight of the evidence is against it. That is not the point at which Faith comes in. Faith, in the sense in which I am here using the word, is the art of holding on to things your reason has once accepted, in spite of your changing moods.
Although many religious people use the word differently, this is how I use Faith, and I propose it as an acceptable working definition for this discussion: a determination to hold on to what you have already established high confidence in, despite signals you may have received from less rational sources (i.e. emotions).
Faith, in the sense in which I am here using the word, is the art of holding on to things your reason has once accepted, in spite of your changing moods.
So Bayes update on intellectual arguments, but not on your emotions when you consider them likely to change in the immediate future? That seems like a good virtue if one desires accurate beliefs.
I recently attended a 10 day intensive Vipassana meditation retreat. Would a write-up of the experience be something LWers are interested in as an article for discussion?
I had minimal-to-moderate experience with meditation before this but now feel much more comfortable with it. I can see potential rationality relevance in:
* Discipline
* Concentration
* Emotion and habit regulation
* Seeing reality as it is
If there is interest then I would appreciate it if someone is willing to look over a draft of the article for me as I haven't written for LW before.
Are they good quality for listening to?
I listen to audio books regularly and they are at the upper end in terms of quality.
Moreover, is the material they cover comprehensible?
Yes. Articles that don't translate well into audio aren't produced; e.g., Intuitive Bayes Theorem is unavailable.
have you found the Less Wrong casts to be understandable sufficiently well for a first-time listener?
Yes.
I'd like to know in which order I should provide those articles that are available on Castify.
I don't know what you mean here but you can contact Castify directly with your questions - http://castify.co/contact/new
Fixed the links to the epub and mobi!
Blacked out pdf links are new to me - what's the reader?
Sumatra PDF 3.0 on Windows 8.1 x64. I believe the problem is the same one this user had with the AI to Zombies ebook.
I'll be reading the epub personally (which works fine in Sumatra) on my iPad, so it doesn't bother me, but I thought I would mention it: Sumatra is a relatively popular reader, and if this ebook is produced by the same team as the rationality ebook then it seems to be a recurring problem.
What exactly are you doing that you have PR problems?
Are you simply relabeling normal conversations with friends as PR?
Something like:
A: I've been reading a lot about rationality in the last year or two. It's pretty great.
B: What's that?
A: Explanation of instrumental + epistemic OR Biases a la Kahneman
B: Sounds dumb. I do that already.
A: I've found it great because X, Y, Z.
B: I think emotion is much more important than rationality. I don't want to be a robot.
Yes. Sorry for the lack of clarity.