Thanks! I do also rely to some extent on reasoning... for example, Chapter 3 is my argument for why we should expect to be better off with (on the margin) more scout mindset and less soldier mindset, compared to our default settings. I point out some basic facts about human psychology (e.g., the fact that we over-weight immediate consequences relative to delayed consequences) and explain why it seems to me those facts imply that we would have a tendency to use scout mindset less often than we should, even just for our own self-interest.
The nice thing about argumentation (as compared to citing studies) is that it's pretty transparent -- the reader can evaluate my logic for themselves and decide if they buy it.
Hey Ozzie! Thanks for reading / reviewing.
I originally hoped to write a more “scholarly” book, but I spent months reading the literature on motivated reasoning and thought it was mostly pretty bad, and in any case not the actual cause of my confidence in the core claims of the book, such as “You should be in scout mindset more often.” So instead I focused on the goal of giving lots of examples of scout mindset in different domains, and addressing some of the common objections to scout mindset, in hopes of inspiring people to practice it more often.
I left in a handful of studies that I had greater-than-average confidence in (for various reasons, which I might elaborate on in a blog post – e.g. I felt they had good external validity and no obvious methodological flaws). But I tried not to make it sound like those studies were definitive, nor that they were the main cause of my belief in my claims.
Ultimately I’m pretty happy with my choice. I understand why it might be disappointing for someone expecting a lot of research... but I think it's an unfortunate reality, given the current state of the social sciences, that books which cite a lot of social science studies tend to give off an impression of rigor that is not deserved.
This doesn't really ring true to me (as a model of my personal subjective experience).
The model in this post says despair is "a sign that important evidence has been building up in your buffer, unacknowledged, and that it’s time now to integrate it into your plans."
But most of the times that I've cycled intermittently into despair over some project (or relationship), it's been because of facts I already knew, consciously, about the project. I'm just becoming re-focused on them. And I wouldn't be surprised if things like low blood sugar or anxiety spilling over from other areas of my life are major causes of some Fact X seeming far more gloomy on one particular day than it did just the day before.
And similarly, most of the times I cycle back out of despair, it's not because of some new information I learned or an update I made to my plans. It's because, e.g., I went to sleep and woke up the next morning and things seemed okay again. Or because my best friend reminded me of optimistic Facts Y and Z which I already knew about, but hadn't been thinking about.
Hey, I'm one of the founders of CFAR (and used to teach the Reference Class Hopping session you mentioned).
You seem to be misinformed about what CFAR is claiming about our material. Just to use Reference Class Hopping as an example: It's not the same as reference class forecasting. It involves doing reference class forecasting (in the first half of the session), then finding ways to put yourself in a different reference class so that your forecast will be more encouraging. We're very explicit about the difference.
I've emailed experts in reference class forecasting, described our "hopping" extension to the basic forecasting technique, and asked: "Is anyone doing research on this?" Their response: "No, but what you're doing sounds useful." [If I get permission to quote the source here I will do so.]
This is pretty standard for most of our classes that are based on existing techniques. We cite the literature, then explain how we're extending it and why.
I usually try to mix it up. A quick count shows 6 male examples and 2 female examples, which was not a deliberate choice, but I guess I can be more intentional about an even split in the future?
Thanks for showing up and clarifying, Sam!
I'd be curious to hear more about the ways in which you think CFAR is over-(epistemically) hygienic. Feel free to email me if you prefer, but I bet a lot of people here would also be interested to hear your critique.
Sure, here's a CDC overview: http://www.cdc.gov/handwashing/show-me-the-science-hand-sanitizer.html They seem to be imperfect but better than nothing, and since people are surely not going to be washing their hands every time they cough, sneeze, or touch communal surfaces, supplementing normal handwashing practices with hand sanitizer seems like a probably-helpful precaution.
But note that this has turned out to be an accidental tangent since the "overhygienic" criticism was actually meant to refer to epistemic hygiene! (I am potentially also indignant about the newly clarified criticism, but would need more detail from Sam to find out what, exactly, about our epistemic hygiene he objects to.)
Edited to reflect the fact that, no, we certainly don't insist. We just warn people that it's common to get sick during the workshop because you're probably getting less sleep and in close contact with so many other people (many of whom have recently been in airports, etc.). And that it's good practice to use hand sanitizers regularly, not just for your own sake but for others'.
Perhaps this is silly of me, but the single word in the article that made me indignantly exclaim "What!?" was when he called CFAR "overhygienic."
I mean... you can call us nerdy, weird in some ways, obsessed with productivity, with some justification! But how can you take issue with our insistence [Edit: more like strong encouragement!] that people use hand sanitizer at a 4-day retreat with 40 people sharing food and close quarters?
[Edit: The author has clarified above that "overhygienic" was meant to refer to epistemic hygiene, not literal hygiene.]
... By the way, you might've misunderstood the point of the Elon Musk examples. The point wasn't that he's some exemplar of honesty. It was that he was motivated to try to make his companies succeed despite believing that the most likely outcome was failure. (i.e., he is a counterexample to the common claim "Entrepreneurs have to believe they are going to succeed, or else they won't be motivated to try")