(This is not your typical factual-answer question, but I think it makes sense to format this as a question rather than a post.)
TLDR: Recommend some posts for a "practice of rationality" sequence I want to curate! Proposing posts that should exist but don't is also cool.
I've been thinking recently that it would be nice if rationality were more associated with a practice -- a set of skills which you can keep grinding and leveling up. Testable rationality skills (like accuracy or calibration in forecasting) are obviously a plus, but I'm not referring exclusively to this -- some very real things can be hard to evaluate externally, such as emotional wellness.
A model I have in mind is meditation: meditation is easy to "grind" because the meditator gets constant immediate feedback about how well they're focusing (or at least, they get that feedback if they meet a minimum of focus required to keep track of whether they are focusing). Yet it's quite difficult to evaluate progress from the outside.
(In fact, when I mentioned this desire for a "practice" of rationality to one friend, they were like "I agree, and in fact I think the practice should just be insight meditation.")
This is basically reiterating Brienne's call for tortoise skills (see also), except what I want to do is collect proposed things which could be part of a practice.
Obviously, some CFAR content could already qualify. CFAR doesn't exactly teach it that way, though -- as far as I've observed, CFAR's focus is on mindset interventions. "Mindset intervention" is the fancy psychology term for getting someone to think differently by having them do something once. For example, the point of "growth mindset" interventions is that you explain the idea once and this has a long-lasting impact on someone's behavior. Another mindset intervention: you ask people to write about what matters to them. Doing this once has been shown to have long-term effects.
In my first CFAR experience (which was an MSFP, fwiw), the phrase "It's not about the exercises!" was kind of a motto. It was explained at the beginning that CFAR teaches exercises not because people learn the exercises and then go out and use the exercises, but rather, going through the exercises a few times changes how you think about things. (The story was that people often go to a CFAR workshop and then improve a bunch of things in their life, but say "but I haven't been doing the exercises!".)
But many of the things CFAR teaches could be used as a practice, and (again referring to my first CFAR experience) CFAR does do some things which encourage you to look at them that way, like the follow-up emails that prompt you to overlearn one exercise per week (practicing that one thing a bunch so that it becomes an automatic mental motion).
Another example pointing at what I want here is bewelltuned.com. The content may or may not be right, but that sort of thing seems exactly right to me -- actionable skills you can keep working on regularly after getting simple explanations of how to do them. And furthermore, the presentation seems exactly right. LessWrong has a tendency to focus on wordy explanations of intellectual topics, which is great, but the bewelltuned style seems like an excellent counterbalance.
I'm using the "question" format so that answers can recommend specific things (perhaps represented by existing LW posts, perhaps not), whereas comments can discuss this more broadly (such as what more general criteria should be applied to filter suggestions, or whether this is even a good idea). The answer list here could serve as a big repository. I'll probably create a sequence which can be my own highly opinionated curation of the suggestions here, plus my own writing on the subject.
I originally intended Becoming Unusually Truth Oriented to be the start of a sequence on the subject written entirely by me. However, some resulting discussion made me question my approach (hence the motivation for this question).
One friend of mine (going off of some of the discussion in comments to that post) voiced a concern about the rationality community falling into the same pitfalls as martial arts. Several articles about this have been written on LW. (I'm not finding all the ones I remember! If you put links to more of them in the comments I'll probably edit this to add them.) The concern is that a martial art of rationality could lead to the same kinds of epistemic viciousness which are seen in literal martial arts -- a practice divorced from reality due to the constraints and incentives of training/teaching.
That same friend suggested that the solution was to focus on empirically verifiable skills, namely forecasting. But in the in-person rationalist community in the Bay Area, I've encountered some criticism that an extreme focus on forecasting makes the very mistake we're afraid of here -- Goodharting on the problem. One person asked me to give any examples of Superforecasting-like skills resulting in actual accomplishments, suggesting that planning is the far more valuable skill and differs significantly from forecasting. Another person recounted their experience sitting down with several other rationalists to learn superforecasting skills. It was a group of rather committed and individually competent rationalists, but they quickly concluded that while they could put in the effort to become much better at forecasting, the actual skills they'd learn would be highly specific to the task of winning points in prediction tasks. They abandoned the project, concluding that it would not meaningfully improve their general capability to accomplish things!
So, this seems like a hard problem.
What could/should be a part of a 'practice' of rationality?
The TL;DR of this comment is also its conclusion.
What you (can) learn from something might not be obvious in advance. While it's possible they were right, it's also possible they were wrong.
And if they're right, then doing the thing is a waste, but if they're wrong then it's not.*
*Technically the benefit of something can equal the cost.
U(x) = Benefit - Cost. The first term is probabilistic -- in the mind, if not in the world. (The second may be as well, but to a lesser extent.)
If this is instead modeled using a binary variable 'really good' (RG), the expected utility of x is roughly:

E[U(x)] = Outcome_RG * p_RG + Outcome_not * (1 - p_RG) - Cost
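To make that concrete, here's a minimal numeric sketch in Python. All the numbers (p_RG, the two outcome values, the cost) are invented for illustration, not estimates of anything real:

```python
# Minimal sketch of the binary "really good" (RG) model above.
# All numbers are invented for illustration.

p_rg = 0.2          # probability the practice turns out really good
outcome_rg = 100.0  # utility if it does
outcome_not = 5.0   # utility if it doesn't
cost = 30.0         # cost of the practice (time, effort, ...)

expected_utility = outcome_rg * p_rg + outcome_not * (1 - p_rg) - cost
print(expected_utility)  # 100*0.2 + 5*0.8 - 30 = -6.0: skip, on these numbers
```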
But this supposes that the action is simply done or not done, ignoring continuity: the path from you to superforecaster-you is a continuum. If it's broken up into intervals of hours, then there may exist numbers of hours x and y such that U(x) - cost(x) > 0 but U(y) - cost(y) < 0. The continuous generalization looks at the derivative of U(x hours) - cost(x hours), which is zero where the utility has stopped increasing and started decreasing (or where the reverse holds).

This leaves the question of how U(x) is calculated, or estimated. One might imagine that this group could have been right -- perhaps the low-hanging fruit of forecasting/planning is Fermi estimates, and they already had that skill/tool.
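A sketch of that continuous version, assuming a made-up diminishing-returns curve for U(x hours) -- the only point is that the optimum sits where marginal benefit equals marginal cost:

```python
# Sketch of the continuous version: add practice hours while the marginal
# benefit of an hour exceeds its marginal cost. Both the utility curve U(h)
# and the per-hour cost are invented stand-ins.
import math

def utility(hours: float) -> float:
    # Diminishing returns: each extra hour of practice helps less.
    return 50 * math.log(1 + hours)

cost_per_hour = 2.0

# U(h) - cost*h peaks where U'(h) = cost_per_hour:
# 50 / (1 + h) = 2  =>  h = 24. Checking numerically over whole hours:
best_h = max(range(200), key=lambda h: utility(h) - cost_per_hour * h)
print(best_h)  # 24 -- past this, each extra hour costs more than it returns
```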
Forecasting (predicting the future) is all well and good if you can't affect something, but if you can, then perhaps planning (creating the desired future) is better. The first counterexample that comes to mind: if you could predict the stock market in advance, you might be able to make money off of that. This example seems unlikely, but it suggests a relationship between the two -- some information about the future is useful for making plans. However, while some of the information that will (or could) be important in the future may be obvious, that leaves:
**Moving is harder than calculating.**
Since one can't do most of the things in the world for oneself, expert judgement has to be one of the upstream skills chosen for investment/cultivation.