One might preface each step of the OODA loop with 'actually', or otherwise indicate that there is a break-downable skill in each step, i.e.
Actually Observing (noticing)
Actually Orienting (causal reasoning)
Actually Deciding (emotional processing and logistics)
Actually Acting (execution reps)
to which we might also add Actually Resting, or otherwise creating the slack in which the above can operate.
You know, "Actually OODA-ing" would just be a good standalone blogpost. I'd also be interested in reading alternate versions of it by @Andrew_Critch, @LoganStrohl, you (Romeo) and me. (I think this post by Duncan feels kinda related although not structured the same way).
lol that is amazingly terrible.
That doc was a memo at a private retreat that a) wasn't actually that private, and b) is mostly just a repackaging of this:
https://www.lesswrong.com/posts/rz73eva3jv267Hy7B/can-you-keep-this-confidential-how-do-you-know
This is great!
I really like this about slack:
- If you aren’t maintaining this, err on the side of cultivating this rather than doing high-risk / high-reward investments that might leave you emotionally or financially screwed.
- (or, if you do those things, be aware I may not help you if they fail. I am much more excited about helping people that don’t go out of their way to create crises)
Seems like a good norm and piece of advice.
I originally wrote this in 2021, when Lightcone was exploring a vision of building "a rationalist campus" or maybe "a rationalist society."
Right now, neither I nor Lightcone as a whole is focused on this in the same way. You *might* make a guild that taught these skills on purpose, or a barrier-to-entry that filtered for them. That’s not where I’m currently focusing my plans.
But I'm mostly publishing it now because I still roughly like it as an aspirational world I'd like to be a part of some day. I like these skills, and I'd like my collaborators to have them, however that ends up being mediated. I haven't edited the post since 2021.
At the LW Team, we’ve been tossing around the idea of founding some kind of rationalist/x-risk guild, society, or other closed-barrier social/professional network.
There’s a bunch of reasons I think a guild might be good. But one key thing I’m interested in is “Have a cluster of people who reliably have a particular set of skills, and a particular orientation towards x-risk, such that one can reliably find allies.”
In the current status quo, there is a vaguely defined rationalist/EA landscape where people are filtered for some basic level of IQ, understanding of some foundational essays (but not any particular foundational essay), and some vague pressure to be successful on at least some dimensions. It’s amorphous and unpredictable.
I’d like to have a network of people I can reliably trust. Reasons this seems important to me:
I think having a guild would provide a framework for both making sure people meet some basic bar of “be good at thinking, and good at collaborating”, as well as training people in new, deeper skills over time. Ideally, the bar for guild members goes up over time as we figure out better skills and better ways of teaching them.
In this Very Rough Draft of a plan, I think there should be some skills that are core requirements for being part of the guild (maybe along with a bunch of required reading and background knowledge, which is less skill-based). The first skills are chosen to help a person unfold into more/better skills (i.e. have the habit of gaining habits, and the ability to notice and reflect on your own thought processes).
I don’t actually know which skills should make it into the core training loop, but here are two giant braindumps on:
This post covers the first two things. I ran out of steam before getting to part three.
Rationality Skills
(braindump from the last time I thought about this, not standing by strongly)
Background beliefs (listed in Duncan's original post)
Building-Block and Meta Skills
(Necessary or at least very helpful to learn everything else)
Notice you are in a failure mode, and step out. Examples:
(The "Step Out" part can be pretty hard and would be a long series of blogposts, but hopefully this at least gets across the ideas to shoot for)
Social Skills (i.e. not feeding into negative spirals, noticing what emotional state or patterns other people are in [*without* accidentally rounding them off to a stereotype])
Actually Thinking About Things
Actually Changing Your Mind
Noticing Disagreement and Confusion, and then putting in the work to resolve it
Collaboration Skills
I’d love to live in a world where I had the luxury of only working with Good Collaborators.
This is an unrealistic wish. Being a Good Collaborator is a skill – one skill among many that I might need on my team. I might need people who are good at programming, or logistics, or aesthetics. Often I’m running a small scrappy volunteer project where not that many people want to help out in the first place.
If I were to hold out for only people who are also Good Collaborators According To Me, I might not have anyone to work with. It’s not even obvious that the people I want to work with should prioritize gaining collaboration-meta-skills instead of whatever object-level skills they’re great at.
Still, I think Being a Good Collaborator is a pretty important, cross-domain skill. In my dreamworld, it’s a core skill that everyone in my surrounding culture is working on.
Some core concepts:
“The Basics”