The Center for Applied Rationality is a Bay Area non-profit that, among other things, ran many workshops offering people tools and techniques for solving problems and improving their thinking. Those workshops were accompanied by a reference handbook, which has been available as a PDF since 2020.
The handbook hasn't been substantially updated since it was written in 2016, but it remains a fairly straightforward primer on a lot of core rationality content. The LW team, working with the handbook's author Duncan Sabien, has decided to republish it as a lightly-edited sequence, so that each section can be linked on its own.
In the workshop context, the handbook was a supplement to lectures, activities, and conversations taking place between participants and staff. Care was taken to emphasize that each tool, technique, or perspective was only as good as one's ability to apply it effectively to one's own problems, plans, and goals. The workshop was intentionally structured to cause participants to actually try things (including iterating on or developing their own versions of what they were being shown), rather than simply passively absorb content. Keep this in mind as you read: mere knowledge of how to exercise does not confer the benefits of exercise!
Discussion is strongly encouraged, and disagreement and debate are explicitly welcomed. Many LWers (including the staff of CFAR itself) have been tinkering with these concepts for years, and will have developed new perspectives on them, interesting objections to them, or thoughts about how they work or break in practice. What follows is a historical artifact: the rough state of the art at the time the handbook was written, circa 2017. That's an excellent jumping-off point, especially for newcomers, but there's been a lot of scattered progress since then, and we hope some of it will make its way into the comments.
Note: despite the different username, I'm the author of the handbook and a former CFAR staff member.
I disagree with this take as specifically outlined, even though I do think there's a kernel of truth to it.
Mainly, I disagree with it because it presupposes that nuclear weapons are obviously the most important thing to talk about!
I suspect that Phil is unaware that the vast majority of both CFAR staff and prolific LWers have indeed passed the real version of his test, which is writing about and contributing to work on existential risk, especially the risk from artificial intelligence.
Phil may disagree with the claim that nuclear weapons are something like third on the list, rather than the top item, but that doesn't mean he's right. And CFAR staff certainly clear the bar of "spending a lot of time focusing on what seems to them to be the actually most salient threat."
I agree that if somebody seems to be willfully ignoring a salient threat, they have gaps in their rationality that should give you pause.