LessWrong has gotten big over the years: 31,260 posts, 299 sequences, and more than 120,000 users.[1] It has budded offshoots like the Alignment and EA forums and earned itself recognition as a "cult". Wonderful!
There is a dark side to this success: as the canon grows, it becomes harder to absorb newcomers (like me).[2] I imagine this was the motivation behind the recently launched "Highlights from the Sequences".
To make it easier on newcomers (veterans, you're also welcome to join in), I've created an Obsidian starter-kit for taking notes on the LessWrong core curriculum (the Sequences, Codex, HPMOR, best of, concepts, various jargon, and other odds and ends).
There's built-in support for exporting notes & definitions to Anki, goodies for tracking your progress through the content, useful metadata/linking, and pretty visualizations of rationality space...
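For a concrete feel for the Anki piece, here's a minimal Python sketch of the idea using the genanki library. To be clear, this is not the kit's actual exporter: the `term:: definition` note format, the file name, and the IDs below are all illustrative assumptions.

```python
# Sketch: export "term:: definition" lines from a Markdown note to an Anki deck.
# Not the starter kit's actual exporter; note format, file name, and IDs are
# illustrative assumptions.
import re
import genanki

MODEL = genanki.Model(
    1607392319,  # fixed arbitrary ID so re-exports update rather than duplicate
    "LW Definition",
    fields=[{"name": "Term"}, {"name": "Definition"}],
    templates=[{
        "name": "Card 1",
        "qfmt": "{{Term}}",
        "afmt": "{{FrontSide}}<hr id=answer>{{Definition}}",
    }],
)

deck = genanki.Deck(2059400110, "LessWrong::Jargon")  # arbitrary deck ID

# Assume one definition per line, written as "term:: definition".
with open("jargon.md", encoding="utf-8") as f:
    for line in f:
        m = re.match(r"\s*(.+?)::\s*(.+)", line)
        if m:
            deck.add_note(genanki.Note(
                model=MODEL,
                fields=[m.group(1).strip(), m.group(2).strip()],
            ))

genanki.Package(deck).write_to_file("lesswrong_jargon.apkg")
```

Importing the resulting .apkg into Anki then gives you one card per definition.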
It's not perfect: I'll be fixing broken links & filling in missing tags as I work my way through all the content, but there should already be enough in place that you can get some value out of it. I'd love to hear your feedback, and if you're interested in contributing, please reach out. I'll also soon be adding support for the Alignment Forum (AF) and the EA Forum (EAF).
More generally, I'd love to hear your suggestions for helping new aspiring rationalists find their way in. For example, about a decade ago a round of users proposed alternative reading orders (by Academian, jimrandomh, and XiXiDu). Is it time for a revisit in 2022?
Check out the repo
[1] From what I can scrape off the GraphQL endpoint.
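If you want to reproduce these numbers, something along these lines should work. The endpoint is public, but the query shape (`posts`, `totalCount`) is my best guess at the schema, so treat this as a sketch rather than documented API:

```python
# Sketch: ask LessWrong's public GraphQL endpoint for a post count.
# The query fields below are assumptions about the schema, not documented API.
import requests

GRAPHQL_URL = "https://www.lesswrong.com/graphql"

QUERY = """
{
  posts(input: {terms: {limit: 1}}) {
    totalCount
  }
}
"""

resp = requests.post(GRAPHQL_URL, json={"query": QUERY})
resp.raise_for_status()
print(resp.json())  # expect something like {"data": {"posts": {"totalCount": ...}}}
```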
[2] A decade ago, jimrandomh was already worrying about LW's intimidation factor; we're now about an order of magnitude further along.