Curated. The wiki pages collected here, despite being written in 2015-2017, remain excellent resources on concepts and arguments for key AI alignment ideas (both those still widely used and some lesser-known ones). Even for concepts/arguments I already knew, like the orthogonality thesis and corrigibility, I felt a gain in crispness from reading these pages. Other concepts, e.g. epistemic and instrumental efficiency, I didn't have at all, yet they feel useful in thinking about the rise of increasingly powerful AI.
Of course, there's also non-AI content that got imported. The Bayes guide likely remains the best resource for building Bayes intuition, and the same goes for the extremely thorough guide on logarithms.
FYI, relative URLs don't work in emails: the email version I received has all the links going to http://w/<post-title>, and thus broken.
There are typos in the articles, for example in the category theory one:
A statement about terminal object is that any
maybe "terminal object" was a link with "s" added at the end but it reverted to its natural form in the importing process
Note: this is a static copy of this wiki page. We are also publishing it as a post to ensure visibility.
Circa 2015-2017, a lot of high-quality content was written on Arbital by Eliezer Yudkowsky, Nate Soares, Paul Christiano, and others. Perhaps because the platform didn't take off, most of this content has not been as widely read as its quality warrants. Fortunately, it has now been imported into LessWrong.
Most of the content written was either about AI alignment or math[1]. The Bayes Guide and Logarithm Guide are likely among the best mathematical educational material online. Amongst the AI alignment content are detailed and evocative explanations of alignment ideas: some well known, such as instrumental convergence and corrigibility; some lesser known, like epistemic/instrumental efficiency; and some misunderstood, like pivotal acts.
The Sequence
The articles collected here were originally published as wiki pages with no set reading order. The LessWrong team first selected about twenty pages which seemed most engaging and valuable to us, and then ordered them[2][3] based on a mix of our own taste and feedback from some test readers that we paid to review our choices.
Tier 1
These pages make for a good reading experience.
Bayes Rule Guide
Tier 2
These pages are high-effort and high-quality, but are less accessible and/or of less general interest than the Tier 1 pages.
The list starts with a few math pages before returning to AI alignment topics.
Lastly, we're sure this sequence isn't perfect, so any feedback (what you liked/disliked/etc.) is appreciated – feel free to leave comments on this page.
Mathematicians were an initial target market for Arbital.
The ordering here is "Top Hits", subject to the constraint that if you start reading at the top, you won't be missing any major prerequisites as you read along.
The pages linked here are only some of the AI alignment articles, and the selection/ordering has not been endorsed by Eliezer or MIRI. The rest of the imported Arbital content can be found via links from the pages below and also from the LessWrong Concepts page (use this link to highlight imported Arbital pages).