I'd like to distill AI Safety posts and papers, and I'd like to see more distillations generally. Ideally, posts and papers would meet the following criteria:
- Potentially high-impact for more people to understand
- Uses a lot of jargon or is generally complex and difficult to understand
- Not as well-known as you think they should be (in the AI X-risk space)
What posts meet these criteria?
- Raemon's new rationality paradigm (though it might be better to wait until the launch test is finished). The CFAR Handbook is also quite distillable.
- The Superintelligence FAQ (allegedly one of the best ways to introduce someone to AI safety)
- OpenAI's paper on the use of AI for manipulation (important for AI macrostrategy)
- Cyborgism
- Please don't throw your mind away (massive distillation potential, but trickier than it looks)
- The Yudkowsky-Christiano debate. I tried showing this to my 55-year-old dad, and he couldn't parse it and bounced off because he knows software but not econ; the AI chapter from The Precipice later got him to take AI safety seriously.
- Work on AI timelines is generally a good candidate. The authors have a tangible fear of being lampooned by the general public, journalists, or trolls for making the tiniest mistake, so they make the papers long and hard to read; distilling them diffuses that responsibility.