Update: We knew the Review Phase would need at least a month. Upon reflection, it was pretty silly to expect much of that work to happen while people were traveling for the holidays.
So, we've decided to extend the Review Phase another two weeks.
By the end of this week we also aim to launch the voting system (see this post and some of its comments for some current ideas about that). The plan now is for the Review Phase and the Voting Phase to overlap (both ending on January 13th).
The voting phase will be 2 weeks long, and halfway through we'll announce a snapshot of what the results would be, if the current votes were tallied. (People can update their votes as often as they want until the 13th)
People are welcome to write new reviews and edit posts in response to the one-week voting snapshot. (The idea is for the final two weeks of the Review Phase, and the two weeks of Voting, to be more like a conversation people can respond to than an immediate, final decision)
The Review Phase is a bit of an evolving process – I'm expecting us to learn over the course of the month what sort of reviews are most helpful.
One explicit update I made since last week is shifting the Review Phase from "write up whether you think this post should be included in the book" to "focus on providing information to other people who are evaluating the post."
The "judge" mindset seemed to be outputting less useful content than the "provide information to help evaluate" mindset.
I do think including notes about what you think should be included in the book is still valuable, but is something it makes more sense to do after you've spent some time in "evaluate and add information" mode.
LessWrong is currently doing a major review of 2018 — looking back at old posts and considering which of them have stood the test of time. Info about what features we added to the site for writing reviews is in December's monthly updates post.
There are three phases: the Nomination Phase (now complete), the Review Phase, and the Voting Phase.
We’re now in the Review Phase, and there are 75 posts that got two or more nominations. The full list is here. Now is the time to dig into those posts, and for each one ask questions like “What did it add to the conversation?”, “Was it epistemically sound?” and “How do I know these things?”.
The LessWrong team will award $2000 in prizes to the reviews that are most helpful to them for deciding what goes into the Best of 2018 book.
If you’re a nominated author and for whatever reason don’t want one or more of your posts to be considered for the Best of 2018 book, contact any member of the team - e.g. drop me an email at benitopace@gmail.com.
Creating Inputs For LW Users' Thinking
The goal for the next month is for us to try to figure out which posts we think were the best in 2018.
Not which posts were talked about a lot when they were published, or which posts were highly upvoted at the time, but which posts, with the benefit of hindsight, you're most grateful to have been published, and which are well suited to be part of the foundation of future conversations.
This is in part an effort to reward the best writing, and in part an effort to solve the bandwidth problem (there were more than 2000 posts written in 2018) so that we can build common knowledge of the best ideas that came out of 2018.
With that aim, when I'm reviewing a post, the main question I'm asking myself is: with the benefit of hindsight, am I glad this post was published, and is it well suited to be part of the foundation of future conversations?
A large part of the review phase is about producing inputs for our collective thinking. With that in mind, I've gathered some examples of things you can write that help others understand posts and their impacts.
1) Personal Experience Reports
There were a lot of examples of this in the nomination phase, which I found really useful and would like to see more of. Here are some examples:
Raemon:
johnswentworth:
Swimmer963:
David Manheim:
Eli Tyre:
ryan_b:
More detail is also really great – I'd definitely encourage the above users to be more thorough about how the ideas in each post impacted them. Here's a nomination that goes into a bunch more detail:
jacobjacob:
A special case here is data from the author themselves, e.g. “Yeah, this has been central to my thinking” or “I didn’t really think about it again” or “I actually changed my mind and think this is useful but wrong”. I would generally be excited for users to review their own posts now that they've had ~1.5 years of hindsight, and I plan to do that for all the posts I've written that were nominated.
If a post had a big or otherwise interesting impact on you, consider writing that up.
2) Big Picture Analysis (e.g. Book Reviews)
There are lots of great book reviews on the web that really help the reader understand the context of the book, and explain what it says and adds to the conversation.
Some good examples on LessWrong are the reviews of Pearl's The Book of Why, The Elephant in the Brain, The Secret of Our Success, Consciousness Explained, Design Principles of Biological Circuits, The Case Against Education (part 2, part 3), and The Structure of Scientific Revolutions.
Many of these reviews do a great job of laying out what the book says, the context it sits in, and what it adds to the conversation.
An existing example of a review of LessWrong posts is that time Scott reviewed Inadequate Equilibria. Oh, and don't forget that time Scott reviewed Inadequate Equilibria.
Many of the posts we're reviewing are shorter than most of the reviews I linked to, so the format doesn't transfer literally, but much of the spirit of these reviews does. Also check out other short book reviews and consider writing something in that style (e.g. SSC, Thing of Things).
Consider picking a book review style you like and applying it to one of the nominated posts.
3) Testing Subclaims (e.g. Epistemic Spot Checks)
Elizabeth Van Nostrand has written several posts in this style.
For another example, in Scott's review of Secular Cycles, one way he tried to think about the ideas in the book was to gather a bunch of alternative data sets on which to test some of the author’s claims.
These aren't meant to be full reviews of the entire book or paper, or advice on how to judge it overall. They take narrower questions that are definitively answerable – like whether a random sample of testable claims is literally true – and answer them as fully as possible.
If there is an important subclaim of a post you think you can check, consider trying to verify or falsify it and writing up your results, even partial ones.
Go forth and think out loud!