What is the plan for incorporation of comments into the book?
I'm guessing for most posts they'll just be omitted, and it'll be fine (or perhaps some curated selection of comments will make it into the book). But I notice that Unreal's Circling seems to be a historically relevant post that I would only want to endorse if it came along with a substantial fraction of the discussion in the comments (in a way that would dramatically lengthen its section, possibly 'taking over' the book).
I hunted your comment down here and upvoted it strongly.
I basically only write comments, and when I write "comments for the ages" that I feel proud of, I consider it a good sign if they (1) get many upvotes (especially votes that arrive after lots of competing sibling comments already exist) and (2) do not get any responses (except "Wow! Good! Thanks!" kind of stuff).
Looking at "first level comments" to worthwhile OPs according to a measure like this might provide some interesting and reasonably brief postscripts.
Applying the same basic measure to posts themselves: if an OP gets a large number of highly upvoted direct replies, that OP may not be dense with relatively useful and/or flawless content. (Though there are probably exceptions that could be detected by thoughtful curating... for example, if the OP is a request for ideas, then a lot of highly voted comments are kinda the point.)
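To make that heuristic concrete, here is a minimal sketch in Python. It assumes hypothetical comment records with `karma`, `body`, and `replies` fields — illustrative names only, not LessWrong's actual data model — and it penalizes a comment for each substantive reply, on the commenter's theory that "comments for the ages" attract upvotes but little pushback.

```python
# Minimal sketch of the "comments for the ages" heuristic described above:
# favor top-level comments with high karma and few substantive replies.
# The data model here is hypothetical, not LessWrong's actual API.

def postscript_score(comment):
    """Score a top-level comment: karma, penalized per substantive reply."""
    substantive_replies = [
        r for r in comment["replies"]
        if len(r["body"]) > 50  # crude filter for "Wow! Good! Thanks!" replies
    ]
    return comment["karma"] - 5 * len(substantive_replies)

comments = [
    {"karma": 40, "replies": [], "body": "..."},
    {"karma": 35, "replies": [{"body": "Thanks!"}], "body": "..."},
    {"karma": 50, "replies": [{"body": "I disagree, because " + "x" * 60}], "body": "..."},
]

# Highest-scoring comments would be candidates for "postscript" inclusion.
for c in sorted(comments, key=postscript_score, reverse=True):
    print(postscript_score(c), c["karma"])
```

The penalty weight and the reply-length threshold are arbitrary placeholders; a thoughtful curator would still need to eyeball the results.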
Occasionally I think about writing a review, but then feel like I'm too confused to do so.
Some of my open questions:
Perhaps worth noting (ironically)
I just went to begin looking over the 2018 posts, thinking about my own nominations. I was immediately hit with a bit of paralysis of "aaah but I don't even know what standards to employ here – I feel like I want to take a long time to think about all the posts I might want to nominate and how they compare and how they fit into the big picture" (plus, a bit of Pat Modesto whispering in my ear saying "who are youuuu to decide what posts are good!?")
And, well, if I'm experiencing that, it seemed like others might be as well.
So, wanted to explicitly note: I think this process will be more fruitful (as well as more fun) if it's more like an evolving conversation than a bunch of people silently thinking independently. A lot of the value is in getting old posts back into the public spotlight in a concentrated way.
So, I'd err on the side of going ahead and nominating things that seem good – you can retract the nomination later if you feel like it was a mistake. You can also start with a relatively brief nomination-endorsement-message that gives the rough gist of why a post was valuable, and later follow it up with a more extensive message when you have time.
So, I don't necessarily think that all the details of this belong in the 2019 books, but... y'know, this is LessWrong, things just don't feel complete without a few levels of meta thrown in.
Update: Posts need at least 2 nominations to proceed to the Review Phase.
I initially left this requirement as a somewhat vague "sufficient nominations", because I wasn't sure how many people would be engaging with the process and how thoroughly. I'm less worried about that now, and meanwhile I think there's a fairly substantial shift between "at least one person liked this and took time to say so" and "at least two people liked it."
(The goal is still to have the Review Phase include 50-100 posts, which could potentially mean the nomination-requirement
...Is there some way I can see all the posts I upvoted in 2018 so I can figure out which I think are worthy of nomination?
Compiling the results into a physical book. I find there's something... literally weighty about having your work in printed form. And because it's much harder to edit books than blogposts, the printing gives authors an extra incentive to clean up their past work or improve the pedagogy.
Physical books are also often read in a different mental mode, with a longer attention span, etc. You could also sell it as a Kindle book to get the sa
...For people who are into forecasting, I made a Foretold notebook where you can predict which posts will end up in the final Best of 2018 book.
Quick update: the Review UI is almost ready but has a few kinks to work out before we merge it into production. Apologies for the delay.
Some open questions and meandering thoughts on 'What exactly do we want out of the Review Phase?'
There are a few different goals one might have for the Review. I think ideally I'd like all of them, but I'm not sure how much bandwidth people will have.
I see two broad ontologies for "what I want reviewers to do"
Ontology A – What information do we want?
Different posts call for different types of evaluation.
A post that makes a bunch of empirical claims should have at least some of those claims epistemic-spot-checked.
A post that proposes ontologies and ca
...I'm optimistic about the review process incentivizing high-quality intellectual engagement by means of "upping the stakes." Normally, if someone writes a bad post, I'm likely to just downvote or ignore it if I have better things to do with my time that day than argue on the internet.
But if someone writes a bad post and it gets multiple nominations to be included in a paper book allegedly representing the best my stupid robot cult has to offer, then that forces me to write a rebuttal, even though I'd kind of rather not, because I was planning on spending all of my spare energy this month on memoir-writing to help me process trauma and stop being so emotionally attached to this stupid robot cult that's bad for me. If other people feel the same way (higher stakes spur more effort), we could have some fruitful discussions that we otherwise wouldn't.
Thanks so much for organizing this! (Not sarcasm, actual sincere and enthusiastic thanks despite negative-valence words in previous paragraph.)
overton-window fights
So, sorry in advance if I'm reading way too much into a casual choice of words, but—this is an incredibly ominous metaphor, right? (I'm definitely not blaming you for anything, because I've also used it in just this context, and it took me a while to notice how incredibly ominous it is.)
Maybe my rationality realism is showing, but I thought the premise and promise of the website is that there are laws of systematically correct reasoning as objective as mathematics—different mathematicians from different cultures might have different interests (like analysis or algebra or combinatorics) or be accustomed to different notations, but ultimately, they're all on the same cooperative quest for Truth—even if that cooperative process may occasionally involve some amount of yelling and crying.
("And being universals," said the Lady 3rd, "they bear no distinguishing evidence of their origin.")
The Overton window concept describes a process of social-pressure mind control, not rational deliberation: an idea is said to be "outside the Overton window" not on account of its being wrong, but on account of its being unacceptably unpopular. If a mathematician were to describe a
...I also note that I'm looking afresh at many of my backburner post ideas, since getting them out before the end of December would mean they'd be available for review in 2020 instead of 2021.
A couple of comments on nomination UI:
I got an email about this, so I decided to check whether the quality of content here has really increased enough to claim to have material for a new Sequence (I stopped coming here after what was, in my opinion, the botched execution of LW 2.0).
I checked the posts, and I don't see anywhere near enough quality content to publish something called a Sequence, without cheapening the previous essays and what 'The Sequences' means in a LessWrong context.
I'm curious about negative or neutral endorsements. That is, I'm going through and looking at posts and thinking "should this be in the review? Why or why not?", and sometimes the answer comes back "no" for somewhat interesting reasons.
The example that prompted this question is Write a Thousand Roads to Rome. It's a clear statement of an important pedagogical point, but it's an exhortation to action that I don't think moved the community all that much (from my vantage point). If I want people to read it now, it's more because "hey, here's some advice we st
...Personal/meta/process note:
I've particularly liked looking for posts to nominate, because it's revealed to me ideas that I now think should inform my thinking, but did not at the time. As such, it's somewhat sad that these posts are (as I understand it) not the sort of things I should nominate for the "Best of 2018", and I wish I had another way to signal-boost them, perhaps by nominating them for "Under-rated of 2018". (I guess I could just comment on them, but that doesn't seem like the sort of thing comments are for).
The link to the 2018 posts sorted by karma is not working correctly for me; it redirects me to /allPosts for some reason.
For what it's worth, I bid for the review prizes to be based off of people voting for which reviews were useful. The alternatives, and why I think they're worse:
I alluded to this in this comment, but wanted to put it a bit more clearly:
I think it makes sense to think of The 2018 Review as like "an academic journal", where you submit ideas, and if the ideas seem valuable they get included in a curated work – but not a work that everyone is expected to have read.
By contrast, Rationality A-Z is more like "a textbook", which is foundational to the field. My current best guess it'll make sense for next year's review process to include considering which things make sense to add to a sequence that's similar in scope to R
...I think I have enough karma, but I can't figure out where the nomination button is. Could someone share a screenshot?
In the review phase, is the intent to display the number of nominations each post received (which will impact which posts get reviewed), or not (which would withhold information I'd find useful for turning the sufficiently-nominated posts into a reading list)?
It would be nice if you could link your "Best of 2018 sequence" text to the actual results of this process...
I am assuming that this outcome was actually reached?
Searching in the searchbox for "best of" gets me best quotes and this article, but not the "best of 2018" sequence.
LessWrong is currently doing a major review of 2018 — looking back at old posts and considering which of them have stood the test of time. It has three phases:
Authors will have a chance to edit posts in response to feedback, and then the moderation team will compile the best posts into a physical book and LessWrong sequence, with $2000 in prizes given out to the top 3-5 posts and up to $2000 given out to people who write the best reviews.
Helpful Links:
This is the first week of the LessWrong 2018 Review – an experiment in improving the LessWrong Community's longterm feedback and reward cycle.
This post begins by exploring the motivations for this project (first at a high level of abstraction, then getting into some more concrete goals), before diving into the details of the process.
Improving the Idea Pipeline
In his LW 2.0 Strategic Overview, habryka noted:
Over the past couple years, much of my focus has been on the early-stages of LessWrong's idea pipeline – creating affordance for off-the-cuff conversation, brainstorming, and exploration of paradigms that are still under development (with features like shortform and moderation tools).
But, the beginning of the idea-pipeline is, well, not the end.
I've written a couple times about what the later stages of the idea-pipeline might look like. My best guess is still something like this:
I still have a lot of uncertainty about the right way to go about a review process, and various members of the LW team have somewhat different takes on it.
I've heard lots of complaints about mainstream science peer review: that reviewing is often a thankless task; the quality of review varies dramatically, and is often entangled with weird political games.
Meanwhile: LessWrong posts cover a variety of topics – some empirical, some philosophical. In many cases it's hard to directly evaluate their truth or usefulness. LessWrong team members had differing opinions on what sort of evaluation is most useful or practical.
I'm not sure if the best process is more open/public (harnessing the wisdom of crowds) or private (relying on the judgment of a small number of thinkers). The current approach involves a mix of both.
What I'm most confident in is that the review should focus on older posts.
New posts often feel exciting, but a year later, looking back, you can ask whether a post has actually become a helpful intellectual tool. (I'm also excited about the idea that, in future years, the process could include reconsidering previously-reviewed posts, if there's been something like a "replication crisis" in the intervening time.)
Regardless, I consider the LessWrong Review process to be an experiment, which will likely evolve in the coming years.
Goals
Before delving into the process, I wanted to go over the high level goals for the project:
1. Improve our longterm incentives, feedback, and rewards for authors
2. Create a highly curated "Best of 2018" sequence / physical book
3. Create common knowledge about the LW community's collective epistemic state regarding controversial posts
Longterm incentives, feedback and rewards
Right now, authors on LessWrong are rewarded essentially by comments, voting, and other people citing their work. This is fine, as things go, but has a few issues:
The aim of the Review is to address those concerns by:
A highly curated "Best of 2018" sequence / book
Many users don't participate in the day-to-day discussion on LessWrong, but want to easily find the best content.
To those users, a "Best Of" sequence that includes not only posts that seemed exciting at the time, but also distilled reviews and followup, seems like a good value proposition. Meanwhile, it helps move the site away from being a time-sensitive newsfeed.
Common knowledge about the LW community's collective epistemic state regarding controversial posts
Some posts are highly upvoted because everyone agrees they're true and important. Other posts are upvoted because they're more like exciting hypotheses. There's a lot of disagreement about which claims are actually true, but that disagreement is crudely measured in comments from a vocal minority.
The end of the review process includes a straightforward vote on which posts seem (in retrospect) useful, and which seem "epistemically sound". This is not the end of the conversation about which posts are making true claims that carve reality at its joints, but my hope is for it to ground that discussion in a clearer group-epistemic state.
Review Process
Nomination Phase
1 week (Nov 20th – Dec 1st)
Review Phase
4 weeks (Dec 1st – Dec 31st)
Voting Phase
1 Week (Jan 1st – Jan 7th)
Posts that got at least one review proceed to the voting phase. The details of this are still being fleshed out, but the current plan is:
Books and Rewards
Public Writeup / Aggregation
Soon afterwards (hopefully within a week), the votes will all be publicly available. A few different aggregate statistics will be available, including the raw average, and potentially some attempt at a "karma-weighted average."
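As an illustration of those two aggregates, here is a minimal sketch assuming each vote is a hypothetical (voter_karma, vote_value) pair; the square-root weighting is just one plausible choice for "karma-weighted", not a scheme the post specifies.

```python
# Illustrative sketch of the two aggregates mentioned above: a raw average of
# votes on a post, and a "karma-weighted" average where each voter's vote is
# weighted by some function of their site karma. The data and the sqrt
# weighting are assumptions for illustration only.
import math

votes = [
    # (voter_karma, vote_value) -- hypothetical data
    (12000, 4),
    (300, 1),
    (50, -2),
]

raw_average = sum(v for _, v in votes) / len(votes)

weights = [math.sqrt(k) for k, _ in votes]
karma_weighted_average = (
    sum(w * v for w, (_, v) in zip(weights, votes)) / sum(weights)
)

print(f"raw average: {raw_average:.2f}")
print(f"karma-weighted average: {karma_weighted_average:.2f}")
```

The point of showing both is that they can diverge noticeably when a few high-karma users vote against the crowd, which is presumably why both statistics would be reported.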
Best of 2018 Book / Sequence
Sometime later, the LessWrong moderation team will put together a physical book, (and online sequence), of the best posts and most valuable reviews.
This will involve a lot of editor discretion – the team will essentially take the public review process and use it as input for the construction of a book and sequence.
I have a lot of uncertainty about the shape of the book. I'm guessing it'd include anywhere from 10-50 posts, along with particularly good reviews of those posts, and some additional commentary from the LW team.
Note: This may involve some custom editing to handle things like hyperlinks, which may work differently in printed media than online blogposts. This will involve some back-and-forth with the authors.
Prizes