In the spirit of writing down conversations, this is a rough summary of some recent conversations with Oliver Habryka and Zvi Mowshowitz about how and why to implement peer review on Less Wrong.

This is not meant to argue for a point or outline a specific plan, just to highlight a bunch of the thoughts we’re currently thinking about. I haven’t put much effort towards polishing it, but it’s here for people who want to follow along with our thought process.

Curation, Canon and Common Knowledge

If 90% of the people around have the idea, when I’m not confident that 100% do then I often explain the basic idea for everyone. This often costs a lot of time.
– Ben Pace on Common Knowledge

Right now we curate about 3 posts a week. This feels about right from the perspective of readers checking in on good content regularly, and authors having a reasonable chance of getting into curated if their posts are good. But it means curation isn’t that strong a signal of quality or importance. A 90th percentile LW post doesn’t necessarily mean “6 months later, we’d expect this idea to still seem important.”

Our intention was for curated to be a fairly big deal and to only curate things we’re confident are “important”, but in practice it seems hard.

We’ve considered either raising standards for Curated, or introducing a new category above it. Early ideas here were renaming “Curated” to “Common Knowledge” with the intent to slightly raise standards and imply that if you want to stay up to date on “what it’s expected you know if you’re participating in LW discourse”, you should read things in the “Common Knowledge” section.

As cute as this name was, a major strike against it was that common knowledge is a useful technical term, and we don’t want to degrade its meaning.

We considered other terms like “Canon”, with slightly different connotations: maybe once every few months we could look back and see which posts (curated or otherwise) seemed like they were likely to enter the longterm rationalsphere lexicon.

I chatted separately with Zvi and Oliver about this idea, and found that we had different intuitions about what "Canon" means.

Canon as History

To Zvi and me, the main association for canon was “famous works you’re expected to have read, not necessarily because they’re the clearest or most rigorous but because of their historical context.” Maybe someone has written a better version of Hamlet, or a better version of Plato’s Republic, but you may still want to read the originals if you’re part of subcultures that value their historical legacy.

In the rationalsphere, Inadequate Equilibria is (sort of) a more formal and rigorous version of Meditations on Moloch, but you might still find value in the poetry, emotional oomph and historical legacy of Meditations, and knowing about it may help you understand a lot of ongoing conversations, since longtime readers will be using chunks of it as shorthand.

Therefore, you might want Meditations on Moloch to be recognized as a part of the Rationalist Canon.

Canon as the Best Reference

Oliver’s take was more like “Canonical status is about which writing you want to be the canonical reference point for a thing,” and you may want this to change over time. (He pointed out that the canonization process of the Bible literally involved deciding which stories seemed important, revising their order, etc.)

Part of the reason Ben wrote his common knowledge post was that the closest thing he could find to a canonical common knowledge introduction was Scott Aaronson’s essay, which involved a tricky logic puzzle that I still personally struggle to understand even after stepping through it carefully over several hours.

There’s value in stepping through that tricky logic puzzle, but meanwhile, common knowledge is a really useful concept that seemed like it could be explained more intuitively. Ben spent several weeks writing a post that he hoped stood a chance of becoming the Canonical Less Wrong Post on Common Knowledge.

Peer Review as Canon, and Upping Our Game

Meanwhile, one problem in Real Science™ is that it’s hard to remove things from canon. Once something’s passed peer review, made its way into textbooks and entered the public zeitgeist… if you have a replication crisis or a paradigm shift, it may be hard to get people to realize the idea is now bogus.

This suggests two things:

  1. Naively, this suggests you need to be really careful about what you allow into Canon in the first place (if using the Canon as Best Reference frame).
  2. You may even want to aspire higher, to create a system where removing things from Canon is more incentivized. This is probably harder.

LessWrong 2.0 is aiming to be a platform for intellectual progress. Oliver and I are optimistic about this because we think LessWrong 1.0 contributed a lot of genuine progress in the fields of rationality, effective altruism, AI safety and x-risk more generally. But while promising, the progress we’ve seen so far doesn’t seem as great as it could be.

In the Canon as History frame, everything in the Sequences should be part of Canon. In the Canon as Best Reference or Peer Review as Canon frames, there are… a lot of sequence posts that might not cut it, for a few reasons:

  • The replication crisis happened, so some bits of evidence are no longer as compelling
  • Some concepts haven’t turned out to be as important after several years of refining instrumental or epistemic rationality
  • Some of the writing just isn’t that great.

Similarly, on Slate Star Codex, there are a lot of posts that introduce great ideas, but in a highly politicized context that makes it more fraught to link to them in an unrelated discussion. It’d be useful to have a canonical reference point that introduces an idea without riling up people who have strong opinions on feminism.

And also meanwhile, other problems in Real Science™ include peer review being:

  • Thankless for the reviewers
  • Intertwined with conferences and journals that come with weird social games, rent-seeking and gatekeeping problems
  • Highly variable in quality
  • Embedded in an academic culture that makes people write in styles that are really hard to understand, without even seeing this as a problem (see Chris Olah’s Research Debt)

So…

...all of these ideas bumping around have me thinking we shouldn’t just be trying to add another threshold of quality control. As long as we’re doing this, let’s try to solve a bunch of longstanding problems in academia, at least in the domains that LessWrong has focused on.

I’ve recently written about making sure LessWrong is friendly to idea generation. I’ve heard many top contributors talk about feeling intense pressure to make sure their ideas are perfect before trying to publish them, and this results in ideas staying locked inside organizations and in-person conversation.

I think LessWrong still needs more features that enable low-pressure discussion of early stage thoughts.

But, if we’re to become a serious platform for intellectual progress, we also need to incentivize high caliber content that is competitive with mainstream science and philosophy – some combination of “as good or better at rigor and idea generation” and “much better at explaining things.”

I think a challenging but achievable goal might be to become something like distill.pub, but for theoretical rationality and the assorted other topics that LessWrongers have ended up interested in.

[Note: a different goal would be to become prestigious in a fashion competitive with mainstream science journals. This seems harder than “become a reliable internal ecosystem for the rationality and effective altruism communities to develop and vet good ideas, without regard for external prestige.”

I’m not even sure becoming prestigious would be useful, and insofar as it is, it seems much better to try to become good and worry about translating that into prestige later. Meanwhile, it’s not a goal I’m personally interested in.]


This is the first of a few posts I have planned on this subject. Some upcoming concepts include:

[Edit: alas, I never wrote these up, although we're still thinking about it a bunch]

  • What is Peer Review for? [The short answers are A) highlighting the best content and B) forcing you to contend with feedback that improves your thinking.]
  • What makes for good criticism at different stages of idea development
  • What tools we might build eventually, and what tools exist now that authors may want to use more. (I think a lot of progress can be made just by changing some of the assumptions around existing tools on LW and elsewhere)
  • How all of this influences our overall design choices for the site

23 comments

After tagging settles down a bit, it may be time to re-visit this question more.

I think LW hasn't yet managed to approach google docs in terms of draft-feedback process. Since I compose all my posts directly on LW, this matters to me (of course I could try to copy/paste). 

GDoc-Like Comments For Drafts

The primary thing here is the commenting-on-highlights interface. It's just so much better for editing!

Probably this should work for published posts as well, facilitating easy private pings of authors for broken links, spelling mistakes, etc. Although there's a question of whether this kind of direct editing feedback from anyone feels aversive and could discourage people.

GDoc-Like Comments for Public Review/Critique

I also think it would be nice if there were some way to associate public comments with specific points in a document, for the purpose of well-organized debate about the points raised in a post. However, I don't know how to make this work without it being (a) pretty aversive and (b) too visually cluttered, while (c) still making sure people can see the point-by-point objections to a post fairly easily.

Argument Diagramming

This is a bit of a stretch, but it sure would be nice if there were some natural argument-diagramming going on. To sketch a possible implementation:

  • Points in posts can be pulled out and associated with questions.
  • Questions can be associated with each other via arguments. E.g., an answer (perhaps a new type of answer, not a text answer) can state that a particular answer would be true if X and Y were true, where X and Y are different answers (to different questions).

The point is not so much that this is a good idea as stated. It's just that some form of argument mapping might really help to map disagreements and ultimately clarify the evidential status of contentious points raised in a post.
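To make this slightly more concrete, here is a minimal sketch of the data model the bullets above imply. All of the class and field names are made up for illustration; nothing like this exists on LW today:

```python
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class Claim:
    """A point pulled out of a post and associated with a question."""
    post_id: str
    excerpt: str       # the highlighted span in the post
    question_id: str   # the question this claim gets reframed as

@dataclass
class Answer:
    """A candidate answer to a question; it may depend on other answers."""
    answer_id: str
    question_id: str
    text: str
    # "this answer would be true if all of these (answers to other
    # questions) were true"
    depends_on: List[str] = field(default_factory=list)

def open_premises(answer: Answer, accepted: Set[str]) -> List[str]:
    """List the premises of an answer that haven't been accepted yet."""
    return [a for a in answer.depends_on if a not in accepted]
```

The interesting part would be the UI for navigating the dependency edges, not the bookkeeping, but even something this simple would let you render a crude argument map.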

Post Endorsement

Upvotes have ambiguous meaning. For a while (mainly due to Curi accusing LW of lacking any surface area for falsification, due to the way no one explicitly stands by any principles or any canonical texts as really seriously right) I have been thinking that it would be nice if LW encouraged users to state what they endorse on their homepages. But this would not do very much good without a system for discussing endorsements. 

Let's say for a moment that a post called "The Blindness of History" is really popular but has a big flaw in its argument -- for concreteness let's say it cites a major source as stating the exact opposite of what that source really concludes. People don't notice right away, and like 50 people endorse that post.

Someone notices the problem. Now they need to approach like 50 people and question their endorsements. There needs to be a way to find all the people who endorse a post and do something like that. As things stand, you'd have to search users to find the ones mentioning that post as endorsed on their profile page, and then PM each one.

There could be something good about having a public system like that, which notifies users specifically of challenges to posts which they endorse, and encourages users to respond somehow, perhaps putting an explicit caveat into their endorsement or something.

And of course, these responses need to themselves have responses, etc., encouraging real responses, because if you make a bad argument someone will call you out on it.

Not sure how all of this could possibly work.
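As a toy sketch of just the bookkeeping side (all of these names are invented, and this ignores the hard social questions of whether people would actually respond):

```python
from collections import defaultdict

# post_id -> set of user_ids who publicly endorse it
endorsements = defaultdict(set)
# user_id -> list of pending notification messages
notifications = defaultdict(list)

def endorse(user_id, post_id):
    endorsements[post_id].add(user_id)

def register_challenge(post_id, critique_url):
    """When a substantive challenge to a post appears, ping everyone who
    endorsed the post so they can respond or caveat their endorsement."""
    for user_id in endorsements[post_id]:
        notifications[user_id].append(
            f"A post you endorsed ({post_id}) has been challenged: {critique_url}"
        )

# Hypothetical usage, following the example above:
endorse("user_1", "the-blindness-of-history")
endorse("user_2", "the-blindness-of-history")
register_challenge("the-blindness-of-history", "/posts/some-critique")
print(notifications["user_1"])
```

The bookkeeping is the easy part; the social dynamics around it are what I can't see how to get right.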

GDoc-Like Comments For Drafts

Yep, this seems doable, we're working on it. Not currently a priority (lately it's been tagging for most of the team, and the book for me). I think it's plausibly the next priority.

GDoc-Like Comments for Public Review/Critique

My main issue is the visual clutter and breaking up a person's narrative flow. I think the reading experience of this post is substantially impeded by the regular comments saying "Nope, different thing", which often demotivates continuing to read the main argument, and I wouldn't want to force that onto writers' posts.

Some options that come to mind: authors approving which comments appear inline (otherwise they just appear in the normal comment section); authors choosing a small number of users who can comment inline; things only appearing inline if they reach a karma threshold; things only appearing inline after you've already read the post once (e.g. a button at the end of the post to turn the comments inline). I don't expect any of these ideas to work as stated. Overall I've not played around with it yet, and will probably have better ideas later.

Argument Diagramming

I like your concrete suggestions. I feel more pessimistic about something like 'automatically generated diagrams'; those reliably go very poorly.

Post Endorsement

Something about negotiating lots of public endorsements sounds very gossipy to me in a way that I could imagine being a big mess. I do quite like the idea that when people criticise a post and the critique gets high karma, the people who upvoted (or strong upvoted) the initial post can get a notification. In general "get a notification for substantial discussion of a post I liked" sounds like a notification I'd like to get.

I agree that a post should only become "canon" if it has been public for a while and no convincing counterargument has materialized.

For my AI alignment contest submission, I emailed a bunch of friends asking for feedback prior to submission. That produced a surprisingly high hit rate--I think above 70%. (Getting a soft in-person commitment for feedback prior to emailing might have played a role in some cases.)

Then my submission won $2000, and the top comment on the prize announcement was me asking people for feedback. That hit rate was vastly lower--so far my Medium post has 3 short comments.

This experience causes me to think that the best way to do peer review for blog posts is to email friends/contacts prior to publication and ask for feedback. Maybe you could integrate a workflow like this into the LW interface--for example, instead of "submitting" my post, I privately share a messy draft with someone from the Sunshine Regiment, they give me some thoughts, etc. Something like this could be especially valuable for folks outside the Bay Area who aren't as well-connected as I am.

I'd be especially enthusiastic about sharing drafts with the Sunshine Regiment if they could tell me in advance whether my post would be featured. Maybe I would have submitted my alignment contest entry to LW if I knew it would be featured. As things were, my post was sufficiently heterodox that I was worried about reflexive downvoting, and I wanted the contest organizers to judge it on its merits.

More broadly, I wonder if there's a sense in which downvoting is just a bad fit for blog posts. An author can spend weeks laboring over a post, then have someone downvote it after reading the first paragraph--this feels imbalanced. Even reddit mostly just votes on comments & links. LW is the only group blog I know of where post voting is a big thing. I think I'd rather submit to the considered judgement of one or two respected individuals, who will communicate with me & explain their reasoning, than the whims of the masses. (Maybe this "gatekeeper" person should be an anonymous reviewer who is not the same as my "colleague" from the Sunshine Regiment mentioned above--the same way the professor who helps me write a paper will not decide if it gets accepted to a journal or not.)

Eliezer wrote a thing on Facebook once about how you need 4 layers of feedback / criticism or else everything is terrible. I don't want to find it because finding things on Facebook is terrible. Does anyone know if it's written down in a form that's at all feasible to find / link to?

Edit: Here it is.

Alyssa Vance wrote it up. Yep, seems super relevant. If you search the Rational Conspiracy blog for it you'll probably find it.

Also I link to it in Meta-tations on Moderation

Thoughts on the unsuitability of adding more thresholds of quality control:

The idea of promoting things up a narrowing hierarchy falls into one's lap as an easy fix. I don't think it will solve the problem, though, that a good initial proposal of an idea just does not thrive on the same metrics that the write-up of common context needs to. A first draft is not just a shoddy version of a final draft; it actually does something different than the final copy. In the progression from idea generation to canon, someone has to do the transformation work. The same person could do it; general skill / ability over the whole progression exists, and I genuinely hope we find a way to nurture it in people. But I doubt it makes sense to count on people's skill (and willingness) at each step being transferable.

A first draft is not just a shoddy version of a final draft; it actually does something different than the final copy.

Very much agree with this (it's a part of the followup posts I'm half finished with), although it doesn't seem to me like it's a major reason to have more, fewer, or the same number of thresholds.

My current best guess for where this site should be going is "make idea generation, first/second drafts, and final products all more visible, and decomposed slightly so the same person doesn't necessarily have to do all of them."

Some subcomponents of that are:

  • At different stages, different kinds of criticism are helpful. Gaining skill at delivering each kind of criticism is important. (The site should probably help people to learn which type is appropriate and how to deliver that type well. Although this also probably varies per person and post, so rather than trying to standardize it, it may be better to just let people specify what sort of criticism they're looking for on a per-post basis. Dunno)
  • I think that making each stage public will make people err on the side of getting their ideas out there sooner rather than later – right now I know a lot of smart people who sit on ideas for months or years because they're worried about releasing them, and my sense is this is both because they're worried about looking dumb, and because they're worried about bad/wrong ideas taking hold. My hope is that making blogposts feel more like first drafts and making sure that only ideas that have been pretty rigorously evaluated make it into Canon can help address both concerns.
  • Writing things that are clear does seem like a different skill from idea generation, and we're hoping to develop both UI/functionality and general culture such that people get credit reinforcement for working together to develop ideas.

I think the single best example of the process I'm hoping for looks like Scott Garrabrant's Goodhart Taxonomy post, where people crowdsourced examples that made the post much more clear, those changes were incorporated, and then later someone else reworked it into a formal paper. (In this case I think the paper version is less clear than the blogpost, because part of the deal was rewording it to sound more academic, which I think is bad for clarity but good for looking rigorous/prestigious. That isn't the part I want to capture on LessWrong, but it does seem valuable for interfacing with the outside world, and regardless I was glad to see the collaborative process.)

This is a good point. There are a lot more works with the potential to be canon-worthy than works which immediately belong to the canon. However, I do worry about the potential for something to be lost when someone other than the author transforms the work. Hopefully, the original author is willing to stay involved in the transformation, even if they don't want to do it all themselves.

When it comes to peer review, good peer review isn't just about selecting important papers. An important part of peer review is offering suggestions for improving a post before it gets published for public consumption. When I write posts that I want to be referenced in the future, getting quality feedback to improve them feels important.

Currently, that can be done by circulating a Google Docs draft but there might be a better way to go about it.

Yep, the original title of this essay was actually "Peer Review and Google Docs", and it started going into details of why Google Docs accomplish a lot of the goals of Peer Review. We've considered just saying "y'all should treat Google Doc review as an important part of the LW posting process", but my current take is that it's worth replicating Google Docs' features so they can be more obviously and tightly integrated into the LW ecosystem.

I guess that's actually the 80/20 (maybe 95/5?) of the remaining essay right there, but I will still try to flesh it out fully at some point.

(Started to make a top level comment, but commenting here still felt right for the sake of replying to the "google docs" bit.)

I really like the ideas floating around in the post, and I'd be excited for things to move in this direction.

Having google-doc-comments and a system around them which makes sense does seem like a fairly obvious potential big win here. I'm glad you aren't fixating on that and are exploring the shape of the problem first.

A public comments / private comments distinction may be associated with this; lots of ways that might work out. (Is there a group of "reviewers" who can make private comments? Can they see each other's comments? Who sets such a group? Probably that's too inorganic... but the dynamic of public vs private comments could get weird...)

It seems like you think that the process of things becoming canon should be called 'peer review' and maybe should be similar to or based on the process of peer review as it exists in science. I'm not sure why you think this, and it basically seems like a mistake to me - peer review in science seems either incredibly time consuming or pretty random for papers that are neither brilliant nor terrible.

None of this is meant to be anything like a final plan. But one thing that caused me to title this post "Peer Review" instead of a bunch of other things was to communicate "we are aspiring to be legibly good in a similar way to how science aspires to be legibly good."

Since part of the goal here is "be better than science" we won't necessarily do the same, or even similar, things. But this frames the approach differently than you'd see if our goal was simply to be a discussion forum that highlighted the most interesting or useful posts.

The longterm canon of LW should have particular qualities that distinguish it from other sites. I'm not 100% confident I could nail them down, but some off the cuff guesses:

  • importance
  • clarity
  • having been subjected to criticism and scrutiny
  • ideally, mathematical rigor of a kind that can be reliably, transparently built upon (although this depends on the subject matter)

The ethos of the site should shine through at several levels – it should be apparent in the final output, the underlying process should be visibly part of the output, and the naming / UI / design should reinforce it.

I'm not at all sure that there should be a formal process called "Peer Review", but I did want to orient the process around "create a longterm, rigorous edifice of knowledge that can be built upon."

Even though you say some things in this direction already, I want to harp on the distinction between what we can judge by looking at a post in itself and things which can and should only be judged by the test of time.

I am thinking of a particular essay (not on LW) about how peer review should not judge the significance of a work, only the accuracy, but I can't find it. I think "something like that" is central to your point, since you

  • want something like a retrospective judgement about which essays have stood the test of time
  • also want to have more features to lower the bar.

The essay I am thinking of was about a publishing venue which was established with the explicit goal of peer reviewers judging the rigor of a paper but not its impact/significance, since impact cannot be judged ahead of time and is not very relevant to whether work should be published (sort of a generalization of the way replications are harder to publish -- facts are facts, and should be publishable regardless of how flashy they are). The essay's author was complaining that a paper of theirs was rejected from that venue for not being of sufficient interest to the readership. The question of impact had been allowed to creep back into the reviews.

I think there's a very general phenomenon where a venue starts out "alive" -- able to spontaneously generate ideas -- and then produces some good stuff, which raises the expectations, killing off the spontaneity. This can happen with individuals or groups. Some people start new twitter accounts all the time because there is too much pressure to perform once an account has gotten many followers. Old LW split into Discussion and Main, and then Discussion split off discussion threads to indicate a lowered bar.

Um. I guess I'm not being very solution-oriented here.

The problem is, something seems to implicitly raise people's standards as a place gets good, even if there are explicit statements to the contrary (like the rule stating peer reviewers should not judge impact). This can kill a place over time.

Carefully separating what can be judged in the moment vs only after the fact seems like a part of the solution.

Maybe you want to create a "comfortable sloshing mess" of relatively low signal-to-noise chatter which makes people feel comfortable to post, while also having the carefully determined canon which contains the value.

Obviously the "comfortable sloshing mess" is not distinguished primarily by its being bad -- it should be good, but good along different dimensions than the canon. It should meet high standards of rigor along dimensions that are easy to judge from the text itself and not difficult or off-putting for writers to meet. There should be a "google docs comment peer review" for these aspects. (Maybe not **only** for these aspects, but somehow virtuously aligned with respect to these aspects?)

Have you written any of the upcoming posts yet? If so, can you link them?

Have not, alas. (I should stop promising to do that. :P)

Hmm. I had a second half of an essay that was taking too long to finish but the comments so far are making me think I should have waited, so people had a better idea of how I was seeing a bunch of related principles tie together.

When it comes to making a post about a concept like common knowledge widely read, I'm not sure that curating a list is the best way to go about it.

The alternative is to rely more heavily on hyperlinks. Hyperlinks provide a good way for new people to discover content that's important to know about because it's referenced by other discussions.

Hyperlinks seem to be the main way LW goes about this (both now and previously), and they definitely seem quite important. (This is indeed how I read most of the sequences). But they're fairly bad for being confident in my "completionism" (I have no idea what percentage of the sequences I've read), which is in turn bad for knowing what everyone (or at least most people) has read. It also means that people end up reading things in pretty random order, which means someone who's read, say, half the sequences and a bunch of random posts may get a less than half-solid foundation.

Having said that, I think reading via random hyperlink crawl was more _fun_ than reading R:AZ in order. I'm glad it's an option, but I don't think it should be the only option.

Heavily hyperlinking a post doesn't force the post to actually undergo periodic revision.

Hyperlinks seem analogous to "citations" in science, as a reasonable proxy metric for how important an idea was (i.e., if a lot of people are linking to you while writing other posts, that means your post was a useful building block for other ideas). This is obviously easy to goodhart on, but I actually think it's pretty reasonable as a rough approximation.

It's occurring to me as I write this that I'd actually like to see more mechanics tying in hyperlinking within LessWrong.

  • if an author retracts a post, then all posts on LW linking to that post could get a little red "this post links to a retracted post" notice
  • you could plot out a graph of which essays get linked most within LW articles that are in turn further linked, to generate an approximate list of which ideas are most load-bearing. This could be useful both as a "most important to read for context" list and as a "most important to actually vet the claims, in case The Replication Crisis happened or something" list. Depending on the use case, this would probably want to be a first-pass list that is then manually curated (a rough sketch of what that scoring could look like follows below)
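A rough sketch of what that scoring could look like, treating internal links like citations. This doesn't reflect anything LW actually computes; the link data below is made up, and a real version would want to handle retractions, self-links, and so on:

```python
def load_bearing_scores(links, damping=0.85, iterations=50):
    """links: dict mapping post_id -> list of post_ids it links to.
    Returns posts sorted by a rough PageRank-style score, so posts that
    are linked by other well-linked posts float to the top."""
    posts = set(links) | {t for targets in links.values() for t in targets}
    links = {p: list(links.get(p, [])) for p in posts}  # posts with no known out-links
    score = {p: 1.0 / len(posts) for p in posts}
    for _ in range(iterations):
        new = {p: (1 - damping) / len(posts) for p in posts}
        for src, targets in links.items():
            if targets:
                share = damping * score[src] / len(targets)
                for t in targets:
                    new[t] += share
            else:
                # a post with no outgoing links spreads its score evenly
                for p in posts:
                    new[p] += damping * score[src] / len(posts)
        score = new
    return sorted(score.items(), key=lambda kv: -kv[1])

# Made-up example: Meditations on Moloch is linked by two posts, one of
# which is itself linked, so it ends up at the top of the first-pass list.
links = {
    "meditations-on-moloch": [],
    "inadequate-equilibria": ["meditations-on-moloch"],
    "common-knowledge": ["meditations-on-moloch", "inadequate-equilibria"],
}
print(load_bearing_scores(links))
```

Whatever the exact formula, the output would just be a first pass to hand to human curators, as noted above.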

In science you don't have any journal where all papers are read by everyone, and it still works reasonably efficiently for building on the work of other people.

The main way this is achieved is by putting the responsibility on the reader to go read whatever is referenced when they feel like they need more information.

I think that even if you had a curated list of very important posts, not everybody is going to read them. Such a list could be useful for some people who want more curation to guide their reading, but that won't be everybody.

When it comes to additional features to encourage linking, we might add a list of backlinks within LessWrong to a post. At the top of a post the number of backlinks could be stated, and when a user clicks on the counter it expands and all the backlinks are shown.

If another category is added, that might make the front-page even more complicated?

Regarding canon, that seems like the kind of thing that should only be decided after a certain amount of time has passed as some posts have more staying power than others.

Yeah, part of the thought here was to look back and say "which posts, after 6 months or so, still seem relevant?"

And yes, this would involve redesigning the Front Page in some fashion as opposed to just adding more things.

For the time being, the Recommended Sequences section actually roughly functions as the Canon section (it means that things only end up in "Canon" if they ended up in the context of a sequence, which excludes some things unnecessarily, but isn't actually the worst placeholder solution).