I think one of the ways I would frame a crux-y question here is: when would we prefer to have a low-value comment vs not have that comment?
Ideally every good post would get good comments.
For less than ideal worlds, should we be trying harder to make sure good posts get at least bad comments?
On one hand, I understand wanting a very high quality bar for comments on the site. On the other, I understand (as a researcher) how deeply motivating/validating any kind of interaction is, and how demotivating non-interaction is.
It would be beneficial (I think) to put out guidance that helps people navigate this tradeoff: do I quickly leave a bad/low-effort comment, or do I not say anything at all?
Great question. IMO it's probably worth leaving what you think may be a bad comment rather than no comment at all. Sometimes what we assume is obvious or low-value turns out to be exactly the feedback or fresh perspective that someone else needed.
You've made me realize that simply counting comments, as I did in my search criteria, probably doesn't provide enough signal though. The first comment may still be bad from the perspective of the original author/researcher, or not the kind of feedback they really needed, or they may simply need more of it.
But we could promote this "just go for it" attitude for initial comments as a community norm in combination with this suggestion from Jon Garcia:
What if posts could be flagged by authors and/or the community as needing feedback or discussion? This could work something like pinning to the front page, except that the "pinnedness" could decay over time to make room for other posts while getting periodically refreshed.
Then in case the initial rushed comment didn't satisfy the author's need for feedback, they could simply continue to leave the post flagged as still needing feedback/discussion.
It would be easy to forget to remove this tag from your posts. So every time a new comment is made, AF/LessWrong should probably prompt the author whether the tag is still necessary, or automatically remove the tag and force the author to re-add it if they still need it.
Additional incentives may be useful to make sure people don't clutter up the tag with posts that just-kinda-sorta-would-be-nice to have more comments on. Maybe an author is limited to having 2-3 posts with this tag at a time. Or it could use some kind of bounty mechanism: comments submitted on a post with the "needs feedback" tag earn 2x karma, with the extra karma donated from the post's author.
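To make the bounty idea concrete, here is a minimal sketch of how the karma transfer could work. All names here are hypothetical illustrations, not any real LessWrong API; I'm assuming the "donation" means the bonus half comes out of the author's own karma balance, and that the transfer is skipped if the author can't cover it.

```python
# Hypothetical sketch of the karma-bounty mechanism (not real LW code):
# a comment on a post tagged "needs feedback" earns double karma, with
# the extra half transferred from the post author's karma balance.

def award_comment_karma(base_karma: int, post_tagged: bool,
                        author_balance: int) -> tuple[int, int]:
    """Return (karma awarded to commenter, updated author balance)."""
    if post_tagged and author_balance >= base_karma:
        # 2x karma: the bonus half is donated by the post's author.
        return base_karma * 2, author_balance - base_karma
    # Untagged post, or author can't cover the bounty: normal karma.
    return base_karma, author_balance

# Example: a 5-karma comment on a tagged post; the author has 20 karma.
karma, balance = award_comment_karma(5, post_tagged=True, author_balance=20)
# karma == 10, balance == 15
```

The balance check is one way to prevent authors from offering bounties they can't pay, which also naturally limits how many posts one author can keep tagged at once.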
I remember seeing that post and really wanting to look into it further, but the dense, novel math notation made me postpone that until I forgot about it. (We also had a new baby recently, and I'm working on responding to reviewer comments on a paper I'm trying to publish, so that wasn't the only reason.) It could certainly have helped me grasp the gist of their work if someone had posted a comment at the time summarizing what they got out of it.
What if posts could be flagged by authors and/or the community as needing feedback or discussion? This could work something like pinning to the front page, except that the "pinnedness" could decay over time to make room for other posts while getting periodically refreshed.
I don't know what algorithm is used for sorting front-page posts (some combination of the post's recency, its comments' recency, and the karma score it has received?), but you could add an extra term that has this "jump and decay" behavior. Perhaps the more community members flag a post for discussion and feedback, the more often its front-page status gets refreshed. And the more comments it receives, the closer the coefficient on this term goes to 0.
Thus, its position in the sorted queue of posts would become like that of a typical post once it has actually gotten feedback (or the author has indicated that the feedback is satisfactory). But it would keep coming back to the community's attention automatically until then.
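A rough sketch of what that extra ranking term could look like, assuming an exponential decay with a tunable half-life. This is purely illustrative; the function names, the half-life, and the `flags / (1 + comments)` coefficient are all my own assumptions, not the actual LessWrong sorting algorithm.

```python
# Illustrative sketch of a "jump and decay" boost term for front-page
# ranking (not the real LW algorithm). The boost jumps when the post is
# flagged, decays over time, and shrinks toward 0 as comments arrive.

def needs_feedback_boost(hours_since_flag: float, num_flags: int,
                         num_comments: int,
                         half_life_hours: float = 48.0) -> float:
    """Extra score added to a post's front-page ranking."""
    decay = 0.5 ** (hours_since_flag / half_life_hours)   # fades over time
    coefficient = num_flags / (1 + num_comments)          # -> 0 as comments come in
    return coefficient * decay

def front_page_score(base_score: float, boost: float) -> float:
    # base_score stands in for the usual recency/karma ranking term.
    return base_score + boost

# A freshly flagged post (3 flags, 0 comments) gets a boost of 3.0;
# after one 48-hour half-life that halves to 1.5, and each new comment
# shrinks it further, so the post gradually ranks like any other post.
```

Re-flagging would reset `hours_since_flag`, giving exactly the periodic "refresh" behavior described above.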
This may remind you of a project that's underway to provide peer review for posts on the Alignment Forum.
Is that still ongoing?
That said, I strongly agree that I'd love to see more peer review.
Nope, this is dead at the moment. I still think peer review matters a lot, but it's not what I'm focusing on at the moment, and none of the researchers on that project had enough time to invest in these in-depth reviews...
Thanks for posting this. It prompted me to read the post and write this comment: https://www.alignmentforum.org/posts/DreKBuMvK7fdESmSJ/how-deepmind-s-generally-capable-agents-were-trained?commentId=wkyYQj8DRTLsxFQe5#comments
Yesterday on the Weekly Alignment Research Coffee Time call as people were sharing updates on their recent work, Vanessa Kosoy brought up her and Diffractor's post Infra-Bayesian physicalism: a formal theory of naturalized induction which she was interested in getting feedback on.
Vanessa seemed a bit disappointed/frustrated that this post had received no comments yet. I could see why, after learning that it proposes the first formal decision theory realizing naturalized induction, which has been an open problem in alignment for years. It had also been over 5 weeks since she posted it.
So why hasn't anyone commented? It's possible folks have simply been busy, with the end of the year and the holidays. It may also have to do with the fact that the post includes accompanying proofs and invokes some complex-looking math; such posts take more time to digest, and only a subset of people on this forum and LessWrong are capable of doing so.
But it's also possible that the post was forgotten. Even though it may be important, since it's no longer on the front page of the Alignment Forum and hasn't been referenced anywhere else, nobody would think to look at it again unless it happened to come up in one of their searches or they were prompted about it. In other words, it may have just "fallen through the cracks".
This got me wondering about what other posts on the Alignment Forum might have fallen through the cracks. I did a quick search on the forum to see if I could find out. My search criteria were the following:
Here are the posts that came up in my search. I only went through roughly 200 posts, so consider this list exemplary rather than exhaustive:
To be sure, I don't expect all of these posts to necessarily be super important and in dire need of comments. There could be other reasons a post met these criteria, for example that people liked it but it was relatively straightforward and self-contained.
But if there's even a 10% chance that a post was forgotten, and that its author is blocked or slowed in their research progress by not receiving any feedback, then periodically resurfacing such posts seems like an easy win for the community. Authors may be too shy, or too worried about looking self-promotional, to repost their own work.
Do you know of other posts that you think are important and could benefit from feedback but didn't receive any, or enough? This can include your own work. Mention them in the comments below.
I expect this problem to only get worse as the Alignment Forum grows. So I wanted to raise this issue and start a conversation about it. I or someone else could periodically do a post like this to help resurface posts that have fallen through the cracks. Or perhaps there are other more systematic ways we could address this problem as the community scales.
Note: This may remind you of a project that's underway to provide peer review for posts on the Alignment Forum. I consider this to be related but distinct from what I'm talking about here. In that project, Adam, Joe and Jérémy are providing in-depth academic-style peer review for select posts on AF.
I think this is great, but here I'm more concerned about potentially promising posts that haven't gotten any feedback at all. My assumption is that a lot of posts could benefit from having at least one person read through and comment based on their initial impressions - and that the community is large enough to provide this - even if we don't have capacity yet to provide them all with full-fledged peer review.