Every now and then, I write an LW comment on some topic and feel that its contents pretty much settle the issue decisively. Instead, the comment seems to get ignored entirely: it gets few votes or none, nobody responds to it, and the discussion generally continues as if it had never been posted.
Similarly, every now and then I see somebody else make a post or comment that they clearly feel is decisive, but which doesn't seem very interesting to me. Either it seems to be saying something obvious, or I don't get its connection to the topic at hand in the first place.
This seems like it would be about inferential distance: either the writer doesn't know the things that make the reader experience the comment as uninteresting, or the reader doesn't know the things that make the writer experience the comment as interesting. So there's inferential silence - a sufficiently long inferential distance that a claim doesn't provoke even objections, just uncomprehending or indifferent silence.
But "explain your reasoning in more detail" doesn't seem like it would help with the issue. For one, we often don't know beforehand when people don't share our assumptions. Also, some of the comments or posts that seem to encounter this kind of a fate are already relatively long. For example, Wei Dai wondered why MIRI-affiliated people don't often respond to his posts that raise criticisms, and I essentially replied that I found the content of his post relatively obvious so didn't have much to say.
Perhaps people could more often comment explicitly when they notice that something a poster seems to consider a big deal doesn't strike them as interesting or meaningful, and briefly explain why? Even a sentence or two might be helpful for the original poster.
If we had developer resources, we'd have buttons on each comment that allowed us to register one of several standard opinions, like "broadly agree" etc. I once again wonder whether hacking on LW would be a high-value activity.
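To make the idea concrete, here is a minimal sketch of what such a reaction feature might boil down to: a small data model plus two operations. This is purely illustrative TypeScript; the reaction labels and all names here are hypothetical and do not reflect the actual LessWrong codebase.

```typescript
// Hypothetical sketch of standard-opinion reactions on comments.
// None of these names come from the real LessWrong codebase.

type StandardReaction =
  | "broadly agree"
  | "broadly disagree"
  | "seems obvious"
  | "don't see the relevance";

interface CommentReaction {
  commentId: string;
  userId: string;
  reaction: StandardReaction;
}

// In-memory store standing in for a real database table.
const reactions: CommentReaction[] = [];

// Record a reaction, replacing any earlier one by the same user
// on the same comment.
function react(commentId: string, userId: string, reaction: StandardReaction): void {
  const existing = reactions.findIndex(
    (r) => r.commentId === commentId && r.userId === userId
  );
  if (existing !== -1) {
    reactions[existing] = { commentId, userId, reaction };
  } else {
    reactions.push({ commentId, userId, reaction });
  }
}

// Tally reactions for display next to a comment.
function tally(commentId: string): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const r of reactions) {
    if (r.commentId === commentId) {
      counts[r.reaction] = (counts[r.reaction] ?? 0) + 1;
    }
  }
  return counts;
}
```

The point of the sketch is that the mechanism itself is simple; the hard part would be choosing a set of standard opinions that actually reduces inferential silence rather than just adding noise.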
I have been practicing intensive automation for a considerable amount of time now, and I feel I could easily provide advice (effectively: lead a team) to make the task of hacking on LessWrong nearly trivial. In that case, there would be no question about the value of the experiment, because it could be conducted so readily.
However, we already have buttons for what you describe in the general upvote and downvote options. The issue is not that we lack such functions, but that we are not using them well; replies that may be highly relevant are being ignored entirely.