What about trolls? What about pile-ons?
Trolls: some people are not upset by negative feedback, or even actively seek it out. I think the system could be structured so that this negative feedback would not be rewarding to such people, but it merits consideration, since backfire is at least in principle possible.
Pile-ons: There are documented cases of organized downvote brigades on various platforms, who effectively suppress speech simply because they disagree with it. Now, I wouldn't object to a brigade of mathematicians on a proof wiki downvoting any pages they disagreed with and thereby censoring the pages or driving away their authors; but in most other cases, I think such brigades would be a problem. Again, you might be able to design a version that successfully discouraged such brigades (for instance: have "number of downvotes", "correlation with average downvoter", and "correlation with most-similar downvoter" all visible in someone's profile?), but it merits thought.
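To make the parenthetical concrete, here is a minimal sketch of how those three profile signals could be computed from a user-by-post downvote matrix. Everything here (the function names, the 0/1 matrix representation, the use of Pearson correlation for "correlation") is my own illustrative assumption, not anything specified in the post:

```python
# Hypothetical sketch: compute the three brigade signals mentioned above
# from a dict mapping each user to a list of 0/1 downvote flags, one per post.

def pearson(a, b):
    """Plain Pearson correlation; returns 0.0 when either vector is constant."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb) if sa and sb else 0.0

def profile_signals(user, downvotes):
    """Return (downvote count, corr. with average downvoter,
    corr. with most-similar downvoter) for one user's profile."""
    mine = downvotes[user]
    others = {u: v for u, v in downvotes.items() if u != user}
    n_posts = len(mine)
    # 1. raw number of downvotes cast
    count = sum(mine)
    # 2. correlation with the "average downvoter": the mean downvote
    #    rate of everyone else, per post
    avg = [sum(v[i] for v in others.values()) / len(others)
           for i in range(n_posts)]
    corr_avg = pearson(mine, avg)
    # 3. correlation with the single most-similar other downvoter
    corr_max = max(pearson(mine, v) for v in others.values())
    return count, corr_avg, corr_max
```

A lockstep brigader would show a near-1.0 "most-similar downvoter" correlation even when their raw downvote count looks unremarkable, which is what makes displaying all three together interesting.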
I'm sure you could think of a dozen solutions to fill this out into a well-defined system if you spent 5 minutes thinking about it.
Zuegel's point is that you want some people to be able to express implicit or tacit disapproval in a less legible way than leaving a public criticism. To continue the dinner party analogy: you don't go to a dinner party with 10 people chosen at random from billions; they are your friends, relatives, coworkers, people you look up to, famous people, etc. A look of disapproval or a conspicuous silence from them is very different from context collapse letting a bunch of Twitter bluechecks swarm your replies to crush dissent. So the question is whom to choose.
You could, for example, just disable these implicit downvotes for anyone you do not 'follow', or anyone you have not 'liked' frequently. You could have explicit opt-in where you whitelist specific accounts to enable feedback. You could borrow from earlier schemes for soft-voting or weighting of votes like Advogato: votes are weighted by the social graph, and the more disconnected someone is from you, the less their anonymous downvote counts (falling off rapidly with distance).
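The last option can be sketched in a few lines. This is only an illustration of the general "weight falls off with graph distance" idea, not Advogato's actual trust metric (which uses network flow); the decay constant and the undirected-follow-graph representation are assumptions of mine:

```python
from collections import deque

def graph_distance(graph, src, dst):
    """BFS hop count from src to dst in a follow graph
    (dict: user -> list of connected users); None if disconnected."""
    if src == dst:
        return 0
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        node, d = frontier.popleft()
        for nbr in graph.get(node, ()):
            if nbr == dst:
                return d + 1
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, d + 1))
    return None

def vote_weight(graph, me, voter, decay=0.5):
    """A voter's downvote counts decay**distance; total strangers
    (no path at all) count for nothing."""
    d = graph_distance(graph, me, voter)
    return 0.0 if d is None else decay ** d
```

So with `decay=0.5`, a direct connection's downvote counts half, a friend-of-a-friend's a quarter, and a completely disconnected account's not at all, which is the "falling off rapidly with distance" behavior described above.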
My first thought for LW was “post author plus anyone in the comments plus anyone over 1k karma” as the default.
It seems to me that LessWrong already has a downvote button, and that button is effectively used to drive out content the community doesn't want to see, in the way described.
So far I have found the LW voting behavior instructive and reasonable. It seems like LWers vote on your epistemology rather than on the content of your post (as happens on Reddit). It's very cool.
I don't think the post was about LessWrong specifically (at all); think Twitter or Facebook or random blog comments.
Here on this site, yes, both downvotes and the absence of upvotes are strong, mostly-legible signals.
People only receive feedback from people that are engaged enough to give it.
On The Internet, that's generally true. But that's not so true IRL, face-to-face. And the point of the post is that we could engineer feedback-by-default like the reactions people mostly can't help having when they're visible (or audible) in small groups.
Interesting - I had previously been thinking about the problems that arise from there being so few approving looks on the Internet (upvotes, "likes", etc. are a step in that direction, but still not the same). It hadn't occurred to me to consider the reverse as well.
It's a great post, and has a really solid UI idea in the footnotes.
The LW team has been thinking about building private responses like this for a while, but in comment form. Buttons that give more constrained private info are very interesting...