paper-machine comments on [meta] Policy for dealing with users suspected/guilty of mass-downvote harassment? - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (239)
Well, here I am again, this time providing a paper backing up my claim that having a downvote mechanism at all is just pure poison.
It doesn't make any sense for this type of community. This isn't Digg. We're not trying to rate content so an algorithm can rank it as a news aggregation service.
Look at Slate Star Codex, where everybody is spending their time now - no aversive downvote mechanism, relaxed, cordial atmosphere, extremely minimal moderation. Proof of concept.
Just turn off the downvote button for one week and if LessWrong somehow implodes catastrophically ... I'll update.
Specifically:
In a footnote, they discuss what they meant by "write worse":
They measure post quality from textual evidence by running a Mechanical Turk study on 171 comments and using the resulting labels to train a binomial regression model. So cool!
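As a rough sketch of what that kind of pipeline looks like (this is not the paper's code, and the features and labels below are made-up stand-ins for the crowd-sourced quality judgments), a binomial (logistic) regression can be fit to rater labels with plain gradient descent:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(features, labels, lr=0.5, epochs=2000):
    """Fit a binomial (logistic) regression by gradient descent.
    features: list of per-comment feature vectors
    labels:   0/1 quality judgments (e.g. from Mechanical Turk raters)"""
    n_feat = len(features[0])
    w = [0.0] * n_feat
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of log-loss w.r.t. the logit
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# Hypothetical features: [word_count / 100, typo_rate]; 1 = rated "good".
X = [[0.5, 0.0], [1.2, 0.1], [0.2, 0.6], [0.1, 0.8], [0.9, 0.05], [0.3, 0.7]]
y = [1, 1, 0, 0, 1, 0]

w, b = train_logistic(X, y)

def pred(x):
    """Predicted probability that a comment is 'good'."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
```

The real study of course uses richer text features and far more data; the point is just that a handful of noisy human labels is enough to train a classifier that scores unlabeled comments.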
Incredibly interesting article. Must read.
EDIT: Consider me updated. Therefore, I believe downvotes must be destroyed.
The main function of downvotes on LW is NOT to re-educate the offender; it is to make sufficiently downvoted content effectively invisible.
If you eliminate the downvotes, what will replace them to prune the bad content?
Well, if that is really the goal, then maybe disentangle downvotes from both post/comment karma and personal karma while leaving the invisibility rules in place. Make it a "mark as non-constructive" button: if enough people hit it, the post becomes invisible. To make it more comprehensive, those votes could be weighed against upvotes when making the show/hide decision.
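The proposed rule is simple enough to state in a few lines. A minimal sketch, assuming a made-up hide threshold (the function name and the margin of 5 are illustrative, not any real LW setting):

```python
def is_visible(upvotes, nonconstructive_votes, hide_threshold=5):
    """Hide a comment once 'mark as non-constructive' flags outweigh
    upvotes by a fixed margin. Karma is never touched either way;
    the threshold of 5 is a placeholder, not a real site parameter."""
    return nonconstructive_votes - upvotes < hide_threshold

# A comment with 2 upvotes stays visible until it collects 7 flags.
```

Because the flag count is offset by upvotes, a handful of drive-by flags can't hide a comment the rest of the community endorses.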
Could be done, though it makes karma even more irrelevant to anything.
Negative externalities.
Something else? The above study is sufficient evidence for me (and hopefully others) to start finding another solution.
I am aware of the concept. What exactly do you mean?
It says "This paper investigates how ratings on a piece of content affect its author's future behavior." I don't think LW should be in the business of re-educating its users to become good 'net citizens. I'm more interested in effective filtering of trolling, stupidity, aggression, drama, dick waving, drive-by character assassination, etc. etc.
It's not like the observation that downvoting a troll does not magically convert him into a hobbit is news.