
paper-machine comments on [meta] Policy for dealing with users suspected/guilty of mass-downvote harassment? - Less Wrong Discussion

28 Post author: Kaj_Sotala 06 June 2014 05:46AM


Comment author: [deleted] 06 June 2014 02:29:17PM *  12 points [-]

Specifically:

By applying our methodology to four large online news communities for which we have complete article commenting and comment voting data (about 140 million votes on 42 million comments), we discover that community feedback does not appear to drive the behavior of users in a direction that is beneficial to the community, as predicted by the operant conditioning framework. Instead, we find that community feedback is likely to perpetuate undesired behavior. In particular, punished authors actually write worse in subsequent posts, while rewarded authors do not improve significantly.

In a footnote, they discuss what they meant by "write worse":

One important subtlety here is that the observed quality of a post (i.e., the proportion of up-votes) is not entirely a direct consequence of the actual textual quality of the post, but is also affected by community bias effects. We account for this through experiments specifically designed to disentangle these two factors.

They measure post quality from textual evidence by running a Mechanical Turk task on 171 comments and using that data to train a binomial regression model. So cool!
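A minimal sketch of that quality-classification step, assuming a binomial (logistic) regression fitted to crowd-sourced labels. The features and labels below are invented for illustration; the paper's actual feature set is not reproduced here.

```python
import math

# Crowd-sourced quality labels are used to fit a binomial (logistic)
# regression over text-derived features. All data here is made up.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Fit weights (plus a bias term) by gradient descent on log-loss."""
    n_features = len(X[0])
    w = [0.0] * (n_features + 1)  # last entry is the bias
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + w[-1]
            err = sigmoid(z) - yi
            for j in range(n_features):
                w[j] -= lr * err * xi[j]
            w[-1] -= lr * err
    return w

def predict_quality(w, x):
    """Return P(comment is 'good') under the fitted model."""
    z = sum(wj * xj for wj, xj in zip(w, x)) + w[-1]
    return sigmoid(z)

# Hypothetical features: [word count / 100, readability score]
X = [[0.5, 0.8], [0.1, 0.2], [0.9, 0.9], [0.2, 0.1]]
y = [1, 0, 1, 0]  # Turker majority label: good (1) / bad (0)
weights = fit_logistic(X, y)
```

The point of the setup is that quality is predicted from the text alone, so it can be compared against vote-based quality to disentangle the "community bias effects" the footnote mentions.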

When comparing the fraction of upvotes received by a user with the fraction of upvotes given by a user, we find a strong linear correlation. This suggests that user behavior is largely "tit-for-tat".... However, we also note an interesting deviation from the general trend. In particular, very negatively evaluated people actually respond in a positive direction: the proportion of up-votes they give is higher than the proportion of up-votes they receive. On the other hand, users receiving many up-votes appear to be more "critical", as they evaluate others more negatively.
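The "tit-for-tat" claim above is, at bottom, a linear correlation between two per-user fractions. A minimal check of that kind, with made-up per-user data, might look like:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-user data: fraction of up-votes received vs. given.
# Note the deviation at the low end: the most negatively rated user
# still gives a higher fraction of up-votes than they receive.
received = [0.2, 0.4, 0.6, 0.8, 0.9]
given    = [0.35, 0.45, 0.6, 0.75, 0.85]
```

On data like this the correlation is strongly positive, which is the sense in which "tit-for-tat" is meant, while the gap between `given` and `received` at the low end illustrates the deviation the authors describe.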

Incredibly interesting article. Must read.

EDIT: Consider myself updated. Therefore, I believe downvotes must be destroyed.

Comment author: Lumifer 06 June 2014 03:44:42PM 9 points [-]

The main function of downvotes in LW is NOT to re-educate the offender. Its main function is to make the content which has been sufficiently downvoted effectively invisible.

If you eliminate the downvotes, what will replace them to prune the bad content?

Comment author: TylerJay 06 June 2014 04:00:15PM *  11 points [-]

Well, if this is really the goal, then maybe disentangle downvotes from both post/comment karma and personal karma while leaving the invisibility rules in place? Make it more of a "mark as non-constructive" button that if enough people hit it, the post becomes invisible. If we want to make it more comprehensive, it could be made to weigh these votes against upvotes to make the show/hide decision.
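A sketch of the mechanism proposed above: flags affect only visibility, never karma, and a comment is hidden once flags both reach a threshold and sufficiently outweigh upvotes. The threshold and ratio here are illustrative, not actual LW values.

```python
# "Mark as non-constructive" visibility rule, decoupled from karma.
# flag_threshold and ratio are hypothetical tuning parameters.

def is_visible(upvotes, non_constructive_flags,
               flag_threshold=5, ratio=2.0):
    """A comment stays visible unless flags reach the threshold AND
    flags are not outweighed by upvotes (ratio * upvotes)."""
    if non_constructive_flags < flag_threshold:
        return True
    return non_constructive_flags < ratio * upvotes
```

Under this rule a handful of flags never hides a well-upvoted comment, which is the "weigh these votes against upvotes" refinement in the comment above.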

Comment author: Lumifer 06 June 2014 04:17:29PM 2 points [-]

Could be done, though it makes karma even more irrelevant to anything.

Comment author: [deleted] 06 June 2014 03:58:29PM 1 point [-]

The main function of downvotes in LW is NOT to re-educate the offender. Its main function is to make the content which has been sufficiently downvoted effectively invisible.

Negative externalities.

If you eliminate the downvotes, what will replace them to prune the bad content?

Something else? The above study is sufficient evidence for me (and hopefully others) to start finding another solution.

Comment author: Lumifer 06 June 2014 04:13:00PM *  9 points [-]

Negative externalities.

I am aware of the concept. What exactly do you mean?

The above study is sufficient evidence for me

It says "This paper investigates how ratings on a piece of content affect its author's future behavior." I don't think LW should be in the business of re-educating its users to become good 'net citizens. I'm more interested in effective filtering of trolling, stupidity, aggression, drama, dick waving, drive-by character assassination, etc. etc.

It's not like the observation that downvoting a troll does not magically convert him into a hobbit is news.