
Lumifer comments on [meta] Policy for dealing with users suspected/guilty of mass-downvote harassment? - Less Wrong Discussion

Post author: Kaj_Sotala 06 June 2014 05:46AM · 28 points




Comment author: Lumifer 06 June 2014 03:44:42PM 9 points

The main function of downvotes in LW is NOT to re-educate the offender. Its main function is to make the content which has been sufficiently downvoted effectively invisible.

If you eliminate the downvotes, what will replace them to prune the bad content?

Comment author: TylerJay 06 June 2014 04:00:15PM * 11 points

Well, if that really is the goal, then maybe disentangle downvotes from both post/comment karma and personal karma while leaving the invisibility rules in place? Make it more of a "mark as non-constructive" button: if enough people hit it, the post becomes invisible. To make it more comprehensive, those votes could be weighed against upvotes when making the show/hide decision.
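The rule being proposed could be sketched roughly as follows. This is a hypothetical illustration, not LW's actual mechanism: the function name, parameters, and threshold are all invented, and the point is only that "non-constructive" flags drive visibility without touching any karma total.

```python
# Hypothetical sketch of the decoupled visibility rule described above:
# "non-constructive" flags are weighed against upvotes to decide whether a
# comment is shown, but neither number feeds into post or personal karma.
# The threshold value is arbitrary, chosen purely for illustration.

def is_visible(upvotes: int, nonconstructive_flags: int,
               hide_threshold: int = 4) -> bool:
    """A comment stays visible until flags outweigh upvotes by the threshold."""
    return nonconstructive_flags - upvotes < hide_threshold

# With the threshold at 4, a comment with 2 upvotes is hidden only once it
# has collected 6 or more "non-constructive" flags.
```

The design choice here is that flags are a pure filtering signal: a heavily flagged comment disappears from view, but its author's karma is untouched, which is exactly the separation of "pruning" from "punishment" under discussion.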

Comment author: Lumifer 06 June 2014 04:17:29PM 2 points

Could be done, though it makes karma even more irrelevant to anything.

Comment author: [deleted] 06 June 2014 03:58:29PM 1 point

> The main function of downvotes in LW is NOT to re-educate the offender. Its main function is to make the content which has been sufficiently downvoted effectively invisible.

Negative externalities.

> If you eliminate the downvotes, what will replace them to prune the bad content?

Something else? The above study is sufficient evidence for me (and hopefully others) to start looking for another solution.

Comment author: Lumifer 06 June 2014 04:13:00PM * 9 points

> Negative externalities.

I am aware of the concept. What exactly do you mean?

> The above study is sufficient evidence for me

It says "This paper investigates how ratings on a piece of content affect its author's future behavior." I don't think LW should be in the business of re-educating its users to become good 'net citizens. I'm more interested in effective filtering of trolling, stupidity, aggression, drama, dick waving, drive-by character assassination, etc. etc.

It's not like the observation that downvoting a troll does not magically convert him into a hobbit is news.