
moridinamael comments on [meta] Policy for dealing with users suspected/guilty of mass-downvote harassment? - Less Wrong Discussion

28 Post author: Kaj_Sotala 06 June 2014 05:46AM


Comment author: moridinamael 06 June 2014 02:11:01PM *  10 points [-]

Well, here I am again, this time providing a paper backing up my claim that having a downvote mechanism at all is just pure poison.

It doesn't make any sense for this type of community. This isn't Digg. We're not trying to rate content so an algorithm can rank it as a news aggregation service.

Look at Slate Star Codex, where everybody is spending their time now - no aversive downvote mechanism, relaxed, cordial atmosphere, extremely minimal moderation. Proof of concept.

Just turn off the downvote button for one week and if LessWrong somehow implodes catastrophically ... I'll update.

Comment author: Nornagest 06 June 2014 06:42:48PM *  12 points [-]

I'd rather kill karma entirely than refactor it into an upvote-only system. If you're trying to do anything more controversial than deciding which cat picture is the best, upvote-only systems encourage nasty factional behavior that I don't want to see here: it doesn't matter how many people you piss off as long as you're getting strong positive reactions, so it's in your interests to post divisive content. That in turn leads to cliques and one-upmanship and other unpleasantness. It's a common pattern on social media, for example.

The other failure mode you get from it is lots of content-free feel-good nonsense, but we have strong enough norms against that that I don't think it'd be a problem in the short term.

Comment author: moridinamael 06 June 2014 07:11:00PM 6 points [-]

I'd be fine with that. I feel a bit silly repeating the same arguments, but we're supposed to be striving, as a community, to be, like, the most rational humans, yet the social feedback system we are using was chosen ... because it came packaged with Reddit, and Reddit was chosen as the LessWrong platform because it was the hot thing of its day. There was no clever Quirrell-esque design behind our karma system to bring out the best in us or protect us from the worst in us. It's a relic. Let's be rid of it.

No Karma 2014

Comment author: [deleted] 06 June 2014 02:29:17PM *  12 points [-]

Specifically:

By applying our methodology to four large online news communities for which we have complete article commenting and comment voting data (about 140 million votes on 42 million comments), we discover that community feedback does not appear to drive the behavior of users in a direction that is beneficial to the community, as predicted by the operant conditioning framework. Instead, we find that community feedback is likely to perpetuate undesired behavior. In particular, punished authors actually write worse in subsequent posts, while rewarded authors do not improve significantly.

In a footnote, they discuss what they meant by "write worse":

One important subtlety here is that the observed quality of a post (i.e., the proportion of up-votes) is not entirely a direct consequence of the actual textual quality of the post, but is also affected by community bias effects. We account for this through experiments specifically designed to disentangle these two factors.

They measure post quality from textual evidence alone by running a Mechanical Turk labeling task on 171 comments and using that data to train a binomial regression model. So cool!
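As a rough illustration of that step, the pipeline can be sketched as a tiny binomial (logistic) regression on bigram counts. Everything below is invented for illustration (toy comments, toy crowd labels, a hand-rolled gradient-descent fit); the paper's actual model and features are far richer than this:

```python
# Sketch: fit a binomial regression on bigram counts to predict the
# probability that a comment is judged "good", mimicking the paper's
# crowd-labels -> text-based quality predictor setup. Toy data only.
import math
from collections import Counter

def bigrams(text):
    toks = text.lower().split()
    return Counter(zip(toks, toks[1:]))

# Invented labeled corpus: (comment, fraction of crowd judges rating it good).
labeled = [
    ("this is a thoughtful and well argued point", 0.9),
    ("you are an idiot and wrong", 0.1),
    ("i agree this is a good point", 0.8),
    ("wrong wrong wrong you idiot", 0.05),
]

# Shared vocabulary of bigram features.
vocab = sorted({bg for text, _ in labeled for bg in bigrams(text)})
index = {bg: i for i, bg in enumerate(vocab)}

def featurize(text):
    v = [0.0] * len(vocab)
    for bg, n in bigrams(text).items():
        if bg in index:
            v[index[bg]] = float(n)
    return v

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Plain stochastic gradient descent on the binomial log-likelihood.
w = [0.0] * len(vocab)
b = 0.0
lr = 0.5
for _ in range(500):
    for text, y in labeled:
        x = featurize(text)
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        g = p - y  # gradient of the negative log-likelihood wrt the logit
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]
        b -= lr * g

def predict_quality(text):
    """Predicted probability that crowd judges would rate this comment good."""
    x = featurize(text)
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
```

After training, `predict_quality` plays the role of the paper's text-only quality score: it depends only on the words in the comment, not on who voted on it.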

When comparing the fraction of upvotes received by a user with the fraction of upvotes given by a user, we find a strong linear correlation. This suggests that user behavior is largely "tit-for-tat".... However, we also note an interesting deviation from the general trend. In particular, very negatively evaluated people actually respond in a positive direction: the proportion of up-votes they give is higher than the proportion of up-votes they receive. On the other hand, users receiving many up-votes appear to be more "critical", as they evaluate others more negatively.

Incredibly interesting article. Must read.

EDIT: Consider me updated. Therefore, I believe downvotes must be destroyed.

Comment author: Lumifer 06 June 2014 03:44:42PM 9 points [-]

The main function of downvotes in LW is NOT to re-educate the offender. Its main function is to make the content which has been sufficiently downvoted effectively invisible.

If you eliminate the downvotes, what will replace them to prune the bad content?

Comment author: TylerJay 06 June 2014 04:00:15PM *  11 points [-]

Well, if this is really the goal, then maybe disentangle downvotes from both post/comment karma and personal karma while leaving the invisibility rules in place? Make it more of a "mark as non-constructive" button: if enough people hit it, the post becomes invisible. To make it more comprehensive, it could weigh these votes against upvotes when making the show/hide decision.
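A minimal sketch of such a visibility rule, affecting only whether a post is shown and never touching karma. The threshold values and function names here are invented, not anything from actual LW code:

```python
# Sketch of a flag-based visibility rule: "non-constructive" flags hide a
# post once there are enough of them, unless upvotes outweigh the flags.
# Both constants are arbitrary illustrative choices.
HIDE_FLAG_THRESHOLD = 5   # minimum flags before we even consider hiding
HIDE_RATIO = 2.0          # post stays visible while flags < ratio * upvotes

def is_visible(upvotes: int, flags: int) -> bool:
    """Decide visibility from upvotes and non-constructive flags only."""
    if flags < HIDE_FLAG_THRESHOLD:
        return True                      # too few flags to act on
    return flags < HIDE_RATIO * upvotes  # hide unless upvotes outweigh flags
```

Under these toy numbers, a post with 6 flags and 1 upvote is hidden, while one with 6 flags and 10 upvotes stays visible; karma is never consulted.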

Comment author: Lumifer 06 June 2014 04:17:29PM 2 points [-]

Could be done, though it makes karma even more irrelevant to anything.

Comment author: [deleted] 06 June 2014 03:58:29PM 1 point [-]

The main function of downvotes in LW is NOT to re-educate the offender. Its main function is to make the content which has been sufficiently downvoted effectively invisible.

Negative externalities.

If you eliminate the downvotes, what will replace them to prune the bad content?

Something else? The above study is sufficient evidence for me (and hopefully others) to start finding another solution.

Comment author: Lumifer 06 June 2014 04:13:00PM *  9 points [-]

Negative externalities.

I am aware of the concept. What exactly do you mean?

The above study is sufficient evidence for me

It says "This paper investigates how ratings on a piece of content affect its author's future behavior." I don't think LW should be in the business of re-educating its users to become good 'net citizens. I'm more interested in effective filtering of trolling, stupidity, aggression, drama, dick waving, drive-by character assassination, etc. etc.

It's not like the observation that downvoting a troll does not magically convert him into a hobbit is news.

Comment author: Pfft 06 June 2014 02:45:21PM 33 points [-]

For what it's worth I find the SSC comment section pretty unreadable, since it is just a huge jumble of good and bad comments with no way to find the good ones.

Comment author: [deleted] 06 June 2014 02:47:50PM 0 points [-]

There's also a significant amount of astroturfing from various sources that muddies the water further.

Comment author: David_Gerard 07 June 2014 06:33:28AM 2 points [-]

?? Such as?

Comment author: VAuroch 10 June 2014 09:02:01PM 3 points [-]

Presumably p-m primarily means the neoreactionaries.

Comment author: Nornagest 10 June 2014 09:17:17PM *  6 points [-]

I don't think that's astroturfing; I think it's just that Scott's one of the few semi-prominent writers outside their own sphere who'll talk to NRx types without immediately writing them off as hateful troglodytic cranks. Which is to his credit, really.

Comment author: VAuroch 10 June 2014 09:39:51PM 1 point [-]

That's fair, but I think it was probably what paper-machine was referring to.

Comment author: [deleted] 10 June 2014 10:23:34PM 0 points [-]

More or less. They're not the only ones, of course, but perhaps they're the most obvious.

Comment author: David_Gerard 11 June 2014 08:00:54AM *  1 point [-]

I wouldn't call that astroturfing; I'd say it's more a case of wanting anyone at all to talk to. The lack of a rating system means people don't get downvoted to oblivion; instead, they get banned if they break the house rules badly enough. (I'm surprised James A. Donald lasted as long as he did there.)

Comment author: [deleted] 11 June 2014 01:21:33PM *  0 points [-]

I don't know what "that" you and Nornagest are referring to, so I have no way of knowing if "that" is really astroturfing or not. On the other hand, six comments about the appropriateness of a single word seems like overkill. On the gripping hand, it appears the community wants more of it, so by all means, continue.

Comment author: Viliam_Bur 06 June 2014 05:18:23PM *  20 points [-]

I think people go to Slate Star Codex because that's where Scott writes his articles, not because of its voting mechanism.

From the paper:

authors of negatively evaluated content are encouraged to post more, and their future posts are also of lower quality

I've seen that at LW a few times: at some point the user's karma became so low they couldn't post anymore, or an admin banned them. From my point of view, problem solved.

I think it would be useful to distinguish between systems where the downvoted comments remain visible, and where the downvoted comments are hidden.

I am reading another website where the downvoted comments remain proudly visible, along with the number of downvotes, and yes, it seems to enrage the users into writing more and more of the same stuff. My hypothesis is that some people perceive downvotes as rewards (maybe they love to make people angry, or they feel they are on a crusade and the downvotes mean they successfully hurt the enemy), and these people are encouraged by downvoting. Hiding the comment, and removing the ability to comment, now that is a punishment.

Comment author: buybuydandavis 06 June 2014 11:35:47PM 2 points [-]

My hypothesis is that some people perceive downvotes as rewards (maybe they love to make people angry, or they feel they are on a crusade and the downvotes mean they successfully hurt the enemy)

When I think others are wrong, and in particular, the groupthink is wrong, I take downvotes as a greater indication that someone needs to get their head straight, and it could be them or me. Let's see.

I can think of at least one case where I criticized someone for something I thought was disgraceful, after his post was massively upvoted. I was massively downvoted in turn, but eventually convinced the original poster that they had crossed a line in their original post. Or at least he so indicated. Maybe he was just humoring the crazy person.

maybe they love to make people angry, or they feel they are on a crusade and the downvotes mean they successfully hurt the enemy

Downvotes are a signal. Big downvotes are a big signal.

Maybe it's not about hurting people. Maybe it's about identifying contradiction as the place to look for bad ideas that need fixing.

Comment author: Lumifer 06 June 2014 05:32:58PM 2 points [-]

My hypothesis is that some people perceive downvotes as rewards

A bog-standard troll wants attention and drama. Downvotes are evidence of attention and drama.

Comment author: duckduckMOO 07 June 2014 10:57:40PM *  1 point [-]

"some people perceive downvotes as rewards"

Is this just a dig at people vehemently defending downvoted posts or are you serious in calling this a hypothesis?

Comment author: Lumifer 08 June 2014 01:13:27AM 2 points [-]

To trolls any attention (including downvotes) is a reward.

Comment author: Viliam_Bur 08 June 2014 09:56:34AM *  3 points [-]

Completely serious. Just realise that different people have different goals and/or different models of the world.

A downvote is merely a signal that "some people here don't like this". If you care about the opinions of LW readers, and you want to be liked by them, then downvotes hurt. Otherwise, they don't.

For some sick person, making other people unhappy may be inherently desirable, and downvotes are evidence that they succeeded. Imagine some kind of psychopath who derives pleasure from frustrating strangers on the internet. (Some people suggest that this actually explains a lot of internet trolling.) Or someone may model typical LW users -- or, in another forum, typical users of forum X -- as enemies whose opinions have to be opposed, and downvotes are evidence that they succeeded in writing an "inconvenient truth". Imagine a crackpot, or a heavily mindkilled person. Or a spammer.

Comment author: Blazinghand 06 June 2014 06:13:08PM 7 points [-]

I do not like the voting and commenting system at Slate Star Codex.

Comment author: moridinamael 06 June 2014 06:36:06PM 2 points [-]

It is seriously broken in many ways; I was mainly highlighting the tone, the fact that it doesn't have a voting mechanism, and the fact that people still use it in droves despite its huge flaws.

Comment author: David_Gerard 07 June 2014 06:32:23AM 7 points [-]

I think that has way more to do with it being a blog with interesting posts on it than anything to do with the commenting system or lack of "like" buttons.

Comment author: PhilGoetz 11 June 2014 12:43:56AM *  6 points [-]

Digging into the paper, I give them an A for effort--they used some interesting methodologies--but there's a serious problem with it that destroys many of its conclusions. Here are the three different measures they used of a post's quality:

  • q': Quality as determined by blinded users given instructions on how to vote.
  • p: upvotes / (upvotes + downvotes)
  • q: Prediction for p, based on bigram frequencies of the post, trained on known p for half the dataset

q is the measure they used for most of their conclusions. Note that it is supposed to represent quality, but is based entirely on bigrams. This doesn't pass the sniff test. Whatever q measures, it isn't quality. At best it's grammaticality. It is more likely a prediction of rating based on the user's identity (individuals have identifiable bigram counts) or politics ("liberal media" and "death tax" vs. "pro choice" and "hate crime").

q is a prediction for p. p is a proxy for q'. There is no direct connection between q' and q -- no reason to think they will have any correlation not mediated by p.

R-squared values:

  • q to p: 0.04 (unless it is a typo when it says "mean R = 0.22" and should actually say "mean R^2 = 0.22")
  • q to q': 0.25
  • q' to p: 0.12

First, the R-squared between q', quality scores by judges, and p, community rating, is 0.12. That's crap. It means that votes are almost unrelated to post quality.

Next, the strongest correlation is between q and q', but the maximum possible causally mediated R^2 between them is 0.04 * 0.12 = 0.0048, since the only causal path connecting them runs through p.

That means that q, the machine-learned prediction they use for their study, has an acausal R^2 with q', post quality, that is about 50 times stronger than the causal one.

In other words, all their numbers are bullshit. They aren't produced by post quality, nor by user voting patterns. There is something wrong with how they've processed their data that has produced an artifactual correlation.
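The arithmetic behind the "50 times" figure can be checked directly; this is just a restatement of the mediation argument above, using the R-squared values quoted from the paper:

```python
# Check: if the only causal path from q (bigram-based prediction) to q'
# (judged quality) runs through p (vote fraction), the mediated correlation
# is r(q,p) * r(p,q'), so the mediated R^2 is the product of the two R^2s.
import math

r2_q_p = 0.04           # R^2 between q and p
r2_p_qprime = 0.12      # R^2 between p and q'
r2_q_qprime_obs = 0.25  # observed R^2 between q and q'

# Mediated correlation r(q,q') = r(q,p) * r(p,q'); square it for R^2.
r_mediated = math.sqrt(r2_q_p) * math.sqrt(r2_p_qprime)
max_causal_r2 = r_mediated ** 2        # 0.04 * 0.12 = 0.0048

ratio = r2_q_qprime_obs / max_causal_r2  # roughly 52: the "50 times" claim
```

So the observed q-to-q' relationship is about 52 times larger than the strongest relationship the causal chain could produce, which is the anomaly the comment is pointing at.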

Comment author: David_Gerard 06 June 2014 08:54:02PM 11 points [-]

Tricky one. I had a look at the Facebook group and was slightly horrified. You know all the weird extrapolations-from-sequences lunacy we don't get any more at LW? Yeah, it's all there. I think it's because there are no downvotes there.

Comment author: moridinamael 06 June 2014 09:34:16PM *  0 points [-]

That's true, but there are other salient differences between Facebook and LessWrong. Like the fact that Facebook has a picture of your real face right there, incentivizing everyone to play nice, while we are hobbled with only aliases here. Or the absence of a nested discussion threading system on Facebook. Or the fact that Eliezer posts on Facebook all the time now and rarely here anymore. But I tend to agree that the aversiveness of karma drives people away.

Comment author: fubarobfusco 07 June 2014 04:45:48AM 8 points [-]

Like the fact that Facebook has a picture of your real face right there, incentivizing everyone to play nice, while we are hobbled with only aliases here.

My impression is that real-names-and-faces systems incentivize everyone to play to their expected audience's biases, not to be nice. If the audience enjoys being nasty to someone, real-names-and-faces systems strongly disincentivize expressions of toleration.

Comment author: David_Gerard 07 June 2014 09:44:06AM 8 points [-]

The very nastiest trolls I've encountered really just do not give a shit. Name, address, phone number, all publicly available.

Comment author: David_Gerard 07 June 2014 06:31:09AM *  7 points [-]

Like the fact that Facebook has a picture of your real face right there, incentivizing everyone to play nice

This is the "real names make people nicer online" claim, one of those ideas people keep putting forth despite there being no evidence that it works this way. I say there is no evidence because every time it comes up I ask for some (particularly during the G+ nymwars) and don't get any, but if you have some I'd love to see it.

edit: and by the way, here's my "photo".

Comment author: NancyLebovitz 07 June 2014 10:20:59AM 2 points [-]

Using a photograph of yourself on Facebook is optional.

Comment author: RichardKennaway 07 June 2014 06:58:55PM 2 points [-]

It would be interesting to run the voting data for LW through the analyses they made.

Comment author: drethelin 07 June 2014 06:43:57PM 1 point [-]

This paper seems to say exactly the opposite of the complaints I've heard from people about how posting on LessWrong is scary because they don't want to get downvoted.