
[meta] Policy for dealing with users suspected/guilty of mass-downvote harassment?

28 Post author: Kaj_Sotala 06 June 2014 05:46AM

Below is a message I just got from jackk. Some specifics have been redacted 1) so that we can discuss general policy rather than the details of this specific case, and 2) to preserve the presumption of innocence, just in case there happens to be an innocuous explanation for all this.

Hi Kaj_Sotala,

I'm Jack, one of the Trike devs. I'm messaging you because you're the moderator who commented most recently. A while back the user [REDACTED 1] asked if Trike could look into retributive downvoting against his account. I've done that, and it looks like [REDACTED 2] has downvoted at least [over half of REDACTED 1's comments, amounting to hundreds of downvotes] ([REDACTED 1]'s next-largest downvoter is [REDACTED 3] at -15).

What action to take is a community problem, not a technical one, so we'd rather leave that up to the moderators. Some options:

1. Ask [REDACTED 2] for the story behind these votes
2. Use the "admin" account (which exists for sending scripted messages, &c.) to apply an upvote to each downvoted post
3. Apply a karma award to [REDACTED 1]'s account. This would fix the karma damage but not the sorting of individual comments
4. Apply a negative karma award to [REDACTED 2]'s account. This makes him pay for false downvotes twice over. This isn't possible in the current code, but it's an easy fix
5. Ban [REDACTED 2]

For future reference, it's very easy for Trike to look at who downvoted someone's account, so if you get questions about downvoting in the future I can run the same report.

If you need to verify my identity before you take action, let me know and we'll work something out.

-- Jack

So... thoughts? I have mod powers, but when I was granted them I was basically just told to use them to fight spam; there was never any discussion of any other policy, and I don't feel like I have the authority to decide on the suitable course of action without consulting the rest of the community.

Comments (239)

Comment author: ChristianKl 06 June 2014 08:54:17AM 24 points [-]

Healthy gardens have moderation. If Eliezer doesn't want to do it I think someone else should have the authority to moderate. I consider you (Kaj Sotala) to be trustworthy for that role. Having somebody who's in charge helps.

Comment author: buybuydandavis 06 June 2014 11:00:08PM 6 points [-]

It's usually a debacle when moderators start punishing people, particularly when the moderators are also members of the forum.

God's wrath should be reserved for significant issues. But I'd be in favor of God sending a vision to the perpetrator "You're causing me a problem that I don't want to have to figure out. Do you really need to do this? Can you knock it off?"

Comment author: kilobug 08 June 2014 08:39:30AM 10 points [-]

My own view is:

  1. Mass downvoting of most/all of what a user wrote, regardless of content, defeats the purpose of the karma/score system and is therefore harmful to the community.

  2. Mass downvoting is rude and painful for the target, and is therefore harmful to the community.

So we should have an official policy forbidding it. For the current case, I would support using option 1 first (it's always good to ask for the reasons behind an act before taking coercive action), and then applying any of options 2, 4, or 5 depending on the answer (or lack of one).

Comment author: lmm 06 June 2014 06:19:19PM 9 points [-]

I would rather see mods take matters into their own hands than see a tribunal or other bureaucracy.

I think it is vital that any moderator action be public. If you ban them, fine - but let's see a great big USER WAS BANNED FOR THIS POST.

I think that if we believe mass downvoting is wrong then there should be a public ex cathedra statement that this is so and any practical technical measures to prevent it should be applied.

Comment author: moridinamael 06 June 2014 06:01:06PM 8 points [-]

So... thoughts? I have mod powers, but when I was granted them I was basically just told to use them to fight spam; there was never any discussion of any other policy, and I don't feel like I have the authority to decide on the suitable course of action without consulting the rest of the community.

I just wanted to comment that I trust you to take thoughtful action with your mod powers. Part of being The Rationalist Community (tm) should be some group coordination abilities, and deferral of the ultimate power of decision and action to an appointed trusted and trustworthy designee seems like a good solution here.

Comment author: David_Gerard 06 June 2014 08:52:48PM 0 points [-]

Yeah, a mod who cares and has time is just the thing.

Comment author: ciphergoth 07 June 2014 10:26:29AM 7 points [-]

Remember to think like an attacker in what you recommend.

Comment author: shminux 06 June 2014 06:32:47AM *  27 points [-]

As one of those targeted, I thought about what I would change if I could. All I came up with is posting mass downvoting stats periodically. If people knew their actions would be detected and made public, they would probably refrain from doing it in the first place.

I am not familiar with the LW database schema, but it is probably trivial to write a SELECT statement that finds users who have been downvoted more than, say, 100 times in the last month, and then finds the most prolific downvoter of each such user. Hopefully this can be done in roughly O(n) time, so that the server is not overloaded. I'm sure Jack can come up with something sensible.
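Something along those lines could be sketched as below. The table layout, column names, threshold, and sample data are all invented for illustration; the real LW schema is unknown:

```python
import sqlite3
from collections import defaultdict

# Invented schema and sample data -- the real LW tables surely differ.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE votes (voter TEXT, target TEXT, direction INTEGER, cast_at TEXT)"
)
conn.executemany("INSERT INTO votes VALUES (?, ?, ?, ?)", [
    ("eve", "alice", -1, "2014-05-20"),
    ("eve", "alice", -1, "2014-05-21"),
    ("eve", "alice", -1, "2014-05-22"),
    ("bob", "alice", -1, "2014-05-23"),
    ("bob", "carol", +1, "2014-05-23"),
])

THRESHOLD = 3  # would be something like 100 on the real site

# One GROUP BY pass over the month's downvotes: roughly O(n) in vote rows.
rows = conn.execute("""
    SELECT target, voter, COUNT(*) AS n
    FROM votes
    WHERE direction = -1 AND cast_at >= '2014-05-01'
    GROUP BY target, voter
""").fetchall()

totals = defaultdict(int)   # target -> total downvotes received
top = {}                    # target -> (most prolific downvoter, their count)
for target, voter, n in rows:
    totals[target] += n
    if n > top.get(target, ("", 0))[1]:
        top[target] = (voter, n)

flagged = {t: top[t] for t, total in totals.items() if total >= THRESHOLD}
print(flagged)  # {'alice': ('eve', 3)}
```

Publishing only the aggregate counts, rather than voter names, would still give the deterrent effect described above.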

Comment author: Luke_A_Somers 06 June 2014 12:34:45PM 5 points [-]

Minimally invasive and might be effective. I like it.

Comment author: shminux 06 June 2014 06:07:33PM 1 point [-]

Thanks! However, judging by the anti-trolling discussions a year and a half ago, simple automated solutions are not very popular here.

Comment author: buybuydandavis 06 June 2014 10:04:42PM 3 points [-]

Isn't downvoting a valid signal? Why should it necessarily be discouraged?

Is there anything that keeps sock puppets from voting? Wouldn't the offenders just switch to those?

I think a better alg is the author of the max downvotes on one person. It just seems to me that downvoting per se is not necessarily a bad thing.

Comment author: shminux 06 June 2014 11:05:23PM 3 points [-]

I think a better alg is the author of the max downvotes on one person.

Yes, I believe that this is similar to what I have suggested. A mass downvoter would be a strong outlier on the 30-day downvote histogram (# users who downvoted vs # downvotes they gave) of a given user.
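A minimal version of that outlier check might look like the following sketch; the `factor` and `floor` cutoffs are made-up tuning knobs, not a policy proposal:

```python
from statistics import median

def flag_outliers(downvotes_per_voter, factor=10, floor=20):
    """Given one user's 30-day histogram mapping each downvoter to the
    number of downvotes they cast against that user, return the voters
    whose count is an extreme outlier relative to the median downvoter."""
    if not downvotes_per_voter:
        return []
    m = median(downvotes_per_voter.values())
    return [voter for voter, n in downvotes_per_voter.items()
            if n >= max(floor, factor * m)]

# Typical downvoters cast a handful of votes; a block-downvoter casts hundreds.
hist = {"a": 1, "b": 2, "c": 1, "d": 3, "mallory": 240}
print(flag_outliers(hist))  # ['mallory']
```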

Comment author: Gunnar_Zarncke 07 June 2014 12:27:58PM 1 point [-]

I also see downvoting as a valid signal - especially as it is limited to x4 karma. See my comment here.

Comment author: Nornagest 06 June 2014 10:34:13PM *  1 point [-]

Is there anything that keeps sock puppets from voting?

The limit on total downvotes proportional to karma gives you more than you'll ever need unless you're planning to downvote the world, but it does make it significantly harder to manage a sockpuppet army.

You could potentially use sockpuppets to vote more than once on someone's posts, if you feel so inclined, but all your socks would individually have to be productive contributors in good standing, and you're limited by your total contributions in the same way. If we're talking hundreds of total downvotes, pushing socks' individual contributions into undetectable territory would entail tedious account management and some pretty serious compromises in terms of status on your main account. I can think of a couple ways of finessing this with automated help, but they're pretty fragile and easily detected.

Comment author: Lumifer 07 June 2014 12:35:48AM 2 points [-]

You could potentially use sockpuppets to vote more than once on someone's posts, if you feel so inclined, but all your socks would individually have to be productive contributors in good standing

Sockpuppets boost one another. If you have, say, five sockpuppets, each post by one of them immediately gets +4 karma.

Comment author: Nornagest 07 June 2014 05:50:42AM 3 points [-]

That'd work, but I feel voting your own stuff up, especially in a systematic way across several accounts, is much more clearly a violation of community fair-play norms than systematic downvoting or running sockpuppets is.

It's also pretty easily detectable.

Comment author: Lumifer 08 June 2014 01:08:17AM 5 points [-]

Once you spin up a few sock puppets for karma manipulation, I don't think the community fair-play norms bind you much.

Comment author: David_Gerard 06 June 2014 01:38:07PM 16 points [-]

I have one of these too. Someone is slowly working back through my comments systematically downvoting them. Given the rate, I think they're actually doing it by hand, and must have a browser window they've kept open for months just for this task. It's like they're trolling themselves for me, without me having to actually lift a finger. Some LW karma is cheap for such entertainment.

Comment author: Tenoke 06 June 2014 01:55:00PM *  8 points [-]

It was/is the same for me and others, too - small blocks of downvotes on old comments until they reach your first one, and then periodic block downvotes on your recent comments.

I also suspected at first that it was done by hand, but now I am leaning towards it being done with a bot/script (most likely something adapted from Reddit), since it happens to many users and the pattern is quite regular over a long time.

Comment author: David_Gerard 06 June 2014 08:21:32PM 20 points [-]

Oh, leave me my illusions. I want to picture them FURIOUSLY DOWNVOTING ME COMMENT BY COMMENT, in UNQUENCHABLE NERD RAGE.

Comment author: pinyaka 07 June 2014 01:11:46AM 3 points [-]

With your and David's karma, it seems like you must have a fair number of comments. The 4xkarma limitation on downvotes suggests that it's someone who's got a fair amount of karma (or several accounts with a fair amount of karma if you're getting multiple downvotes per comment) doing the mass downvoting. That's just weird. It's hard to imagine which high karma person on LW would engage in individual persecution like that.

Comment author: Viliam_Bur 07 June 2014 11:03:04AM 6 points [-]

It's hard to imagine which high karma person on LW would engage in individual persecution like that.

One can get sufficiently high karma rather easily. We are not necessarily speaking about the "top contributor" level here.

For example, if someone gets 10 karma points in a month, which is easy if they write regularly, they have 120 karma points in a year. If they don't downvote regularly, and only decide to drop the whole bomb on one person, that's 4×120 = 480 downvotes. Even if they spend half of it on regular downvoting, and the other half on a bomb, that's still "hundreds" of downvotes.

Comment author: philh 07 June 2014 10:32:48AM 6 points [-]

Assuming they currently have 1 karma/post on average, which seems low to me, it would only take ~2500 karma to downvote all of David, Tenoke and falenas' comments. That isn't tiny, but for example I'm not particularly prolific and I have ~1500 karma, which I'd expect to be more than sufficient.

Comment author: David_Gerard 07 June 2014 01:07:35PM *  7 points [-]

I have around 10,000 almost entirely from commenting on posts over three and a half years, it's not hard. I would assume someone with a long-running grudge. It's difficult to think of a worse (appropriate) punishment for them than continuing to be someone who would think this was a worthwhile way to spend their life, however.

Comment author: atorm 22 June 2014 12:44:54PM 2 points [-]

We've traced the call, and it turns out it was Eliezer Yudkowsky the whole time!

Comment author: Gunnar_Zarncke 07 June 2014 12:25:22PM 1 point [-]

Interesting. I didn't know about the x4 limitation. As that puts a natural limit on downvoting, I do not see any problem in principle with 'mass' downvoting. If you do not have the freedom to actually spend your karma on (mass) downvotes, then the problem is not the downvoting but the limit.

The limit ensures that your downvotes need to be compensated by correspondingly valued contributions. If more people exercised their downvoting share, this 'mass downvoting' wouldn't even have been noticeable.

The problem may be that it is applied to individuals. But even though that can be perceived as unfair, it is still strictly a choice available to the voter (not much different from voting on the popularity of people instead of comments, which is rare nowadays).

My proposal would be to either a) reduce the limit to x2 or b) change the limit to x1 per person (if that is easily possible).

This is conditional on attackers not artificially accumulating karma by upvoting themselves (via multiple accounts). Such self-voting can in principle be either detected or prevented by network-flow algorithms like Advogato's ( http://www.advogato.org/trust-metric.html ), but that requires significant changes to the karma logic.

Note: I'm not affiliated with Advogato, but I'd really like to see the basic principle (the network flow) applied more to voting algorithms in general.
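To give a flavor of the network-flow principle, here is a much-simplified sketch (not Advogato's actual algorithm, and all account names and capacities are hypothetical): each account is a node with a capacity, certifications are edges, and an account counts as trusted if max flow from a seed account can still reach its one-unit edge into a supersink. A sock ring certified only through one low-capacity account cannot collectively soak up more trust than that account's capacity:

```python
from collections import deque

BIG = 10**6  # effectively unlimited capacity for certification edges

def max_flow(cap, s, t):
    """Edmonds-Karp max flow on a dict-of-dicts capacity graph.
    Mutates `cap` into the residual graph and returns the flow value."""
    total = 0
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:        # BFS for an augmenting path
            u = queue.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return total
        path, v = [], t                         # recover the path s -> t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(cap[u][v] for u, v in path)
        for u, v in path:                       # update residual capacities
            cap[u][v] -= push
            back = cap.setdefault(v, {})
            back[u] = back.get(u, 0) + push
        total += push

# Hypothetical accounts: mallory runs a mutually certifying sock ring.
capacity = {"seed": 9, "alice": 2, "bob": 2, "mallory": 1,
            "s1": 1, "s2": 1, "s3": 1}
certs = [("seed", "alice"), ("seed", "bob"), ("seed", "mallory"),
         ("mallory", "s1"), ("mallory", "s2"), ("mallory", "s3"),
         ("s1", "s2"), ("s2", "s3"), ("s3", "s1")]

cap = {}
for x, c in capacity.items():
    cap.setdefault(x + "_in", {})[x + "_out"] = c    # node capacity
    cap.setdefault(x + "_out", {})["sink"] = 1       # one unit of trust each
for a, b in certs:
    cap.setdefault(a + "_out", {})[b + "_in"] = BIG  # certification edge

max_flow(cap, "seed_in", "sink")
trusted = sorted(x for x in capacity if cap[x + "_out"]["sink"] == 0)
print(trusted)  # ['alice', 'bob', 'mallory', 'seed'] -- the socks get nothing
```

The socks certify each other freely, but all flow to them must pass through mallory's capacity-1 node, which is already exhausted, so none of them end up trusted. Advogato's real metric assigns capacities by distance from the seed, but the flow-limiting idea is the same.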

Comment author: atucker 07 June 2014 06:35:04PM 1 point [-]

I tend to think of downvoting as a mechanism to signal and filter low-quality content rather than as a mechanism to 'spend karma' on some goal or another. It seems that mass downvoting doesn't really fit the goal of filtering content -- it just lets you know that someone is either trolling LW in general, or just really doesn't like someone in a way that they aren't articulating in a PM or response to a comment/article.

Comment author: falenas108 06 June 2014 11:12:40PM 12 points [-]

I'm also one of those targets. Literally every comment I have ever made has been downvoted, 10 downvotes a day, for a few months. This continued until whoever was doing it reached my oldest comment. Recent comments are also downvoted.

Not only does mass downvoting feel pretty terrible, it also defeats the purpose of voting. Voting is meant to be a signal of how useful the community finds a person's comments, and that's no longer true of my scores, or those of any other victim of mass downvoting.

Comment author: Gunnar_Zarncke 07 June 2014 12:30:59PM -2 points [-]

downvoting feel pretty terrible

It may feel so, but that is not strictly an argument, is it? Why does it affect you so strongly? Isn't voting a valid expression of the voter's opinion? As there is a limit to it, maybe the limit should be changed.

See also my comment here

Comment author: falenas108 07 June 2014 05:15:33PM 3 points [-]

I mean, yeah. This isn't true of everyone. But for a large portion of people, getting downvoted feels bad. Maybe the "proper" view of karma should be "the karma I receive is merely an indicator of how valuable/accurate the comment is, and I should adjust my view of that comment accordingly." But just because something is irrational doesn't mean I can change it, or that the effects aren't real. And this is fairly common, from what I can see.

As for the "valid expression" thing: It gives too much power to a single individual over the type of content that is here. From what I and some other victims of downvoting can tell, it is the action of a single, well-known user who has decided to downvote people who express a certain view. I am fairly confident (>80%) that at least a few people have been driven off this site from mass downvoting.

What this means is, one person can affect who is on the site in a somewhat substantial way, even if the community does not agree that this is a bad view.

(If you want, you can PM me for more details.)

Comment author: Gunnar_Zarncke 07 June 2014 09:11:24PM 1 point [-]

I have also been block-downvoted a bit (at least it looked that way). And it doesn't feel good, no doubt. But it doesn't pay rent to cry over it.

The voting mechanism is a technical means that heavily structures transactions of influence, status and visibility. It is comparable to money or the ability to use a phone. Sure, these do not stand in isolation - the technical means are used in conjunction with social norms and customs. But you can't expect people not to (try to) use the means available, and some technical means are just not easily policed by social norms.

The options are: change the norms, change the technology (or both), or - if neither pays off - accept that this combination has failed (or is doomed to fail eventually).

Comment author: RobbBB 14 June 2014 03:41:09AM 5 points [-]

A lot of people on LW seem to feel it's a problem that we don't talk much about our feelings on this site. It does seem like a rationality-conducive community should do a whole lot of talking about feelings without reducing them all to 'here's a True preference I fully approve of and shall optimize for' and 'here's an evil bad feeling I will expunge'. Responding to 'mass downvoting feels pretty terrible' with 'well, that's a feeling, feelings aren't relevant' or "it doesn't pay rent to cry over it" doesn't seem conducive to that goal.

Comment author: Gunnar_Zarncke 14 June 2014 07:37:08AM 0 points [-]

Hm, yes. I agree with this analysis. I focused too much on the technicality of the voting mechanism, which is a kind of inhumane given. Crying about a social norm that has become technical reality doesn't help, but saying 'get over it' doesn't do the underlying issue justice either. I should have taken the feelings more seriously.

Comment author: RobbBB 14 June 2014 09:27:40AM 0 points [-]

Glad we agree. This is in large part a tone issue. It's still reasonable to ask why someone thinks the existence of feeling X supports policy decision Y; I think that can be done without sounding dismissive.

Comment author: buybuydandavis 06 June 2014 10:41:06PM 4 points [-]

Back in the stone ages, I believe the Extropian list had extensive configurable collaborative filtering mechanisms. I didn't use them much, but that seems to me the actual solution. Let people trust who they want, and follow who they want. I see a Karma Score configured by me.

People who mass downvote have an effect only if people choose to let them. Done.

Not to say that the implementation would be trivial, only that there are solutions.

And I like griping about how the web has gone backwards in significant ways. I can say "yay" or "boo" to a post. Oooh baby, that's high tech. The Singularity must surely be just around the corner.

Comment author: gwern 06 June 2014 11:26:04PM 10 points [-]

Back in the stone ages, I believe the Extropian list had extensive configurable collaborative filtering mechanisms. I didn't use them much, but that seems to me the actual solution. Let people trust who they want, and follow who they want. I see a Karma Score configured by me.

The failures of old mailing lists and Usenet were why social mediums universally abandoned killfiles and similar filtering mechanisms: the balance of costs was all wrong - a large number of people had to take affirmative action to ignore the small number of bad apples. It turned out to be better to actively curate the default than to thrust the burden of filtering signal from noise onto each and every user.

To give an Extropian-list-specific example: determined harassment was why Nick Szabo stopped posting there. The filters didn't help there.

Comment author: KnaveOfAllTrades 06 June 2014 11:46:31PM *  2 points [-]

To give an Extropian-list-specific example: determined harassment was why Nick Szabo stopped posting there. The filters didn't help there.

I'm curious: Can you tell me/link me more please?

Comment author: gwern 07 June 2014 01:08:43AM 7 points [-]

No; a lot of the materials are now private, I don't think Nick wants to drag old stuff up, and if the harasser was the same Detweiler dude who did some later harassing, he may well have been mentally ill and not really responsible for his actions.

Comment author: KnaveOfAllTrades 07 June 2014 01:31:03AM 2 points [-]

Thanks! I guess the main thing I wanted to check was that you meant Nick was the one being harassed rather than the other way round, which you have indeed answered.

Comment author: buybuydandavis 07 June 2014 12:01:37AM 1 point [-]

The failures of old mailing lists and Usenet were why social mediums universally abandoned killfiles and similar filtering mechanisms:

Evidence? Aren't such filters still available in Usenet readers? My theory is that such code was just never implemented in the shiny new web.

And with collaborative filtering, everyone doesn't need to make every adjustment themselves. That's the point. You delegate ratings to others, or combinations of others.

But is plopping someone in an ignore file supposed to be so difficult? Should be easier than ever. Have a plonk button on every post to add the guy to your kill file. "Hmmm, this guy is a dick. Plonk." Couldn't be easier. Just as easy as clicking a point of karma.

To give an Extropian-list-specific example: determined harassment was why Nick Szabo stopped posting there. The filters didn't help there.

What was the nature of the harassment, and how would it be prevented in the current list software?

Comment author: gwern 07 June 2014 01:07:00AM 8 points [-]

Evidence? Aren't such filters still available in Usenet readers?

I didn't specify 'failure of Usenet readers'. I specified failure of Usenet.

And with collaborative filtering, everyone doesn't need to make every adjustment themselves

Still a serious UI burden which doesn't scale. Torture vs dust specks.

But is plopping someone in an ignore file supposed to be so difficult?

It's difficult in the way that constant strain and vigilance is so difficult. Trivial inconveniences on every post.

What was the nature of the harassment, and how would it be prevented in the current list software?

By flat-out banning the harasser.

Comment author: buybuydandavis 07 June 2014 02:09:43AM 1 point [-]

Usenet fails, therefore killfiles suck? I still don't see evidence.

Still a serious UI burden which doesn't scale.

Collaborative filtering is about the only way to scale.

It's difficult in the way that constant strain and vigilance is so difficult.

No more strain or vigilance necessary than a click. I don't find that so taxing.

By flat-out banning the harasser.

Ok, so the current list software is no better. How is that an indictment of collaborative filtering or killfiles? Yeah, they can't solve all problems.

Comment author: gwern 07 June 2014 02:42:25AM 4 points [-]

Usenet fails, therefore killfiles suck? I still don't see evidence.

Usenet's failure is often attributed to the defaulting to allowing everyone and expecting users to killfile their way to a good experience, which doesn't work for keeping communities vibrant or dealing with spam. Hence, the decline of Usenet as alternatives opened up and Usenet failed to scale to Internet access getting wider.

Collaborative filtering is about the only way to scale.

Or tons of moderation and voting. Seems to work for Reddit.

No more strain or vigilance necessary than a click. I don't find that so taxing.

Trivial inconvenience.

How is that an indictment of collaborative filtering or killfiles? Yeah, they can't solve all problems.

The question is whether they solve any problems. If they're so great, why are they so rare?

Comment author: David_Gerard 07 June 2014 09:27:49AM *  3 points [-]

Usenet's failure is often attributed to the defaulting to allowing everyone and expecting users to killfile their way to a good experience, which doesn't work for keeping communities vibrant or dealing with spam. Hence, the decline of Usenet as alternatives opened up and Usenet failed to scale to Internet access getting wider.

Got a source? Having previously pretty much lived on Usenet and now not having fired up a newsreader in years - while frequenting reunions of two Usenet groups I used to be on, one on Facebook and one on G+ - I'm interested in anything written on the subject; I think it's one of which there aren't enough well-written post-mortems.

I don't think killfiles were a significant factor myself, but I admit I'm basing that opinion just on "it sounds wrong", not any actual data.

I'd have attributed the decline of Usenet and mailing lists to (1) not being on the Web (that's the biggie) (2) barrier to entry to create a new discussion forum (even alt.* had process). Mostly (1) - the wine-users list (for Wine, the Windows compatibility layer for Linux) has a two-way gateway to a web forum, and immediately the forum was available the volume was 10x.

I also posted some hypothesising as to why there are no good Web-based Usenet readers - and why forums aren't backed by NNTP - here, with a bunch of people I met on Usenet commenting. tl;dr that the unit of NNTP is the message, but the unit of forums is the thread. Same applies to mailing lists, which is why GMane seems weird considered as a "forum".

Comment author: gwern 07 June 2014 04:03:41PM *  7 points [-]

Got a source?

Not really. This is my own lived experience comparing Usenet to Google Groups, Reddit, web forums, and Wikipedia, and noting the explosion of user-contribution in the shift from Overcoming Bias to LessWrong. You could easily show that Usenet has declined, but I'm not sure what research you could do to prove that the incentives were structured wrong, or that features like killfiles fostered complacency & reluctance to change, other than to note how all of Usenet's replacements were strikingly different from it in similar ways.

I don't think killfiles were a significant factor myself, but I admit I'm basing that opinion just on "it sounds wrong", not any actual data.

My read is that killfiles were a major aspect of the systematically bad design of Usenet that made it uncompetitive and unscalable: they increased user costs they should not have, adding friction and trivial inconveniences. Killfiles express a fundamental contempt for user time: if there are 100 readers and 1 spammer, it should not take 100 reader actions to deal with the 1 spammer, yet that is how killfiles inherently tilt matters. What would be much better is if 10 readers took an action like downvoting and spared the remaining 90. Rinse and repeat. Which is better: dealing with spam/trolls in O(1) or O(n) reader time?

I'd have attributed the decline of Usenet and mailing lists to (1) not being on the Web (that's the biggie) (2) barrier to entry to create a new discussion forum (even alt.* had process). Mostly (1) - the wine-users list (for Wine, the Windows compatibility layer for Linux) has a two-way gateway to a web forum, and immediately the forum was available the volume was 10x.

The non-Web thing is another example of this. Yes, an uber-nerd (and buybuydandavis is exemplifying this attitude in this thread) may contemptuously look at it as an irrelevant problem: 'what sort of person can't maintain a good killfile? or figure out how to deal with NNTP servers and ports and local clients?' But it's a big deal when repeatedly incurred by millions of people who do not wish to become uber-nerds and to whom costs matter.

Of course, all of this could have been fixed. But they weren't fixed in time, and so Usenet stagnated and died.

Comment author: RichardKennaway 08 June 2014 09:51:27AM 10 points [-]

Another experience here from a long-time former user of Usenet, overlapping yours to some extent.

comp.sources.* was made obsolete by the web and cheap disc space. The binaries newsgroups also, except for legally questionable content that no-one wanted the exposure of personally hosting. (I understand the binaries groups still play this role to some extent.)

I dropped sci.logic and sci.math years before I dropped Usenet altogether, and for the same reason that if I was looking today for discussion on such topics, I wouldn't look there. There's only so long you can go on skipping past the same old arguments over whether 0.999... equals 1.

rec.arts.sf.* took a big hit when LiveJournal was invented. Many of its prominent posters left to start their own blogs. Rasf carried on for years after that, but it never really recovered to its earlier level, and slowly dwindled year by year. Some rasf stalwarts mocked those who left, accusing them of wanting their own little fiefdom where they could censor opposing viewpoints. They spoke as if this was a Bad Thing. It's certainly a different thing from Usenet, but if you want a place on the net for pleasant conversation among friends, a blog under your own control is the way to have that. Rasf was that, for many of its members, for many years, but blogs do it better.

Usenet was never designed, it just grew. There were various bodies and people involved with managing it, but they generally played King Log, leaving it up to the users to manage the creation of newsgroups and stamping the resulting consensus. Kill files didn't come from a design team, they were invented one day by Larry Wall, and taken up by everyone because they saw what a brilliant idea it was. That everyone had to manage their own kill file was, from the point of view of what Usenet was, a virtue, not a flaw. Everyone could speak, no matter what they had to say, but no-one had to listen. The libertarian ideal of free speech. I say this not particularly to defend it, but just to say that that is how people saw these things, that was the animating spirit of Usenet.

Then spam was invented, eternal September began, blogging developed, and mass public access arrived. Usenet managed to respond to all of those things, but it couldn't change what it fundamentally was, because what it was was what those who loved Usenet wanted it to be.

Of course, all of this could have been fixed. But they weren't fixed in time, and so Usenet stagnated and died.

Here I disagree. Usenet could not and cannot be fixed, any more than we could have brontosauruses roaming around the modern world. Usenet was a creature of the technology of its time and the spirit of its participants. There may be some lessons to learn from the history of Usenet, or some ideas worth taking up, but in the present world there is no place for Usenet.

Comment author: NancyLebovitz 09 June 2014 10:41:03AM 1 point [-]

As I recall, at least the parts of Usenet where I hung out (rec.arts.sf.written, .fandom, and .composition, and soc.support.fat-acceptance) weren't that badly plagued by spam (there were volunteers dealing with spam for Usenet), but trolls were a problem.

Comment author: PhilGoetz 07 June 2014 07:15:58PM 0 points [-]

noting the explosion of user-contribution in the shift from Overcoming Bias to LessWrong

I think it has more to do with the fact that Overcoming Bias didn't allow users to post.

Comment author: gwern 07 June 2014 09:30:57PM 10 points [-]

OB allowed users to send in emails and they would be posted, which is not a high bar (lower than, say, learning a Usenet reader) and a fair number of people contributed. It's just that LW made it much easier and unsurprisingly got way more contributions. This apparently came as a big surprise to Eliezer (but not me, because of my long experience with Wikipedia; it was a bit of a Nupedia vs Wikipedia scenario to my eyes).

Comment author: Lumifer 07 June 2014 12:25:48AM *  1 point [-]

My theory is that such code was just never implemented in the shiny new web.

vBulletin, which is very popular, has an "ignore" mechanism: put a user on ignore and you don't see his posts. Yep, it's just as easy as pressing a button.

Comment author: PhilGoetz 07 June 2014 07:13:19PM *  0 points [-]

The failures of old mailing lists and Usenet were why social mediums universally abandoned killfiles and similar filtering mechanisms: the balance of costs was all wrong - a large number of people had to take affirmative action to ignore the small number of bad apples.

No, I don't think that's true. You're arguing that internet user interfaces become better at hosting debates over time. If I believed that, I'd also believe that the user interfaces for holding rational discussion have gradually improved, from Usenet, to bulletin boards, to Facebook and Wordpress, to Twitter and Tumblr.

Comment author: gwern 07 June 2014 09:32:59PM *  9 points [-]

You're arguing that internet user interfaces become better at hosting debates over time.

No, I'm not. I'm saying the interfaces got better at certain features of UX, like dealing with spam and trolls. Usenet could be intrinsically better at debate (in the hypothetical universe where it had a restricted userbase and wasn't dying of spam and other issues).

E.g., imagine a forum where all comments had to be accompanied by an argument map but which didn't have any way of banning/deleting accounts. I have little doubt that the debates would be of higher quality, since argument maps have been shown repeatedly to help, but would anyone use that forum for very long? I have much doubt.

Comment author: buybuydandavis 06 June 2014 09:56:05PM 4 points [-]

Apply a negative karma award to [REDACTED 2]'s account. This makes him pay for false downvotes twice over.

They don't seem false to me. That's pretty clearly his opinion.

Comment author: pragmatist 07 June 2014 06:38:54AM 4 points [-]

I'm assuming "false" here is based on the assumption that upvotes/downvotes should be a reflection of the voter's opinion of the particular comment being voted on, not his or her opinion of the user making the comment without regard to the content of the comment itself. Mass downvoting seems like a strategy for conveying a message about a user, not a comment, and that is plausibly a subversion of the karma system's intent.

Comment author: Michaelos 06 June 2014 12:57:52PM 4 points [-]

How easy is it to change the ratio of required upvotes to allowed downvotes? As an example, I very rarely downvote, so I probably have quite a lot of spare downvotes. If you were to change the ratio to require receiving 10 upvotes per 1 downvote, I don't even think I'd notice, and I imagine that a lot of people with this type of voting pattern would be in a similar position.

On the other hand, someone who mass-downvotes is presumably going to burn through their downvotes faster than even someone who downvotes fairly but finds themselves generally more inclined to downvote overall than I would. (A report from a Trike person could probably confirm or deny this.) Mass downvoters would be far more likely to bump up against a stricter ratio.

So perhaps changing the ratio would be a helpful technical solution, in addition to other policy decisions?
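Michaelos's stricter-ratio rule is easy to state concretely. A minimal sketch, assuming a hypothetical rule of one allowed downvote per 10 upvotes received (the function name and parameters are illustrative, not LW's actual mechanism):

```python
def downvotes_remaining(upvotes_received, downvotes_cast, ratio=10):
    """Downvotes still allowed under a hypothetical
    'one downvote per `ratio` upvotes received' rule."""
    return max(upvotes_received // ratio - downvotes_cast, 0)

# A rare downvoter never notices the cap; a mass downvoter hits it quickly.
print(downvotes_remaining(500, 3))    # 47: a light downvoter has plenty left
print(downvotes_remaining(500, 300))  # 0: a mass downvoter is exhausted
```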

Comment author: David_Gerard 06 June 2014 01:39:07PM 5 points [-]

Make your downvoting ability proportional to upvotes in the past month rather than upvotes ever?

Comment author: drethelin 07 June 2014 06:38:24PM 3 points [-]

I vote for public shaming of the mass downvoter. "Banning" them is fine but creating extra accounts is fairly trivial.

Comment author: arundelo 06 June 2014 12:55:04PM 15 points [-]

[REDACTED 2], your behavior is bad and you should feel bad.

Comment author: ChristianKl 06 June 2014 09:01:35AM 7 points [-]

I don't consider banning a good option if the person wasn't warned beforehand. People can re-register and it can get messy. Speaking with the person and convincing them to behave differently in the future should be the first choice. Karma punishment sounds like a good tool.

Comment author: VAuroch 10 June 2014 08:56:52PM 1 point [-]

Unless this is a different person from the person who has been the cited mass downvoter every other time it's come up, they have very definitely been warned.

Comment author: ChristianKl 12 June 2014 02:56:40PM 1 point [-]

In some sense yes, in a practical sense I don't think so. Talking with the person more directly could be enough to get them to stop.

Comment author: buybuydandavis 06 June 2014 11:01:15PM 1 point [-]

Speaking with the person and convincing them to behave differently in the future should be the first choice.

Second.

Comment author: PhilGoetz 07 June 2014 06:53:53PM *  5 points [-]

As a Bayesian, you should count not a user's downvote, but P(downvote | user, facts about the post). If user X downvotes half of all posts, each downvote is 1 bit of evidence. If user X downvotes one out of 16 posts, each downvote is 4 bits of evidence.

The tricky part is how you combine facts about the post with the prior over all posts in cases where user X hasn't voted on many of user Y's posts. What if user X downvotes 1 comment in 50, has voted on only one of Y's comments before, and downvoted it? I could talk about how to do that correctly, but mass downvoting is by definition not that case.

So, the site should report not sum of downvotes and upvotes, but evidence for and evidence against the utility of a post. Users could then choose to mass-downvote legitimately, knowing that would mean that each of their downvotes would count for less.

(I would be cautious about incorporating the poster's prior! I think voters already incorporate that in their voting.)

To work properly, this system should have the voting options "upvote, meh, downvote", so that we can use P(downvote | voted) rather than P(downvote | viewed) or downvotes / (upvotes + downvotes). The latter could motivate people to vote nearly everything up so that their downvotes would have more weight. The votes of a person who seldom votes give more evidence than those of a person who votes on every comment.

(At the very least it would help to display "+3/-2" instead of "1 point". Yes, I know you can compute that by hovering over the score to get % positive. In which case there's not much reason not to just display +/- votes all the time!)
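The per-voter weighting PhilGoetz describes can be sketched directly as surprisal: a vote carries -log2 of the probability that this voter casts that vote type. A toy illustration, not a proposal for LW's actual code (the function name is made up):

```python
import math

def vote_evidence_bits(votes_cast, downvotes_cast, is_downvote):
    """Bits of evidence (surprisal) carried by one vote, given the
    voter's historical rate for that vote type."""
    rate = downvotes_cast / votes_cast
    p = rate if is_downvote else 1 - rate
    return -math.log2(p)

# A voter who downvotes half the time: each downvote is 1 bit.
# A voter who downvotes 1 post in 16: each downvote is 4 bits.
print(vote_evidence_bits(16, 8, True))  # 1.0
print(vote_evidence_bits(16, 1, True))  # 4.0
```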

Comment author: moridinamael 06 June 2014 02:11:01PM *  10 points [-]

Well, here I am again, this time providing a paper backing up my claim that having a downvote mechanism at all is just pure poison.

It doesn't make any sense for this type of community. This isn't Digg. We're not trying to rate content so an algorithm can rank it as a news aggregation service.

Look at Slate Star Codex, where everybody is spending their time now - no aversive downvote mechanism, relaxed, cordial atmosphere, extremely minimal moderation. Proof of concept.

Just turn off the downvote button for one week and if LessWrong somehow implodes catastrophically ... I'll update.

Comment author: Nornagest 06 June 2014 06:42:48PM *  12 points [-]

I'd rather kill karma entirely than refactor it into an upvote-only system. If you're trying to do anything more controversial than deciding which cat picture is the best, upvote-only systems encourage nasty factional behavior that I don't want to see here: it doesn't matter how many people you piss off as long as you're getting strong positive reactions, so it's in your interests to post divisive content. That in turn leads to cliques and one-upmanship and other unpleasantness. It's a common pattern on social media, for example.

The other failure mode you get from it is lots of content-free feel-good nonsense, but we have strong enough norms against that that I don't think it'd be a problem in the short term.

Comment author: moridinamael 06 June 2014 07:11:00PM 6 points [-]

I'd be fine with that. I feel a bit silly repeating the same arguments, but we're supposed to be striving to be, like, the most rational humans as a community, yet the social feedback system we are using was chosen ... because it came packaged with Reddit and Reddit is what was chosen as the LessWrong platform because it was the hot thing of its day. There was no clever Quirrell-esque design behind our karma system designed to bring out the best in us or protect us from the worst in us. It's a relic. Let's be rid of it.

No Karma 2014

Comment author: paper-machine 06 June 2014 02:29:17PM *  12 points [-]

Specifically:

By applying our methodology to four large online news communities for which we have complete article commenting and comment voting data (about 140 million votes on 42 million comments), we discover that community feedback does not appear to drive the behavior of users in a direction that is beneficial to the community, as predicted by the operant conditioning framework. Instead, we find that community feedback is likely to perpetuate undesired behavior. In particular, punished authors actually write worse in subsequent posts, while rewarded authors do not improve significantly.

In a footnote, they discuss what they meant by "write worse":

One important subtlety here is that the observed quality of a post (i.e., the proportion of up-votes) is not entirely a direct consequence of the actual textual quality of the post, but is also affected by community bias effects. We account for this through experiments specifically designed to disentangle these two factors.

They measure post quality based on textual evidence by spinning up a mechanical turk on 171 comments and using that data to train a binomial regression model. So cool!

When comparing the fraction of upvotes received by a user with the fraction of upvotes given by a user, we find a strong linear correlation. This suggests that user behavior is largely "tit-for-tat".... However, we also note an interesting deviation from the general trend. In particular, very negatively evaluated people actually respond in a positive direction: the proportion of up-votes they give is higher than the proportion of up-votes they receive. On the other hand, users receiving many up-votes appear to be more "critical", as they evaluate others more negatively.

Incredibly interesting article. Must read.

EDIT: Consider myself updated. Therefore, I believe downvotes must be destroyed.

Comment author: Lumifer 06 June 2014 03:44:42PM 9 points [-]

The main function of downvotes in LW is NOT to re-educate the offender. Its main function is to make the content which has been sufficiently downvoted effectively invisible.

If you eliminate the downvotes, what will replace them to prune the bad content?

Comment author: TylerJay 06 June 2014 04:00:15PM *  11 points [-]

Well, if this is really the goal, then maybe disentangle downvotes from both post/comment karma and personal karma while leaving the invisibility rules in place? Make it more of a "mark as non-constructive" button that if enough people hit it, the post becomes invisible. If we want to make it more comprehensive, it could be made to weigh these votes against upvotes to make the show/hide decision.
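TylerJay's decoupled "mark as non-constructive" button could work roughly like this (a hypothetical rule and threshold, sketched only to show the karma/visibility separation):

```python
def is_hidden(upvotes, nonconstructive_flags, margin=5):
    """Hide a comment when flags outweigh upvotes by a fixed margin.
    Flags affect visibility only; they never touch anyone's karma."""
    return nonconstructive_flags - upvotes >= margin

print(is_hidden(upvotes=0, nonconstructive_flags=6))  # True: hidden
print(is_hidden(upvotes=4, nonconstructive_flags=6))  # False: still visible
```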

Comment author: Lumifer 06 June 2014 04:17:29PM 2 points [-]

Could be done, though it makes karma even more irrelevant to anything.

Comment author: paper-machine 06 June 2014 03:58:29PM 1 point [-]

The main function of downvotes in LW is NOT to re-educate the offender. Its main function is to make the content which has been sufficiently downvoted effectively invisible.

Negative externalities.

If you eliminate the downvotes, what will replace them to prune the bad content?

Something else? The above study is sufficient evidence for me (and hopefully others) to start finding another solution.

Comment author: Lumifer 06 June 2014 04:13:00PM *  9 points [-]

Negative externalities.

I am aware of the concept. What exactly do you mean?

The above study is sufficient evidence for me

It says "This paper investigates how ratings on a piece of content affect its author's future behavior." I don't think LW should be in the business of re-educating its users to become good 'net citizens. I'm more interested in effective filtering of trolling, stupidity, aggression, drama, dick waving, drive-by character assassination, etc. etc.

It's not like the observation that downvoting a troll does not magically convert him into a hobbit is news.

Comment author: Pfft 06 June 2014 02:45:21PM 32 points [-]

For what it's worth I find the SSC comment section pretty unreadable, since it is just a huge jumble of good and bad comments with no way to find the good ones.

Comment author: paper-machine 06 June 2014 02:47:50PM 0 points [-]

There's also a significant amount of astroturfing from various sources that muddies the water further.

Comment author: David_Gerard 07 June 2014 06:33:28AM 2 points [-]

?? Such as?

Comment author: VAuroch 10 June 2014 09:02:01PM 3 points [-]

Presumably p-m primarily means the neoreactionaries.

Comment author: Nornagest 10 June 2014 09:17:17PM *  6 points [-]

I don't think that's astroturfing; I think it's just that Scott's one of the few semi-prominent writers outside their own sphere who'll talk to NRx types without immediately writing them off as hateful troglodytic cranks. Which is to his credit, really.

Comment author: VAuroch 10 June 2014 09:39:51PM 1 point [-]

That's fair, but I think it was probably what paper-machine was referring to.

Comment author: paper-machine 10 June 2014 10:23:34PM 0 points [-]

More or less. They're not the only ones, of course, but perhaps they're the most obvious.

Comment author: David_Gerard 11 June 2014 08:00:54AM *  1 point [-]

I wouldn't call that astroturfing; I'd say that's more wanting anyone to talk to. The lack of a rating system means people don't get downvoted to oblivion; instead they get banned if they break the house rules badly enough. (I'm surprised James A. Donald lasted as long as he did there.)

Comment author: paper-machine 11 June 2014 01:21:33PM *  0 points [-]

I don't know what "that" you and Nornagest are referring to, so I have no way of knowing if "that" is really astroturfing or not. On the other hand, six comments about the appropriateness of a single word seems like overkill. On the gripping hand, it appears the community wants more of it, so by all means, continue.

Comment author: Blazinghand 06 June 2014 06:13:08PM 7 points [-]

I do not like the voting and commenting system at Slate Star Codex.

Comment author: moridinamael 06 June 2014 06:36:06PM 2 points [-]

It is seriously broken in many ways; I was mainly highlighting the tone, the fact that it doesn't have a voting mechanism, and the fact that people still use it in droves despite its huge flaws.

Comment author: David_Gerard 07 June 2014 06:32:23AM 7 points [-]

I think that has way more to do with it being a blog with interesting posts on it than anything to do with the commenting system or lack of "like" buttons.

Comment author: Viliam_Bur 06 June 2014 05:18:23PM *  19 points [-]

I think people go to Slate Star Codex, because that's where Scott writes his articles, not because of the voting mechanism.

From the paper:

authors of negatively evaluated content are encouraged to post more, and their future posts are also of lower quality

Seen that at LW a few times. At some moment the user's karma became so low they couldn't post anymore, or perhaps an admin banned them. From my point of view, problem solved.

I think it would be useful to distinguish between systems where the downvoted comments remain visible, and where the downvoted comments are hidden.

I am reading another website, where the downvoted comments remain proudly visible, with the number of downvotes, and yes, it seems to enrage the users to write more and more of the same stuff. My hypothesis is that some people perceive downvotes as rewards (maybe they love to make people angry, or they feel they are on a crusade and the downvotes mean they successfully hurt the enemy), and these people are encouraged by downvoting. Hiding the comment, and removing the ability to comment, now that is a punishment.

Comment author: buybuydandavis 06 June 2014 11:35:47PM 2 points [-]

My hypothesis is that some people perceive downvotes as rewards (maybe they love to make people angry, or they feel they are on a crusade and the downvotes mean they successfully hurt the enemy)

When I think others are wrong, and in particular, the groupthink is wrong, I take downvotes as a greater indication that someone needs to get their head straight, and it could be them or me. Let's see.

I can think of at least one case where I criticized someone for something I thought was disgraceful, after his post was massively upvoted. I was massively downvoted in turn, but eventually convinced the original poster that they had crossed a line in their original post. Or at least he so indicated. Maybe he was just humoring the crazy person.

maybe they love to make people angry, or they feel they are on a crusade and the downvotes mean they successfully hurt the enemy

Downvotes are a signal. Big downvotes are a big signal.

Maybe it's not about hurting people. Maybe it's about identifying contradiction as the place to look for bad ideas that need fixing.

Comment author: Lumifer 06 June 2014 05:32:58PM 2 points [-]

My hypothesis is that some people perceive downvotes as rewards

A bog-standard troll wants attention and drama. Downvotes are evidence of attention and drama.

Comment author: duckduckMOO 07 June 2014 10:57:40PM *  1 point [-]

"some people perceive downvotes as rewards"

Is this just a dig at people vehemently defending downvoted posts or are you serious in calling this a hypothesis?

Comment author: Viliam_Bur 08 June 2014 09:56:34AM *  2 points [-]

Completely serious. Just realise that different people have different goals and/or different models of the world.

Downvote is merely a signal for "some people here don't like this". If you care about opinions of LW readers, and you want to be liked by them, then downvotes hurt. Otherwise, they don't.

For some sick person, making other people unhappy may be inherently desirable, and downvotes are evidence that they succeeded. Imagine some kind of psychopath who derives pleasure from frustrating strangers on the internet. (Some people suggest that this actually explains a lot of internet trolling.) Or someone may model typical LW users -- or, in another forum, typical users of forum X -- as their enemies whose opinions have to be opposed, and downvotes are evidence that they succeeded in writing an "inconvenient truth". Imagine a crackpot, or a heavily mindkilled person. Or a spammer.

Comment author: Lumifer 08 June 2014 01:13:27AM 1 point [-]

To trolls any attention (including downvotes) is a reward.

Comment author: PhilGoetz 11 June 2014 12:43:56AM *  6 points [-]

Digging into the paper, I give them an A for effort--they used some interesting methodologies--but there's a serious problem with it that destroys many of its conclusions. Here's 3 different measures they used of a post's quality:

  • q': Quality as determined by blinded users given instructions on how to vote.
  • p: upvotes / (upvotes + downvotes)
  • q: Prediction for p, based on bigram frequencies of the post, trained on known p for half the dataset

q is the measure they used for most of their conclusions. Note that it is supposed to represent quality, but is based entirely on bigrams. This doesn't pass the sniff test. Whatever q measures, it isn't quality. At best it's grammaticality. It is more likely a prediction of rating based on the user's identity (individuals have identifiable bigram counts) or politics ("liberal media" and "death tax" vs. "pro choice" and "hate crime").

q is a prediction for p. p is a proxy for q'. There is no direct connection between q' and q -- no reason to think they will have any correlation not mediated by p.

R-squared values:

  • q to p: 0.04 (unless it is a typo when it says "mean R = 0.22" and should actually say "mean R^2 = 0.22")
  • q to q': 0.25
  • q' to p: 0.12

First, the R-squared between q', quality scores by judges, and p, community rating, is 0.12. That's crap. It means that votes are almost unrelated to post quality.

Next, the strongest correlation is between q and q', but the maximum possible causal correlation between them is 0.04 * 0.12 = 0.0048, because there is no causal connection between them except p.

That means that q, the machine-learned prediction they use for their study, has an acausal correlation with q', post quality, that is 50 times stronger than the causal correlation.

In other words, all their numbers are bullshit. They aren't produced by post quality, nor by user voting patterns. There is something wrong with how they've processed their data that has produced an artifactual correlation.
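The chained-R² bound PhilGoetz invokes can be checked numerically. In a chain q' → p → q, where q depends on q' only through p, the endpoint R² equals the product of the link R²s. A quick simulation sketch (the coefficients are arbitrary, picked only to land near the paper's reported link values):

```python
import math
import random

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(0)
n = 100_000
q_judge = [random.gauss(0, 1) for _ in range(n)]            # q': judged quality
p_votes = [0.35 * x + random.gauss(0, 1) for x in q_judge]  # p: vote share, driven by q'
q_model = [0.20 * x + random.gauss(0, 1) for x in p_votes]  # q: text model, trained on p only

r2_link1 = pearson(q_model, p_votes) ** 2  # q-to-p link (~0.04)
r2_link2 = pearson(p_votes, q_judge) ** 2  # p-to-q' link (~0.11)
r2_ends = pearson(q_model, q_judge) ** 2   # endpoint R^2
# The endpoint R^2 matches the product of the link R^2s (here ~0.005),
# so a measured q-to-q' R^2 of 0.25 cannot be flowing through p alone.
```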

Comment author: David_Gerard 06 June 2014 08:54:02PM 11 points [-]

Tricky one. I had a look at the Facebook group and was slightly horrified. You know all the weird extrapolations-from-sequences lunacy we don't get any more at LW? Yeah, it's all there. I think because there are no downvotes there.

Comment author: moridinamael 06 June 2014 09:34:16PM *  0 points [-]

That's true, but there are other salient differences between Facebook and LessWrong. Like the fact that Facebook has a picture of your real face right there, incentivizing everyone to play nice, while we are hobbled with only aliases here. Or the absence of a nested discussion threading system on Facebook. Or the fact that Eliezer posts on Facebook all the time now and rarely here anymore. But I tend to agree that the aversiveness of karma drives people away.

Comment author: fubarobfusco 07 June 2014 04:45:48AM 8 points [-]

Like the fact that Facebook has a picture of your real face right there, incentivizing everyone to play nice, while we are hobbled with only aliases here.

My impression is that real-names-and-faces systems incentivize everyone to play to their expected audience's biases, not to be nice. If the audience enjoys being nasty to someone, real-names-and-faces systems strongly disincentivize expressions of toleration.

Comment author: David_Gerard 07 June 2014 09:44:06AM 8 points [-]

The very nastiest trolls I've encountered really just do not give a shit. Name, address, phone number, all publicly available.

Comment author: David_Gerard 07 June 2014 06:31:09AM *  7 points [-]

Like the fact that Facebook has a picture of your real face right there, incentivizing everyone to play nice

This is the "real names make people nicer online" claim, which is one of those ideas people keep putting forth and for which there is no evidence it works this way. I say there is no evidence because every time it comes up I ask for some (and particularly during the G+ nymwars) and don't get any, but if you have some I'd love to see it.

edit: and by the way, here's my "photo".

Comment author: NancyLebovitz 07 June 2014 10:20:59AM 2 points [-]

Using a photograph of yourself on Facebook is optional.

Comment author: RichardKennaway 07 June 2014 06:58:55PM 2 points [-]

It would be interesting to run the voting data for LW through the analyses they made.

Comment author: drethelin 07 June 2014 06:43:57PM 1 point [-]

This paper seems to say exactly the opposite of complaints I've heard from people about how posting on LessWrong is scary because they don't want to get downvoted.

Comment author: Tenoke 06 June 2014 07:03:43AM 7 points [-]

If the offender really is at fault (which should be quite easy to tell in most cases), then they should probably be banned since this is a pretty disruptive behaviour.

At any rate, have you checked with Eliezer? He used to claim that it is impossible to check a user's voting history, so he might have some other plans that you are not aware of.

Comment author: Kaj_Sotala 06 June 2014 12:04:16PM *  2 points [-]

I'm figuring that he'll see this post sooner or later.

Comment author: army1987 07 June 2014 07:26:22AM 4 points [-]

Soooo... The #0 issue is that votes are supposed to be for ranking content, but people take them to be for rewarding/punishing writers. I'd try stopping the calculation of users' total and last-30-days karma, and see whether that ameliorates this.

Comment author: RichardKennaway 09 June 2014 07:51:21AM 2 points [-]

(1) is clearly the appropriate action to take in the first instance.

Comment author: fubarobfusco 11 June 2014 01:27:36AM 0 points [-]

It's harmless and could be beneficial. It doesn't close the case, though.

Comment author: Decius 09 June 2014 04:35:24AM 2 points [-]

As far as harming the goals of the community, is mass downvoting of a single user any different from mass upvoting of a single user?

Comment author: VAuroch 10 June 2014 08:58:29PM *  2 points [-]

Yes. Comments at -1 or lower karma stand out and are generally skimmed past as provisionally bad unless the reader takes care not to; comments with 1-3 karma are not notable.

Comment author: RichardKennaway 09 June 2014 07:31:10AM 0 points [-]

Has mass upvoting of a single user (by a single user -- that is what we are talking about here) ever happened?

Comment author: NancyLebovitz 09 June 2014 10:55:14AM 1 point [-]

I don't know, but mass upvoting is less likely to be complained about.

Comment author: RichardKennaway 09 June 2014 12:28:02PM 1 point [-]

I wouldn't complain about a mysterious leap in my karma, but I'd find it unusual enough to wonder what was happening -- as when the scoring rules were changed to make votes on top-level posts in Main count for 10 points instead of 1.

Comment author: ITakeBets 06 June 2014 01:22:14PM 4 points [-]

Downvotes are bad. They decrease trust and cause defection spirals. I am confident that the existence of downvotes makes the community less enjoyable, less welcoming and less productive on net.

That said, I'm not sure we should do anything to punish people using them in an extra-bad way.

Comment author: drethelin 07 June 2014 06:53:40PM 7 points [-]

"being welcoming" is not actually good for a community if you want standards to be high.

Comment author: ITakeBets 08 June 2014 04:46:24PM 2 points [-]

I'd agree that it's a two-edged sword, but 1) Keeping standards high is not our only goal, and being welcoming is good for other purposes, and 2) I think there are better ways to be unwelcoming to low-quality people that cause less collateral unwelcomingness to good people.

Comment author: Nornagest 06 June 2014 05:55:04PM 2 points [-]

and cause defection spirals.

Example?

Comment author: moridinamael 06 June 2014 06:14:43PM 6 points [-]

I assume they mean "you downvoted me, so I downvote you, and every subsequent comment in this discussion, thus ruining any chance we had at maintaining a cordial tone."

Happens all. the. time.

Comment author: Nornagest 06 June 2014 06:22:49PM *  4 points [-]

This is why I don't generally downvote people I'm talking to, unless I'm commenting specifically to explain a downvote.

Comment author: philh 06 June 2014 10:30:58PM *  7 points [-]

This is also why Hacker News disables downvoting on replies to your comments.

Comment author: Nornagest 06 June 2014 10:43:36PM *  2 points [-]

Not a bad feature. It wouldn't solve the main problem we're discussing, but I do think it'd make LW a slightly more pleasant place to be.

You know, modulo the usual problems with getting the feature into production.

Comment author: David_Gerard 07 June 2014 06:37:18AM 3 points [-]

Yeah. Having basically no code contributors emerge from the community (given how many good programmers there are here) is odd.

Comment author: Viliam_Bur 07 June 2014 11:06:39AM 4 points [-]

Have you seen the LW code? I looked at it once, and gave up immediately.

Rewriting the whole thing from scratch would probably be easier, although this could be just some bias speaking.

Comment author: David_Gerard 07 June 2014 12:52:04PM 2 points [-]

Heh. That's a quite plausible explanation :-)

Comment author: philh 07 June 2014 08:18:32PM 1 point [-]

It wouldn't solve the main problem we're discussing

Actually, now that I think about it, it would increase the cost of doing this without giving yourself away, since now you'd need a sockpuppet to downvote their replies to you.

One potential problem is that you could frame someone, but it would be fairly easy for them to clear their name.

Comment author: Slider 06 June 2014 10:34:32AM 3 points [-]

I have seen advice that you can vote however you want. If concentrating your downvotes on one user is an action that is met with punishment, then some uses of votes are prohibited. Thus I am thinking there is a line drawn in the water on accepted voting policies.

For those who have beef with users and not posts, maybe a separate channel could be developed as a votable user karma (maybe require a reason for user-downvotes?). Mass downvoters go after the posts as a proxy for the user.

What kinds of legitimate use is the association between a username and a post put to? Could we do without it, so that you would receive karma from your post but voters couldn't use the author as a basis for their vote? If there is a reason to know the writer, and it is to apply different standards to the post, why is the overtly pessimistic attitude off the table?

Comment author: Viliam_Bur 06 June 2014 05:35:15PM *  3 points [-]

I have seen advice that you can vote however you want.

I guess it was silently assumed that you would read the things, and then vote, not just execute a content-independent voting mechanism.

Comment author: buybuydandavis 06 June 2014 11:02:23PM 1 point [-]

I have seen advice that you can vote however you want.

From other comments, that's not actually true. You can only downvote 4 times your own karma. I'm guessing few knew that.

Comment author: peterward 09 June 2014 02:19:42AM 2 points [-]

What's the point of the up/down votes in the first place? If the object is reducing bias, doesn't making commenting a popularity contest run counter to this purpose?

Comment author: MugaSofer 03 July 2014 03:57:11PM *  0 points [-]

Quality control. Ideally, people should not upvote/downvote based on conclusions they disagree with.

I recall hearing that the highest-karma comment ever was criticism of MIRI, which would suggest that this works as intended. I'm not sure how to check this, though.

ETA: found it.

Comment author: Kawoomba 06 June 2014 12:40:05PM 1 point [-]

Whatever happened to "no penalty without a law" (nulla poena sine lege)? How did we go from "what should our policy on this be?" to "let's do a public spectacle, come up with some rules, and apply them retroactively"? LW, I am disappointed.

Comment author: drethelin 07 June 2014 06:54:45PM 6 points [-]

Even if we don't apply the rules retroactively to whoever this is, it's a perfect opportunity to come up with some rules and then apply them in the future.

Comment author: fubarobfusco 06 June 2014 07:40:23PM 10 points [-]

This isn't a legal system; it's a blog forum. Legal systems impose themselves on non-consenting participants, and therefore are properly subject to procedural and moral restrictions that do not apply to consensual social systems.

Trying to apply the proper restrictions of a legal system to an informal, consensual social system leads to all sorts of weirdly biased results. Another example is the popular notion that "innocent until proven guilty" applies to conversation or personal opinion about a person who is believed to have done something wrongful — at least, when the accused is a member of my tribe, and thus someone who I empathize with.

Comment author: Tenoke 06 June 2014 01:51:33PM 8 points [-]

This isn't really very retroactive - mass downvoting has always been disallowed/looked down upon, it is just that [it was claimed that] there was no way to tell who is an offender in order to punish them.

Comment author: Kawoomba 06 June 2014 02:04:23PM *  4 points [-]

It really is, though. There is a large difference between looking down upon a behavior, and punishment (public shaming or whatever else, though the first is particularly distasteful). (Not that there has been a clear consensus on the topic*, anyways. I, for one, can see circumstances under which it is warranted, and circumstances under which it's not. Of course, once there's an official policy on it, I'd defer to that.)

There is plenty of behavior I observe every day (in "real life") which I look down upon / which is generally looked down upon. That is not at all the same as those people being fined / thrown in jail / whatever analogue you envision.

* Case in point: This discussion exists.

Comment author: Tenoke 06 June 2014 03:01:09PM 0 points [-]

I, for one, can see circumstances under which it is warranted

Go on

Comment author: Kawoomba 06 June 2014 03:26:19PM *  2 points [-]

Go on

I'm sure you're imaginative enough to come up with such circumstances yourself. But since you asked, Sherlock, enjoy a hypothetical villain's soliloquy:

'Well kept gardens die by pacifism' can be applied here (though it cuts both ways). This community's unique characteristic is its high signal-to-noise ratio. Consider if someone consistently flooded the board with perceived-no-value-ergo-noise comments. Given the low frequency of comments on this board overall, such a dribble would easily dominate/drown out the daily comment feed. Consistent downvotes to tell that commenter to, in effect, "go away", could be one response, especially given the near-trivial effort, the use of provided mechanisms, and the lack of clear guidelines against it (as evidenced by the frequent discussions on the issue, which did not focus on the technical aspect alone). Note that outright telling someone "this is the wrong place for you, go away" has also occurred. Is this subjective? Of course it is, what else would it be? Note that this was just one example. I could provide many more (as, I hope, could you), depending on time constraints and how finely we partition the categories.

That being said, I see more circumstances under which it is not warranted.

Comment author: Tenoke 06 June 2014 04:02:25PM *  0 points [-]

Can you provide an example where this wouldn't be obvious to the moderator examining the case?

I honestly put a very low probability on the occurrence of even a single punishment due to a false positive.

Comment author: Kawoomba 06 June 2014 04:05:16PM 2 points [-]

I'm not going to call out specific users here.

You know, privacy concerns, a strong preference against public shaming, and all that.

Comment author: Tenoke 06 June 2014 04:22:32PM 0 points [-]

s/example/hypothetical/

Comment author: Kawoomba 06 June 2014 04:36:26PM 0 points [-]

To suggest that a user whose comments you'd find both ubiquitous and worthless would also be so judged by a moderator "examining the case" seems like folly to me. Do you by extension suggest that people always vote exactly the same, too? When you downvote a comment, would you expect everyone else to also downvote that comment, because the downvote would be "obvious"? Why would it be different with a moderator?

Such things are evidently subjective. There is a difference between using your own voting to convey a message, and bringing in some authority figure to "examine the case". All these courses of action are not equal.

I'm sure you can imagine comments that you yourself find interesting, while others find worthless. I myself have written many such comments, little puns in particular. Just imagine a long string of them. There you go.

Comment author: Tenoke 06 June 2014 05:01:03PM 0 points [-]

If a user's history is controversial (both upvoted and downvoted) rather than only downvoted, then punishing you for downvoting all (90%+) of their comments (if they have more than a few) is completely justified.

At any rate, here is an extra filter to reduce false positives even further - if you look at the comments where only the offender has downvoted and you see neutral comments there (those which would normally have been neither downvoted nor upvoted), then you know there is a problem.

Comment author: ChristianKl 06 June 2014 02:09:17PM 2 points [-]

Whatever happened to "no penalty without a law" (nulla poena sine lege)?

I haven't seen any mention of that principle in any of the CEV or TDT articles. If you want to argue that the principle should be a substantial part of a decision framework I invite you to write an article laying out your reasoning in detail.

Comment author: army1987 07 June 2014 07:28:52AM 1 point [-]

Let sleeping basilisks lie!

Comment author: Kawoomba 06 June 2014 02:18:02PM -1 points [-]

Eh, it seems somewhat self-evident that it does not make a lot of sense to expect agents to avoid punishments which do not exist at the time, as such. CEV, to the point that CEV(mankind) wouldn't be an empty set anyway, would probably include it just by fiat of it being one of the pillars of the rule of law, and since it's about "the people we'd want to be", presumably your CEV at least would contain it, as would mine. There is no relation to TDT, since we're talking about preferences of groups of agents, not general instrumental rationality.

Comment author: ChristianKl 06 June 2014 02:31:09PM *  2 points [-]

There is no relation to TDT, since we're talking about preferences of groups of agents

You mean we aren't talking about the choice whether or not to punish someone? I don't see how that holds. If you only discuss a decision theory in the abstract but are not willing to use it for practical decisions, then you are likely going to have a bad decision theory.

Don't compartmentalize and stop using your decision theory when things get political.

In this case punishing people for doing something that's bad for the community can discourage other people from doing something bad for the community against which we don't have explicit rules.

Even if I agree that a nation state should only punish in a court of law based on explicit rules, that doesn't mean that I think the same is true for privately owned online communities. If I throw a party and someone misbehaves, I can throw that person out even without him violating a previously explicitly stated rule. A lot of social interaction works by people simply observing implicit rules and focusing on being nice to each other.

Comment author: Kawoomba 06 June 2014 02:43:32PM 1 point [-]

You mean we aren't talking about the choice whether or not to punish someone?

TDT can't tell you how to optimally arrange the flavors of an ice cream cone if you don't input which flavors you like. "But how can that be, that is a choice too?" Decision theory tells you which decisions are optimal, given your preferences. My preference is rule of law (which also makes sense as an instrumental value), I suspect yours is too. TDT doesn't care. It can't tell you your preferences (though it can tell you which instrumental values would make sense).

Don't compartmentalize and stop using your decision theory when things get political.

I don't understand. TDT isn't my practical decision theory (I'm a meatbag, not an abstract agent), nor did I bring it up. Nor is it applicable anyways. Optimality is viewpoint dependent.

Comment author: ChristianKl 06 June 2014 04:47:26PM *  2 points [-]

My preference is rule of law (which also makes sense as an instrumental value), I suspect yours is too.

For nation states with a monopoly on power I consider the rule of law to be valuable but I don't consider it to be a terminal value for online communities or when I host a party. In most social interaction punishing people for violating implicit community norms is quite common.

The person who engages in the block downvoting might even think of themselves as punishing someone else for doing something bad.

I don't understand. TDT isn't my practical decision theory (I'm a meatbag, not an abstract agent), nor did I bring it up. Nor is it applicable anyways.

If it isn't applicable then what's wrong with TDT? How do we fix it?

Even if you don't personally follow TDT, you are here on LW, and while you are here, making the argument that the policies you are advocating make sense under TDT has merit if you want to convince others.

Comment author: Kawoomba 06 June 2014 04:59:05PM *  0 points [-]

Let's stop with the reference class tennis. This community does have established and explicit rules, such as "no proposing violence, not even hypothetically". It is not like one of your parties, I suspect. And while you may tell someone to leave you alone, or to get out, I wouldn't say that official punishments for breaking unofficial "norms" are the rule. At least hopefully nowhere I'd like to be. Note how this community has grappled time and again with coming up with a clearly defined norm on this, which would decohere the congruence even if LW were like a party. Meet-ups have resorted to clip-on notes indicating whether hugging is OK with that person. So much for implicit norms for social interaction.

Someone who engages in block downvoting would be sending a signal, using tools as were provided. What's the obsession with the punishment-concept? (Warning, flippant aside: Do we need a good public beating, or what?)

If it isn't applicable then what's wrong with TDT?

In a word, nothing. If you use TDT going off of "I don't want agents to be punished for actions against which there are no rules", then TDT will include that when giving you the optimal course of action. If you use TDT going off of "I don't care whether agents are punished for actions against which there are no rules", then it won't include that. It's the reason why paperclippers and anti-paperclippers both can use TDT. TDT doesn't judge your preferences :-).

Comment author: ChristianKl 07 June 2014 06:26:21AM 4 points [-]

This community does have established and explicit rules, such as "no proposing violence, not even hypothetically".

Those rules are not in a place where a new member would easily find them. Some people even think there's a rule against politics when there's no such thing on LW.

Someone who engages in block downvoting would be sending a signal, using tools as were provided. What's the obsession with the punishment-concept?

You were the person who started speaking about punishment. For my part, when I was a forum moderator I didn't think of myself as punishing weeds when I tried to rip them out of my healthy garden. I did ban people, but not to punish them - rather because I thought the forum would be healthier without them.

Comment author: ChristianKl 07 June 2014 06:47:00AM 2 points [-]

Meet-ups have resorted to clip-on notes whether hugging is ok with that person.

That doesn't tell you everything there is to know about hugging. There are still issues, like the length of the hug, that aren't fixed by the rules.

Especially in a community of munchkins you don't want to allow people to game the rules by moving exactly within them but violating their spirit.

Comment author: Kaj_Sotala 07 June 2014 09:41:51AM *  0 points [-]

This community does have established and explicit rules, such as "no proposing violence, not even hypothetically".

The rules also explicitly include a no harassment of individual users clause.

Comment author: Kawoomba 07 June 2014 10:47:39AM *  -1 points [-]

At least read the explanation of that rule first, would you? There you go:

If we determine that you're e.g. following a particular user around and leaving insulting comments to them, we reserve the right to delete those comments.

Your leading OP title, including the phrase "mass-downvote harassment" is insincere reasoning, because it is circular. It has never been established whether mass-downvoting should always be considered "harassment". You'd consider it so. I don't. Come now, be so courteous as to assume other people have reasons for their behavior.

Not even the wiki, which does include an example, makes mention of mass downvoting even though the topic has come up many times. The reason for that is not "well, we can't list everything, we don't list hacking a server, for example". That would be a ridiculous argument. One is using established feedback mechanisms, one isn't. New rule: You must always give reasons for each and every vote, otherwise you'll be publicly shamed for harassment.

Downvotes are a user's individual and private choice. He/She can use it to confer whatever message he/she so chooses. Don't like it? Make a rule against it. Such as an upper bound on allowed downvotes. Oh wait, such an upper-bound has already been implemented? And it doesn't disallow downvoting most of a user's comments? Maybe your moral intuitions on the matter aren't as general as you'd like them to be.

Signing off on the topic, though I'll leave you the last word, if you so choose.

Comment author: David_Gerard 07 June 2014 01:28:10PM 1 point [-]

The basic rule of all social spaces is "don't be a dick"; more detailed rules are elaborations of this. This seems to be considered a pretty clear violation.

Comment author: Kawoomba 07 June 2014 01:41:11PM 2 points [-]

One person's pedantry is another person's dickishness. One person's nitpicking is another person's "what a jerk". One person's pruning the weeds is another person's harassment. We all frequent social spaces all the time. You say the basic rule of all social spaces is "don't be a dick", and yet ... which is fine, some situations call for decisive signals (which may include "being a dick").

I've always appreciated the no-sugar-coating clear feedback signals this community sends, while others have bemoaned exactly that. I don't see signals using provided feedback mechanisms as out of bounds, absent a clear rule.

Comment author: David_Gerard 07 June 2014 06:57:38PM *  0 points [-]

Well yeah, but that's why it requires discussion. (More "constitutional article" than "rule", maybe.) OTOH, this appears to be the sort of behaviour that causes new rules, and may cause retrospective ones.

Comment author: drethelin 07 June 2014 06:58:34PM -2 points [-]

Is there an explicit law against publicly and retroactively applying rules to someone? No? Shut up.

Comment author: MugaSofer 03 July 2014 03:44:58PM 0 points [-]

Indeed. How is banning anyone going to provide a stronger signal than an announcement saying "this is a banworthy offence starting now"?

It seems to me that all we can possibly accomplish here is throwing away possibly-constructive commenters.

It's highly probable that anyone with enough karma to do any sort of damage with this is a high- or medium-value user; downvotes have a cap based on one's own karma total.

One could argue that this sort of behavior is antisocial and implies the perpetrator is probably not someone we want on the site. But that's exactly the logic that leads to downvoting everything a person has posted!

As one of the people who was downvoted, I find it highly probable that whoever was responsible (in my case, and probably others) was acting in good faith. How could they have known to abide by a rule we are just now introducing?

Comment author: somervta 06 June 2014 01:00:48PM 0 points [-]

I don't see a public spectacle - the names were redacted, etc. And Kaj's post seems to be asking "what should our policy on this be" to me.

Comment author: Kawoomba 06 June 2014 01:12:59PM 0 points [-]

I was referring to an upvoted (at the time) comment calling for public shaming. I thought this community especially would be more sensitive to the whole public shaming thing.

Also, OP should a) have messaged other editors first and b) not presumed that a valid reason for redacting private information is the "presumption of innocence". The reason for not disclosing private information is that it's private. D'uh.

Comment author: Viliam_Bur 06 June 2014 05:32:01PM 7 points [-]

I was referring to an upvoted (at the time) comment calling for public shaming.

Would you also object if I said (which I am not saying, just asking hypothetically) that I suggest the public shaming only for the downvotes that will happen in the future, after this rule is agreed upon? In other words, is retroactivity your true rejection?

I don't consider the no-retroactivity principle a good rule for a website, because a creative person can find more behaviors that are obviously wrong, but still not forbidden yet. For example, is there an official rule against hacking the server and deleting someone else's account? (Or, as an extreme example, finding the other user in real life and hurting them?) If someone did it, would it be okay to defend them saying: "well, it wasn't said explicitly that such behavior is forbidden, therefore we should protect their privacy"?

Non-retroactivity and similar rules are made for countries, which have more time and resources to debug their laws, more power to apply them, et cetera. LW is not a country; it does not have to follow the same rules.

Comment author: Kawoomba 06 June 2014 05:40:16PM 2 points [-]

Would you also object if I said (which I am not saying, just asking hypothetically) that I suggest the public shaming only for the downvotes that will happen in the future, after this rule is agreed upon? In other words, is retroactivity your true rejection?

It's a good remark, but the answer is yes, I would still object. Public shaming, near-regardless of whether there was an overstepping of an explicit, an implicit, a retroactively applied or a (insert attribute) rule, is a topic I have very strong opinions on. It can cause large amounts of mental anguish, especially given a susceptible population, as I suspect LW's INTJ crowd tends to be. It's simply not worth it, it's toxic, especially when there are so many other options left to resort to (PMs, technical limitations, etc.).

If there was one public shaming of anyone condoned by the editors (providing private information for the purpose of punishment), I'd leave this community, never looking back. Rule or no rule.

Also, I object to your slippery slope argument. I see a fundamental difference in using tools as provided (downvote buttons), and hacking a server.

Comment author: drethelin 07 June 2014 07:03:25PM 0 points [-]

and getting mass downvoted isn't stressful? someone hounding another person through all their comments isn't stressful? someone doing that should be ashamed. We can't make them ashamed without public shaming. It's either that or banning them. I don't care about whether what they're doing is technically allowed by the system. They're doing something bad for the community, and they should be stopped.

Comment author: MugaSofer 05 July 2014 08:42:15PM *  0 points [-]

and getting mass downvoted isn't stressful?

ahem

It's not fun, but having a single, anonymous individual express dislike through such an abstract means is nowhere near comparable to public shaming by a community you identify with, I assure you.

I'm sorry, was that a rhetorical question intended to slip in an unsupported hypothesis?

(For the record, in case it isn't clear: if it weren't for the fact that being mass-downvoted means I'm currently unable to, I would definitely have downvoted your above comment.)

Comment author: drethelin 05 July 2014 09:40:52PM 0 points [-]

sure, that's why it works. Public shaming is supposed to be stressful, in order to get that person to STOP. One is a socially mediated system of enforcing how the ingroup behaves, whereas mass downvoting someone on your own is an individual attempt to enforce how the group behaves. My point was that it being stressful was not a good reason not to do it. If someone identifies with your ingroup and you think they're ruining it, then there is a mismatch between group identities. No group is obligated to associate with anyone who wants to be in it.

Comment author: MugaSofer 05 July 2014 10:00:17PM *  1 point [-]

To be clear: It being stressful is a reason not to do it, but it may be outweighed by the benefits, right?

Two points: one, you pretty openly compared the two. Since they are different by several orders of magnitude, I think it impacts your point somewhat: should we do A Very Bad Thing to punish/disincentivize something far less unethical or harmful?

Two, I'm having a conversation with a mass-downvote-er in another tab. They seem pretty ... corrected. I seriously doubt they will do this again.

And yet, amazingly, this happened without me choosing to so much as hint who they were, let alone "publicly shaming" them.

Comment author: Kawoomba 06 July 2014 05:18:44PM 0 points [-]

Two, I'm having a conversation with a mass-downvote-er in another tab. They seem pretty ... corrected. I seriously doubt they will do this again.

That sounds like a budding bromance. Hopefully not some kind of Stockholm syndrome.

Comment author: drethelin 06 July 2014 01:52:48PM 0 points [-]

I'm not sure why you think your own personal definitions of what's an order of magnitude more or less x or your anecdote about getting someone to change their ways is helpful. I personally think punishing someone for fucking with the community is less bad than someone taking it on themselves to scare people away. But you clearly disagree. I don't know who you're having this conversation with, but multiple people approached eugine neier and tried to talk to him about it. So clearly that's not a solution that will always work.

Side note:

"Funny how that works" is pure rhetorical shit. It has no place in trying to convince someone of anything. All it does is show how "superior" you are to people who already agree with you.

Comment author: Kawoomba 07 June 2014 07:09:28PM 0 points [-]

Just stop with downvotes altogether then, since even smaller amounts can be stressful. Allowing spammy no-value posters to drown out the few valuable comments is also bad for the community, but whatever.

What's with the whole "shut up" routine (in your other comment)? You're shaming yourself, here. Not going to engage with you anymore.

Comment author: buybuydandavis 06 June 2014 11:07:35PM 1 point [-]

Would you also object if I said (which I am not saying, just asking hypothetically) that I suggest the public shaming only for the downvotes that will happen in the future, after this rule is agreed upon?

That's better, at least. People should know what they're in for. It would be a large breach of trust for the moderators to make public what had been assumed private.

Comment author: MugaSofer 05 July 2014 08:57:35PM 0 points [-]

I don't consider the no-retroactivity principle a good rule for a website, because a creative person can find more behaviors that are obviously wrong, but still not forbidden yet. For example, is there an official rule against hacking the server and deleting someone else's account? (Or, as an extreme example, finding the other user in real life and hurting them?) If someone did it, would it be okay to defend them saying: "well, it wasn't said explicitly that such behavior is forbidden, therefore we should protect their privacy"?

This raises the question: why do we bother posting rules at all, then?

And the answer, of course, is that such unwritten "rules" are not immediately obvious to everybody.

Comment author: Kaj_Sotala 06 June 2014 01:21:30PM 6 points [-]

Also, OP should a) have messaged other editors first

I'm sure that the infamously antiauthoritarian LW community would just have loved it if the editors had just decided on a course of action behind closed doors.

Comment author: Kawoomba 06 June 2014 01:44:49PM *  2 points [-]

I've been actively modding /r/DebateReligion (not exactly a topic which preempts drama) over on Reddit (not exactly a community which dislikes drama) for years, and at least from my experiences there I wouldn't dream of putting such questions to the community (especially with delicious "redacted" drama bait) before coming to some sort of consensus with my fellow moderators. You could of course argue (and I'd agree) that this is a more mature community.

Also, I wouldn't cite "presumption of innocence" when apparently unaware of much more pertinent principles (no retroactively applied punishments, not even hinting at a disclosure of legitimately presumed-private data). I do agree that a specific rule going forward would be a good idea, given how often this topic crops up. When establishing such a rule -- via public discussion, if you so choose -- dangling (however unwittingly) the allure of a witch-hunt would best have been left out entirely.

Comment author: ChristianKl 06 June 2014 02:17:04PM *  1 point [-]

I've been actively modding /r/DebateReligion

At the moment nobody is actively modding LW so the comparison doesn't really hold. The community mostly mods itself by downvoting posts it doesn't like.

Comment author: buybuydandavis 06 June 2014 11:04:45PM 1 point [-]

But they could have just pinged the guy and said he was causing a problem they didn't want to deal with. Maybe he would have let it go.

The best solution is to have the problem just go away.

Comment author: drethelin 07 June 2014 07:04:00PM 1 point [-]

no: The best solution is for the problem to go away and never come back. signs point to there being multiple sources of mass downvotes.

Comment author: MugaSofer 05 July 2014 09:23:15PM *  1 point [-]

Yup. Can confirm there were at least two. [Cite.]

Comment author: Viliam_Bur 06 June 2014 08:37:52AM *  1 point [-]

Can any user downvote, or is some karma needed? It would be good if only users with at least, say, 20 karma could downvote, because that would prevent creating a new account for safe mass downvoting. (A similar system is used at Stack Exchange.) I'm saying this because if we adopt a policy of detecting and punishing mass downvoters, their logical next step would be to mass-downvote using a different account.

My opinion (but I have low confidence in my ability to correctly handle these situations) is the following:

If an obvious case of mass-downvoting is detected, there should be an ad-hoc tribunal made by three people from MIRI / CFAR / Trike. The tribunal should decide whether the situation deserves punishment or not. It is their choice whether their decision includes asking the offender's explanation. If the tribunal agrees that the situation deserves punishment, then:

The punishment should be public. A Discussion article describing what happened, who downvoted whom, and what is the punishment. Not a public debate about the punishment; only a public announcement of the final verdict. (The reason for this is that in my estimate most likely a member of some political faction was mass-downvoting a member of an opposing faction, and the public debate would bring too much attention to the factions; possibly suspicion or accusations that people are recommending more/less punishment because of their sympathies to one of the factions.)

If possible (if we have the necessary data), all votes made by the offender (both upvotes and downvotes, to anyone) during the last X months should be reverted. This is to say "we don't value your opinion". (Value of X is decided by tribunal, recommended value 3 or 6.)

If for technical reasons reverting recent downvotes is impossible, the victim should have 90% of the karma lost to mass downvoting restored to their account. (I say 90% because some downvoting is allowed.) Also, the same amount of karma should be removed from the offender's account.

Optionally (depending on the tribunal's decision) the offender could be banned. The rule of thumb is that if it happened the first time, and was only against one person, banning is not necessary; repeated offenses or mass-downvoting of many people deserve banning.

Summary: Mass downvoting should be punished publicly, karma restored, repeated offences should lead to ban. The details should be decided by an ad-hoc tribunal of site owners/moderators, not by a community debate.
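For concreteness, the karma arithmetic in that fallback remedy could be sketched as follows. This is an illustration only; the 90% figure comes from the proposal above, but the function names and the rounding choice are my assumptions, not anything in LW's codebase.

```python
# Hypothetical sketch of the fallback remedy: restore 90% of the karma
# the victim lost to the offender's downvotes, and dock the offender by
# the same amount.
RESTORE_FRACTION = 0.9

def remedy(karma_lost_to_offender: int) -> tuple:
    """Return (karma restored to the victim, karma docked from the offender)."""
    restored = round(RESTORE_FRACTION * karma_lost_to_offender)
    return restored, restored

print(remedy(300))  # (270, 270)
```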

Comment author: NancyLebovitz 06 June 2014 11:01:25AM 4 points [-]

I agree with most of your points, but there is absolutely no way to prevent discussion. Even if it is somehow blocked on LW, it will happen elsewhere.

Comment author: David_Gerard 06 June 2014 01:41:27PM 9 points [-]

Yeah, blocking topics of discussion on LW is one of those things that doesn't work out so well.

Comment author: paper-machine 06 June 2014 01:43:34PM 2 points [-]

Understatement of the year! :D

Comment author: David_Gerard 06 June 2014 09:03:08PM *  6 points [-]

One of my proudest stupid moments on the Internet was when I was chatting to Mike Godwin (I know him through Wikimedia, he was their lawyer for a while) and I compared someone to Neville Chamberlain. ... talking to Mike Godwin. He just said "don't talk to me about WWII stuff, there's no happy ending to that discussion."

Comment author: Viliam_Bur 06 June 2014 05:41:40PM *  3 points [-]

there is absolutely no way to prevent discussion

I didn't mean: "you are not allowed to discuss this". I meant: "this is our decision, and it's final; you can discuss it if you wish, but it won't change the outcome".

In other words, I recommend against deciding a penalty for a specific case by a community vote. Because it could easily become a poll about whether the offender's faction is more powerful than the victim's faction, or vice versa.

Comment author: Tenoke 06 June 2014 08:55:34AM *  4 points [-]

Can any user downvote, or is some karma needed? It would be good if only users with karma at least, say, 20 could downvote, because that would prevent creating a new account for safe mass downvoting. (Similar system is used at StackExchange.) I'm saying this because if we adopt a policy of detecting and punishing mass downvoters, their logical next step would be to mass-downvote using a different account.

You can't give more downvotes than 4 × your karma.
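For illustration, that cap amounts to a quota check like the following. This is a minimal sketch under the stated 4× multiplier; the names are hypothetical, not the site's actual code.

```python
# Sketch of the downvote cap: a user may not cast more downvotes in
# total than 4x their karma.
DOWNVOTE_CAP_MULTIPLIER = 4

def may_downvote(karma: int, downvotes_cast: int) -> bool:
    """True if the user is still under their downvote quota."""
    return downvotes_cast < DOWNVOTE_CAP_MULTIPLIER * karma

# A user with 100 karma gets a quota of 400 downvotes in total.
print(may_downvote(karma=100, downvotes_cast=399))  # True
print(may_downvote(karma=100, downvotes_cast=400))  # False
```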

If an obvious case of mass-downvoting is detected, there should be an ad-hoc tribunal made by three people from MIRI / CFAR / Trike. The tribunal should decide whether the situation deserves punishment or not. It is their choice whether their decision includes asking the offender's explanation.

This will waste too much of their time and it is a bit too subjective.

Comment author: buybuydandavis 06 June 2014 10:48:53PM 1 point [-]

You can't give more downvotes than 4 × your karma.

That would be a lot of downvotes for someone who has been around a while. I'd get bored with downvoting long before I used up my quota.

Comment author: Viliam_Bur 07 June 2014 11:17:24AM 4 points [-]

I'd get bored with downvoting long before I used up my quota.

That's exactly why I use the downvoting scripts.

:-D

Sorry, couldn't resist.

Comment author: Viliam_Bur 06 June 2014 05:42:56PM 0 points [-]

This will waste too much of their time

Only if mass downvoting is frequent. (Not sure if that's the case.)

Comment author: trist 06 June 2014 11:25:19AM 0 points [-]

Might a half-point karma penalty for downvoting change the incentives enough to prevent mass downvoting? Perhaps combined with Viliam_Bur's minimum-karma suggestion. Generally I favor ideas that don't make more work for the moderators.

(I am not imagining having half karma points; rather, docking one karma point for every two (or n) downvotes.)
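That cost function could be sketched as below (hypothetical names; n = 2 follows the parenthetical above):

```python
# Hypothetical sketch of the proposal: the voter is docked one karma
# point for every n downvotes they cast (n = 2 in the comment).
def downvote_cost(downvotes_cast: int, n: int = 2) -> int:
    """Total karma docked from the voter for casting this many downvotes."""
    return downvotes_cast // n

print(downvote_cost(10))      # 5: ten downvotes cost the voter 5 karma
print(downvote_cost(300, 2))  # 150: mass-downvoting 300 comments is expensive
```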

Comment author: Nornagest 06 June 2014 06:35:43PM 4 points [-]

It'd make it somewhat more salient at the very least, but technical patches like these often come with unintended side effects. The moderation burden here is pretty light as it stands; as long as the tools exist to do the analysis I don't feel it's an undue burden on the mods to empower them to deal with things like this.

I'll also note that it's historically been a lot easier to get mod time than to get dev time.

Comment author: wobster109 10 June 2014 04:17:26PM -1 points [-]

Has Redacted2 broken any explicit site rules? I personally feel that unwritten rules of etiquette are not punishable. For that reason I strongly oppose options 4 and 5. For comparison, if Redacted2 had hacked the site to get around the karma requirements for downvoting that would be very different. As it is Redacted2 clicked the readily-available thumbs down button while following karma requirements. This is not a punishable offense.

That doesn't make it correct, and it doesn't mean Less Wrong's policies can't change. If the policies change, then options 4 and 5 can be considered for future use.

Comment author: Nornagest 10 June 2014 04:40:45PM *  10 points [-]

Strongly disagree. I've been involved in user-facing administration before, and binding yourself to a narrow set of policy rules (especially on a site like LW, where they aren't well documented) is about as useful as drinking antifreeze. It's tempting, sure, since we've all been socialized to believe in the rule of law and no ex post facto punishment and all that good stuff. But the truth is that that only works in government because government runs a well-developed legal framework that's had centuries to fill in its loopholes and smooth its rough edges. And it still requires a lot of discretion on the part of its various enforcers.

You can't make loophole-free policy that's more specific than "don't be a jerk", not if users are going to be interacting with each other in a reasonably natural way. You don't have the time or the expertise. That means you'll occasionally need to extend or invent policy to deal with cases that aren't well covered, and that means you'll occasionally piss people off. It's okay. It comes with the territory.

That said, block downvoting is common enough behavior that we probably should have policy to deal with it. Ideally policy and code, but that's probably not going to happen.

Comment author: Dorikka 06 June 2014 06:05:38AM 1 point [-]

A simple-ish solution is for a mod to PM the offender and ask for an explanation, then figure out a corrective (if necessary) and retributive (if necessary and appropriate as a deterrent) solution. Then implement it, make a public note, and be done with it. Very imperfect, partly due to my personal impatience with forum meta, but it mostly gets the job done.

Comment author: JQuinton 06 June 2014 08:55:10PM 0 points [-]

I suggested in another thread that successive downvotes on (1) one person's account (2) over a certain number of downvotes (3) within a set period of time should prompt the system to tell the user that they have to sacrifice personal karma until (x) days later in order to use up/downvotes.

Something like this is already in place, where a person has to sacrifice karma in order to comment on a post that itself is below a certain karma threshold.
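A sketch of how the throttle could work; the thresholds, the time window, and the in-memory bookkeeping are all invented for illustration:

```python
import time
from collections import defaultdict, deque

WINDOW_SECS = 24 * 3600   # (3) the set period of time (illustrative)
MAX_DOWNVOTES = 10        # (2) allowed downvotes on one target per window

# voter -> target -> timestamps of that voter's recent downvotes on that target
_recent = defaultdict(lambda: defaultdict(deque))

def downvote_requires_sacrifice(voter: str, target: str, now: float = None) -> bool:
    """Record a downvote by `voter` on one of `target`'s comments; return True
    once the per-target rate limit is exceeded, signalling that further
    votes should cost the voter karma."""
    now = time.time() if now is None else now
    stamps = _recent[voter][target]
    stamps.append(now)
    # drop votes that have fallen out of the window
    while stamps and now - stamps[0] > WINDOW_SECS:
        stamps.popleft()
    return len(stamps) > MAX_DOWNVOTES
```

The check is per (voter, target) pair, so ordinary downvoting spread across many authors never trips it.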

Comment author: Dentin 06 June 2014 03:47:17PM 0 points [-]

Make all downvotes cost one karma point, and make it so downvotes are weaker - perhaps 5 or 10 downvotes needed to cancel a single upvote. This really disincentivizes downvoting, but you'll do it anyway if something is just too over the top.
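Concretely, using the 1:10 ratio (the numbers are just the ones suggested above, not anything implemented):

```python
DOWNVOTE_WEIGHT = 0.1  # ten downvotes cancel one upvote (use 0.2 for the 1:5 version)

def comment_score(upvotes: int, downvotes: int) -> float:
    """Displayed comment score with weakened downvotes."""
    return upvotes - DOWNVOTE_WEIGHT * downvotes

def voter_cost(downvotes_cast: int) -> int:
    """Karma the voter pays: one point per downvote cast."""
    return downvotes_cast
```

A comment at +3 survives ten downvotes with a score of +2, and the block-downvoter has paid 10 karma for the privilege.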

The only reason I'd keep downvoting in general is for use against things like the pedophilia posts from a few months ago - if downvoting tells someone that we don't want them around, there are cases where I'm ok with that.

Comment author: Viliam_Bur 06 June 2014 05:37:02PM 9 points [-]

People already rarely downvote. (Well, except for those who do the mass downvoting.) Making downvotes 5 times weaker would be almost like removing them completely. Almost no one would bother.

Comment author: Dentin 06 June 2014 07:09:00PM 0 points [-]

What's your source for this?

Regarding making votes weaker and more expensive, I was thinking about that from the standpoint of 'downvoting in general is bad', and I would still bother.

One other possibility for downvoting might be the 'conservation of ninjutsu' trope: a person's downvoting power might decrease as their number of downvotes increases, so that one person can only nuke 10-20 karma by block downvoting instead of an arbitrary amount.
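One way to cash out that trope: give each successive downvote from the same voter against the same author geometrically decaying weight, so the total damage is capped. The decay rate below is picked so the cap works out to 10 karma; it's purely illustrative:

```python
DECAY = 0.9  # each successive downvote carries 90% of the previous one's weight

def block_downvote_damage(n_downvotes: int) -> float:
    """Total karma one voter can strip from one author with n downvotes.
    Geometric series 1 + DECAY + DECAY**2 + ... never exceeds 1/(1 - DECAY) = 10."""
    return sum(DECAY ** i for i in range(n_downvotes))
```

The first few downvotes still carry nearly full weight, so legitimate "this author posted several bad comments" downvoting is barely affected.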

Comment author: Viliam_Bur 07 June 2014 11:58:26AM *  5 points [-]

What's your source for this?

Look at a random LW thread, or perhaps this one. Comments with positive karma are many; comments with negative karma are rare. (Someone could make a script to look at the latest N articles and determine the exact ratio, but I'm too lazy.)
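The script itself would be the easy part; given a list of comment scores (fetching them is the part I'm hand-waving), the counting is trivial:

```python
def karma_sign_counts(scores):
    """Split comment scores into (positive, zero, negative) counts."""
    pos = sum(1 for s in scores if s > 0)
    neg = sum(1 for s in scores if s < 0)
    return pos, len(scores) - pos - neg, neg
```

Run over the latest N articles, the ratio of the first to the last number is the statistic in question.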

Maybe that just means that we have a smart and civilized discussion here, so the system is working as intended -- people upvote more than downvote because they are satisfied more often than dissatisfied.

The more I think about it, the more it seems to me that the problem is that the karma system was not designed to prevent this kind of abuse (downvote-bombing an enemy), so it is vulnerable here... but the proposed solutions would be vulnerable to other kinds of abuse. (What happened to holding off on proposing solutions?) Perhaps we should start by declaring the properties we want the system to have, listing a few examples of possible abuse, and then try designing a system that has the desired properties and can resist the abuse. Maybe we don't even agree on what those properties are.

For example: Really bad content (disliked by most people) should be hidden before everyone has to read it. People who write really bad content should be prevented from writing much. On the other hand, the system should not allow one person to "destroy" their enemy, if other people have no problem with what the person writes. It shouldn't be possible to get more power merely by creating a dozen sockpuppet accounts. Etc.

The current system is not perfect, but it seems to come close to these properties (more than many other websites). For example, even the downvote-bomber can give you only one downvote per comment, so if your average comment karma is greater than one, you will survive. And if your comments are good, then perhaps instead of the person who stupidly downvoted them, we should blame the people who liked the comments but didn't upvote.

I see an analogy to a country where a majority of people refuses to vote, and then they are unhappy about the results of the election. But unlike the political analogy, you don't vote for an existing party (which may all suck); you vote directly on the comments. So if most people who like something remain quiet, and those who dislike it express their opinion, who exactly is to blame, and what can we do to improve the situation?

I feel we shouldn't go as far as to say that even a little liking always trumps any amount of disliking (which is what removing downvotes would mean). Nor am I sure that making 1 like equal to 2 or 5 or 10 dislikes solves the problem; it feels to me like solving the wrong problem. Maybe it's just that when most readers refuse to provide a signal, we can't magically create it from the noise.

If what we know is that user A liked a comment and user B disliked it, should we try to statistically detect the possibility that "actually 20 users liked the comment, but 19 didn't bother voting, only A did; and the downvote was actually the result of B's personal grudge against the author, unrelated to the specific comment... and therefore this comment should be highlighted"? -- Actually, if we could detect this somehow, reliably, maybe we should. At least, it could be worth trying. I mean, if we could extract that information, then why not use it? It could be easier than trying to change human nature. But such a solution, if possible, would require math, not just a random idea. So we should approach it as a serious mathematical problem: create models, test algorithms on them, etc.
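As a toy version of that mathematical problem: test whether voter B downvotes author A far more often than B's site-wide downvote rate predicts, via a one-sided binomial tail probability. (This is meant to illustrate the kind of model, not a proposal; real detection would have to deal with correlated comment quality, sockpuppets, etc.)

```python
from math import comb

def grudge_p_value(n_votes_on_author: int, n_downvotes_on_author: int,
                   base_downvote_rate: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p): how surprising is this many
    downvotes if B treated A's comments like everyone else's?"""
    n, k, p = n_votes_on_author, n_downvotes_on_author, base_downvote_rate
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
```

Someone who downvotes 5% of comments site-wide but 60 of author A's last 100 comments gets an astronomically small probability under the "no grudge" hypothesis, which is exactly the pattern Jack's report surfaced by hand.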