Last month I saw this post: http://lesswrong.com/lw/kbc/meta_the_decline_of_discussion_now_with_charts/ addressing whether the discussion on LessWrong was in decline. As a relatively new user who had only just started to post comments, my reaction was: “I hope that LessWrong isn’t in decline, because the Sequences are amazing, and I really like this community. I should try to write a couple of articles myself and post them! Maybe I could do an analysis/summary of certain Sequences posts, and discuss how they had helped me to change my mind.” I started working on writing an article.

Then I logged into LessWrong and saw that my Karma value was roughly half of what it had been the day before. Previously I hadn’t really cared much about Karma, aside from whatever micro-utilons of happiness it provided to see the number slowly grow because people generally liked my comments. Or at least, I thought I didn’t really care, until my lizard brain reacted to what it perceived as an assault on my person.


Had I posted something terrible and unpopular that had been massively downvoted during the several days since my previous login? No; in fact, my ‘past 30 days’ Karma was still positive. Rather, it appeared that everything I had ever posted to LessWrong now had a -1 on it instead of a 0. Of course, my loss probably pales in comparison to that of other, more prolific posters whom I have seen report this behavior.

So what controversial subject must I have commented on in order to trigger this assault? Well, let’s see: in the past week I had asked whether anyone had any opinions on good software engineer interview questions I could ask a candidate. I posted in http://lesswrong.com/lw/kex/happiness_and_children/ that I was happy not to have children. And finally, in what appears to me to be by far the most promising candidate, http://lesswrong.com/r/discussion/lw/keu/separating_the_roles_of_theory_and_direct/ , I replied to a comment about global warming data, stating that I routinely saw headlines about data supporting global warming.


Here is our scenario: a new user, attempting to participate on a message board that values empiricism and rationality, posted that the evidence supports climate change being real. (Wow, really rocking the boat here!) Then, apparently in an effort to ‘win’ the discussion by silencing opposition, someone went and downvoted every comment this user had ever made on the site. Apparently they would like to see LessWrong be a bastion of empiricism and rationality and *climate change denial* instead? And the way to achieve this is not to have a fair and rational discussion of the existing empirical data, but rather to simply Karmassassinate anyone who would oppose them?


Here is my hypothesis: the continuing problem of karma downvote stalkers is contributing to the decline of discussion on the site. I definitely feel much less motivated to try to contribute anything now, and I have been told by multiple other people at LessWrong meetings things such as “I used to post a lot on LessWrong, but then I posted X, and got mass downvoted, so now I only comment on Yvain’s blog”. These anecdotes are, of course, only very weak evidence for my claim. I wish I could provide more, so I will have to defer to any readers who can.


Perhaps this post will simply trigger more retribution, or maybe it will prompt an outpouring of support, or perhaps it will just be dismissed by people saying I should have posted it in the weekly discussion thread instead. Whatever the outcome, rather than meekly leaving LessWrong and letting my 'stalker' win, I decided to open a discussion about the issue. Thank you!


It looks like the person who has been downvoting you is the same person mentioned in this thread. Follow-up queries also indicated that the same person had been downvoting several others who had previously complained of downvote stalking.

Said person failed to respond to my first private message on the subject; because there's a chance that they might have just missed it, I finally got around to sending them another message yesterday, explicitly mentioning the possibility of a ban unless they provide a very good explanation within a reasonable time. I apologize for taking so long - I procrastinated on this for a while, as I find it quite uncomfortable to initiate conflict with people.

Just to encourage you, I want to put things in context:

  • This is one person who is significantly destroying the social capital of the LW community. And in our community, social capital is scarce.

  • They probably do this to promote their political views; to silence perceived political opponents. (Including new users.) This is completely against LW values.

If you'd just block their account without further notice right now, I would say: "Well done!". It is extremely generous to give them a chance to explain themselves; and there probably is no good explanation anyway, so it's just playing for time.

I mean, really, if one person keeps terrorizing the community, and the community is unwilling to defend themselves, then all the lessons about how rationalists are supposed to win have failed.

A person who did so much damage does not deserve a second chance. If you decide to give them a second chance, I won't complain. But I would complain about inaction while they continue to do more damage. If you are the only person who has access to the "Ban User" button, just press it already, before everyone leaves.

EDIT: This whole thread (and it is far from being the first one) ... (read more)

I mean, really, if one person keeps terrorizing the community, and the community is unwilling to defend themselves, then all the lessons about how rationalists are supposed to win have failed.

I agree, Rationalists should win! And in this case, winning doesn't mean turning into straw-man Vulcans who say "you shouldn't have any emotional reactions to people mass-downvoting you", as I see in a couple of other places in this thread. Rather, it means that we should be able to design a community system that makes everyone feel cared for, and also provides them useful feedback on how they should or shouldn't post things.

Emotions matter, and making people feel valued and loved by other members is how a community thrives. (That's why religions can do so well even though they make silly claims about the nature of reality.)

I suggest that, whether or not they're banned, unless they do provide a very good explanation, their identity and a description of the mass-downvoting they've done should be posted on LW. And (if anyone has the bandwidth to do it) mass-downvoting should be exposed when it's done in the future, and it should be known that it will be.

Because otherwise the obvious response to "hey, we're banning you for abusing the system" is "OK, thanks. I'll make another account.".

Because otherwise the obvious response to "hey, we're banning you for abusing the system" is "OK, thanks. I'll make another account.".

I don't necessarily disagree, but given that the offender will lose > 9k Karma, and will have to grind a bit to be able to keep mass-downvoting, I'd say it is more than a trivial inconvenience.

You only need maybe 10 karma to be able to significantly hurt new users.

Maybe there should be some threshold, e.g. 100 karma, before you can downvote. And beyond that, you could downvote as much as you can today. This probably could be done by one "if" line in the code.

We need downvoting, but we don't quite need to have new users able to destroy other new users.
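
A minimal sketch of the kind of "if" line meant above, using illustrative User/Comment stand-ins rather than LessWrong's actual models, with the 100-karma threshold as an assumed constant:

```python
from dataclasses import dataclass

MIN_KARMA_TO_DOWNVOTE = 100  # the threshold suggested above; tune as needed

@dataclass
class User:            # illustrative stand-in, not LW's real model
    name: str
    karma: int = 0

@dataclass
class Comment:         # illustrative stand-in
    author: User
    score: int = 0

def cast_vote(voter: User, comment: Comment, direction: int) -> None:
    """Apply an up (+1) or down (-1) vote; one extra "if" gates downvotes."""
    if direction == -1 and voter.karma < MIN_KARMA_TO_DOWNVOTE:
        raise PermissionError(
            f"You need at least {MIN_KARMA_TO_DOWNVOTE} karma to downvote.")
    comment.score += direction
    comment.author.karma += direction
```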

Jinoc (+2, 10y):
Actually, I was wondering about this: do we need downvoting? I mean, is there a discussion somewhere on the relative merits of up/down-voting versus upvoting only?

is there a discussion somewhere on the relative merits of up/down-voting versus upvoting only?

Yes, it came up here the last time someone made a Discussion post about retributive downvoting. Not to toot my own horn, but I feel I outlined some reasonable issues with that plan in my response.

(Short version: I feel that upvote-only systems encourage cliques and pandering, neither of which align well with LW's culture or goals.)

Jinoc (+2, 10y):
Thank you!
Luke_A_Somers (+3, 10y):
I think downvoting is good to have, but I'm not at all sure that we need downvoting to below 0.

That depends on the comment. Some comments display so much ignorance that they deserve to be downvoted and hidden.

Imagine a new user who just asserts that the theory of relativity is wrong, and provides their own "theory" based on some mumbo-jumbo or a misunderstanding of the basic concepts of physics. That specific comment deserves to be downvoted below zero. It is not spam, it is not offensive, so it should not be reported to moderators. It is just too stupid. Zero is for the "meh" comments; this would be below that level.

This is different from mass-downvoting all comments of other users because someone does not agree with them for political reasons.

It seems to me that many people are thinking along the lines of "design a system that cannot be abused, and it will not be abused". But anything can be abused. Imagine that we adopted a system with upvotes only, and then had a separate button for "report spam". Would this be safe against abuse? A malicious user could decide to mass-report all comments of their political enemies as spam. And then what? If the spam reports are handled automatically, it would mean that new users would ... (read more)

I am now convinced that going negative is useful.

Dentin (0, 10y):
What about requiring a karma payment to downvote a comment into the negative?
Squark (-4, 10y):
Personally, I'm in favor of a system similar to Stack Exchange's: a comment cannot be downvoted but can be "flagged as inappropriate" to draw moderator attention.
Viliam_Bur (+4, 10y):
Realistically, considering how much time it takes to change anything about LW software, I don't see it as likely. But I can imagine that this system could work if we had multiple moderators. I mean, so that the website would not be completely abandoned if one moderator spends a day offline. Also, to provide the moderators some kind of plausible deniability, so they wouldn't feel they were starting a personal conflict with someone whenever they removed a comment.
Squark (0, 10y):
Regarding changes to LW software, I think the process could be improved if the people responsible allowed LWers with coding skills to volunteer their time.
Vladimir_Nesov (+9, 10y):
It's open source, and contributions (at least on some issues) are welcome.

  • Contributing to Less Wrong
  • Issue tracker
Squark (0, 10y):
jackk, Vladimir, thanks for commenting! I think those links should be on the main page, to be easier to discover.
jackk (+5, 10y):
Part of my job is to review pull requests.
Nornagest (+3, 10y):
That depends on two things we don't have: (a) an active mod community that's reasonably large in proportion to the userbase, and (b) a culture that accepts and ideally applauds an authoritarian approach to dealing with trolls and other assorted troublemakers. Having the button without having the support for it is useless at best, and at worst can be actively counterproductive by creating an expectation that the mods can't possibly meet, or by encouraging an adversarial relationship between mods and users. Scott Alexander's got a similar system going over at slatestarcodex (which, to be fair, is excellent in terms of top-level content, and above average in terms of commentariat as long as you don't mind the occasional insane diatribe), and it doesn't seem to be doing a very good job of deterring the type of commentary it was instituted to prevent.
Squark (-1, 10y):
We can set up a system in which mods are elected. This might provide a sufficient number of mods and wouldn't be authoritarian.
NancyLebovitz (+2, 10y):
Does anyone have experience with a board that elects its mods? I'm not saying it's a bad idea, though it seems like it's got some interesting complications, such as who gets to vote and keeping the voting honest -- I've only been on boards where the mods were chosen from the top.
Nornagest (+4, 10y):
Formal elections are rare, but vague consensus processes (along the lines of "anyone who cares can nominate a mod; we'll pick whoever gets the most nods as long as they aren't blatantly electioneering") seem pretty common. Honestly I think I'd prefer the latter to the former.
[anonymous] (+3, 10y):
I've seen a board occasionally elect a moderator (with other mods appointed). The resulting drama was way too high for whatever benefits the election may have had.
Squark (0, 10y):
AFAIK, Wikipedia and StackExchange use elected mods. They don't seem to be faring too badly.
ialdabaoth (+8, 10y):
The person in question has got Rationality Quotes karma-mining down to a science. Ban them, and they'll be back up to 5K karma on their new account within weeks. HEY! Suggestion: Can the Rationality Quotes threads be pulled off into their own section, where upvotes and downvotes still happen but don't affect the user's karma? This makes sense for multiple reasons:

  • you shouldn't get karma for just quoting things someone else said, without analysis or context; if you can't be original, at least be relevant/topical.
  • it prevents karma-mining.
  • it keeps the Rationality Quotes threads from turning into a distracting meta-game.
buybuydandavis (+1, 10y):
So that's the trick!
Dentin (+1, 10y):
It's possible to make hundreds of karma with minutes of effort simply by copy/pasting somebody else's awesome quote into a monthly quote thread. The amount of grinding required is paltry, and not at all a stumbling block to persistent offenders.
ThisSpaceAvailable (0, 10y):
By "identity", I take it you mean not merely the user name, but whatever other identifying information the mods have? I don't understand how your second paragraph follows from your first. What is your motive for wanting the information released? If it's retribution, that has nothing to do with your second paragraph. I don't see a deterrence value, since anyone concerned about keeping their information private to avoid downvote stalking will presumably just not use their actual information in registering in the first place. I don't see a preventative justification, either; if the mods can verify identity, they should just block any new account from that person, and if they can't verify identity, then how is this an answer to people making new accounts?
gjm (0, 10y):
I meant the user name, not any other information the moderators may have. The second paragraph is intended to follow from the first because:

  • I expect posting information about mass-downvoting to reduce its effectiveness, because
      • people will feel less bothered by getting lots of downvotes if they know they come from a low-quality mass-downvoter
      • readers who know that A has been mass-downvoting B will be aware of that when looking at B's comments and may discount downvotes on them accordingly.
  • I expect posting information about mass-downvoting to reduce its attractiveness, because
      • prospective mass-downvoters will anticipate getting exposed, with likely consequences for their own reputation (and in particular their ability to amass the karma they need for the mass-downvoting).
  • I expect the promise of future exposure to inhibit mass-downvoting by a further mechanism:
      • prospective mass-downvoters will fear that they may get not only exposed but banned, which would (at least) be an inconvenience.
jsteinhardt (+8, 10y):
Thanks for following up on this. Any possibility we can know what "within a reasonable time" means concretely? (E.g. days, weeks, months? I think a quicker resolution will be better, though I empathize with your situation.)

Around a week.

ChristianKl (+6, 10y):
Yes, when it comes to instances like that and asking people to respond within a reasonable timeframe, setting a deadline is useful. It makes it easier for you to simply wait for the deadline instead of asking yourself every day: "Has enough time passed that I should do something?"
shminux (+6, 10y):
No need for a conflict or a ban, just let them know that their user name will be made public. Not sure why the parent is upvoted. If you have trouble confronting people, you make a poor admin. Is there another active admin on LW who is more competent? EDIT: I assumed too much, Kaj was probably not expected to moderate and ended up in this position by default. Sorry.

If you have trouble confronting people, you make a poor admin.

Can we please act like we actually know stuff about practical instrumental rationality given how human brains work, and not punish people for openly noticing their weaknesses.

You could have more constructively said something like "Thank you for taking on these responsibilities even though it sometimes makes you uncomfortable. I wonder if anyone else who is more comfortable with that would be willing to help out."

not punish people for openly noticing their weaknesses.

Thanks! Yes, that's a good point. On the other hand, willingness to confront problem users is one of the absolute minimum requirements for a forum moderator. I suppose Kaj was not expected to do the moderator's job, probably just behind-the-scenes maintenance, and I assumed too much. Sorry, Kaj!

That said, a competent active forum moderator is required to deal with this particular issue, and I have yet to see one here.

Preferably more than one moderator.

jackk (+6, 10y):
Quoting from the other thread about downvote stalking:
Kaj_Sotala (+1, 10y):
No problem. :-)
David_Gerard (+9, 10y):
I'm brash, extroverted, outgoing, confrontational, have the subtlety of a head-on collision with a Mack truck, and still find this sort of admin duty unpleasant. So this leads me to suspect it's just horrible work.

I have been told by multiple other people at LessWrong meetings things such as “I used to post a lot on LessWrong, but then I posted X, and got mass downvoted, so now I only comment on Yvain’s blog”.

That's interesting, and is causing me to update in the direction of thinking that this is a real problem that resources should be devoted to solving. I think I know of one other person besides you who has left LW because of downvoting. It's interesting how seriously we take the arbitrary numbers associated with our profiles & contributions. (I do it too.)

And it looks as though many people have reported similar experiences in this thread. Maybe talk to Kaj Sotala? Perhaps he is privately reprimanding mass downvoters?

I do think this comment of yours was a reasonable downvote candidate:

Then why do I see reddit links to NOAA articles, every single month, with titles like: "May 2014 the hottest May since 1880. Four of the five warmest Mays on record have occurred in the past five years. May 2014 marked the 39th consecutive May and 351st consecutive month (more than 29 years) with a global temperature above the 20th century average."

Not because I think you are wr... (read more)

is causing me to update in the direction of thinking that this is a real problem that resources should be devoted to solving

I don't believe that it's more than a day or two of work for a developer. The SQL queries one would run are pretty simple, as we previously discussed, and as Jack from Trike confirmed. The reason that nothing has been done about it is that Eliezer doesn't care. And he may well have good reasons not to, but he never commented on the issue, except maybe once when he mentioned something about not having technical capabilities to identify the culprits (which is no longer a valid statement).
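
The actual LW schema isn't shown in this thread, but as a rough illustration of the kind of query being described, here is a sketch against a made-up votes table in SQLite; the table and column names are assumptions:

```python
import sqlite3

# Made-up schema and data, purely to illustrate the shape of the query.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE votes (voter TEXT, target_author TEXT, direction INTEGER)")
conn.executemany(
    "INSERT INTO votes VALUES (?, ?, ?)",
    [("suspect", "ander", -1)] * 40 + [("bob", "ander", 1), ("suspect", "carol", -1)],
)

# Voter/target pairs with an implausibly large number of downvotes.
rows = conn.execute("""
    SELECT voter, target_author, COUNT(*) AS n
    FROM votes
    WHERE direction = -1
    GROUP BY voter, target_author
    HAVING COUNT(*) >= 30
    ORDER BY n DESC
""").fetchall()

for voter, target, n in rows:
    print(f"{voter} has downvoted {target} {n} times")
```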

My guess is that he cares not nearly as much about LW in general now as he used to, as most of the real work is done at MIRI behind the scenes, and this forum is mostly noise for him these days. He drops by occasionally as a distraction from important stuff, but that's it.

The reason that nothing has been done about it is that Eliezer doesn't care.

This sounds like moralizing to me. Of the following two scenarios, which do you have in mind?

  • Someone had an idea for a solution to the problem and ran it by Eliezer. Eliezer vetoed it (because he was feeling spiteful?)

  • Eliezer is a busy person trying to do lots of things. Because Eliezer has historically been LW's head honcho, no one feels comfortable taking decisive action without his approval. But folks are hesitant to bother him because they know he's got lots to do, or else they do send him email but he doesn't respond to them because he's behind on his email, or he skips reading those emails in favor of higher-priority emails.

I think the second scenario is far more likely. If the second scenario is the case, I don't see any reason to bother Eliezer. We just have to stop acting as though all important forum decisions must go through him. Personally I don't see any reason why Eliezer would know best how to run LW. Expertise at blogging is not the same as expertise at online community management. And empirically, there have been lots of complaints about the way LW is moderated, which is ... (read more)

Eliezer is a busy person trying to do lots of things. Because Eliezer has historically been LW's head honcho, no one feels comfortable taking decisive action without his approval.

If you are a busy person wanting to get a lot of things done, delegate tasks and give someone else the authority to solve them. To the extent that he doesn't want to solve tasks like this himself, he should clearly delegate the authority to someone else.

Of course it's the second scenario. My point is that this forum has dropped in priority for Eliezer and MIRI in general in the last year or so. And, as I said, probably for a good reason.

The reason that nothing has been done about it is that Eliezer doesn't care. And he may well have good reasons not to, but he never commented on the issue, except maybe once when he mentioned something about not having technical capabilities to identify the culprits (which is no longer a valid statement).

My guess is that he cares not nearly as much about LW in general now as he used to...

This. Eliezer clearly doesn't care about LessWrong anymore, to the point that these days he seems to post more on Facebook than on LessWrong. Realizing this is a major reason why this comment is the first thing I've posted on LessWrong in well over a month.

I know a number of people have been working on launching a LessWrong-like forum dedicated to Effective Altruism, which is supposedly going to launch very soon. Here's hoping it takes off—because honestly, I don't have much hope for LessWrong at this point.

XiXiDu (+7, 10y):
He receives a massive number of likes there, no matter what he writes. My guess is that he needs that kind of feedback, and he doesn't get it here anymore. Recently he requested that a certain topic should not be mentioned on the HPMOR subreddit, or otherwise he would go elsewhere. On Facebook he can easily ban people who mention something he doesn't like.
[anonymous] (+2, 10y):
Given that you directly caused a fair portion of the thing that is causing him pain (i.e., spreading FUD about him, his orgs, etc.), this is like a win for you, right? Why don't you leave armchair Internet psychoanalysis to experts?

I'm not sure how to respond to this comment, given that it contains no actual statements, just rhetorical questions, but the intended message seems to be "F you for daring to cause Eliezer pain, by criticizing him and the organization he founded."

If that's the intended message, I submit that when someone is a public figure, who writes and speaks about controversial subjects and is the founder of an org that's fairly aggressive about asking people for money, they really shouldn't be insulated from criticism on the basis of their feelings.

[anonymous] (-4, 10y):
You could have simply not responded. It wasn't, no. It was a reminder to everyone else of XiXi's general MO, and the benefit he gets from convincing others that EY is a megalomaniac, using any means necessary.
David_Gerard (+5, 10y):
You keep saying this and things like it, and not providing any evidence whatsoever when asked, directly or indirectly.
[anonymous] (-6, 10y):
XiXiDu (+3, 10y):
Circa 2005 I had a link to MIRI (then called the Singularity Institute) on my homepage. Circa 2009 I was even advertising LessWrong. I am on record as saying that I believe most of the Sequences to consist of true and sane material. I am on record as saying that I believe LessWrong to be the most rational community. But in 2010, due to an incident that may not be mentioned here, I noticed that there are some extreme tendencies and beliefs that might easily outweigh all the positive qualities. I also noticed that a certain subset of people seems to have a very weird attitude when it comes to criticism pertaining to Yudkowsky, MIRI or LW. I've posted a lot of arguments that were never meant to decisively refute Yudkowsky or MIRI, but to show that many of the extraordinary claims can be weakened. The important point here is that I did not even have to do this, as the burden of evidence is not on me to disprove those claims, but on the people who make the claims. They need to prove that their claims are robust and not just speculations about possible bad outcomes.
XiXiDu (+5, 10y):
A win would be if certain people became a little less confident about the extraordinary claims he makes, and more skeptical of the mindset that CFAR spreads. A win would be if he became more focused on exploration rather than exploitation, on increasing the robustness of his claims, rather than on taking actions in accordance with his claims. A world in which I don't criticize MIRI is a world where they ask for money in order to research whether artificial intelligence is an existential risk, rather than asking for money to research a specific solution in order to save an intergalactic civilization. A world in which I don't criticize Yudkowsky is a world in which he does not make claims such as that if you don’t sign up your kids for cryonics then you are a lousy parent. A world in which I don't criticize CFAR/LW is a world in which they teach people to be extremely skeptical of back-of-the-envelope calculations, a world in which they tell people to strongly discount claims that cannot be readily tested. I speculate that Yudkowsky has narcissistic tendencies. Call it armchair psychoanalysis if you like, but I think there is enough evidence to warrant such speculations.
Squark (+1, 10y):
I call it an ignoble personal attack which has no place on this forum.
XiXiDu (+6, 10y):
Sorry. It wasn't meant as an attack, just something that came to my mind reading the comment by Chris Hallquist. My initial reply was based on the following comment by Yudkowsky: And regarding narcissism, the definition is: "an inflated sense of one's own importance and a deep need for admiration." See e.g. this conversation between Ben Goertzel and Eliezer Yudkowsky (note that MIRI was formerly known as SIAI): Also see e.g. this comment by Yudkowsky: ...and from his post... And this kind of attitude started early. See for example what he wrote in his early "biography": Also see this video:
Nornagest (+7, 10y):
That's the dictionary definition. When throwing around accusations of mental pathology, though, it behooves one not to rely on pattern-matching to one-sentence definitions; it overestimates the prevalence of problems, suggests the wrong approaches to them, and tends to be considered rude. Having a lot of ambition and an overly optimistic view of intelligence in general and one's own intelligence in particular doesn't make you a narcissist, or every fifteen-year-old nerd in the world would be a narcissist. (That said, I'm not too impressed with Eliezer's reasons for moving to Facebook.)
Viliam_Bur (+5, 10y):
I feel that a similar accusation could be used against anyone who feels that more is possible and, instead of whining, tries to win. I am not an expert on narcissism (though I could be an expert at it, heh), but it seems to me that a typical narcissistic person would feel they deserve admiration without doing anything awesome. They probably wouldn't be able to work hard, for years. (But as I said, I am not an expert; there could be multiple types of narcissism.)
buybuydandavis (+1, 10y):
Thinking that one person is going to save the world, and that you're him, qualifies as "an inflated sense of one's own importance", IMO. First mistake: believing that one person will be saving the world. Second mistake: believing that there is likely only one person who can do it, and that he's that person.
Viliam_Bur (+1, 10y):
To put the first quotation into some context, Eliezer argued that his combination of high SAT scores and spending a lot of effort in studying AI puts him in a unique position that can make a "difference between cracking the problem of intelligence in five years and cracking it in twenty-five". (Which could make a huge difference, if it saves Earth from destruction by nanotechnology, presumably coming during that interval...) Of course, knowing that it was written in 2000, the five-years estimate was obviously wrong. And there is a Sequence about it, which explains that Friendly AI is more complicated than just any AI. (Which doesn't prove that the five-years estimate would be correct for any AI.)
buybuydandavis (+2, 10y):
Most people very seriously studying AI probably have high SATs too. High IQs. High lots of things. And some likely have other unique qualities and advantages that Eliezer doesn't. Being unique in some qualities doesn't mean being uniquely capable of the task in some timeline. My main objection is that until it's done, I don't think people are very justified in claiming to know what it will take to get it done, and therefore they're unjustified in claiming some particular person is best able to do it, even if he is best suited to pursue one particular approach to the problem. Hence, I conclude he is overestimating his importance, per the definition. Not that I see it as some heinous crime. He's overconfident. So what? It seems to be an ingredient of high achievement. Better to be overconfident epistemologically than underconfident instrumentally.
TheAncientGeek (-2, 10y):
Private overconfidence is harmless. Public overconfidence is how cults start.
Nornagest (0, 10y):
I'd say that's, at the very least, an oversimplification; when you look at the architecture of organizations generally recognized as cults, you end up finding they share a fairly specific cluster of cultural characteristics, one that has more to do with internal organization than claims of certainty. My favorite framework for this is the amusingly named ABCDEF: though aimed at new religions in the neopagan space, it's general enough to be applied outside it. (Eliezer, of course, would say that every cause wants to be a cult. I think he's being too free with the word, myself.)
Squark (+2, 10y):
Well, I'm sorry, but when you dig up quotes of your opponent to demonstrate purported flaws in his character, it is a personal attack. I didn't expect to encounter this sort of thing on LessWrong. Given the number of upvotes your comment received, I can understand why Eliezer prefers Facebook.
XiXiDu (+3, 10y):
Yudkowsky tells other people to get laid. He is asking the community to downvote certain people. He is calling people permanent idiots. He is a forum moderator. He asks people for money. He wants to create the core of the future machine dictator that is supposed to rule the universe. Given the above, I believe that remarks about his personality are warranted, and not attacks, if they are backed up by evidence (which I provided in other comments above). But note that in my initial comment, which got this discussion started, I merely uttered a guess as to why Yudkowsky might now prefer Facebook over LessWrong. Then a comment forced me to escalate this by providing further justification for uttering this guess. Your comments further forced me to explain myself. Which resulted in a whole thread about Yudkowsky's personality.
Viliam_Bur (0, 10y):
Just curious: what else do you consider to be the big problems with CFAR (other than being associated with MIRI)?

Indeed, it is perfectly fine if someone downvoted that post. I probably deserved a -3 there. However, rather than being given the opportunity to learn from that feedback in the way karma is supposed to work, I instead received one downvote on every post I had ever made on the site.

buybuydandavis (+1, 10y):
I don't think people are entirely on the same page about how karma is "supposed to work". For some, it may be feedback to get people to post better. For others, it may be stifling posts from those they perceive as low-quality posters. Karma bombing seems rather jerk-faced to me, but do you really need to care? You've got enough karma to post articles. You have good evidence that the karma drop was due to one lone jerk. Therefore, what does he matter? Why is this a problem for you?

Why is this a problem for you?

I suppose if you use comment karma to evaluate how people like what you write, blanket downvoting masks the useful signal.

Yes. A while ago I suddenly lost like 50 points (which is a lot for me). The signal that gives isn't 'don't write stuff like this', but 'we don't want you here, go away', and I almost did.

buybuydandavis (+3, 10y):
But he knows the source of the karma drop; therefore, the useful signal has been unmasked.

Therefore, what does he matter? Why is this a problem for you?

I don't see the point in telling people that they shouldn't have the emotional reactions that they keep having. It may be possible to fade those reactions out in the long haul, but if caring about karma is a typical reaction (and it seems to be common, at the very least), then it's better to take it into account.

buybuydandavis (-9, 10y):
ThisSpaceAvailable (+3, 10y):
If you were mugged, but the cops caught the mugger and you got all your money back, would you not care about the mugging? You seem to be putting results over process.
buybuydandavis (-1, 10y):
I would want to see the guy strung up. But I wouldn't refrain from going out of my house because I had once been mugged. I consider that a dysfunctional response. If I knew someone who was "living" that way, I'd encourage them to change. See previous comment (downvoted into oblivion) on people refraining from posting because people downvoted them. I walk the talk. It just isn't that hard. http://lesswrong.com/lw/kfj/downvote_stalkers_driving_members_away_from_the/b255
polymathwannabe (+4, 10y):
Then taking the trouble to explain why the comment is problematic is much more helpful to the discussion than simply clicking the thumbs-down.

I hope you'll forgive me for reiterating what other commenters have already said, but I want to add my own voice here. The problem is not just serial karmakillers. The problem is the culture of using downvotes as a disagree button rather than as a moderation tool. I talked about it before, but ironically most of my comments got downvoted.

I've also commented that the karma system, as it currently stands, causes less participation on the site. Just to save time, I'll paste it here.

"The fundamental flaw that I see with LessWrong's main site is that its karma/moderating system has the effect of silencing and banning people for being disagreed with or misunderstood. This is a major problem. You cannot mix "I don't agree with you" or "I don't understand you" with "you will be punished and silenced."

People who spam, flame, or otherwise destroy conversation are the on... (read more)

Are you seriously implying that the Facebook group for LessWrong has better discussions than the site? I can't say that I agree.

polymathwannabe (+4, 10y):
For the past couple of months, I've found the Facebook LW group to debate more interesting subjects than the LW website. But that's only my appraisal of what's interesting and what's not.
John_Maxwell (+7, 10y):
Good criticism is frequently upvoted on LW. But overall, I agree with you that this is an issue.
philh (+6, 10y):
Do such people usually get downvoted on LW? Outside of this one downvote stalker, that is. (This is a separate question from whether or not people think they'll get downvoted for constructive disagreement, which is also important.)
EGarrett (0, 10y):
Well, I'm sure each person downvotes for their own reasons, but I have noticed several people who, when they are disagreeing with someone, tend to leave a consistent series of "-1" votes on the posts of the person with whom they're disagreeing. If they are doing what it seems they're doing, I would say this is an example of the problem. Downvoting also allows people to express disagreement without having to give reasons or even pay much attention to what's said. I think this also goes against the purpose of the site.
Nornagest (+6, 10y):
Maybe not disagreement as such, but it's very often good to express disapproval without detailing the reasons for it. The basic issue here is that a response increases visibility (more, in fact, than an upvote does), and you generally don't want to make things you disapprove of more visible. The classic example would be deliberate trolling, where a lovingly crafted response detailing everything that's wrong with the post is precisely what you don't want: it wastes your time and encourages the troll. But it's not much different for incoherent crankery or political diatribes or cat pictures: the author might not be encouraged by a response, but you're still wasting other people's time as long as the thread's clogging up Recent Comments. That said, while I don't feel that downvoting your conversational partners to express disapproval is an abuse of the system in the same way that block downvoting is, I do think it's a bad idea and wouldn't be opposed to a feature limiting it.
EGarrett (+2, 10y):
I think you're definitely right that we need to be able to control people who stop the site from being an honest exchange of ideas or good-faith discussion. It might be better to have a button to report trolling, flaming or spamming, but not an all-purpose downvote that might be used for other reasons. The example I think about is a Religious Forum. If they had a "downvoting" feature that was implemented in the same way that the Less Wrong feature is...anyone showing up who asks too many skeptical questions could just be downvoted out of existence without anyone answering their arguments. Perhaps this demonstrates how it could be an Anti-Rational tool or encourage groupthink...which I think is dangerous.
Nornagest (+7, 10y):
Bidirectional voting has its disadvantages, but I don't think this is one of them. Sure, if you get a seed culture that's skewed enough in one direction, karma-like systems can be used to enforce conformity with it. But that's hardly unique; if you wander into a LiveJournal (a voteless format) or a Facebook discussion (a unidirectional format) and start spouting off opinions outside the local Overton window, you'll quickly find yourself getting shouted down. There's no purely technical way I know of to break what I'll politely describe as an ideological consensus cluster.

That being the case, I find myself thinking more of the incentives karma creates in an ideologically mixed environment that values things other than conformity, like clarity and originality. Sure, offending someone's ideology is risky; but people on the other side aren't mindless political monsters, they care about those other values as much as you do, and if you respect them you won't get many downvotes. But ignore those norms to dribble content-free "hooray for our side", and the best you can hope for is a few upvotes from people suffering from halo effects.

What happens if you don't have the option of downvoting? Well, suddenly it doesn't matter what your opponents think, since they can't effectively punish you for it. People don't stop caring about discourse norms, they still have the same reactions to following them that they always did, but the thing is that being clever and polite and original is hard; it takes effort and care and some facility with the language. Repeating buzzwords for a few safe upvotes from true believers, on the other hand, doesn't. Stripped of downside, that's what people are going to fall back on -- which of course leads to a self-perpetuating cycle of radicalization. (Twitter and Tumblr make salient examples, although they both have other issues going on. Open Facebook comment threads are a somewhat purer case.)
Viliam_Bur (+3, 10y):
And if some user decides to use this button to report all comments of people with different political opinions, then what? Would it then be acceptable to ban the user because they abused the button? Well, they are abusing the downvote button now. At some point you just have to use the banhammer. It might as well be now.
EGarrett (+3, 10y):
To Viliam: Trolling/flaming/spamming report buttons are clearly labeled for their purpose. The downvote button isn't.

To Nornagest: Here's the big difference: On Facebook, you can't stop OTHER people from seeing what the person has to say, no matter how much you scream at them. With the system here, you can. Their posts will be hidden and they can even lose their posting privileges when they are downvoted. And when I say that's a big difference, I mean that's a BIG difference. Again, think of the religious forum. This same karma system would allow them to literally stop you from speaking to or influencing people who are on the fence or more open to rationality, instead of just posting replies that highlight their own immaturity or irrationality. I think the issue here is clear.

Secondly, when you refer to (I presume) LessWrong as "an ideologically-mixed environment that values things other than conformity," you're assuming that everyone here views it that way. If everyone saw the downvote button in the same idealized form, we wouldn't have a problem. The issue is that the downvote button does not have such a clear and apparent definition, and there doesn't appear to be any actual enforced policy by the LessWrong admins to stop people from using the downvote button to simply express disagreement.
gwern (+1, 10y):
Can't you? Eliezer cites the ease of clicking a button and making the other person Go Away as a major perceived advantage of posting on FB rather than LW. And even if you downvote someone on LW, well, someone can undo that with an upvote.
EGarrett (0, 10y):
Hi gwern, I'm not sure exactly what you mean. In Facebook groups, you can ignore someone, but the person in question can still participate in discussions that don't involve you, or discuss what you've said outside of your own threads. I think this is actually a good thing, since it lets you avoid unconstructive people, but doesn't allow you to censor people and keep them from being heard by others if they have something valuable to add. Regarding downvoting vs upvoting, counteracting mass downvoters (who apparently have gone to the extent of downvoting someone over 1000 times) is a huge burden on other people and not something they should have to do.
gwern (+3, 10y):
I believe Eliezer was referring to starting posts. So the question is, which is better: a ban-happy omnipotent OP, or gradual, undoable community moderation? And indeed, it's not something that happens often. Eugine is so far the only person to be banned for mass downvoting in the ~5-year history of a very active site.
Nornagest (+1, 10y):
Sorry, didn't see this until now. In future, it works better if you put responses to a post under that post; I'm not alerted if you respond to me in another branch of the thread.

I'm presuming no such thing; I was talking about the composition of LW, not the purpose of the downvote button. People's personal downvote policies are going to vary (quite a bit, really), but as long as the forum as a whole contains people with a mix of values similar to those I mentioned, their votes are going to average out to something like the behavior I described: some votes for conformity, some for contrarianism, some for unrelated norms. Note however that this doesn't take into account retributive downvoting; there needs to be policy in place to deal with that, but hey! Now there is, and we've just seen it in action.

The visibility effects of karma, I suspect, are overrated as a driver of behavior except in the case of top-level posts (where they're taken off most of the interface and become something of a pain to get to): leaving that "downvoted below threshold" notification seems to incite people's curiosity as much as anything. Some of my highest-ranked posts are replies to comments below the threshold; they wouldn't have gotten there if people weren't reading the thread. The karma toll for replying to heavily downvoted comments does shape behavior, but I've only seen one person get that low for politely expressing political views, and he was a white supremacist.
EGarrett (0, 10y):
Hi Nornagest, I'm used to forums with a multi-quote feature. I wasn't aware it wouldn't notify you if I just replied to the bottom comment.

This doesn't work in practice, precisely because mass and retributive downvoting are disproportionately effective. One person with a skewed concept of downvoting can outweigh tons of other people who are using the functions as intended. I might vote up a comment by someone I like, but I'm not going to go through their profile and give them hundreds (or even thousands) of upvotes, while we've seen the downvote-abusers do exactly this. So the votes won't average out properly.

We don't have a lot of clear data on this because an "ugh field" or people refraining from posting is often an invisible cost. There have been several times when I had a notion that I wanted to post about here, even considering an entire sequence or at least a largely new area of discussion, then thought of some of this type of behavior and changed my mind. Even if the "downvote below threshold" notice might incite curiosity, the person in question still loses privileges on the site.

Lastly, the Eugine_Nier news is quite encouraging and may indicate some solutions to this issue.
Nornagest (0, 10y):
See the next sentence of my comment. That's a very different case. Downvoting a person into losing privileges can be done by a single user if the target's posted a lot of marginal or controversial comments, but unless they're very new it takes a lot of patience or a downvote script (Eugine seems to have been using patience), and AFAICT most people have karma ratios high enough that it'd take sockpuppets or other abuses that could be targeted by narrower rules. I only know of one illegitimate case, although others may emerge as the consequences of Eugine's behavior become more apparent. Conversely, downvoting a post below the visibility threshold is much more common but can't be done by a single user.
EGarrett (0, 10y):
Yes, but I feel that problem nullifies the paragraph. I would have agreed that the patience required is a barrier, until I found out about the 1000-vote attacks. Also, even giving someone a smaller number of downvotes can become a problem if it's disproportionate to the upvotes, such as downvoting the person's last 30-50 comments. It simply requires a larger number of people to be doing it. When there was no indication that there would be mass-downvote moderating, I actually downvoted Eugine several times in a row out of annoyance when I realized what he was doing to other people...since I figured there was no other option to control it. Anyway, it may of course be that Eugine is the first person to be outed for this behavior and it will become a regular thing. In which case this issue may cease to be a problem at all.
Nornagest (0, 10y):
Eugine may be the only person to have (recently) been using this as a tool of policy, aside from a couple people downvoting him in retribution. If you look at the patterns of people targeted for retributive downvoting (here, here, and here, plus this thread and its relatives), most of the situations seem to fit his MO and apparent set of grievances. Perhaps most tellingly, I don't know of anyone besides Eugine himself who's been mass-downvoted by two users (which is easy to tell from karma on obscure or unremarkable posts). (I'm not sure about Will_Newsome, but that was three years ago.)
philh (0, 10y):
For what it's worth, I haven't noticed that myself, and I don't think it's ever happened to me here. But I agree that when it happens, it's an example of the problem you're talking about. I agree with this too. I think maybe we have just different intuitions of how commonly it's actually used like that.
EGarrett (+2, 10y):
You probably have more experience than I do with how people as a whole do the voting. I'm just concerned with potential problems.
David_Gerard (+4, 10y):
In my personal experience, I have posted things that are quite critical of LW ideas, but if I show I've done my homework they get upvotes.

Would it be problematic to put a blanket ban on upvotes and downvotes of posts that are older than 30 days? Changes in karma to old posts are no longer an especially useful signal to their author anyway. Such a ban could be a cheap way to mitigate downvote stalking without significantly impacting current discussions.

An attacker could still use multiple accounts to mass-downvote everything from a user in the past 30 days. On the other hand, it's possible that some users' comments were uniformly bad. For the purpose of providing a useful signal, I think ... (read more)
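
A minimal sketch of the proposed rule, assuming a hypothetical vote handler and a created_at timestamp on each comment (names are illustrative, not LW's actual code):

```python
from datetime import datetime, timedelta, timezone

VOTING_WINDOW = timedelta(days=30)  # the 30-day cutoff proposed above

def vote_allowed(comment_created_at, now=None):
    """True if the comment is still young enough to be voted on."""
    now = now or datetime.now(timezone.utc)
    return now - comment_created_at <= VOTING_WINDOW

# A comment posted 45 days ago could no longer be voted on under this rule.
assert not vote_allowed(datetime.now(timezone.utc) - timedelta(days=45))
```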

Would it be problematic to put a blanket ban on upvotes and downvotes of posts that are older than 30 days?

This is one of those little things I really like about LW; I would miss it if it was gone. The best content here is on posts that are years old, and discouraging discussion/engagement there would just make the current content problem worse.

The karma of a particular comment could be capped at no worse than, say, -3, regardless of how many downvotes it received. That would be a cheap way to reduce the possibility of malicious mass-downvoting.
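
A sketch of that cap, assuming the displayed comment score is what gets floored; the -3 value is the one suggested above:

```python
SCORE_FLOOR = -3  # the "no worse than -3" cap suggested above

def apply_downvote(score):
    """Decrement a comment's displayed score, but never below the floor."""
    return max(score - 1, SCORE_FLOOR)

assert apply_downvote(0) == -1
assert apply_downvote(-3) == -3  # further downvotes no longer lower the score
```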

This doesn't do anything to solve the problem of one mass-downvoter.

selylindi (+4, 10y):
To be sure, commenting on old posts is great. That definitely shouldn't be banned. It's not so clear about the karma system, which serves several functions, one of which is signalling "more like this" or "less like this" in varying degrees to users so that they can modify their commenting habits.

For you and all those who value upvoting/downvoting old comments for its function of engaging with old conversations, perhaps there could be an alternative course between banning late votes and maintaining the status quo? For instance, the upvote/downvote buttons could still increment/decrement scores on comments after 30 days, but not the karma of the commenters. Since a commenter would still have to look back through their old posts to notice the change anyway, the signalling effect would remain unchanged from the status quo, but the possibility of using old posts to attack karma would be removed. (Downside: karma wouldn't be the sum of comment scores.)

Right, the problem it was stated to mitigate is that "An attacker could still use multiple accounts to mass-downvote everything from a user in the past 30 days." I forgot to state it, but I also intended it as helping with the problem Ander brought up in the OP, that getting a single comment massively downvoted has discouraged people from staying around LW. Jiro correctly pointed out below that vigilance is the technologically simplest solution, albeit more laborious for everyone involved. My preference would be a community that prevented the problem rather than punished it afterwards. There's no guarantee that there exists a rule that would be the perfect solution, but no doubt we can come up with simple rules that put trivial inconveniences (or nontrivial ones) in the way of undesirable behavior! There are probably many such imperfect-but-helpful rules.
Jiro (+4, 10y):
The simplest solution would be 1) to show the names of downvoters, and 2) to have moderators who are willing to kick people out for abusive downvoting. 1) could be dispensed with if users could ask moderators to look for abusive downvoting and publicize the name, but that would be more work for the moderators.
NancyLebovitz (+7, 10y):
Having a "gave most downvotes in the past month" list (with the numbers of downvotes, of course) would be awesome.
Nornagest (+4, 10y):
Well, I don't think that'd have most of the social effects that make me think open votes are a bad idea. It does have some odd features, though -- not everyone votes (or indeed contributes) at the same rate, so a prolific contributor with perfectly normal voting habits might end up being flagged over a less prolific retributive downvoter. Not that looking at downvote ratios would be much better -- those would be fairly easy to mask. Either option would be a disincentive to downvoting in general, and I'm not sure that's a good thing. Still, this doesn't strike me as an obviously bad idea. I'd probably prefer something more narrowly targeted at retributive behavior, but if that's not in the cards this might be a good option.
satt (+2, 10y):
A variation on NancyLebovitz's idea: instead of listing individual users with the most downvotes in the past month, list the pairs of users A & B with the highest number of downvotes given by A to B in the past month. With the latter, merely prolific users should rank visibly below the blanket downvoters.
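
A toy illustration of the difference between the two reports, using an assumed in-memory vote log rather than the real votes table; a blanket downvoter and a merely prolific downvoter look the same on the per-user list, but only the former tops the per-pair list:

```python
from collections import Counter
from datetime import datetime, timedelta, timezone

now = datetime.now(timezone.utc)
month_ago = now - timedelta(days=30)

# Assumed vote log entries: (voter, target_author, direction, timestamp).
votes = (
    [("stalker", "ander", -1, now - timedelta(days=d)) for d in range(25)] +
    [("prolific", f"user{d}", -1, now - timedelta(days=d)) for d in range(25)]
)

recent = [(v, t) for v, t, direction, ts in votes
          if direction == -1 and ts >= month_ago]

per_user = Counter(v for v, _ in recent)   # NancyLebovitz's list
per_pair = Counter(recent)                 # satt's variation

print(per_user.most_common(2))  # stalker and prolific voter look identical here...
print(per_pair.most_common(2))  # ...but only the blanket downvoter tops this list
```
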
SilentCal (+4, 10y):
On the technical solution side, how feasible would it be to institute a more complex karma aggregation algorithm, with diminishing effects from repeated downvotes from the same user?
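
One possible shape for such an aggregation, sketched with an arbitrary halving weight (the k-th downvote from the same user counts for 1/2^(k-1) karma); the specific weighting is illustrative, not a tested proposal:

```python
from collections import Counter

def karma_delta_from_downvotes(downvoters):
    """Karma lost by the target, where the k-th downvote from the same
    user is worth 1 / 2**(k-1): 1, 0.5, 0.25, ... (illustrative weights)."""
    counts = Counter(downvoters)
    return -sum(sum(0.5 ** k for k in range(n)) for n in counts.values())

print(karma_delta_from_downvotes(["stalker"] * 40))              # about -2.0
print(karma_delta_from_downvotes([f"u{i}" for i in range(40)]))  # -40.0
```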

The ones who matter wouldn't care anyways.

Do we have more than one downvote stalker? If true, it really sucks that it only takes a single person to bring down an entire community.

Your intuition appears to be good. There was a recent paper published on this very topic.

http://arxiv.org/abs/1405.1429