The noncentral fallacy - the worst argument in the world?

157 Post author: Yvain 27 August 2012 03:36AM

Related to: Leaky Generalizations, Replace the Symbol With The Substance, Sneaking In Connotations

David Stove once ran a contest to find the Worst Argument In The World, but he awarded the prize to his own entry, and one that shored up his politics to boot. It hardly seems like an objective process.

If he can unilaterally declare a Worst Argument, then so can I. I declare the Worst Argument In The World to be this: "X is in a category whose archetypal member gives us a certain emotional reaction. Therefore, we should apply that emotional reaction to X, even though it is not a central category member."

Call it the Noncentral Fallacy. It sounds dumb when you put it like that. Who even does that, anyway?

It sounds dumb only because we are talking soberly of categories and features. As soon as the argument gets framed in terms of words, it becomes so powerful that somewhere between many and most of the bad arguments in politics, philosophy and culture take some form of the noncentral fallacy. Before we get to those, let's look at a simpler example.

Suppose someone wants to build a statue honoring Martin Luther King Jr. for his nonviolent resistance to racism. An opponent of the statue objects: "But Martin Luther King was a criminal!"

Any historian can confirm this is correct. A criminal is technically someone who breaks the law, and King knowingly broke a law against peaceful anti-segregation protest - hence his famous Letter from Birmingham Jail.

But in this case calling Martin Luther King a criminal is the noncentral fallacy. The archetypal criminal is a mugger or bank robber. He is driven only by greed, preys on the innocent, and weakens the fabric of society. Since we don't like these things, calling someone a "criminal" naturally lowers our opinion of them.

The opponent is saying "Because you don't like criminals, and Martin Luther King is a criminal, you should stop liking Martin Luther King." But King doesn't share the important criminal features of being driven by greed, preying on the innocent, or weakening the fabric of society that made us dislike criminals in the first place. Therefore, even though he is a criminal, there is no reason to dislike King.

This all seems so nice and logical when it's presented in this format. Unfortunately, it's also one hundred percent contrary to instinct: the urge is to respond "Martin Luther King? A criminal? No he wasn't! You take that back!" This is why the noncentral fallacy is so successful. As soon as you do that, you've fallen into their trap. Your argument is no longer about whether you should build a statue, it's about whether King was a criminal. Since he was, you have now lost the argument.

Ideally, you should just be able to say "Well, King was the good kind of criminal." But that seems pretty tough as a debating maneuver, and it may be even harder in some of the cases where the noncentral fallacy is commonly used.


Now I want to list some of these cases. Many will be political[1], for which I apologize, but it's hard to separate out a bad argument from its specific instantiations. None of these examples are meant to imply that the position they support is wrong (and in fact I myself hold some of them). They only show that certain particular arguments for the position are flawed, such as:

"Abortion is murder!" The archetypal murder is Charles Manson breaking into your house and shooting you. This sort of murder is bad for a number of reasons: you prefer not to die, you have various thoughts and hopes and dreams that would be snuffed out, your family and friends would be heartbroken, and the rest of society has to live in fear until Manson gets caught. If you define murder as "killing another human being", then abortion is technically murder. But it has none of the downsides of Charles Manson-style murder. Although you can criticize abortion for many reasons, insofar as "abortion is murder" is an invitation to apply one's feelings in the Manson case directly to the abortion case, it ignores the latter's lack of the features that generated those intuitions in the first place[2].

"Genetic engineering to cure diseases is eugenics!" Okay, you've got me there: since eugenics means "trying to improve the gene pool" that's clearly right. But what's wrong with eugenics? "What's wrong with eugenics? Hitler did eugenics! Those unethical scientists in the 1950s who sterilized black women without their consent did eugenics!" "And what was wrong with what Hitler and those unethical scientists did?" "What do you mean, what was wrong with them? Hitler killed millions of people! Those unethical scientists ruined people's lives." "And does using genetic engineering to cure diseases kill millions of people, or ruin anyone's life?" "Well...not really." "Then what's wrong with it?" "It's eugenics!"

"Evolutionary psychology is sexist!" If you define "sexist" as "believing in some kind of difference between the sexes", this is true of at least some evo psych. For example, Bateman's Principle states that in species where females invest more energy in producing offspring, mating behavior will involve males pursuing females; this posits a natural psychological difference between the sexes. "Right, so you admit it's sexist!" "And why exactly is sexism bad?" "Because sexism claims that men are better than women and that women should have fewer rights!" "Does Bateman's principle claim that men are better than women, or that women should have fewer rights?" "Well...not really." "Then what's wrong with it?" "It's sexist!"

A second, subtler use of the noncentral fallacy goes like this: "X is in a category whose archetypal member gives us an emotional reaction. Therefore, we should apply that same emotional reaction to X even if X gives some benefit that outweighs the harm."

"Capital punishment is murder!" Charles Manson-style murder is solely harmful. This kind of murder produces really strong negative feelings. The proponents of capital punishment believe that it might decrease crime, or have some other attending benefits. In other words, they believe it's "the good kind of murder"[3], just like the introductory example concluded that Martin Luther King was "the good kind of criminal". But since normal murder is so taboo, it's really hard to take the phrase "the good kind of murder" seriously, and just mentioning the word "murder" can call up exactly the same amount of negative feelings we get from the textbook example.

"Affirmative action is racist!" True if you define racism as "favoring certain people based on their race", but once again, our immediate negative reaction to the archetypal example of racism (the Ku Klux Klan) cannot be generalized to an immediate negative reaction to affirmative action. Before we generalize it, we have to check first that the problems that make us hate the Ku Klux Klan (violence, humiliation, divisiveness, lack of a meritocratic society) are still there. Then, even if we do find that some of the problems persist (like disruption of meritocracy, for example) we have to prove that it doesn't produce benefits that outweigh these harms.

"Taxation is theft!" True if you define theft as "taking someone else's money regardless of their consent", but though the archetypal case of theft (breaking into someone's house and stealing their jewels) has nothing to recommend it, taxation (arguably) does. In the archetypal case, theft is both unjust and socially detrimental. Taxation keeps the first disadvantage, but arguably subverts the second disadvantage if you believe being able to fund a government has greater social value than leaving money in the hands of those who earned it. The question then hinges on the relative importance of these disadvantages. Therefore, you can't dismiss taxation without a second thought just because you have a natural disgust reaction to theft in general. You would also have to prove that the supposed benefits of this form of theft don't outweigh the costs.

Now, because most arguments are rapid-fire debate-club style, sometimes it's still useful to say "Taxation isn't theft!" At least it beats saying "Taxation is theft but nevertheless good", then having the other side say "Apparently my worthy opponent thinks that theft can be good; we here on this side would like to bravely take a stance against theft", and then having the moderator call time before you can explain yourself. If you're in a debate club, do what you have to do. But if you have the luxury of philosophical clarity, you would do better to forswear the Dark Arts and look a little deeper into what's going on.

Are there ever cases in which this argument pattern can be useful? Yes. For example, it may be a groping attempt to suggest a Schelling fence; for example, a principle that one must never commit theft even when it would be beneficial because that would make it harder to distinguish and oppose the really bad kinds of theft. Or it can be an attempt to spark conversation by pointing out a potential contradiction: for example "Have you noticed that taxation really does contain some of the features you dislike about more typical instances of theft? Maybe you never even thought about that before? Why do your moral intuitions differ in these two cases? Aren't you being kind of hypocritical?" But this usage seems pretty limited - once your interlocutor says "Yes, I considered that, but the two situations are different for reasons X, Y, and Z" the conversation needs to move on; there's not much point in continuing to insist "But it's theft!"

But in most cases, I think this is more of an emotional argument, or even an argument from "You would look silly saying that". You really can't say "Oh, he's the good kind of criminal", and so if you have a potentially judgmental audience and not much time to explain yourself, you're pretty trapped. You have been forced to round to the archetypal example of that word and subtract exactly the information that's most relevant.

But in all other cases, the proper response to being asked to subtract relevant information is "No, why should I?" - and that's why this is the worst argument in the world.

 

Footnotes

1: On advice from the community, I have deliberately included three mostly-liberal examples and three mostly-conservative examples, so save yourself the trouble of counting them up and trying to speculate on this article's biases.

2: This should be distinguished from deontology, the belief that there is some provable moral principle about how you can never murder. I don't think this is too important a point to make, because only a tiny fraction of the people who debate these issues have thought that far ahead, and also because my personal and admittedly controversial opinion is that much of deontology is just an attempt to formalize and justify this fallacy.

3: Some people "solve" this problem by saying that "murder" only refers to "non-lawful killing", which is exactly as creative a solution as redefining "criminal" to mean "person who breaks the law and is not Martin Luther King." Identifying the noncentral fallacy is a more complete solution: for example, it covers the related (mostly sarcastic) objection that "imprisonment is kidnapping".

4: EDIT 8/2013: I've edited this article a bit after getting some feedback and complaints. In particular I tried to remove some LW jargon which turned off some people who were being linked to this article but were unfamiliar with the rest of the site.

5: EDIT 8/2013: The other complaint I kept getting is that this is an uninteresting restatement of some other fallacy (no one can agree which, but poisoning the well comes up particularly often). The question doesn't seem too interesting to me - I never claimed particular originality, a lot of fallacies blend into each other, and the which-fallacy-is-which game isn't too exciting anyway - but for the record I don't think it is. Poisoning the well is a presentation of two different facts, such as "Martin Luther King was a plagiarist...oh, by the way, what do you think of Martin Luther King's civil rights policies?" It may have no relationship to categories, and it's usually something someone else does to you as a conscious rhetorical trick. Noncentral fallacy is presenting a single fact, but using category information to frame it in a misleading way - and it's often something people do to themselves. The above plagiarism example of poisoning the well is not noncentral fallacy. If you think this essay is about bog-standard poisoning the well, then either there is an alternative meaning to poisoning the well I'm not familiar with, or you are missing the point.

Comments (1742)

Comment author: sullyj3 09 September 2015 07:02:23PM 0 points [-]

Just encountered an interesting one:

Eradication of the Parasitoid Wasp is genocide!

Comment author: jdgalt 28 July 2015 04:03:32AM 1 point [-]

I find this only a partly useful concept, since it is sometimes used to "discredit" arguments I consider quite valid, such as your last two examples. At most, if called on to defend either of those examples I would have to say more about why our usual condemnation of racism should apply to the entire category, and of why taking others' property without their consent should be condemned even when done by a group that some people consider ought to be allowed special privileges.

Comment author: Eitan_Zohar 01 November 2014 06:34:40AM 0 points [-]

If it's worth mentioning, the Israel-Palestine debate basically is this fallacy.

Comment author: onigame 15 October 2014 11:28:38PM 2 points [-]

I can't help but notice that all of your examples elicit negative emotional reactions. I think it might be illustrative to also have some examples of this fallacy for situations where the group X elicits positive emotional reactions. For example, wild deer are cute, and therefore any movement to kill them must be bad. Or, rape victims are all deserving of our sympathy, therefore any portrayal of a rape victim as anything but pure innocence is bad. (These aren't great examples, I admit.)

Comment author: JackV 02 July 2014 02:38:53PM 1 point [-]

For what it's worth, I quibbled with this at the time, but now I find it an incredibly useful concept. I still wish it had a more transparent name -- we always call it "the worst argument in the world", and can't remember "noncentral fallacy", but it's been really useful to have a name for it at all.

Comment author: wallowinmaya 25 November 2013 06:30:02AM 3 points [-]

I think there are two cases where you forgot to type the word "fallacy" after the word noncentral.

But in this case calling Martin Luther King a criminal is the noncentral.

This is why the noncentral is so successful.

Comment author: KnaveOfAllTrades 20 December 2012 06:22:27AM *  2 points [-]

Then there's another half--when the wrongness of something is missed because it does not (technically, by an approximate dictionary definition) fall into a pre-existing category in the 'Wrong Cluster'. Examples: forced consent, dishonesty that's 'technically not lying', extortion that's 'technically not stealing', getting a free ride.

So we have a general 'linguistic ethical determinism' (better name anybody?) fallacy, wherein something is considered wrong if and only if it comes under an existing Category of Wrong according to a pedantic definition. (This is itself, of course, a corollary of human obsession with linguistic categories, which I gather is covered in A Human's Guide to Words.)

Comment author: Sengachi 20 December 2012 12:27:58AM 2 points [-]

I like this article very much, and I think it's an important fallacy to take note of. I do not however, think it is the worst fallacy. I think the worst fallacy is: I don't need a reason/argument to believe what I believe.

Comment author: Bluehawk 21 April 2013 05:08:41AM 1 point [-]

I'm having a little trouble actually articulating what I find wrong here, and I'm not sure if that's a fault in what I'm supposedly intuiting or in my ability to articulate.

That's not so much a "logical fallacy" as a mistaken belief that belief is incontrovertible (or a mistaken over-valuing of "the personal opinion"). You've also substituted Argument for Fallacy.

The one you've outlined might also be less important here because it's a lot easier to recognise for what it is, and is likely to be recognised as a stonewall rather than as a convincing argument in a Dark Arts debate. The convincing Bad Argument does a heck of a lot more damage.

Which argument is "worst" comes down to semantics: does Worst Argument resolve to "Argument That Does Most Harm", or to "Argument That Is Least Correct", or to "Argument That Is Least Convincing", or to "Argument That Is Least Likely To Be Useful"?

Comment author: smoofra 16 October 2012 09:43:11PM 1 point [-]

I don't think you've chosen your examples particularly well.

Abortion certainly can be a 'central' case of murder. Imagine aborting a fetus 10 minutes prior to when it would have been born. It can also be totally 'noncentral': the morning-after pill. Abortions are a grey area of central-murder depending on the progress of neural development of the fetus.

Affirmative action really IS a central case of racism. It's bad for the same reason as segregation was bad, because it's not fair to judge people based on their race. The only difference is that it's not nearly AS bad. Segregation was brutal and oppressive, while affirmative action doesn't really affect most people enough for them to notice.

Comment author: taw 29 September 2012 10:08:52AM 0 points [-]

Your argument depends on choosing what's "central" or "archetypal" example, and that's completely arbitrary, since this doesn't seem to mean "most common" or anything else objective.

It really falls apart on that.

Comment author: Yvain 29 September 2012 05:17:00AM 5 points [-]

I've edited this in a way that hopefully removes some of the controversy. Thanks to everyone who voted in the poll here. Actually, wait, no, the opposite of that. The two options ended out perfectly balanced, plus a bunch of people wanted me to make it even snarkier, and it was super confusing.

Anyway, I decided to respect the split poll by making a combination of the two drafts. The name has been changed to "the marginal fallacy", credit to James_G (sorry, Konkvistador, but I really do think that the fallacy of accident is something slightly different), but I kept Worst Argument In The World as a subtitle.

I deleted the euthanasia example, both because it was overkill on the "X is murder" examples and to exactly balance the liberal and conservative examples at three each. Then I heavily edited most of the others, and added to the end a paragraph about how maybe this pattern could be useful in sparking conversation. Then I added some footnotes and just a tiny bit of snark to satisfy the pro-snark contingent.

Hopefully this will be a less than entirely unsatisfactory compromise.

Comment author: shminux 29 September 2012 05:40:01AM 1 point [-]

Not sure why you are intent on renaming the Association Fallacy.

Comment author: Exetera 29 September 2012 01:51:36PM *  2 points [-]

They're not quite the same. The association fallacy takes the form "A is a C and A is a B therefore all B are C," whereas this argument takes the form "A is arguably a B and Bs are often C therefore if I call A a B I can implicitly accuse it of being C without having to justify it." It's not a standard logical fallacy in the sense that it relies a lot on fuzzy, human definitions of things.

Comment author: Eliezer_Yudkowsky 29 September 2012 05:32:01AM 4 points [-]

Er... "marginal fallacy" sounds like it should involve failure to think on the margins. Sorry I'm late, but how about "the noncentral fallacy" or "the categorization fallacy"?

Comment author: fallaciousd 19 September 2012 09:38:48AM 18 points [-]

Long time ago, me and my sockpuppet lonelygirl15, we was scrollin' down a long and boring thread. All of a sudden, there shined a shiny admin... in the middle... of the thread.

And he said, "Give a reason for your views, or I'll ban you, troll."

Well me and lonely, we looked at each other, and we each said... "Okay."

And we said the first thing that came to our heads, Just so happened to be, The Worst Argument in the World, it was the Worst Argument in the World.

Look into my brain and it's easy to see This A is B and that B is C, So this A is C. My heuristic isn't justified But I know it's right 'cause of how it feels From the inside...

Comment author: RomeoStevens 29 September 2012 05:52:40AM 0 points [-]

This A is B and that B is C, So this A is C

thank you for allowing me to store this in my head efficiently.

Comment author: knb 16 September 2012 05:27:23AM *  -1 points [-]

Many will be political, for which I apologize,

This is like saying, "I'm really sorry about how I'm going to slap you". If you feel the need to apologize for something, just don't do it. If what you are doing isn't wrong, don't apologize.

But really, how hard did you try to avoid politics? I doubt it was very hard.

This strikes me as a form of "logical rudeness". Since these are "just little examples", you get to make little jabs at people you disagree with by tarring them with a strawman argument. And if anyone responds to these rude little dismissals of enemy political ideas (and all of these "worst arguments" happen to be things most LWers oppose), they look like idiots who are missing the point of the article.

If you want to avoid a comments derail (i.e. eridu's), avoid insulting people's political ideas. Frankly, this post should have been downvoted to oblivion for that alone. Congratulations though, you managed to hit the usual LW applause lights well enough to avoid that.

Comment author: simplicio 18 September 2012 12:36:11AM 1 point [-]

I think it is time to think critically about LW's "politics is the mind-killer" meme. As interesting as it is to discuss free will and AI metaethics (no sarcasm, I do think they're interesting), there are two main things for which LW really, genuinely has the potential to be quite useful: (1) instrumental rationality as applied to daily life (what job to take, what to invest in, personal ethics); (2) political issues.

Refusing to talk about the latter is missing out on a lot of low-hanging fruit.

Also, for what it's worth, all of these ARE instances of The Worst Argument. Yvain never implied that e.g., anti-abortion people are necessarily driven only by TWAITW, only that that particular argument "abortion is murder" is in fact TWAITW.

Comment author: Hawisher 16 September 2012 05:23:53AM 1 point [-]

And "lesswrong.com" just went from my bookmarks to my speed dial. Anyway, I would like to say that rather than your hypothetical and "ideal" retort of "MLK was the good kind of criminal," I would prefer the more sophisticated response you put forth for other situations, but more generalized: "I fail to see how that is relevant."

"But... but... abortion is MURDER!" (Please note that I am against abortion for reasons I categorically refuse to discuss due to several harrowing experiences on spacebattles.com forums, although this site seems much more civil) "I fail to see how that is relevant."

Comment author: Desrtopa 16 September 2012 05:30:55AM 0 points [-]

That response may be technically true (you don't acknowledge the relevance of the argument), but I don't think it's usually appropriate, since the idea that something falling into a negative category could be irrelevant probably falls across a gap of inferential distance for your interlocutor. If they already got it, they probably wouldn't have made the argument in the first place.

Comment author: Hawisher 16 September 2012 05:12:49PM 1 point [-]

That's fair, but I'd certainly still prefer it to "x is the GOOD kind of y," which I feel has an infantile ring to it. Not that I think Yvain was actually saying he would use that construction.

Comment author: primemountain 14 September 2012 11:30:58AM 0 points [-]

First an admission: I did not read all the comments, there are too many for me, just the top 150 or so, so someone might have mentioned this before; if so, never mind. This is for Yvain, an example of the worst argument I ever faced. The logic is as follows: since the sky is blue, you are stupid. That is the end of the argument; since you SEE, you are stupid, thus your argument is stupid. So what can you do then, except walk out? What can you do when one side is not only unreasonable, but irrational?

Comment author: DaFranker 14 September 2012 09:00:33PM *  5 points [-]

The reason Yvain's proposed argument is arguably much worse is that the argument you propose is a clear, visible fallacy with spectacular failure modes and many people will indeed simply walk away or mark the person making the argument as crazy, while Yvain's argument, in the situations where it is the worst argument, is not only wrong and erroneous logic but also still manages to convince uninformed people that it is valid, and so they will accept its conclusion as true, while at the same time tricking opponents into debating the wrong points and formulating the wrong counter-arguments.

Yvain's WAitW is much more destructive, pervasive, memetically powerful, tricky to counter when there are large audiences and high stakes, and also much easier to do accidentally even when you know that it's a mistake - while pretty much anyone versed in the basic rules of causality and logic will understand and easily avoid the kind of arguments you've given an example of. Sure, some variants of what you describe, like "You heathens don't believe in God so any argument you make is invalid, only devouts of my religion can speak Truth!" can be pretty bad too, and this has been demonstrated, but it doesn't require as much mastery of logic to avoid committing.

As for what you can do, well... you could try to make them reasonable or rational, either through helping them achieve their existing goal of becoming more so, or through convincing them that they want to, or through other forms of manipulation... or you could always just do one of a plethora of other things you could do, like walking out, or learning physics, or killing them, or getting other people which this person considers as their Holy Authority to persuade them that they are wrong, etc. etc.

And there's always the sharpened bones of hufflepuffs.

Comment author: primemountain 14 September 2012 09:22:35PM 0 points [-]

Thank you for the clarification on why those are WAitW.

Comment author: chaosmosis 14 September 2012 08:51:33PM *  0 points [-]

Arguments can't function unless both sides agree on things, such as what rules of logic work and what rules don't. Generally, people will admit they were wrong if they see a prediction fail obviously and spectacularly. But, if someone doesn't want to admit that logic exists or you just disagree with someone as to what logic is, there's really nothing to be done but to walk away.

This is for Yvain

noooooooooooooooooooooooooooooo

Comment author: TheOtherDave 14 September 2012 09:13:47PM 2 points [-]

But, if someone doesn't want to admit that logic exists or you just disagree with someone as to what logic is, there's really nothing to be done but to walk away.

That's not necessarily true. If we disagree on what logic is, I can work out the rules of what you consider logic and decide whether, using those rules, I come to a different conclusion than you do (in which case I can try to convince you of that different conclusion using your rules), or I can attempt to convince you that you're wrong via illogical means (like telling you a convincing story, or using question-begging language, or etc.). I can also do the latter if you reject logic altogether.

Comment author: chaosmosis 14 September 2012 09:36:10PM 0 points [-]

Truth, thanks.

Comment author: Eliezer_Yudkowsky 13 September 2012 01:47:24PM 1 point [-]

I've banned all of eridu's recent comments (except a few voted above 0) as an interim workaround, since hiding-from-Recent-Comments and charge-fee-to-all-descendants is still in progress for preventing future threads like these.

I respectfully request that you all stop doing this, both eridu and those replying to him.

Comment author: NancyLebovitz 14 September 2012 06:24:23AM 0 points [-]

Minor point: Do we have evidence on eridu's gender?

Comment author: Nornagest 14 September 2012 06:38:51AM 2 points [-]

Yes, he described himself as male here. Not that it particularly matters, except insofar as it makes playing the pronoun game easier.

Comment author: NancyLebovitz 14 September 2012 06:46:51AM 4 points [-]

Thanks. I'm impressed with the story in the link, but also more convinced that he might as well be treated as a troll because he criticized someone for being a man explaining feminism to women.

Comment author: Nornagest 14 September 2012 06:57:02AM 3 points [-]

Eh, that's a relatively minor sin of argument, all things considered. It's pretty easy to think that you're excused from such a thing thanks to greater relative knowledge or better subcultural placement.

Comment author: Eugine_Nier 14 September 2012 07:03:42AM 1 point [-]

Or simply fundamental attribution error.

Comment author: anon895 14 September 2012 02:39:58AM *  9 points [-]

I (and any other casual visitor) now have only indirect evidence regarding whether eridu's comments were really bad or were well-meaning attempts to share feminist insights into the subject, followed by understandable frustration as everything she^Whe said was quoted out of context (if not misquoted outright) and interpreted in the worst possible way.

Comment author: fubarobfusco 14 September 2012 05:14:10AM 9 points [-]

Agreed. I would prefer that a negative contributor be prospectively banned (that is, "prevented from posting further") rather than retrospectively expunged (that is, "all their comments deleted from the record"), so as to avoid mutilating the record of past discussions.

For precedent, consider Wikipedia: if a contributor is found to be too much trouble (starting flamewars, edit-warring, etc.) they are banned, but their "talk page" discussion comments are not expunged. However, specific comments that are merely flaming, or which constitute harassment or the like, can be deleted.

Comment author: NancyLebovitz 14 September 2012 06:26:44AM 5 points [-]

Agreed. In this case, what I read of the discussion which included eridu indicated that they weren't worth engaging with, but I'm actually rather impressed with what I saw of the community's patience.

Comment author: [deleted] 13 September 2012 04:35:51PM *  12 points [-]

I've banned all of eridu's recent comments (except a few voted above 0) as an interim workaround

Is "ban" meaning "delete" a reddit-ism?

When I hear "ban" I think "author isn't allowed to post for a while".

Comment author: Wei_Dai 13 September 2012 04:50:32PM *  12 points [-]

"Ban" here means "make individual posts and comments invisible to everyone except moderators". (I agree "ban" is confusing.)

Comment author: Eliezer_Yudkowsky 14 September 2012 01:54:13AM 5 points [-]

Correct. Sorry, the button I use says "Ban".

Comment author: DaFranker 14 September 2012 02:06:13AM 8 points [-]

Bad button!

Sorry, it was very tempting. =P

Comment author: wedrifid 13 September 2012 04:16:03PM 13 points [-]

I've banned all of eridu's recent comments (except a few voted above 0)

Bravo. I have no idea whether that was someone pretending to be ignorant and toxic for the purpose of discrediting a group he was impersonating or whether it was sincere (and ignorant and toxic). Fortunately I don't need to know and don't care either way. Good riddance!

as an interim workaround, since hiding-from-Recent-Comments and charge-fee-to-all-descendants is still in progress for preventing future threads like these.

Is it just me, or do others also find that Eliezer is coming off as a tad petulant with the way he is handling people systematically opposing and downvoting his proposal? Every time he got downvoted to oblivion he just came back with a new comment seemingly crafted to be more belligerent, whiny, condescending and cynical about the community than the last. (That's hyperbole---in actuality it peaked in the middle somewhere.) Now we just keep getting reminded about it at every opportunity as noise in unrelated threads.

Comment author: Eliezer_Yudkowsky 14 September 2012 01:57:45AM -1 points [-]

Is it just me, or do others also find that Eliezer is coming off as a tad petulant with the way he is handling people systematically opposing and downvoting his proposal? Every time he got downvoted to oblivion he just came back with a new comment seemingly crafted to be more belligerent, whiny, condescending and cynical about the community than the last. (That's hyperbole---in actuality it peaked in the middle somewhere.) Now we just keep getting reminded about it at every opportunity as noise in unrelated threads.

I observe that wedrifid has taken advantage of this particular opportunity to remind everyone that he thinks I am belligerent, whiny, condescending, and cynical.

(So noted because I was a bit unhappy at how the conversation suddenly got steered there.)

Comment author: wedrifid 14 September 2012 11:26:10AM 12 points [-]

I observe that wedrifid has taken advantage of this particular opportunity to remind everyone that he thinks I am belligerent, whiny, condescending, and cynical.

I notice that my criticism was made specifically regarding the exhibition of those behaviors in the comments he has made about the subject he has brought up here. We can even see that I made specific links. Eliezer seems to be conflating this with a declaration that he has those features as part of his innate disposition.

By saying that wedrifid is reminding people that he (supposedly) believes Eliezer has those dispositions he also implies that wedrifid has said this previously. This is odd because I find myself to be fairly open with making criticisms of Eliezer whenever I feel them justified and from what I recall "belligerent, whiny, condescending, and cynical [about the lesswrong community]" isn't remotely like a list of weaknesses that I actually have described Eliezer as having in general or at any particular time that I recall.

Usually when people make this kind of muddled accusation I attribute it to a failure of epistemic rationality and luminosity. Many people just aren't able to separate in their minds a specific criticism of an action from a belief about innate traits. Dismissing Eliezer as merely being incompetent at the very skills he is renowned for would seem more insulting than simply concluding that he is being deliberately disingenuous.

So noted because I was a bit unhappy at how the conversation suddenly got steered there.

My suggestion is that Eliezer would be best served by not bringing the conversation here repeatedly. It sends all sorts of signals of incompetence. That 'unhappy' feeling is there to help him learn from his mistakes.

Comment author: V_V 14 September 2012 10:11:03AM 10 points [-]

If that bothers you, you may consider that whining that people find you whiny might not be the optimal strategy for making them change their mind.

Comment author: DaFranker 14 September 2012 02:22:00AM -2 points [-]

I also observe that wedrifid's opinion of you doesn't appear to be steered with equal expected posterior probability in light of how you react versus his predictions of your reactions.

I'm curious as to whether I'm on to something there, or whether I just pulled something random and my intuitions are wrong.

Comment author: wedrifid 14 September 2012 11:26:55AM *  3 points [-]

I also observe that wedrifid's opinion of you doesn't appear to be steered with equal expected posterior probability in light of how you react versus his predictions of your reactions.

I can't even decipher what it is you are accusing wedrifid of here. Apart from being wrong and biased somehow.

Comment author: DaFranker 14 September 2012 07:06:18PM *  1 point [-]

I'm referring to a specific part of bayesian updating, conservation of expected evidence. Specifically:

On pain of paradox, a low probability of seeing strong evidence in one direction must be balanced by a high probability of observing weak counterevidence in the other direction.
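(A minimal numerical sketch of the quoted rule, with illustrative probabilities chosen purely for the example, not taken from the discussion: the prior must equal the expectation of the posterior over the possible observations, so a rare strong update one way forces a common weak update the other way.)

```python
# Conservation of expected evidence: P(H) = P(E)P(H|E) + P(~E)P(H|~E).
# All numbers below are made up for illustration.

p_e = 0.1        # P(E): low probability of seeing the strong evidence
post_if_e = 0.9  # P(H|E): strong update toward H if E is observed
prior = 0.4      # P(H)

# P(H|~E) is then forced by the rule:
post_if_not_e = (prior - p_e * post_if_e) / (1 - p_e)
print(post_if_not_e)  # slightly below the prior: weak counterevidence

# Check: the expected posterior equals the prior.
expected_posterior = p_e * post_if_e + (1 - p_e) * post_if_not_e
print(abs(expected_posterior - prior) < 1e-12)
```

A 10% chance of a strong update (0.4 to 0.9) is exactly balanced by a 90% chance of a weak one (0.4 down to about 0.34); observing someone's beliefs move the same direction regardless of outcome violates this.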

This rule did not seem respected in what little I've seen of interactions between you and Eliezer, and I was looking for external feedback and evidence (one way or another) for this hypothesis, to see if there is a valid body of evidence justifying the selection of this hypothesis for consideration or if that simply happened out of bias and inappropriate heuristics.

I suspect that, if the latter, then there was probably an erroneous pattern-matching to the examples given in the related blogpost on the subject (and other examples I have seen of this kind of erroneous thinking).

I don't know how to submit this stuff for feedback and review without using a specific "accusation" or wasting a lot of time creating (and double-checking for consistency) elaborate counterfactual scenarios.

Comment author: TheOtherDave 13 September 2012 04:36:03PM 10 points [-]

Mostly he's coming across to me as having lost patience with the community not being what he wants it to be, and having decided that he can fix that by changing the infrastructure, and not granting much importance to the fact that more people express disapproval of this than approval.

Comment author: Eliezer_Yudkowsky 14 September 2012 01:43:22AM 7 points [-]

Keep in mind that it's not "more people" it's more "people who participate in meta threads on Less Wrong". I've observed a tremendous divergence between the latter set, and "what LWers seem to think during real-life conversations" (e.g. July Minicamp private discussions of LW which is where the anti-troll-thread ideas were discussed, asking what people thought about recent changes at Alicorn's most recent dinner party). I'm guessing there's some sort of effect where only people who disagree bother to keep looking at the thread, hence bother to comment.

Some "people" were claiming that we ought to fix things by moderation instead of making code changes, which does seem worth trying; so I've said to Alicorn to open fire with all weapons free, and am trying this myself while code work is indefinitely in progress. I confess I did anticipate that this would also be downvoted even though IIRC the request to do that was upvoted last time, because at this point I've formed the generalization "all moderator actions are downvoted", either because only some people participate in meta threads, and/or the much more horrifying hypothesis "everyone who doesn't like the status quo has already stopped regularly checking LessWrong".

I'm diligently continuing to accept feedback from RL contact and attending carefully to this non-filtered source of impressions and suggestions, but I'm afraid I've pretty much written-off trying to figure out what the community-as-a-whole wants by looking at "the set of people who vigorously participate in meta discussions on LW" because it's so much unlike the reactions I got when ideas for improving LW were being discussed at the July Minicamp, or the distribution of opinions at Alicorn's last dinner party, and I presume that any other unfiltered source of reactions would find this conversation similarly unrepresentative.

Comment author: [deleted] 14 September 2012 06:39:32PM 4 points [-]

In case you need assurance from the online sector: I wholeheartedly welcome any increase in the prevalence of the banhammer, and the "pay 5 karma" thing seems good too.

During that Eridu fiasco, I kept hoping a moderator would do something like "this thread is locked until Eridu taboos all those nebulous affect-laden words."

Benevolent dictators who aren't afraid of dissent are a huge win, IMO.

Comment author: MBlume 14 September 2012 06:26:03PM 4 points [-]

At risk of failing to JFGI: can someone quickly summarize what remaining code work we'd like done? I've started wading into the LW code, and am not finding it quite as impenetrable as last time, so concrete goals would be good to have.

Comment author: Eliezer_Yudkowsky 15 September 2012 03:31:01AM 3 points [-]
Comment author: Yvain 14 September 2012 06:16:20PM *  13 points [-]

I will be starting another Less Wrong Census/Survey in about three weeks; in accordance with the tradition I will first start a thread asking for question ideas. If you can think of a good list of opinions you want polled in the next few weeks, consider posting them there and I'll stick them in.

Comment author: Alicorn 14 September 2012 04:32:51AM 12 points [-]

You... know I don't optimize dinner parties as focus groups, right? The people who showed up that night were people who like chili (I had to swap in backup guests for some people who don't) and who hadn't been over too recently. A couple of the attendees from that party barely even post on LW.

Comment author: [deleted] 14 September 2012 03:06:27PM 7 points [-]

FWIW, I eat chili but I don't think the strongest of the proposed anti-troll measures are a good idea.

Comment author: wedrifid 14 September 2012 12:01:10PM *  17 points [-]

You... know I don't optimize dinner parties as focus groups, right?

Perhaps more importantly, dinner parties are optimised for status and social comfort. Actually giving honest feedback rather than guessing passwords would be a gross faux pas.

Getting feedback at dinner parties is a good way to optimise the social experience of getting feedback and translate one's own status into the agreement of others.

Comment author: Eliezer_Yudkowsky 14 September 2012 11:18:43AM 0 points [-]

You... know I don't optimize dinner parties as focus groups, right?

That's kinda the point.

Comment author: CCC 14 September 2012 08:03:03AM 5 points [-]

If I were to guess, I'd guess that the main filter criterion for your dinner parties is geographical; when you have a dinner party in the Bay area, you invite people who can be reasonably expected to be in the Bay area. This is not entirely independent of viewpoint - memes which are more common local to the Bay area will be magnified in such a group - but the effect of that filter on moderation viewpoints is probably pretty random (similarly, the effect of the filter of 'people who like chili' on moderation viewpoints is probably also pretty random).

So the dinner party filter exists, but it is less likely to pertain to the issue at hand than the online self-selection filter.

Comment author: komponisto 14 September 2012 09:08:17AM 5 points [-]

The problem with the dinner party filter is not that it is too strong, but that it is too weak: it will for example let through people who aren't even regular users of the site.

Comment author: komponisto 14 September 2012 04:28:34AM 38 points [-]

Let me see if I understand you correctly: if someone cares about how Less Wrong is run, what they should do is not comment on Less Wrong -- least of all in discussions on Less Wrong about how Less Wrong is run ("meta threads"). Instead, what they should do is move to California and start attending Alicorn's dinner parties.

Have I got that right?

Comment author: SilasBarta 18 September 2012 11:05:16PM 4 points [-]

Don't worry, I'm sure that venue's attendees are selected neutrally.

Comment author: wedrifid 14 September 2012 11:40:38AM 25 points [-]

Let me see if I understand you correctly: if someone cares about how Less Wrong is run, what they should do is not comment on Less Wrong -- least of all in discussions on Less Wrong about how Less Wrong is run ("meta threads"). Instead, what they should do is move to California and start attending Alicorn's dinner parties.

That's how politics usually works, yes.

Comment author: Eliezer_Yudkowsky 14 September 2012 11:18:07AM 0 points [-]

All you have to do is run into me in any venue whatsoever where the attendees weren't filtered by their interest in meta threads. :)

Comment author: DaFranker 18 September 2012 08:38:08PM 7 points [-]

Can "Direct email, skype or text-chat communications to E.Y." count as a venue? Purely out of curiosity.

Comment author: Eliezer_Yudkowsky 18 September 2012 08:56:06PM 0 points [-]

The problem is that if you initiate it, it's subject to the Loss Aversion effect where the dissatisfied speak up in much greater numbers.

Comment author: DevilWorm 19 September 2012 08:27:33PM *  4 points [-]

it's subject to the Loss Aversion effect where the dissatisfied speak up in much greater numbers

But Eliezer Yudkowsky, too, is subject to the loss aversion effect. Just as those dissatisfied with changes overweight change's negative consequences, so does Eliezer Yudkowsky overweight his dissatisfaction with changes initiated by the "community." (For example, increased tolerance of responding to "trolling.")

Moreover, if you discount the result of votes on rules, why do you assume votes on other matters are more rational? The "community" uses votes on substantive postings to discern a group consensus. These votes are subject to the same misdirection through loss aversion as are procedural issues. If the community has taken a mistaken philosophical or scientific position, people who agree with that position will be biased to vote down postings that challenge that position, a change away from a favored position being a loss. (Those who agree with the newly espoused position will be less energized, since they weight their potential gain less than their opponents weigh their potential loss.)

If you think "voting" is so highly distorted that it fails to represent opinion, you should probably abolish it entirely.

Comment author: komponisto 19 September 2012 10:08:45AM 25 points [-]

I don't see what this has to do with "loss aversion" (the phenomenon where people think losing a dollar is worse than failing to gain a dollar they could have gained), though that's of course a tangential matter.

The point here is -- and I say this with all due respect -- it looks to me like you're rationalizing a decision made for other reasons. What's really going on here, it seems to me, is that, since you're lucky enough to be part of a physical community of "similar" people (in which, of course, you happen to have high status), your brain thinks they are the ones who "really matter" -- as opposed to abstract characters on the internet who weren't part of the ancestral environment (and who never fail to critique you whenever they can).

That doesn't change the fact that this is an online community, and as such, is for us abstract characters, not your real-life dinner companions. You should be taking advice from the latter about running this site to about the same extent that Alicorn should be taking advice from this site about how to run her dinner parties.

Comment author: Alicorn 19 September 2012 05:45:51PM 3 points [-]

Alicorn should be taking advice from this site about how to run her dinner parties.

Do you have advice on how to run my dinner parties?

Comment author: RichardKennaway 19 September 2012 12:59:26PM -2 points [-]

since you're lucky enough to be part of a physical community of "similar" people (in which, of course, you happen to have high status), your brain thinks they are the ones who "really matter" -- as opposed to abstract characters on the internet who weren't part of the ancestral environment (and who never fail to critique you whenever they can).

Was Eliezer "lucky" to have cofounded the Singularity Institute and Overcoming Bias? "Lucky" to have written the Sequences? "Lucky" to have founded LessWrong? "Lucky" to have found kindred minds, both online and in meatspace? Does he just "happen" to be among them?

Or has he, rather, searched them out and created communities for them to come together?

That doesn't change the fact that this is an online community, and as such, is for us abstract characters, not your real-life dinner companions. You should be taking advice from the latter about running this site to about the same extent that Alicorn should be taking advice from this site about how to run her dinner parties.

The online community of LessWrong does not own LessWrong. EY owns LessWrong, or some combination of EY, the SI, and whatever small number of other people they choose to share the running of the place with. To a limited extent it is for us, but its governance is not at all by us, and it wouldn't be LessWrong if it was. The system of government here is enlightened absolutism.

Comment author: DaFranker 18 September 2012 09:04:53PM *  4 points [-]

True. For that to be an effective communication channel, there would need to be a control group. As for how to create that control group or run any sort of blind (let alone double-blind) testing... yeah, I have no idea. Definitely a problem.

ETA: By "I have no idea", I mean "Let me find my five-minute clock and I'll get back to you on this if anything comes up".

Comment author: DaFranker 19 September 2012 02:15:06PM *  2 points [-]

So I thought for five minutes, then looked at what's been done in other websites before.

The best I have is monthly surveys with randomized questions from a pool of stuff that matters for LessWrong (according to the current or then-current staff, I would presume) with a few community suggestions, and then possibly later implementation of a weighting algorithm for diminishing returns when multiple users with similar thread participation (e.g. two people who always post in the same thread) give similar feedback.

The second part is full of holes and horribly prone to "Death by Poking With Stick", but an ideal implementation of this seems like it would get a lot more quality feedback than what little gets through low-bandwidth in-person conversations.

There are other, less practical (but possibly more accurate) alternatives, of course. Like picking random LW users every so often, appearing at their front door, giving them a brain-scan headset (e.g. an Emotiv Epoc), and having them wear the headset while being on LW so you can collect tons of data.

I'd stick with live feedback and simple surveys to begin with.

Comment author: [deleted] 14 September 2012 04:07:41PM 11 points [-]

But now that you've stated this, you have the ability to rationalize any future IRL meta discussion...

Comment author: fubarobfusco 14 September 2012 05:08:15AM 15 points [-]

Can we call this the social availability heuristic?

Comment author: Alicorn 14 September 2012 04:34:25AM *  17 points [-]

Also, you have to attend dinner parties on a day when Eliezer is invited and doesn't decline due to being on a weird diet that week.

Comment author: Nornagest 14 September 2012 04:07:16AM *  14 points [-]

I've moderated a few forums before, and with that experience in mind I'd have to agree that there's a huge, and generally hugely negative, selection bias at play in online response to moderator decisions. It'd be foolish to take those responses as representative of the entire userbase, and I've seen more than one forum suffer as a result of such a misconception.

That being said, though, I think it's risky to write off online user feedback in favor of physical. The people you encounter privately are just as much a filtered set as those who post feedback here, though the filters point in different directions: you're selecting people involved in the LW interpersonal community, for one thing, which filters out new and casual users right off the bat, and since they're probably more likely to be personally friendly to you we can also expect affect heuristics to come into play. Skepticism toward certain LW norms may also be selected against, which could lead people to favor new policies reinforcing those norms. Moreover, I've noticed a trend in the Bay Area group -- not necessarily an irrational one, but a noticeable one -- toward treating the online community as low-quality relative to local groups, which we might expect to translate into antipathy towards its status quo.

I don't know what the weightings should be, but if you're looking for a representative measure of user preferences I think it'd be wise to take both groups into account to some extent.

Comment author: Bugmaster 14 September 2012 02:41:47AM 9 points [-]

That's fair, and your strategy makes sense. I also agree with DaFranker, below, regarding meta-threads.

This said, however, at the time when I joined Less Wrong, my model of the site was something like, "a place where smart people hold well-reasoned discussions on a wide range of interesting topics" (*). TheOtherDave's comment, in conjunction with yours, paints a different picture of what you'd like Less Wrong to be; let's call it Less Wrong 2.0. It's something akin to, "a place where Eliezer and a few of his real-life friends give lectures on topics they think are important, with Q&A afterwards".

Both models have merit, IMO, but I probably wouldn't have joined Less Wrong 2.0. I don't mean that as any kind of an indictment; if I were in your shoes, I would definitely want to exclude people like this Bugmaster guy from Less Wrong 2.0, as well.

Still, hopefully this one data point was useful in some way; if not, please downvote me !

(*) It is possible this model was rather naive.

Comment author: Emile 14 September 2012 08:34:43AM 4 points [-]

This said, however, at the time when I joined Less Wrong, my model of the site was something like, "a place where smart people hold well-reasoned discussions on a wide range of interesting topics" (*). TheOtherDave's comment, in conjunction with yours, paints a different picture of what you'd like Less Wrong to be; let's call it Less Wrong 2.0. It's something akin to, "a place where Eliezer and a few of his real-life friends give lectures on topics they think are important, with Q&A afterwards".

No; you're conflating "Eliezer considers he should have the last word on moderation policy" and "Eliezer considers LessWrong's content should be mostly about what he has to say".

The changes of policy Eliezer is pushing have no effect on the "main" content of the site, i.e. posts that are well-received, and upvoted. The only disagreement seems to be about sprawling threads and reactions to problem users. I don't know where you're getting "Eliezer and a few of his real-life friends give lectures on topics they think are important" out of that, it's not as if Eliezer has been posting many "lectures" recently.

Comment author: Bugmaster 14 September 2012 06:47:00PM 2 points [-]

I was under the impression that Eliezer agreed with TheOtherDave's comment upthread:

Mostly [Eliezer is] coming across to me as having lost patience with the community not being what he wants it to be...

Combined with Eliezer's rather aggressive approach to moderation (f.ex. deleting downvoted comments outright), this did create the impression that Eliezer wants to restrict LessWrong's content to a narrow list of specific topics.

Comment author: TheOtherDave 14 September 2012 03:08:59AM 6 points [-]

EY has always seemed to me to want LW to be a mechanism for "raising the sanity waterline". To the extent that wide-ranging discussion leads to that, I'd expect him to endorse it; to the extent that wide-ranging discussion leads away from that, I'd expect him to reject it. This ought not be a surprise.

Nor ought it be surprising that much of the discussion here does not noticeably progress this goal.

That said, there does seem to be a certain amount of non-apple selling going on here; I don't think there's a cogent model of what activity on LW would raise the sanity waterline, so attention is focused instead on trying to eliminate the more blatant failures: troll-baiting, for example, or repetitive meta-threads.

Which is not a criticism; it is what it is. If I don't know the cause, that's no reason not to treat the symptoms.

Comment author: Rain 14 September 2012 02:40:55AM *  9 points [-]

I very much appreciate the attempts at greater moderation, including the troll penalty. Thank you.

Comment author: Sarokrae 14 September 2012 06:11:18AM 9 points [-]

Me too. Troll posts and really wrong people are too distracting without some form of intervention. Not sure the current solution is optimal (but this point has been extensively argued elsewhere), but I applaud the effort to actually stick one's neck out and try something.

Comment author: Eliezer_Yudkowsky 14 September 2012 11:19:51AM 5 points [-]

Thank you both. Very much, and sincerely.

Comment author: Will_Newsome 14 September 2012 05:25:39PM *  10 points [-]

Accepting thanks with sincerity, while somewhat-flippantly mostly-disregarding complaints? ...I must be missing some hidden justification?

Comment author: philh 14 September 2012 05:40:00PM 5 points [-]

People who agree are more likely to keep quiet than people who disagree. Rewarding them for speaking up reduces that effect, which means comments get closer to accurately representing consensus.

Comment author: TheOtherDave 14 September 2012 05:47:29PM 1 point [-]

Can you summarize your reasons for believing that people who agree are more likely to keep quiet than people who disagree?

Comment author: wedrifid 14 September 2012 05:29:59PM 5 points [-]

Accepting thanks with sincerity, while somewhat-flippantly mostly-disregarding complaints? ...I must be missing some hidden justification?

He is thanking them for their support, not their information.

Comment author: TheOtherDave 14 September 2012 02:16:44AM 1 point [-]

Fair enough. All I see is the vote-counts and online comments, but the real-life commenters are of course also people, and I can understand deciding to attend more to them.

Comment author: shminux 14 September 2012 02:20:59AM 0 points [-]

I think his point is that there is less selection bias IRL.

Comment author: TimS 14 September 2012 02:25:00AM 6 points [-]

But that's almost certainly false. IRL input has distinct selection bias from viewing meta threads, but not no selection bias.

Comment author: TheOtherDave 14 September 2012 02:59:38AM 4 points [-]

Yeah, exactly. Which is why I took it to mean a simple preference for considering the community of IRL folks. Which is not meant as a criticism; after all, I also take more seriously input from folks in my real life than folks on the internet.

Comment author: komponisto 14 September 2012 08:50:20AM 5 points [-]

I also take more seriously input from folks in my real life than folks on the internet.

Even when the topic on which you are receiving input is how to run an internet forum (on which the real-life folks don't post)?

Comment author: TheOtherDave 14 September 2012 01:59:47PM 3 points [-]

Well, I don't do that, clearly, since I don't run such an Internet forum.

Less trivially, though... yeah, I suspect I would do so. The tendency to take more seriously people whose faces I can see is pretty strong. Especially if it were a case like this one, where what the RL people are telling me synchronizes better with what I want to do in the first place, and thus gives me a plausible-feeling justification for doing it.

I suspect you're not really asking me what I do, though, so much as implicitly suggesting that what EY is doing is the wrong thing to do... that the admins ought to attend more to commenters and voters who are actually participating on the thread, rather than attending primarily to the folks who attend the minicamp or Alicorn's dinner parties.

If so, I don't think it's that simple. Fundamentally it depends on whether LW's sponsors want it to be a forum that demonstrates and teaches superior Internet discourse or whether it wants to be a forum for people interested in rational thinking to discuss stuff they like to discuss. If it's the latter, then democracy is appropriate. If it's the former, then purging stuff that fails to demonstrate superior Internet discourse is appropriate.

LW has seemed uncertain about which role it is playing for as long as I've been here.

Comment author: shminux 14 September 2012 02:43:00AM 0 points [-]

Then he is OK with this particular selection bias :)

Comment author: DaFranker 14 September 2012 02:00:10AM *  7 points [-]

Sometimes AKA the "Forum Whiners" effect, well known in the PC games domain:

When new PC games are released, almost inevitably the main forums for the game will become flooded with a large surge of complaints, negative reviews, rage, rants, and other negative stuff. This is fully expected and the absence of such is actually a bad sign. People that are happy with the product are playing the game, not wasting their time looking for forums and posting comments there - while people who have a problem or are really unhappy often look for an outlet or a solution to their issues (though the former in much greater numbers, usually). If no one is bothering to post on the forums, then that's evidence that no one cares about the game in the first place.

I see a lot of similarities here, so perhaps that's one thing worth looking into? I'd expect some people somewhere to have done the math already on this feedback (possibly by comparing to overall sales, survey results and propagation data), though I may be overestimating the mathematical propensity of the people involved.

Regarding the stop-watching-threads thing, I've noticed that I pretty much always stop paying attention to a thread once I've gotten the information I wanted out of it, and will only come back to it if someone directly replies to one of my comments (since it shows up in the inbox). This has probably been suggested before, but maybe a "watchlist" to mark some threads to show up new comments visibly somewhere and/or a way to have grandchildren comments to one of your own show up somehow could help? I often miss it when someone replies to a reply to my comment.

Comment author: Bugmaster 14 September 2012 02:20:46AM 7 points [-]

Upvoted for the "watchlist" idea, I really wish Less Wrong had it.

Comment author: [deleted] 14 September 2012 10:24:28PM 4 points [-]

Each individual post/comment has its own RSS feed (below your user name, karma scores etc. and above “Nearest meetups” in the right sidebar).

Comment author: wedrifid 13 September 2012 05:01:49PM *  4 points [-]

and not granting much importance to the fact that more people express disapproval of this than approval.

Those who actually don't care what people think don't tend to convey this level of active provocation and defiance.

Comment author: TheOtherDave 13 September 2012 05:38:50PM 2 points [-]

Sure. I can't speak for EY, clearly, but there are many things (including what other people think) that I find myself caring about, often a lot, but I don't think are important. This is inconsistent, I know, but I find it pretty common among humans.

Comment author: thomblake 13 September 2012 04:19:22PM 12 points [-]

Is it just me

It's not just you.

I'm starting to think there should be community-elected moderators or something, and Eliezer should stop being allowed to suggest things.

Comment author: thomblake 13 September 2012 04:15:27PM 5 points [-]

While the discussion arguably veered off-topic with respect to the original article, I don't think we actually have a rule against that. And I don't think eridu was actually trolling, though they do seem to have an overly-dismissive attitude towards the community. I do think there's a place for social constructivist / radical feminist views to be aired where they apply on this site, and I don't think eridu was doing a particularly bad job of it.

If we have a diversity of views, then people will disagree about fundamental sorts of things and we'll end up with people thinking each other are "not even wrong" about some issues, which certainly seems downvote-worthy at the time. But we do want a diversity of views (it's one of the primary benefits of having multiple people interacting in the first place), and so banning comments which are merely unpopular is not called-for, and will simply shunt out potential members of the community.

Of course, I'm basically guessing about your rationale in banning these comments, so if you'd like to provide some specific justification, that would be helpful.

Comment author: bogus 13 September 2012 04:38:58PM *  1 point [-]

While the discussion arguably veered off-topic with respect to the original article,

I disagree. It was a perfect example of how the Worst Argument In The World (rather, an especially irritating subtype of the same) is often deployed in the field.

Comment author: wedrifid 13 September 2012 04:21:07PM 12 points [-]

I do think there's a place for social constructivist / radical feminist views to be aired where they apply on this site, and I don't think eridu was doing a particularly bad job of it.

Right now that sounds like one of the most brutal criticisms you could have made of radical feminism.

Comment author: thomblake 13 September 2012 04:23:27PM 4 points [-]

I should note that I'm not a fan, so that sort of thing should be expected.

Comment author: fezziwig 13 September 2012 03:35:52PM 14 points [-]

I think Eridu's downvotes were mostly well-deserved.

I don't think this is a good idea.

I wonder if we could solve this problem from another direction. The issue from your perspective, as I understand it, is that you want to be able to follow every interesting discussion on this site, in semi-real time, but can't. You can't because your only view into "all comments everywhere" is only 5 items long, so fast-moving pointless discussions drown out the stuff you're interested in. An RSS feed presumably isn't sufficient either, since it pushes comments as they occur and doesn't give the community a chance to filter them.

So if I've reasoned all this out correctly, you'd prefer a view of all comments, sorted descending by post time and configurably tree-filtered by karma and maybe username. But we haven't the dev resources to build that, and measures like the ones you describe are a cheap, good-enough approximation.

Do I have that right?

Comment author: Eugine_Nier 14 September 2012 07:06:10AM 1 point [-]

You can't because your only view into "all comments everywhere" is only 5 items long

If you click on the recent comments link you get a longer view.

Comment author: Emile 13 September 2012 04:25:05PM 6 points [-]

The issue from your perspective, as I understand it, is that you want to be able to follow every interesting discussion on this site, in semi-real time, but can't. You can't because your only view into "all comments everywhere" is only 5 items long, so fast-moving pointless discussions drown out the stuff you're interested in.

I think it's more than that - he also doesn't want other people to notice the pointless discussions, so that

1) people stop fanning the flames and feeding the trolls

2) people post in the worthwhile threads, resulting in more quality there

(and I agree with this point of view)

Comment author: Eliezer_Yudkowsky 14 September 2012 01:50:10AM 6 points [-]

Above all:

3) Newcomers who arrive at the site see productive discussion of new ideas, not a flamewar, in the Recent Comments section.

4) Trolls are not encouraged to stay; people who troll do not receive attention-reward for it and do not have their brain reinforced to troll some more. Productive discussion is rewarded by attention.

Comment author: NancyLebovitz 14 September 2012 06:52:07AM 7 points [-]

The discussion with eridu was probably worth ending, but I saw someone say it was the best discussion of those issues they'd ever seen, and I'd said so myself independently in a location that I've promised not to link to.

I am very impressed with LW that we managed to make that happen.

Comment author: [deleted] 14 September 2012 03:57:50PM 5 points [-]

That discussion sucked. I was appalled at LW when I came back after a few hours and "patriarchy", "abuse", etc. still hadn't been tabooed.

Comment author: NancyLebovitz 14 September 2012 07:02:38PM 1 point [-]

You could have asked for them to be tabooed.

Comment author: [deleted] 14 September 2012 07:03:18PM *  3 points [-]
Comment author: NancyLebovitz 14 September 2012 07:48:50PM *  4 points [-]

Thanks.

That's interesting-- as I recall, requests for words to be tabooed are usually at least somewhat honored.

Comment author: shminux 14 September 2012 08:00:13PM 1 point [-]

Not in my experience.

Comment author: Wei_Dai 14 September 2012 08:29:32AM 7 points [-]

I am very impressed with LW that we managed to make that happen.

Did you learn something useful or interesting, or were you just impressed that the discussion remained relatively civil? If the former, can you summarize what you learned?

Comment author: NancyLebovitz 14 September 2012 03:45:57PM 3 points [-]

I learned something that might turn out to be useful.

I got a bit of perspective on the extent to which I amplify my rage and distrust at SJ-related material (I had a very rough time just reading a lot of racefail)-- I'm not sure what I want to do with this, but it's something new at my end.

The civility of the discussion is very likely to have made this possible.

Comment author: Wei_Dai 14 September 2012 11:51:08PM 4 points [-]

I got a bit of perspective on the extent to which I amplify my rage and distrust at SJ-related material (I had a very rough time just reading a lot of racefail)-- I'm not sure what I want to do with this, but it's something new at my end.

I'm having trouble understanding this sentence. First, I guess SJ = "social justice" and racefail = "a famously controversial online discussion that was initially about writing fictional characters who are people of color"? But what does it mean to amplify your rage and distrust at some material? Do you mean some parts of the SJ-related materials made you angry and distrustful? Distrustful of who? Which parts made you feel that way? Why? And how did the eridu discussion help you realize the extent?

Comment author: wedrifid 14 September 2012 11:29:15AM 0 points [-]

Did you learn something useful or interesting, or were you just impressed that the discussion remained relatively civil? If the former, can you summarize what you learned?

I'm curious myself. I honestly didn't see anything useful said. (Perhaps I just took all the valid points for granted as obvious?)

Comment author: Bugmaster 13 September 2012 04:57:13PM 6 points [-]

I dislike this solution, for several reasons.

  • I realize that we want to get rid of trolls, and I agree that this is a worthy goal, but one single person shouldn't be in charge of deciding who's a troll and who isn't.
  • Now that everyone knows that downvotes can cause a person to lose their ability to comment (I assume that's what "ban" means, could be wrong though), unscrupulous community members (and we must have some, statistically speaking, as unpleasant as that thought is) can use their downvotes offensively -- sort of like painting a target with a laser, allowing the Eliezer-nuke to home in.
  • Downvoting a comment does not always imply that the commenter is a troll. People also use downvotes to express things like "your argument is weak and unconvincing", and "I disagree with you strongly". We want to discourage the latter usage, and IMO we should encourage the former, but Eliezer's new policy does nothing to achieve these goals, and in fact harms them.

Comment author: DaFranker 13 September 2012 08:59:29PM *  4 points [-]

If the problem is differentiating between trolls and simply weak, airy, or badly formed comments/arguments, I think the obvious simple solution would be to do what has worked elsewhere and add a "Report" or "Troll-Alert" option to bring the comment/post to the attention of moderators or send it to a community-review queue.

It certainly seems easier to control for abuse of a Report feature than to control for trolling and troll-feeding using a single linear score that doesn't even tell you whether that -2 is just 2 * (-1) (two people think the poster is evil) or whether it's +5 -7 (five cultists approve, seven rationalists think it's a troll) (unless moderators can see a breakdown of this?).
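
The ambiguity described above is easy to illustrate: if the system kept up/down tallies separately instead of a single net score, moderators could tell the two cases apart. A minimal sketch, with all names hypothetical (this is not LW's actual schema):

```python
# Keeping separate up/down tallies, so a net score of -2 from two
# downvotes can be told apart from a contested +5 -7.
from dataclasses import dataclass


@dataclass
class VoteTally:
    up: int = 0
    down: int = 0

    @property
    def net(self) -> int:
        return self.up - self.down

    def breakdown(self) -> str:
        return f"+{self.up} -{self.down} (net {self.net})"


unanimous = VoteTally(up=0, down=2)   # two voters, both disapprove
contested = VoteTally(up=5, down=7)   # five approve, seven disapprove
# Identical net scores, very different signals:
assert unanimous.net == contested.net == -2
```

Only the net score is shown on the site, which is why the breakdown question in the parenthetical matters.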

Comment author: Alicorn 13 September 2012 09:50:03PM *  0 points [-]

Do you not see a Report button? There at least used to be one; I can't check, because I only see a Ban button.

Comment author: Vladimir_Nesov 14 September 2012 06:11:00AM 1 point [-]

See Issue 272. The report button was removed during a past redesign, as (I gather) redesigners didn't feel it was motivated sufficiently to bother preserving it. The issue's been in accepted/contributions-welcome mode since Sep 2011.

Comment author: Alicorn 13 September 2012 11:16:14PM 1 point [-]

Okay, if there's no longer a Report button, I at least am willing to field PMs from people who think I should consider banning specific comments.

Comment author: TheOtherDave 13 September 2012 10:27:39PM 6 points [-]

There is a Report button when I view comments that are replies to my comments, or when I view private messages.
There is no Report button when I view comments normally.

Comment author: DaFranker 13 September 2012 10:54:40PM *  0 points [-]

Oh, you're right! Didn't remember that, but the inbox does have "Context" and "Report" links instead of the standard buttons.

Edit: I suppose a clever bit of scripting could probably fix it browser-side, then, but that's a very hacky solution and there's still value in having a built-in report button for, say, people who don't have the script or often access lesswrong from different browsers/computers.

Comment author: DaFranker 13 September 2012 10:20:31PM 1 point [-]

Nope, no report button here. Upvote/downvote on the left, Parent/Reply/Permalink on the right (+Edit/Retract when own posts).

Comment author: Bugmaster 13 September 2012 10:19:33PM 0 points [-]

I see no such button, FWIW.

Comment author: katydee 13 September 2012 10:10:10PM 2 points [-]

I do not see a Report button.

Comment author: Emile 13 September 2012 05:51:17PM 0 points [-]

one single person shouldn't be in charge of deciding who's a troll and who isn't.

There are several moderators, I don't think Eliezer is the most active.

Now that everyone knows that downvotes can cause a person to lose their ability to comment (I assume that's what "ban" means, could be wrong though)

It doesn't, "ban" just means the comment is hidden.

I agree that there are downsides; they just don't seem that terrible.

Comment author: mrglwrf 13 September 2012 08:58:20PM 0 points [-]

I agree that there are downsides; they just don't seem that terrible.

What about the never-ending meta discussions, or are you counting on those dying down soon? Because I wouldn't, unless the new policy is either dropped, or an extensive purge of the commentariat is carried out.

Comment author: Bugmaster 13 September 2012 06:47:39PM 4 points [-]

There are several moderators, I don't think Eliezer is the most active.

I am aware of this, but Eliezer came off as being particularly invested in personally combating people whom he perceives as trolls.

It doesn't, "ban" just means the comment is hidden.

Ah, I stand corrected then, thanks for the info.

Comment author: komponisto 13 September 2012 02:28:11PM 8 points [-]

charge-fee-to-all-descendants is still in progress

Once again, please don't do that. (Hiding-from-Recent-Comments is totally okay, however.)

Comment author: Eliezer_Yudkowsky 13 September 2012 04:03:58AM 3 points [-]

Meta-note: Right now, as I check the top comments for today, all of them are replies to heavily downvoted comments. This is the behavior the downvoted-thread-killer was meant to prevent, but we don't yet have the "troll-toll all descendants" feature. Noting this because multiple people asked for examples, and for how often something like this happened.

Comment author: Vladimir_Nesov 14 September 2012 06:03:51AM 4 points [-]

The eridu-generated threads show that the direct reply toll doesn't seem to work, or at least it didn't in this case. I still don't like the idea of the indiscriminate whole-thread toll, but I'm no longer expecting the current alternative to be effective.

I've thought of another option: maybe prohibit a user from posting anywhere in a subthread under any significantly-downvoted comments of their own? This is another feature of all bad threads that could be used to automatically recognize them: the user in a failure mode keeps coming back to the same thread, so prohibiting that single user from doing so might be sufficient.

Comment author: wedrifid 17 September 2012 06:15:16AM *  2 points [-]

I've thought of another option: maybe prohibit a user from posting anywhere in a subthread under any significantly-downvoted comments of their own?

I'd prefer the subthread to be outright locked than this. (I only very mildly oppose the latter but the former would be abhorrent.)

Comment author: Wei_Dai 14 September 2012 05:49:44PM 4 points [-]

I still don't like the idea of the indiscriminate whole-thread toll

It looks like that idea has already been replaced with hiding subthreads rooted on comments that are -3 or lower from recent and top comments.

I like the idea of hiding bad subthreads, but wish it were a manual moderator action instead of one based on votes. A lot of discussions that descend from downvoted comments are perfectly fine and do not need to be hidden.

I've thought of another option: maybe prohibit a user from posting anywhere in a subthread under any significantly-downvoted comments of their own?

I don't think that's a good idea. What if it's a non-troll user who just made a bad comment? They wouldn't be able to come back and admit their mistake or clarify their argument. An actual troll, on the other hand, could just make a new account and keep going in that thread.

Comment author: Wei_Dai 17 September 2012 01:57:14AM 0 points [-]

It looks like that idea has already been replaced with hiding subthreads rooted on comments that are -3 or lower from recent and top comments.

I just noticed that cousin_it suggested this last year. Also, Eliezer asked:

Does anyone have any strong reasons why LW is better off six months from now if there's a preference option instead of just an automatic behavior to hide such comments? If not, I would just like to see the behavior.

If anyone can think of a strong reason, they should probably follow the link above and comment there.

Comment author: Vladimir_Nesov 15 September 2012 05:17:39AM *  -1 points [-]

Thanks for the link. I don't expect that filtering of what's presented is a good strategy, as it aims at shaping the perception of the community culture, not at shaping the culture itself. It's more important to shape the culture, and perception can't be automatically filtered in a way that presents a picture that's significantly different from the unfiltered picture (for some sense of "significantly").

Comment author: Wei_Dai 15 September 2012 08:14:25PM *  3 points [-]

I think the idea is that if people don't see new replies to the hidden subthread in recent comments, they'll be much less likely to respond to those replies, so such threads will die out much more quickly. This will also cause trolls to not have as much fun trolling here so they'll be more likely to leave us alone in the future.

ETA: On the other hand, perhaps we should talk about non-technical ways to change the culture as well. Do you have any ideas? ETA2: A lot of previous discussion can be found here.

Comment author: TheOtherDave 14 September 2012 06:03:53PM *  1 point [-]

hiding subthreads rooted on comments that are -3 or lower from recent and top comments.

I endorse this, incidentally. (Not that there's any particular reason for anyone to care, but I've expressed my opposition to various other suggestions, so it seems only fair to express my endorsement as well.)

I also share the belief that automatic actions are more likely to apply in situations their coders would not endorse. That said, I also endorse the desire to reduce the workload on administrators. (And I appreciate the desire to diffuse social pressure on those administrators to avoid or reverse the action, though I'm more conflicted about whether I endorse that.)

Comment author: shminux 14 September 2012 05:58:18PM *  2 points [-]

What if it's a non-troll user who just made a bad comment?

A trivial low-cost solution, roundly ignored by EY and the rest of the forum management.

A related quote:

"Don't worry about people stealing your ideas. If your ideas are any good, you'll have to ram them down people's throats." -- Howard Aiken

Comment author: Wei_Dai 14 September 2012 11:34:13PM 1 point [-]

If you want to try harder at this "ramming", you could follow the link I posted above and present your idea there as a comment. :)

Comment author: shminux 15 September 2012 12:03:04AM 0 points [-]

Done.

Comment author: TheOtherDave 14 September 2012 02:30:59PM 1 point [-]

I'll observe that this will also prevent the "Huh. Can someone explain why this comment has been so heavily downvoted?" sorts of comments, as well as the "Oh. I now see what was wrong with my comment, thanks all" sorts of comments.
Or, rather, it will prevent those comments from appearing where they would naturally go in a thread. Of course this won't necessarily prevent people from making the same comments they're making now, it will just prevent them from doing so in that location.

These might or might not be good things.

More generally, I'm interested in what results you expect from implementing such an option. It would be good to record that somewhere before making a change, so we can subsequently establish whether the change had the desired results.

I'm also curious in what ways you expect those results to compare to giving mods the power to freeze a comment tree (that is, identify a comment and not allow further comments to be made downstream of it by anyone) when they consider it appropriate. But that's more of a personal curiosity.

Comment author: Vladimir_Nesov 14 September 2012 02:55:49PM *  1 point [-]

I'll observe that this will also prevent the "Huh. Can someone explain why this comment has been so heavily downvoted?" sorts of comments

I thought of that, but there doesn't appear to be a way of automatically separating these cases. Such questions could be edited into the downvoted comment itself, or included in a separately posted, improved reframing of the content of the downvoted comment.

what results you expect from implementing such an option

This would make bad threads of the currently typical form literally impossible to construct, so it's at least an interesting experiment. The successful outcome is for the downvoted conversations to peter out faster due to the inconvenience of having to find new starting points that are not replies to preceding conversations. I expect the worst that could happen is that instead of the nice orderly Big Bad Threads we'll have a deluge of bad comments scattered all over the place.

I'm also curious in what ways you expect those results to compare to giving mods the power to freeze a comment tree

This variant of blocking only the downvoted user's comments seems better on most counts, as it doesn't have the downside of indiscriminate blocking which motivated the need for human judgment, it's automatic and so won't focus complaints as much, it seems to catch all the same threads that a human moderator might close, and it applies faster.

Comment author: TheOtherDave 14 September 2012 03:08:26PM 0 points [-]

OK, thanks.

I suspect that if the goal is to make bad threads peter out faster, preventing all users from contributing to a bad thread will likely achieve that goal more readily than preventing one user from doing so.

We could even do that automatically if we wanted. For my own part I trust humans more than simple automatic pattern-matchers for this sort of thing, but if y'all prefer automatic pattern-matchers to diffuse the resulting complaints that's an option as well.

Of course, if we're OK with automatically blocking the downvoted user on the thread but not OK with automatically blocking other users on the thread, then an automatic branch-freeze won't work. This might be true if there are other as-yet-unstated goals being addressed, beyond the desire to end the thread itself.

Personally, I don't like the idea of letting everyone post on a thread except the person they are responding to; one-sided conversations make my teeth itch.

Comment author: DanArmak 13 September 2012 08:22:54PM 3 points [-]

I was one of those who asked for examples. This is indeed a good example, and I take it to heart. I am still uncertain what the effect of the new and planned rules will be (troll feeding fee etc.). But it's now less a case of "what problem are you trying to solve?" and more "how should we solve this problem?"

In more detail: I missed this thread, but skimming the remaining comments, I think it would have been a waste of time to participate. But since many others did participate (while saying in many comments that eridu was quite irrational and/or wrong), it's possible I would have been drawn in if I had the opportunity. So I'm glad you stopped it.

Comment author: Eliezer_Yudkowsky 14 September 2012 01:59:12AM -1 points [-]

At the time of this reply, DanArmak's comment was downvoted (I voted it back up). Downvoting a comment like that one is the sort of reason why I am starting to distrust the behavior of meta-threads as a reliable signal of what the community thinks.

Comment author: DaFranker 14 September 2012 02:14:00AM *  5 points [-]

It's easy to see:

But since many others did participate (while saying in many comments that eridu was quite irrational and/or wrong), it's possible I would have been drawn in if I had the opportunity. So I'm glad you stopped it.

... and read "It's obvious that eridu is stupid and irrational, and people said so yet kept blabbering and that could have made me join in, so thanks for stopping all this idiocy."

It actually tempted me to downvote too, but the comment is overall useful and that is a very uncharitable interpretation of the wording. It's simply not true that it was a waste of time for everyone - each of my comments and each response to them made me learn something and helped me do a few updates.

It was also a very good opportunity for me to review my own cached database on gender-unfairness in this particular case, which I hadn't done yet since way before learning all this cool stuff about rationality I learned on LessWrong. Overall, I came out winning from that thread, regardless of whether it was started by a troll or not (the alternative was being bored and brainkilled to death by my boring and mind-killing-filled day job). So, for me, and maybe a few others, the above statement about eridu and the thread rings untrue, though not completely unjustified in retrospect.

Comment author: DanArmak 14 September 2012 09:28:33AM *  0 points [-]

read "It's obvious that eridu is stupid and irrational, and people said so yet kept blabbering and that could have made me join in, so thanks for stopping all this idiocy."

I haven't seen eridu's comments myself. I can make no real judgement on their quality. My comment was based solely on the comments of other people in the thread. And the gist of most of those comments is that eridu was being irrational and wrong.

However, now that you point it out, it seems wrong for me to wish to restrict other people's conversations. I would prefer to simply ignore such conversations, but I don't trust myself to do so reliably. Selfishly, I might wish for moderators to ban such conversations, but the moderators' preferences on what to ban don't always coincide with mine or other users'.

A better technical solution might help. I don't have enough experience with other forums to make good predictions about what different features might lead to.

Comment author: TheOtherDave 14 September 2012 02:16:29PM 0 points [-]

it seems wrong for me to wish to restrict other people's conversations

Do you mean in general, or do you mean in a particular forum?

If the latter: there are all kinds of conversations I wish to restrict on this particular forum. Most of them don't in fact happen here, but if they started to I would leave. Some of them do happen here, and I grit my teeth and do my best to ignore them, and I downvote them to communicate my preference.

What's wrong with that?

Comment author: DanArmak 14 September 2012 03:17:23PM 0 points [-]

I mean conversations on LW, yes. And yes, there are conversations - few in practice - that I wouldn't wish to happen even if I were oblivious to them. Like anything that harms people.

But the subject I was discussing was conversations that bothered me when I saw them, not just in themselves (then I might vote or reply to influence them), but by tempting me to participate in something I would later regret as a waste of time. E.g., an unproductive argument, troll-baiting, bad argumentation or rationality, and other things of that sort. Hence Eliezer's new rules, which are intended to more quickly shut down downvoted conversations - although I disagree with the method, I tentatively agree with the goal.

However, I don't want to stop others from having conversations that I don't like merely because they e.g. use poor arguments or defend completely wrong positions. It would be best for conversations to happen, just without bothering me. I don't know if this can be achieved in practice.

Some of them do happen here, and I grit my teeth and do my best to ignore them

Of course I can't be sure that the conversations that affect you that way are the same ones that affect me that way. So could you say which ones you mean?

Comment author: TheOtherDave 14 September 2012 03:36:35PM 0 points [-]

It would be best for conversations to happen, just without bothering me

Why would that be better than the conversations not happening here at all?

So could you say which ones you mean?

I would prefer not to point to specific threads. Generally speaking, what most irritates me is exchanges where we talk past each other in long comments without ever quite engaging with each others' main points, and threads where we don't really engage one another at all but rather all try to show off how individually clever we are.

Comment author: DanArmak 14 September 2012 05:49:34PM 0 points [-]

Why would that be better than the conversations not happening here at all?

Because it would be better for others, who get to have the conversations they want, and no worse for me, if I were not bothered by them.

Comment author: TheOtherDave 14 September 2012 05:57:40PM 0 points [-]

Well, OK, but... let me back up a bit here, because I'm now confused.

You've said that you're talking about conversations that bother you by tempting you to participate in them, and you've (tentatively) endorsed the goal of shutting those conversations down. But you've also said you endorse allowing conversations to continue if people want those conversations. And it seems implicit in the whole conversation that you're treating people's participation in conversations as evidence that they want those conversations.

It seems that those three sentences describe an internally inconsistent set of desires... that is, if they were true of me, there would exist conversations C such that I both want C shut down and do not want C shut down.

Which, OK, that sort of goal-conflict is certainly a thing that happens to human brains, it happens to me all the time, and if that's what's going on then I understand my confusion about it and no further clarification is necessary. (Or, well, more accurate is to say I consider no further clarification likely.)

But if that's not what's going on then I'm confused.

Comment author: Bugmaster 13 September 2012 04:18:30AM 2 points [-]

Let's say that I post comment B in response to comment A. Comment A has 0 karma, so I suffer no karma penalty. Five minutes afterward, however, various other users downvote comment A to -5. Would I be karma-taxed retroactively? How would this affect comment B's rating? If the answers are "no" and "it wouldn't", that could explain the present situation.

Comment author: shminux 13 September 2012 08:42:20PM *  9 points [-]

I wonder if there can be a race condition, when a comment is started before its parent is downvoted to -3, but submitted after, resulting in an unexpected karma burn.

Comment author: thomblake 13 September 2012 08:46:37PM 2 points [-]

A related note: You can sometimes get around the karma burn by upvoting a comment that's at -3, commenting, and then reversing your upvote after.

Comment author: Nornagest 13 September 2012 08:45:03PM *  6 points [-]

Yes. That happened to me yesterday; not only does it produce karma loss, but the warning message doesn't pop up.

Comment author: shminux 13 September 2012 09:44:26PM 2 points [-]

I guess a workaround would be to open the parent in another window and check its vote before hitting "comment"... And if it is already at -2, maybe think a bit first :)

I hope that this half-assed mis-implementation gets fixed eventually. Incidentally, my earlier suggestion to only apply karma burn when the offending comment's author has negative monthly karma would largely take care of the race condition as well, if the warning message pops up based on the monthly karma. Something along the lines of "do you really think it's a good idea to reply to someone with negative karma?"
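
The race could also be closed server-side by re-checking the parent's score at the moment the reply is submitted, and showing the warning then instead of silently charging the toll. A hedged sketch of that idea - the threshold and all names are illustrative, not the actual LW code:

```python
# Re-check the parent's score at submission time, not at page-load time,
# so a parent that was downvoted while the reply was being written
# triggers the warning instead of an unexpected karma burn.

TOLL_THRESHOLD = -3


def submit_reply(parent_score_now: int, confirmed: bool = False) -> str:
    """Runs server-side when the reply is submitted."""
    if parent_score_now <= TOLL_THRESHOLD and not confirmed:
        return "needs_confirmation"  # pop the warning; charge nothing yet
    return "posted"


# Parent was fine when the reply box was opened, but fell to -5 meanwhile:
assert submit_reply(-5) == "needs_confirmation"
assert submit_reply(-5, confirmed=True) == "posted"
assert submit_reply(0) == "posted"
```

This would make the warning message authoritative: no reply is ever tolled without the poster having seen it first.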

Comment author: Nornagest 13 September 2012 10:16:59PM 0 points [-]

Yeah, that sounds like a much better solution than what we've got. Your workaround should also work -- and would be made a bit safer by applying the reversible vote trick, though that's a borderline exploit -- but I wouldn't be surprised to find other issues; the different parts of the karma system here don't always synchronize perfectly.

Comment author: Eliezer_Yudkowsky 13 September 2012 04:51:00AM 0 points [-]

"No" and "It wouldn't", indeed. But heritable penalties once something does go to -3 would prevent users with zero or lower karma from replying further, thus preventing the current thread from happening again.

Comment author: CCC 13 September 2012 07:23:02AM 2 points [-]

An alternative possibility, which may have the same or a similar effect, is to auto-close the children of heavily downvoted posts when they appear in the "Recent Comments" window. Adding an extra step to reply to such a post will tend to reduce the number of replies that it gets, and will clearly signal to the reader that the post is, in fact, the child of a heavily downvoted post.

I have no idea if this possibility will be better or worse than the heritable penalties (nor, for that matter, which option would be easier to implement).

Comment author: Bugmaster 13 September 2012 08:09:16PM *  2 points [-]

Could we change the "Recent Comments" box to say "Recent Threads", instead, with a count of updated comments, net karma, and most recent poster for each thread as usual ? For example, something like this:

EliezerYudkowsky on Meta-note: Right now... by EliezerYudkowsky on The Worst Argument In The World | 7k, 2 new
Mugbuster on You all smell... by Obvious_Troll on The Worst Argument In The World | -15k, 18 new

This tells me that Eliezer commented on a thread that he started, and the thread is generally positively rated, though low-volume, so I might click it. On the other hand, Mugbuster commented on a high-volume thread that has cumulative -15 karma, which means that it's probably a trolling thread, and I should stay out of it.
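
The proposed line format could be generated by something like the following sketch (reading the "k" suffix in the mockup as marking karma; the function and its parameters are hypothetical):

```python
# Build one "Recent Threads" summary line in the format mocked up above:
# latest commenter, thread-root excerpt, root author, post title,
# cumulative karma, and count of new comments.

def thread_line(commenter: str, excerpt: str, author: str, post: str,
                karma: int, new_comments: int) -> str:
    return (f"{commenter} on {excerpt}... by {author} on {post} "
            f"| {karma}k, {new_comments} new")


line = thread_line("Mugbuster", "You all smell", "Obvious_Troll",
                   "The Worst Argument In The World", -15, 18)
# A strongly negative, high-traffic thread is easy to spot and skip:
assert "-15k, 18 new" in line
```

The cumulative-karma figure is what lets readers triage a thread before expanding it, which is the point of the proposal.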

Comment author: Eliezer_Yudkowsky 13 September 2012 01:57:48PM 3 points [-]

That one's in progress, I think.

Also, to reply to a comment elsewhere in the thread: obviously, penalties are not going to be charged retrospectively if an ancestor later goes to -3. Nobody has proposed this. Navigating the LW rules is not intended to require precognition.

Comment author: spuckblase 13 September 2012 02:35:47PM 3 points [-]

Navigating the LW rules is not intended to require precognition.

Well, it was required when (negative) karma for Main articles increased tenfold.

Comment author: thomblake 13 September 2012 07:58:54PM 1 point [-]

Yes, or when downvotes were limited without warning.

Comment author: komponisto 13 September 2012 06:10:49AM 6 points [-]

I don't think "preventing the current thread from happening again" is anywhere near an important enough goal to justify heritable karma penalties -- let alone retroactive ones.

Comment author: ciphergoth 13 September 2012 06:21:01AM 6 points [-]

I've not seen retroactive penalties proposed anywhere; the current system warns you when you start if a penalty applies for making a comment, presumably that wouldn't change.

Comment author: Eliezer_Yudkowsky 14 September 2012 01:54:40AM 0 points [-]

Yep. Nobody was proposing retroactive.