Viliam_Bur comments on Self-serving meta: Whoever keeps block-downvoting me, is there some way to negotiate peace? - Less Wrong

16 Post author: ialdabaoth 16 November 2013 04:35AM


Comment author: Viliam_Bur 19 November 2013 08:38:37AM *  3 points

There could be consensus that it's harmful without consensus that there should be a rule against it.

Making rules against all harmful things is an FAI-complete problem. If someone were able to do that, they would do better to spend their time writing those rules in a programming language and creating a Friendly AI.

Let's assume we have a rule: "it is forbidden to downvote all posts by someone; we detect such behavior automatically with a script, and the punishment is X". What will most likely happen?

a) The mass downvoters will switch to downvoting all comments but one.

b) A new troll will come to the website and post three idiotic comments; someone will downvote all three of them and unknowingly trigger the illegal-downvoting detection script.
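The failure modes above can be made concrete with a minimal sketch of the kind of detection script described: flag any voter who has downvoted every post by some author. All names and the data model here are hypothetical illustrations, not anything LW actually runs. Note how it misses the evader in (a), who leaves one post alone, yet flags the honest voter in (b), who downvoted all three of a troll's posts.

```python
from collections import defaultdict

def flag_mass_downvoters(posts_by_author, downvotes):
    """Naive rule: flag (voter, author) if the voter downvoted
    every single post by that author.

    posts_by_author: dict mapping author -> set of post ids
    downvotes: iterable of (voter, post_id) pairs
    """
    voted = defaultdict(set)  # voter -> posts they downvoted
    for voter, post in downvotes:
        voted[voter].add(post)

    flagged = []
    for author, posts in posts_by_author.items():
        for voter, seen in voted.items():
            if posts <= seen:  # voter hit every post by this author
                flagged.append((voter, author))
    return flagged

posts_by_author = {
    "troll": {"p1", "p2", "p3"},          # scenario (b): three bad comments
    "victim": {"q1", "q2", "q3", "q4"},   # scenario (a): mass-downvote target
}
downvotes = [
    ("honest", "p1"), ("honest", "p2"), ("honest", "p3"),  # legitimate votes
    ("evader", "q1"), ("evader", "q2"), ("evader", "q3"),  # skips q4 on purpose
]

# The honest voter is flagged (false positive); the evader is not (false negative).
print(flag_mass_downvoters(posts_by_author, downvotes))
```

The subset test `posts <= seen` is exactly the letter of the rule, which is why both failure modes fall out immediately: the rule keys on "all posts", so evasion needs only one spared post, and any short posting history makes "all" trivially easy to satisfy by accident.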

Comment author: gjm 19 November 2013 09:43:17AM 2 points

Thank you for that nice, clear demonstration that there are reasons for not wanting a rule against mass-downvoting that don't involve thinking mass-downvoting isn't a very bad thing.

I think you exaggerate, though. Making good enough rules might not be an FAI-complete problem. E.g., the rules and/or automatic detection mechanism might leave the matter partly to moderators' discretion (or to other users', if all that happens on a violation is that a complete description of what you did gets posted automatically).

(The previous paragraph is not intended as an endorsement of having such rules. Just observing that it might be possible to have useful ones without needing perfect ones.)

Comment author: NancyLebovitz 19 November 2013 01:47:56PM 2 points

This may be a demonstration that ultimately, if you want to constrain human beings to achieve a complex goal, you need human moderation. (Or, of course, moderation by FAI, but we don't have one of those.)

Comment author: gjm 19 November 2013 04:12:29PM 1 point

Yes. Of course, LW has human moderators, or at least admins -- but they don't appear to do very much human moderation. (Which is fair enough -- it's a time-intensive business.)