Vaniver comments on Upcoming LW Changes - Less Wrong Discussion
My impression is that the primary benefit of a concrete definition is easy communication: if my concrete definition aligns with your concrete definition, then we can each be sure that we know the rule, that the other person knows it, and that both of those facts are mutually known. So the worry here is that a third person comes in and we need to explain the 'no vote manipulation' rule to them.
I am not as impressed with algorithmic detection systems because of the ease of evading them with algorithms, especially if the mechanics of any system will be available on Github.
I remember that case, and I would put that in the "downvoting five terrible politics comments" category, since it wasn't disagreement on that topic spilling over to other topics.
My current plan is to introduce karma weights, where we can easily adjust how much an account's votes matter, and zero out the votes of any account that engages in vote manipulation. If someone makes good comments but votes irresponsibly, there's no need to penalize their comments or their overall account standing when we can just remove the power they're not wielding well. (This also makes it fairly easy to fix any moderator mistakes, since disenfranchised accounts will still have their votes recorded, just not counted.)
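The mechanics described above can be sketched in a few lines. This is a hypothetical illustration, not the actual LW implementation: the `Account` and `Post` names and the `vote_weight` field are invented for the example. The key property is that raw votes are always recorded, while only weighted votes are counted, so zeroing a weight is reversible.

```python
from dataclasses import dataclass, field

@dataclass
class Account:
    name: str
    vote_weight: float = 1.0  # moderators can adjust this, or zero it out

@dataclass
class Post:
    votes: list = field(default_factory=list)

    def vote(self, account: Account, value: int) -> None:
        # Record every raw vote, even from zero-weight accounts,
        # so a moderator mistake can be undone later.
        self.votes.append((account, value))

    def score(self) -> float:
        # Only weighted votes count toward displayed karma.
        return sum(acct.vote_weight * value for acct, value in self.votes)

alice = Account("alice")
mallory = Account("mallory")
post = Post()
post.vote(alice, 1)
post.vote(mallory, 1)
assert post.score() == 2.0

# Moderator zeroes out mallory's votes after finding manipulation:
# the vote stays recorded, but no longer counts.
mallory.vote_weight = 0.0
assert post.score() == 1.0

# Reversing the decision restores the vote with no data loss.
mallory.vote_weight = 1.0
assert post.score() == 2.0
```

Because the score is recomputed from the recorded votes, disenfranchising an account never destroys information, which is what makes moderator mistakes cheap to fix.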
All security ultimately relies on some kind of obscurity; this is true. But the first pass should deal with -dumb- evil. Smart evil is its own set of problems.
You would. Somebody else would put it somewhere else. You don't have a common definition. In a high-enough-profile case like that, no matter what moderation decision is made, somebody is going to be left unsatisfied that politics decided the case instead of rules.