
Comment author: Viliam 21 December 2016 02:16:58PM *  3 points [-]

You can't have friendly debates on a web forum containing a stalker psycho who will take offense at opinions you expressed and then will keep "punishing" you. It's simply not fun. And people come here for intelligent debate and fun.

In the "before Eugine" era, we had once in a while also debates on more or less political topics. People expressed their opinions, some agreed, some disagreed, then we moved on. The "politics is the mindkiller" was a reminder to not take this too seriously. Some people complained about these debates, but they had the option of simply avoiding them. And whatever you said during the political debate stopped becoming relevant when you changed the topic.

The karma system was here to allow feedback, and I think everyone understood that it was an imperfect mechanism, but still better than nothing. (That it's good to have a convenient mechanism for saying "more of this, please" and "less of this, please" without having to write an explanation every time and potentially derail the debate by doing so.) The idea of using sockpuppets to win some pissing contest simply wasn't out there.

Essentially, the karma feedback system is quite fragile, because it assumes it is being used in good faith. It assumes that people upvote stuff they genuinely like, downvote stuff they genuinely dislike, and that each person votes only once. Under these assumptions, negative karma means "most readers believe this comment shouldn't be here", which is a reason to update or perhaps ask for an explanation. Without them, negative karma may simply mean "Eugine doesn't like your face", and there is nothing useful to learn from that.

(At this moment I notice that I am confused -- how does Reddit deal with the same kind of downvote abuse? Do their moderators have better tools, e.g. detecting sockpuppets by IP addresses, or seeing who made the votes? I could try to find out...)
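As a rough illustration of the kind of IP-based check a moderator tool might run, here is a minimal sketch that flags groups of accounts sharing an IP address and voting on the same comments. This is only a sketch of the general heuristic, not Reddit's or LW's actual mechanism, and all field names are hypothetical:

```python
# Sketch: flag groups of accounts that share an IP address and vote on the
# same comments -- a common first-pass heuristic for sockpuppet detection.
# Purely illustrative; field names and thresholds are hypothetical.
from collections import defaultdict

def suspicious_ip_groups(votes, min_accounts=2, min_shared_targets=5):
    """votes: iterable of (account_id, ip_address, target_comment_id, direction)."""
    accounts_by_ip = defaultdict(set)
    targets_by_account = defaultdict(set)
    for account, ip, target, _direction in votes:
        accounts_by_ip[ip].add(account)
        targets_by_account[account].add(target)

    flagged = []
    for ip, accounts in accounts_by_ip.items():
        if len(accounts) < min_accounts:
            continue
        # Comments that every account behind this IP voted on.
        shared = set.intersection(*(targets_by_account[a] for a in accounts))
        if len(shared) >= min_shared_targets:
            flagged.append((ip, sorted(accounts), len(shared)))
    return flagged
```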

Articles and debates influence each other. People come here to debate because they want to discuss the articles posted here. But people decide to post their articles here because they expect to see a good discussion... and I believe this may simply no longer be true for many potential contributors.

At this moment the downvoting is stopped, but that exposes us to the complementary risk -- drowning in noise, because we have removed the only mechanism we had against it. We have yet to see how that develops.

Comment author: steven0461 21 December 2016 05:06:21PM 1 point [-]

Some people complained about these debates, but they had the option of simply avoiding them.

Not if we wanted to use the "recent comments" page, and not if we were worried about indirect effects on the site, e.g. through drawing in bad commenters.

Comment author: NancyLebovitz 21 December 2016 12:28:31AM 3 points [-]

In the hopes of making things easier for me, I've been referring to centuries by their number range-- "the 1900's" rather than "the twentieth century". I've gotten one piece of feedback from someone who found it confusing, but how clear is it to most people who are reading this?

Comment author: steven0461 21 December 2016 04:55:20PM *  1 point [-]

I find it confusing as well: the century already has a different name and the decade does not, so it's natural to assume "the 1900s" refers to the decade.

Also, I guess technically "the 1900s" includes 1900 but not 2000 and "the 20th century" includes 2000 but not 1900.
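A tiny worked example of that off-by-one, with function names chosen purely for illustration:

```python
# "The 1900s" runs 1900-1999; "the 20th century" runs 1901-2000.
def in_1900s(year):
    return 1900 <= year <= 1999          # 1900 yes, 2000 no

def in_20th_century(year):
    return 1901 <= year <= 2000          # 1900 no, 2000 yes

assert in_1900s(1900) and not in_20th_century(1900)
assert in_20th_century(2000) and not in_1900s(2000)
```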

Comment author: ChristianKl 18 December 2016 08:41:22PM 2 points [-]

I don't think having a few anonymous high-voting-power users, with most users at normal voting power, would get around the problem of Eugine's sockpuppets. A PageRank-like algorithm, on the other hand, would make the forum robust against attacks of that sort.

You could additionally seed the algorithm by giving specific individuals higher voting power.
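A rough sketch of what such a PageRank-style scheme could look like, with a seed vector giving hand-picked users extra base weight. This is purely illustrative, not an existing LW feature; the function and parameter names are made up:

```python
# Rough sketch of a PageRank-style voting-weight scheme: a user's vote weight
# grows with upvotes received from already-weighty users, and a seed mapping
# gives hand-picked accounts extra base weight. Illustrative only; assumes
# every voter and author in upvote_edges appears in users.
def vote_weights(upvote_edges, users, seeds, damping=0.85, iterations=50):
    """upvote_edges: (voter, author) pairs meaning voter upvoted author.
    seeds: dict mapping hand-picked users to extra base weight."""
    base = {u: 1.0 + seeds.get(u, 0.0) for u in users}
    total_base = sum(base.values())
    weight = {u: base[u] / total_base for u in users}

    # Each voter's influence is split evenly among the authors they upvoted.
    outgoing = {u: [a for v, a in upvote_edges if v == u] for u in users}

    for _ in range(iterations):
        new = {u: (1 - damping) * base[u] / total_base for u in users}
        for voter, targets in outgoing.items():
            if not targets:
                continue
            share = damping * weight[voter] / len(targets)
            for author in targets:
                if author in new:
                    new[author] += share
        weight = new
    return weight  # higher weight => that user's votes count for more
```

The point of such a scheme is that vote weight flows from already-weighty accounts to the people they upvote, so a ring of fresh sockpuppets that only upvote each other starts with negligible weight and stays there.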

Comment author: steven0461 18 December 2016 08:45:18PM 0 points [-]

Yes, we'd need a separate solution to sockpuppet attacks, like disallowing downvotes from accounts below a karma threshold, or the one about moderator database access that's currently in the pipeline.
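For the first of those, a minimal sketch of a karma-threshold gate on downvotes; the threshold value and names are hypothetical:

```python
# Sketch: reject downvotes from accounts below a karma threshold.
# Threshold and names are hypothetical, not an actual LW setting.
KARMA_THRESHOLD_FOR_DOWNVOTES = 100

def can_vote(account_karma, direction):
    if direction == "down":
        return account_karma >= KARMA_THRESHOLD_FOR_DOWNVOTES
    return True  # upvotes stay unrestricted in this sketch
```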

Comment author: ChristianKl 18 December 2016 08:25:31PM 1 point [-]

Hand-picking has some advantages, but it also produces problems, because it induces political discussion about who deserves to be in the hand-picked group of high-voting-power users.

Comment author: steven0461 18 December 2016 08:27:15PM *  3 points [-]

The hand-pickers can be anonymous to everyone except the site owners. The picking needn't even be a continuous process; it can just be done once with no possibility of discussion. People would still yell at us for the abstract fact that we implemented such a scheme, but we'd have to weigh that against what I expect would be a substantial increase in voting quality. (Nobody would lose their vote and this would help make it palatable.)

Comment author: Error 17 December 2016 10:27:51PM 6 points [-]

I've written a bit about this, but I never finished the sequence and don't really endorse any of it as practical. Some of the comment threads may have useful suggestions in them, though.

Discussion quality is a function of the discussants more than the software.

I think we are better off using something as close to off-the-shelf as possible, modified only via intended configuration hooks. Software development isn't LW's comparative advantage. If we are determined to do it anyway, we should do it in such a way that it's useful to more than just us, so as to potentially get contributions from elsewhere.

What's the replacement plan? Are we building something from the ground up, re-forking Reddit, or something else? I've nosed around contributing a few times and keep getting put off by the current crawling horror. If we're re-building from something clean, I might reconsider.

Comment author: steven0461 18 December 2016 07:55:23PM *  2 points [-]

Discussion quality is a function of the discussants more than the software.

I agree with this and suspect that a willingness to keep out low-quality users is more important than any technical feature. The decision to remove all downvoting is worrying in this regard.

Comment author: ChristianKl 18 December 2016 06:06:53PM 4 points [-]

Votes shouldn't have equal weight; rather, a PageRank-like algorithm should weight votes from people who themselves receive a lot of votes more heavily.

Comment author: steven0461 18 December 2016 07:52:00PM 3 points [-]

Or, better yet, it should weight votes from a number of hand-picked people more heavily. Karma is an indicator of voting quality, but an unreliable one.

Comment author: Lumifer 15 December 2016 10:06:02PM 0 points [-]

trusted, anonymous

These two words do not match well.

Comment author: steven0461 16 December 2016 03:33:48AM *  1 point [-]

Trusted by the site owners, anonymous to others. (This is not actually a practical suggestion, so it doesn't matter.)

Comment author: Vaniver 15 December 2016 06:39:29PM 0 points [-]

One problem with setting the limit too high is that the voter base becomes unbalanced in a problematic way; is it really useful to have downvotes if only ~50 people can use them, but ~5000 people can use upvotes?

Comment author: steven0461 15 December 2016 08:29:56PM *  2 points [-]

Yes, it seems to me that would be useful. Concretely, it might make the difference between whether clearly-bad-but-not-ban-worthy content ends up at +1 or -2. If >10k isn't enough people, something like >3k would still come with a pretty minimal risk of abuse.

edit: On rereading your comment, it sounds like you're saying a high threshold for downvoting has problems relative to a low threshold. I agree with this, but what we currently have is no downvoting. I suspect the ideal policy in terms of site quality (but not politics/PR/attractiveness to newcomers) is a medium-sized whitelist of voters selected by a trusted, anonymous entity, with no voting (up or down) outside this whitelist.

Comment author: Vaniver 02 December 2016 06:07:32PM *  6 points [-]

Currently the limit is set at 10; it would be fairly easy to change it to 100 or to 1000. The problem is that we don't know what accounts Eugine has yet, and so even if we set the limit at 1k he might still have twenty accounts available to downvote things. Once we get the ability to investigate comment voting, then we can keep the limit fairly low.

Comment author: steven0461 15 December 2016 04:45:11PM 2 points [-]

How about 10k?

Comment author: Evan_Gaensbauer 14 December 2016 11:56:19AM 3 points [-]

I've been using the Effective Altruism Forum more frequently than LessWrong for at least the past year. I've noticed it's not particularly heavily moderated. I mean, one thing is that effective altruism is mediated primarily through in-person communities and social media. So most of the drama occurring in EA happens there, and works itself out before it gets to the EA Forum.

Still, though, the EA Forum seems to have a high level of quality content without needing as much active moderation. The site doesn't get as much traffic as LW ever did. The topics covered are much more diverse: while LW covered things like AI safety, metacognition, and transhumanism, all of that plus every other cause in EA is fair game for the EA Forum[1]. From my perspective, though, it's far and away host to the highest-quality content in the EA community. So, if anyone else here also finds that to be the case: what makes EA unlike LW in not needing as many moderators on its forum?

(Personally, I expect most of the explanatory power comes from the hypothesis that the sorts of discussions which would need moderation are filtered out before they reach the EA Forum, and that the academic tone set in EA encourages people to post more detailed writing.)

[1] I abbreviate "Effective Altruism Forum" as "EA Forum", rather than "EAF", as EAF is the acronym of the Effective Altruism Foundation, an organization based out of Switzerland. I don't want people to get confused between the two.

Comment author: steven0461 15 December 2016 04:37:04PM 4 points [-]

Some guesses:

  • The EA forum has less of a reputation, so knowing about it selects better for various virtues
  • Interest in altruism probably correlates with pro-social behavior in general, e.g. netiquette
  • The EA forum doesn't have the "this site is about rationality, I have opinions and I agree with them, so they're rational, so I should post about them here" problem
