
fubarobfusco comments on Stupid Questions December 2014 - Less Wrong Discussion

16 points. Post author: Gondolinian, 08 December 2014 03:39PM




Comment author: gattsuru 09 December 2014 11:34:00PM, 3 points

what is it that you authenticate? Do you mean trust in the same sense as "web of trust" in PGP-type crypto systems?

For starters, a system to be sure that a user or service is the same user or service it was previously. A web of trust /or/ a central authority would work, but honestly we run into limits even before the gap between electronic worlds and meatspace. PGP would be nice, but PGP itself is closed-source, and neither PGP nor OpenPGP/GPG is user-accessible enough to survive even in the e-mail sphere they were originally intended to operate in. SSL allows for server authentication (ignoring the technical issues), but isn't great for user authentication.
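The "same user or service it was previously" requirement can be met, minimally, without either a web of trust or a central authority: trust-on-first-use key pinning, as SSH does. This is a hedged sketch, not any particular system's implementation; the `pin_store` dict is a stand-in for a real database, and the fingerprint is just a SHA-256 hash of whatever public-key bytes the user presents.

```python
import hashlib

def fingerprint(pubkey_bytes: bytes) -> str:
    """SHA-256 fingerprint of a presented public key."""
    return hashlib.sha256(pubkey_bytes).hexdigest()

def check_identity(pin_store: dict, username: str, pubkey_bytes: bytes) -> bool:
    """Trust-on-first-use: pin the key the first time a name appears,
    and require the same key on every later appearance."""
    fp = fingerprint(pubkey_bytes)
    if username not in pin_store:
        pin_store[username] = fp  # first sighting: pin it
        return True
    return pin_store[username] == fp
```

Note the limits this accepts: it authenticates continuity ("same key as last time"), not real-world identity, which is exactly the gap between electronic worlds and meatspace mentioned above.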

I'm not aware of any generalized implementation for other use, and the closest precursors (keychain management in Murmur/Mumble server control?) are both limited and intended to be application-specific. But at the same time, I recognize that I don't follow the security or open-source worlds as much as I should.

For reputation as an assessment of user ratings, you can obviously build a bunch of various metrics, but the real question is which one is the best. And that question implies another one: Best for what?

Oh, yeah. It's not an easy problem to solve right.

I'm more interested in whether anyone's trying to solve it. I can see a lot of issues with user-based reputation even beyond the obvious limitations and tradeoffs that fubarobfusco points out -- a visible metric is more prone to being gamed, but obscuring the metric reduces its utility as feedback for 'good' posting; value drift without a defined root versus possible closure with one; and so on.

What surprises me is that there are so few attempts to improve the system beyond the basics. IP.Board, vBulletin, and phpBB plugins are usually pretty similar -- the best I've seen merely let you disable them on a per-subforum basis rather than globally, and they otherwise use a single point score. Reddit uses the same karma system whether you're answering a complex scientific question or making a bad joke. LessWrong improves on that only by allowing users to see how contentious a comment's scoring is. Discourse uses a count of posts and tags, which is almost embarrassingly minimalistic. I've seen a few systems that make moderator and admin 'likes' count for more; I think that's about the fanciest.

I don't expect them to have an implementation that matches my desires, but I'm really surprised that there are no attempts to run multi-dimensional reputation systems, or to weigh votes by length of post or age of poster, or by spellcheck or capitalization thresholds. These might even be /bad/ decisions, but usually you see someone making them.
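For concreteness, here is what one of those (possibly bad!) weighting schemes might look like. Everything here is hypothetical -- the `Vote` fields, the weight caps, and the `length_norm` of 500 characters are illustrative choices, not any forum's actual formula.

```python
from dataclasses import dataclass

@dataclass
class Vote:
    value: int           # +1 or -1
    voter_age_days: int  # account age of the voter, in days
    post_length: int     # length of the voted-on post, in characters

def weighted_score(votes, max_age_weight=2.0, length_norm=500):
    """Weigh each vote by the voter's account age (older accounts count
    up to 2x) and mildly discount votes on very short posts."""
    total = 0.0
    for v in votes:
        age_weight = min(1.0 + v.voter_age_days / 365.0, max_age_weight)
        length_weight = min(v.post_length / length_norm, 1.0)
        total += v.value * age_weight * (0.5 + 0.5 * length_weight)
    return total
```

Under this sketch, a one-year-old account upvoting a substantial post contributes 2.0, while a brand-new account upvoting a one-liner contributes 0.5 -- exactly the kind of tunable, multi-dimensional knob that forum software mostly doesn't expose.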

I expect Twitter or Facebook have something complex under the hood, but if they do, they're not talking about the specifics and not doing a very good job. Maybe it's their dominance in the social development community, but I dunno.

Comment author: Lumifer 10 December 2014 02:00:48AM, 1 point

For starters, a system to be sure that a user or service is the same user or service it was previously.

That seems to be pretty trivial. What's wrong with a username/password combo (besides all the usual things) or, if you want to get a bit more sophisticated, with having the user generate a private key for himself?

You don't need a web of trust or any central authority to verify that the user named X is in possession of a private key which the user named X had before.
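The private-key check described here is a challenge-response: the server sends a fresh nonce, the user proves possession of the key by signing it. A real deployment would use an asymmetric signature (e.g. Ed25519) so the server never holds the user's secret; Python's standard library has no asymmetric signing, so this sketch substitutes an HMAC over a shared secret to show the shape of the protocol.

```python
import hashlib
import hmac
import secrets

def make_challenge() -> bytes:
    """Server side: a fresh random nonce per login attempt,
    so captured responses can't be replayed."""
    return secrets.token_bytes(32)

def sign_challenge(user_secret: bytes, challenge: bytes) -> str:
    """Client side: prove possession of the secret without sending it."""
    return hmac.new(user_secret, challenge, hashlib.sha256).hexdigest()

def verify(user_secret: bytes, challenge: bytes, response: str) -> bool:
    """Server side: recompute and compare in constant time."""
    expected = hmac.new(user_secret, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)
```

No web of trust, no central authority: the only claim verified is "the entity calling itself X still holds the key that X held before," which is all the quoted requirement asks for.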

I'm more interested in if anyone's trying to solve it.

Well, again, the critical question is: What are you really trying to achieve?

If you want the online equivalent of the meatspace reputation, well, first meatspace reputation does not exist as one convenient number, and second it's still a two-argument function.

there are no attempts to run multi-dimensional reputation systems, or to weigh votes by length of post or age of poster, or by spellcheck or capitalization thresholds.

Once again, with feeling :-D -- to which purpose? Generally speaking, if you run a forum all you need is a way to filter out idiots and trolls. Your regular users will figure out reputation on their own and their conclusions will be all different. You can build an automated system to suit your fancy, but there's no guarantee (and, actually, a pretty solid bet) that it won't suit other people well.

I expect Twitter or FaceBook have something complex underneath the hood

Why would Twitter or FB bother assigning reputation to users? They want to filter out bad actors and maximize their eyeballs and their revenue which generally means keeping users sufficiently happy and well-measured.

Comment author: fubarobfusco 10 December 2014 02:30:11AM, 2 points

That seems to be pretty trivial. What's wrong with a username/password combo (besides all the usual things)

"All the usual things" are many, and some of them are quite wrong indeed.

If you need solid long-term authentication, outsource it to someone whose business depends on doing it right. Google, for instance, is really quite good at detecting unauthorized use of an account (i.e. your Gmail getting hacked). It's better (for a number of reasons) not to be beholden to a single authentication provider, though, which is why there are things like OpenID Connect that let users authenticate using Google, Facebook, or various other sources.

On the other hand, if you need authorization without (much) authentication — for instance, to let anonymous users delete their own posts, but not other people's — maybe you want tripcodes.
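The idea behind tripcodes: an anonymous poster types a secret alongside their post, and the site displays a short code derived from it, so the same person can prove continuity across posts without any account. The original 2channel scheme is based on DES crypt(3); this is a simplified stand-in using a salted, truncated SHA-256 (the `SITE_SALT` value is hypothetical), illustrating the idea rather than the actual algorithm.

```python
import hashlib

SITE_SALT = b"example-site-salt"  # hypothetical per-site salt

def tripcode(secret: str) -> str:
    """Map a user-chosen secret to a short public code. Anyone who
    types the same secret on this site gets the same code, so a post
    signed with it can be tied to earlier posts -- and, e.g., a
    deletion request can be checked against the original post's code."""
    digest = hashlib.sha256(SITE_SALT + secret.encode("utf-8")).hexdigest()
    return digest[:10]
```

The per-site salt means the same secret yields different codes on different sites, which limits cross-site tracking; the truncation keeps the code displayable but also makes brute-forcing weak secrets easier, which is the classic tripcode weakness.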

And if you need to detect sock puppets (one person pretending to be several people), you may have an easy time or you may be in hard machine-learning territory. (See the obvious recent thread for more.) Some services — like Wikipedia — seem to attract some really dedicated puppeteers.