
kpreid comments on Stupid Questions December 2014 - Less Wrong Discussion

16 Post author: Gondolinian 08 December 2014 03:39PM




Comment author: gattsuru 08 December 2014 10:06:04PM 7 points [-]

Are there any good trust, value, or reputation metrics in the open-source space? I've recently set up a small internal-use Discourse forum and have been rather appalled by the limitations of what is meant to be a next-generation system (status flags, post counts, tagging), and from a quick survey most competitors don't seem to be much stronger. Even fairly specialist fora seem only marginally more capable.

This is obviously a really hard problem, and a confluence of many other hard problems, but it seems odd that so many obvious improvements are left on the table.

((Inspired somewhat by my frustration with Karma, but I'm honestly more interested in its relevance for outside situations.))
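One family of metrics from the open literature is EigenTrust-style trust propagation: treat each member's ratings of other members as a row-normalized stochastic matrix and take a damped stationary vector, anchored on a few pre-trusted peers, as the global trust score. A minimal sketch in Python (the rating data, the `pretrusted` set, and the parameter values are all hypothetical):

```python
# Minimal EigenTrust-style trust propagation (illustrative data).
# Every user that appears as a ratee must also be a key in `ratings`.

def eigentrust(ratings, pretrusted, damping=0.85, iters=50):
    users = sorted(ratings)
    n = len(users)
    idx = {u: i for i, u in enumerate(users)}

    # Row-normalize local ratings into a stochastic matrix C,
    # where C[i][j] is how much user i trusts user j.
    C = [[0.0] * n for _ in range(n)]
    for u, row in ratings.items():
        total = sum(row.values())
        for v, r in row.items():
            if total:
                C[idx[u]][idx[v]] = r / total

    # Pre-trust vector: uniform over the pre-trusted peers.
    p = [1.0 / len(pretrusted) if u in pretrusted else 0.0 for u in users]

    # Power iteration on t' = d * C^T t + (1 - d) * p.
    t = p[:]
    for _ in range(iters):
        t = [damping * sum(C[i][j] * t[i] for i in range(n))
             + (1 - damping) * p[j] for j in range(n)]
    return dict(zip(users, t))

ratings = {
    "alice": {"bob": 4, "carol": 1},
    "bob":   {"alice": 3, "carol": 2},
    "carol": {"bob": 5},
}
trust = eigentrust(ratings, pretrusted={"alice"})
```

The damping toward pre-trusted peers plays the same role as PageRank's teleport term: a clique of sockpuppets rating each other highly cannot inflate its scores indefinitely, because trust keeps leaking back to the anchor set.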

Comment author: Viliam_Bur 09 December 2014 10:42:17AM 8 points [-]

Tangentially, is it possible for a good reputation metric to survive attacks in real life?

Imagine that you become, e.g., a famous computer programmer. But although you are a celebrity among free-software people, you fail to convert this fame into money, so you must keep a day job at a computer company which produces shitty software.

One day your boss realizes that you have high prestige under the given metric while the company has low prestige. So the boss asks you to "recommend" the company on your social-network page (which would increase the company's prestige and hopefully its profit, and might decrease your prestige as a side effect). Maybe this would be illegal, but let's suppose it isn't, or that you are not in a position to refuse. Or imagine a more dramatic situation: you are a widely respected political or economic expert, it is twelve hours before an election, and a political party has kidnapped your family and threatens to kill them unless you "recommend" the party, which according to their model would help them win.

In other words, even a digital system that works well could be vulnerable to attacks from outside the system, where otherwise trustworthy people are forced to act against their will. A possible defense would be to let people somehow hide their votes: your boss might know that you have high prestige and the company has low prestige, but would have no way to verify whether you have "recommended" the company (so you could simply lie and say you did). But if we make everything secret, is there a way to verify that the system is really working as described? (The owner of the system could just add 9000 trust points to his favorite political party and no one would ever find out.)
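The hide-the-vote idea can be made slightly more concrete with a commit-and-reveal sketch (a toy illustration only, not a real voting protocol; the function names and the choice of SHA-256 are my own): you publish a salted hash of your vote, which reveals nothing by itself, and the commitment can only be opened by producing the salt.

```python
# Toy commit-and-reveal scheme: publish hash(salt + vote) now;
# the vote stays hidden unless the salt is revealed later.
import hashlib
import secrets

def commit(vote):
    salt = secrets.token_hex(16)          # private random value
    digest = hashlib.sha256((salt + vote).encode()).hexdigest()
    return digest, salt                   # publish digest; keep salt secret

def verify(digest, vote, salt):
    # Anyone holding the salt can check what the commitment contained.
    return hashlib.sha256((salt + vote).encode()).hexdigest() == digest

digest, salt = commit("party-A")
assert verify(digest, "party-A", salt)
assert not verify(digest, "party-B", salt)
```

The catch, and part of why this problem is hard, is that commit-and-reveal produces a receipt: a coercer can simply demand the salt. A genuinely coercion-resistant scheme has to go further and prevent the voter from being *able* to prove how they voted.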

I suspect this is all confused and I am asking the wrong question. So feel free to answer the question I should have asked.

Comment author: kpreid 09 December 2014 06:07:27PM 3 points [-]

I don't have a solution for you, but a related probably-unsolvable problem is what some friends of mine call “cashing in your reputation capital”: having done the work to build up a reputation (for trustworthiness, in particular), you betray it in a profitable way and run.

… otherwise trustworthy people are forced to act against their will. … But if we make everything secret, is there a way to verify whether the system is really working as described?

This is a problem in elections. In the US (I believe depending on state) there are rules which are intended to prevent someone from being able to provide proof that they have voted a particular way (to make coercion futile), and the question then is whether the vote counting is accurate. I would suggest that the topic of designing fair elections contains the answer to your question insofar as an answer exists.

Comment author: alienist 11 December 2014 06:57:51AM 6 points [-]

In the US (I believe depending on state) there are rules which are intended to prevent someone from being able to provide proof that they have voted a particular way (to make coercion futile),

And then there are absentee ballots which potentially make said laws a joke.