Nice! Eventually it might make sense to have this functionality integrated into LW2, similar to what we had on LW1.0 with the anti-kibitzer, but this is a pretty good solution for now.
Perhaps instead the karma of a post ought not to be linear in the number of upvotes it receives? If karma is best used as a signal of a post's quality, then that signal becomes less noisy as more votes come in, but not linearly so.
There is perhaps still a place for karma as a linear reward mechanism - that is, pleasing 10 people enough to get them to upvote is, all other things being equal, 10 times as good as pleasing 1 person - but this might be best separated from the signal aspect.
Which of the things that karma is used for do you think would benefit from nonlinearity, and which nonlinearity?
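To make the signal-vs-reward distinction concrete, here is one purely illustrative nonlinearity: treat displayed karma as a confidence-adjusted estimate of quality rather than a raw sum, using the lower bound of the Wilson score interval. This is a sketch of the idea only; it does not reflect how LessWrong or the EA forum actually compute karma.

```python
import math

def wilson_lower_bound(upvotes, downvotes, z=1.96):
    """Lower bound of the Wilson score interval for the upvote proportion.

    Illustrative nonlinearity: the score rises with more positive votes,
    but confidence grows sublinearly, so ten upvotes do not produce ten
    times the signal of one upvote. (Hypothetical example only.)
    """
    n = upvotes + downvotes
    if n == 0:
        return 0.0
    p = upvotes / n
    denom = 1 + z * z / n
    centre = p + z * z / (2 * n)
    spread = z * math.sqrt((p * (1 - p) + z * z / (4 * n)) / n)
    return (centre - spread) / denom
```

Under this scheme a post with 10/10 upvotes scores well above one with 1/1, but far less than ten times as high, which separates the "signal of goodness" role from the linear "reward ten people pleased" role discussed above.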
>A second downside is that anonymity reduces the selfish incentives to produce good content (we socially reward high-quality, civil discussion, and punish rudeness).
It also decreases some selfish incentives to avoid producing good content (say, because you think there's a chance you might be wrong and face humiliation).
I don't think these selfish incentives are at all comparable to the selfish incentives to produce good content. Looking at the sites that do enforce anonymity (the chans, for example), I am very sceptical of its success.
In online discussions, the number of upvotes or likes a contribution receives is often highly correlated with the social status of the author within that community. This makes the community less epistemically diverse, and can contribute to feelings of groupthink or hero worship.
Yet both the author of a contribution and its degree of support contain Bayesian evidence about its value, arguably an amount that should overwhelm your own inside view.
We want each individual to invest the socially optimal amount of resources into critically evaluating other people’s writing (which is higher than the amount that would be optimal for individual epistemic rationality). Yet we each also want to give sufficient weight to authority in forming our all-things-considered views.
As Greg Lewis writes:
Full blinding to usernames and upvote counts is great for critical thinking. If all you see is the object level, you can’t be biased by anything else. The downside is you lose a lot of relevant information. A second downside is that anonymity reduces the selfish incentives to produce good content (we socially reward high-quality, civil discussion, and punish rudeness).
I have a suggestion for capturing (some of) the best of both worlds:
To enable this, there are now two user scripts which hide usernames and upvote counts on (1) the EA forum and (2) LessWrong 2.0. You’ll need to install the Stylish browser extension to use them.
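For readers curious what such a user style involves: Stylish layers custom CSS on top of a site's own stylesheets, so hiding names and karma amounts to a few `display: none` rules targeting the right elements. The selectors below are made-up placeholders for illustration, not the ones the linked scripts actually use:

```css
/* Illustrative sketch only: these class names are hypothetical.
   The real LW2 / EA forum scripts target the sites' actual selectors. */
.comment-author-name,
.post-author-name,
.karma-count {
  display: none !important;
}
```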
Cross-posted here (clicking the link will unblind you!).