Controversy - Healthy or Harmful?
Follow-up to: What have you recently tried, and failed at?
Related-to: Challenging the Difficult Sequence
ialdabaoth's post about blockdownvoting, and the threads it spawned, prompted me to keep an eye on controversial topics and community norms on LessWrong. I noticed some things.
I was also motivated because my own posts are sometimes controversial, and I usually know beforehand which might be (this one, possibly). Why do I post them nonetheless? Do I want to wreak havoc? Do I want to foster productive discussion of unresolved but polarized questions? Or do I want to call into question some point on which the community may have a blind spot, or which it has taken for granted too early?
Meta: social influence bias and the karma system
Given LW’s keen interest in bias, it would seem pertinent to be aware of the biases engendered by the karma system. Note: I used to be strictly opposed to comment scoring mechanisms, but witnessing the general effectiveness with which LWers use karma has largely redeemed the system for me.
In “Social Influence Bias: A Randomized Experiment” by Muchnik et al., random comments on a “social news aggregation Web site” were up-voted after being posted. The likelihood of such rigged comments receiving additional up-votes was quantified in comparison to a control group. The results show that users were significantly biased towards the randomly up-voted posts:
The up-vote treatment significantly increased the probability of up-voting by the first viewer by 32% over the control group ... Uptreated comments were not down-voted significantly more or less frequently than the control group, so users did not tend to correct the upward manipulation. In the absence of a correction, positive herding accumulated over time.
At the end of their five-month testing period, the comments that had artificially received an up-vote had an average rating 25% higher than the control group. Interestingly, the severity of the bias was largely dependent on the topic of discussion:
We found significant positive herding effects for comment ratings in “politics,” “culture and society,” and “business,” but no detectable herding behavior for comments in “economics,” “IT,” “fun,” and “general news”.
The herding behavior outlined in the paper seems rather intuitive to me. If before I read a post, I see a little green ‘1’ next to it, I’m probably going to read the post in a better light than if I hadn't seen that little green ‘1’ next to it. Similarly, if I see a post that has a negative score, I’ll probably see flaws in it much more readily. One might say that this is the point of the rating system, as it allows the group as a whole to evaluate the content. However, I’m still unsettled by just how easily popular opinion was swayed in the experiment.
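The herding dynamic can be sketched with a toy simulation. This is not the paper's model; the viewer count, base up-vote rate, and the 32% herding boost applied to positively scored comments are illustrative parameters I chose:

```python
import random

def simulate(initial_score, n_viewers=100, base_up=0.05, herd=0.32, seed=0):
    """Toy herding model: each viewer up-votes with probability base_up,
    boosted by `herd` (relative) whenever the current score is positive.
    Treated comments start with the rigged +1; controls start at 0.
    All parameters are illustrative, not taken from Muchnik et al."""
    rng = random.Random(seed)
    score = initial_score
    for _ in range(n_viewers):
        p = base_up * (1 + herd) if score > 0 else base_up
        if rng.random() < p:
            score += 1
    return score

# Average final score over many simulated comments:
trials = 2000
treated = sum(simulate(1, seed=i) for i in range(trials)) / trials
control = sum(simulate(0, seed=i) for i in range(trials)) / trials
print(round(treated, 2), round(control, 2))
```

Even in this crude sketch, the treated comments end with a noticeably higher average score than the +1 head start alone would explain, because the early boost makes the herding branch fire more often.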
This certainly doesn't necessitate that we reprogram the site and eschew the karma system. Rather, understanding the biases inherent in such a system will allow us to use it much more effectively. Discussion on how this bias affects LW in particular would be welcomed. Here are some questions to begin with:
- Should we worry about this bias at all? Are its effects negligible in the scheme of things?
- How does the culture of LW contribute to this herding behavior? Is it positive or negative?
- If there are damages, how can we mitigate them?
Notes:
In the paper, they mentioned that comments were not sorted by popularity, therefore “mitigating the selection bias.” This of course implies that the bias would be more severe on forums where comments are sorted by popularity, such as this one.
For those interested, another enlightening paper is “Overcoming the J-shaped distribution of product reviews” by Nan Hu et al, which discusses rating biases on websites such as Amazon. User gwern has also recommended a longer 2007 paper by the same authors, upon which the one above is based: "Why do Online Product Reviews have a J-shaped Distribution? Overcoming Biases in Online Word-of-Mouth Communication"
Karma as Money
How do you gather a theory of counterfactuals, karma, and economics into a revised algorithm for thinking about Lesswrong?
Thinking of Karma as money.
There are a lot of things that one may consider worth saying on Lesswrong. Things that go against the agenda, things that may make people uncomfortable, things that are different from what the high-ranking officials would prefer to read here. But we don't do it, because we don't want to "lose" precious Karma points. Each Karma point loss is felt as an insecurity, as a tiny arrow penetrating the chest. But should it be that way?
Here is the alternative: Think of Karma as money. You work hard to earn a few karma points by writing interesting stuff on superintelligence and whatnot, and society rewards you by paying some karma points. Then you go and write something you think people need to hear, but will downvote for sure, at least initially. Some people by now will be very rich, which affords them the opportunity to say a lot of things that they are not sure will get themselves upvoted, but are sure should be posted.
Citizen: Wait, you said counterfactuals...
Yes: just as your State doesn't really care for you going out in your hovercraft on the river or using equipment to climb a mountain, so the people here may not care to put attention into that idea which you think they should hear. Thus, they downvote it. They make you pay for their attention. If you frame it as "they are draining my soul, and life is worthless if karma is negative", then you are much less likely to end up posting something controversial that may be counterfactually relevant.
Just as efficient charity works because the vast majority of people are not paying to effectively make others happier, using karma as money works because the vast majority of people are afraid their soul is being sucked away every time a downvote comes. But it isn't; this is just the price people charge for their attention, if you think the way I'm tentatively suggesting. It is just a test worth trying, not necessarily something that I fully endorse. I like the idea, and have been using it since forever. Every post linked here, or an earlier subpart of it, has been negative at some point, and from before posting, I knew it would be a "costly one". Try it: if you are rich, you may not have much to lose, and more controversial but useful stuff will show up with time.
Let's see how much this costs.
[minor] Separate Upvotes and Downvotes Implemented
It seems that if you look at the column on the right of the page, you can see upvotes and downvotes separately for recent posts. The same [n, m] format is displayed for recent comments, but it doesn't seem to actually sync with the score displayed on the comment. This feature only seems available on the sidebar: looking at the actual comment or post doesn't give you this information.
Thanks, whoever did this!
Meta: What do you think of a karma vote checklist?
I'm imagining an optional checklist offered if you downvote, with the list possibly including troll, poor spelling/grammar, false, redundant....
For that matter, if an optional checklist for downvotes makes sense, then perhaps there should also be one for upvotes: sensible, informative, funny, caused an update....
I'm imagining a little chart appearing if your cursor is on a karma number, the way the proportions of stars do for Amazon reviews.
I don't know how much trouble this would be to program-- I'm just floating the idea.
Proposal: Show up and down votes separately
One of the most interesting things about this site is the karma scoring, and that it reflects (to a greater degree than you see elsewhere) an objective assessment of the merits of an argument.
[Edit^6: the proposal in this post is related to the Kibitzer system, but this post discusses adding information, while that system concentrates on taking information away. Special thanks for matt's comment and to Vincentyu for being the first to point to prior discussion. A related issue is discussed here (2009) with reference to Wikipedia, and on which Eliezer said "I may end up linking this from the About page when it comes time to explain suggested voting policies". Data: It took me ~2 days of effort to get linked to this information (09 June 2012 11:29PM -> 11 June 2012 10:28:26PM).]
Suppose a controversial post/comment has six up votes and three down votes. Right now we only see the net result: 3 points, but when the voting is mixed we're losing important information. If it's reasonably easy to implement, could we please show up and down tallies separately? E.g show "3 points (+6,-3)", at least when the voting is mixed? I think the negative votes are the single most important thing. In particular, I want to know about negative votes I receive and where I receive them, because those are the posts where I need to think carefully.
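The proposed display rule is simple enough to sketch directly. A minimal rendering helper (the function name and the `only_when_mixed` switch are my own, hypothetical choices; the output format follows the "3 points (+6,-3)" example above):

```python
def format_score(up: int, down: int, only_when_mixed: bool = True) -> str:
    """Render a karma score, appending the up/down breakdown when the
    voting is mixed (or always, if only_when_mixed is False)."""
    net = up - down
    if (up and down) or not only_when_mixed:
        return f"{net} points (+{up},-{down})"
    return f"{net} points"

print(format_score(6, 3))   # mixed voting -> "3 points (+6,-3)"
print(format_score(7, 0))   # unanimous -> "7 points"
```

The `only_when_mixed` flag reflects the "at least when the voting is mixed" compromise: unanimous scores stay compact, while controversial ones expose the disagreement.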
Example: here's a welcome post by syzygy, which relates to Eliezer's post Politics is the Mind-Killer. I know that it's controversial, because I can sort by controversial and it shows up high on the welcome post thread (neat feature!), but I can't tell how many down votes it has. Does syzygy commit a fallacy? (I don't mean to pick on you, sorry about that; I liked your post.)
Of course this change wouldn't fix everything. If a post has "-1 points (+0,-1)", that doesn't mean only one person read it and disapproved; maybe hundreds read it and thought it was bad, but saw that it already had -1 net and considered that sufficiently punitive. This is pretty good; we don't want to spend all our time fiddling with scores.
If we wanted to get fancy and use Bayesian-inspired scoring, we could let everyone who wishes assign a score (say from -5 to 5) and report posterior summaries of the scores. Or, more importantly if we value objective scoring, we could identify posts that are controversial and have the system randomly select users with respectable karma, assigning them to give their score on the post. Such a score would be valid in a way that the current "convenience" scores are not. Additionally, posts could be scored on multiple axes: soundness of argument, potential impact, innovation, whether we agree with the normative basis of a judgement, etc....
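A posterior summary of -5..5 scores could be as simple as a shrinkage estimate: treat a skeptical prior as a handful of pseudo-votes at 0, so a post with two enthusiastic ratings doesn't outrank one with fifty. This is one possible reading of "posterior summaries", not the author's specification; the prior weight is an arbitrary illustrative choice:

```python
import statistics

def posterior_summary(scores, prior_mean=0.0, prior_weight=5):
    """Shrink the mean of -5..5 scores toward prior_mean, treating the
    prior as prior_weight pseudo-votes (a conjugate-mean-style estimate,
    illustrative only). Also report the raw spread as a controversy signal."""
    n = len(scores)
    shrunk = (prior_weight * prior_mean + sum(scores)) / (prior_weight + n)
    spread = statistics.pstdev(scores) if n > 1 else None
    return {"n": n, "shrunk_mean": round(shrunk, 2), "spread": spread}

# A controversial post: high raw scores mixed with strong downvotes.
summary = posterior_summary([5, 4, -3, 5, -5, 4])
print(summary)
```

A large `spread` with a near-zero `shrunk_mean` is exactly the mixed-judgement case the net score hides, which is the argument for showing more than one number.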
But I'm not arguing for a complicated change, just a simple less wrong one.
Other than feasibility concerns, or maybe aesthetics, the strongest argument I can see against this proposal is that we might embarrass or shame users. Can anyone give an example where that might be a concern? I figure that since we already show negative scores, users have gotten over most of that inhibition, but I'm new here.
Another possible criticism is that it's a non-issue: almost all posts are all plus or all minus, so it's not worth the effort. I disagree with this one because I think the posts where we have mixed judgements are the most important ones to get right.
EDIT: Wouldn't it be nice to know how many down votes this post has?
Correcting errors and karma
An easy way to win cheap karma on LW:
- Publicly make a mistake.
- Wait for people to call you on it.
- Publicly retract your errors and promise to improve.
Meta: Karma and lesswrong mainstream positions
My impression is that critiques of lesswrong mainstream positions, and arguments for contrary positions, are received well and achieve high karma scores when they are of very high quality. Similarly, posts and comments that take lesswrong mainstream positions will still be voted down if they are of very low quality. But in between there seems to be a gulf: moderately low-quality mainstream comments will stay at 0 to -1 karma, while contra-mainstream comments of (apparently) similar quality score solidly negative; moderately high-quality mainstream comments achieve good positive karma, while contra-mainstream comments of similar quality stay at 0 to 2.
Do you share my impression? And if this is the case, should we try to do something about it?
The Trickle-Down Effect of Good Communities
We don't know who came up with the ancient Indian idea of karma or why they did so, but one of its social functions is to motivate people to behave better. If people really believe that they will suffer for their evil actions and prosper for their good actions due to a law of nature, this probably motivates them to do more good and less evil.
Less Wrong, of course, has a karma system. You gain karma points if you write something that people value, and you lose karma points if you write something that people think is inappropriate. At low levels, gaining karma points gives you new posting privileges. At high levels, karma points indicate something like your status in the community.
Recently I noticed that I post better comments on Less Wrong than I usually do on my own site. I think this is partly due to Less Wrong's karma system. When I draft a comment or a post for Less Wrong, I'm more likely to (1) talk to others charitably and with respect and (2) go out of my way to provide useful links and context than when I draft a comment for my own site!
And now I find myself motivated to bring a stronger emphasis on those qualities to the writing on my own site. So the Less Wrong karma system is having a trickle-down effect into other areas of my life.
Which got me thinking... it might be helpful to have a karma system in "real life," beyond the pages of Less Wrong (or reddit). Maybe something like Facebook karma. People could anonymously add and subtract points on people's Facebook profiles according to whether or not that person acted like a douche in daily life. This could be done by a smartphone app, and plugged into Facebook via an opt-in Facebook app that users could voluntarily choose to add to their profiles.