I'd much rather have less focus on meaningless internet points around here (and in most places). Focus on collecting good comments that help you update your beliefs and models, in order to be less wrong.
Note that there _is_ a weighting that happens - higher-karma people give/take more than 1 karma with their votes. It's not specific to your evaluation of them, nor all that visible, but it's there.
I think there are two aspects to this, though. One is the comments that any person finds useful for making better decisions later; the other is the evaluation of your own comments -- sometimes we get points without any comment attached.
It was this last aspect I was interested in. If I get no feedback other than the vote, I have little useful information. It's also reasonable, then, for me to simply ignore it, but in that case why bother having it here? (I should check the settings to see if I can just turn the display off, perhaps.)
I suppose I can also check to see if the vote...
I recently ran across this post, which recommends using a Beta-Binomial distribution to more correctly represent the uncertainty that any given post adds to your overall karma. I thought it was a cool idea and would love to see what my karma looks like when represented that way, rather than just adding everything together:
https://moultano.wordpress.com/2013/08/21/how-karma-should-work-betabinomial/
I'd be interested in a mathy person translating that into less mathy language that roughly communicates why it's interesting.
It gives a reasonably rigorous way of predicting how many upvotes and downvotes a post will get, given the history of the user who wrote it. Specifically, it defines a probabilistic model: for each user, we can specify a Beta distribution with various unknown parameters, and then learn those parameters from the user's post history. The details of that learning are rather charming if you're a statistician, or aspire to be one, but don't translate very well.
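Roughly, the conjugate version of that idea looks like this (a minimal sketch, not Moulton's actual code; it pools all of a user's votes into a single Beta posterior, and the prior parameters a and b are placeholders rather than values fitted to site data):

```python
# Minimal Beta-Binomial sketch, assuming we already have per-post
# (upvotes, downvotes) counts. The prior (a, b) is a placeholder;
# Moulton's post estimates it from the site's data.
from scipy import stats

def user_posterior(post_history, a=1.0, b=1.0):
    """Posterior Beta over a user's upvote probability p.

    post_history: list of (upvotes, downvotes) pairs, one per post.
    """
    ups = sum(u for u, _ in post_history)
    downs = sum(d for _, d in post_history)
    # Beta prior + Binomial likelihood -> Beta posterior (conjugacy).
    return stats.beta(a + ups, b + downs)

# Hypothetical post history.
posterior = user_posterior([(12, 3), (5, 1), (20, 8)])
print(posterior.mean())          # point estimate of p
print(posterior.interval(0.95))  # credible interval -- the "uncertainty" part
```

The payoff is the interval: a user with only a short history gets a wide Beta, so their karma estimate comes out explicitly uncertain instead of just small.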
mr-hire would like to know what his particular Beta distribution looks like. To find out, we have to adapt Moulton's method to the LW karma system. This turns out to be a little difficult, and requires some additional modeling choices:
Moulton models votes on individual posts with a Binomial distribution, which is used for sequences of binary outcomes. In this case each voter either upvotes the post (with probability p) or downvotes it (with probability 1-p) -- we ignore non-voters since it's hard to know how many of them there are. But a LessWrong voter has four choices: they can vote Up or Down, and they can vote Normal or Strong, so the Binomial distribution is no longer appropriate.
This is fixable with a different choice of distributions, but then you run into another problem. In LW, even normal votes vary in value: an upvote from a high-karma user is worth more than one from a low-karma user. Do we wish to model this effect, and if so how?
If you were willing to treat all user votes equally I think you could get away with using the Dirichlet-multinomial. If not, I think you have to give up on modeling individual votes and try to model karma directly, without breaking it down into its component upvotes and downvotes.
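If you did go the equal-votes route, the Dirichlet-multinomial version might look roughly like this (a sketch under that assumption; the four vote categories are real, but the karma value per category and the prior counts here are placeholders, since strong-vote weights actually vary by user):

```python
import numpy as np

# Vote categories: strong-down, normal-down, normal-up, strong-up.
# The karma value of each category is a placeholder -- on LW the strong-vote
# weight depends on the voter's karma, which this model deliberately ignores.
VOTE_VALUES = np.array([-5.0, -1.0, 1.0, 5.0])

def dirichlet_posterior(vote_counts, prior=(1.0, 1.0, 1.0, 1.0)):
    """Dirichlet posterior over the four vote-type probabilities.

    vote_counts: [strong_down, down, up, strong_up], summed over history.
    """
    return np.asarray(prior) + np.asarray(vote_counts, dtype=float)

def expected_karma_per_vote(alpha):
    """Posterior-mean probability of each vote type, weighted by its value."""
    probs = alpha / alpha.sum()
    return probs @ VOTE_VALUES

# Hypothetical tallies over a user's post history.
alpha = dirichlet_posterior([2, 10, 60, 8])
print(expected_karma_per_vote(alpha))
```

The structure is the same as the Beta-Binomial case (conjugate prior plus observed counts), just with four possible outcomes per voter instead of two.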
For me, just understanding how votes are weighted by the existing karma of the voter would be sufficient to say my idea was "implemented". Even apart from that, it was helpful to learn about that feature of the votes.
I was thinking the other day that it would be interesting to know something about the karma I was getting -- particularly its quality, to the extent that is possible to assess.
I suspect everyone would prefer to have people they respect upvoting their posts and comments, but that type of transparency might not be desirable. I'm not sure whether some type of weighting system would work -- e.g., how many karma points does the account giving an upvote have, so you can get some average gauge of the quality of the feedback?
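To make that concrete, the per-user summary I'm imagining is something like an average of the karma of the accounts that voted on each post. This is a toy sketch only, since LW doesn't actually expose who voted on what, so the data here is entirely hypothetical:

```python
# Toy sketch of an "average voter karma" gauge. The vote data is made up --
# LW doesn't expose voter identities -- so this only shows the computation.
def average_voter_karma(votes):
    """votes: list of (voter_karma, direction) pairs for one post."""
    if not votes:
        return None
    return sum(karma for karma, _ in votes) / len(votes)

votes_on_my_post = [(2500, +1), (120, +1), (15, -1)]
print(average_voter_karma(votes_on_my_post))  # ~878.3
```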
I would not think that should be on the main page, but rather something each person can look at under their own account.
Wonder what others think or if this has been suggested before?