Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "
Would a Bayesian notion of "upvotes / downvotes" work better than simple upvoting / downvoting? Suppose that instead of a simple sum of ups and downs, there is some unknown latent "goodness" variable theta, which is the parameter of a Binomial distribution. Roughly, theta is the probability that a random reader of your post would upvote it. The sum of upvotes, or upvotes - downvotes, is not a very useful piece of information (since a highly upvoted / downvoted post could be highly controversial, but simply have a huge number of voters). Instead of that, if you calculate the posterior distribution over theta (let's say theta is modeled by a Beta distribution), then you have information about what theta is likely to be along with the degree of confidence in that estimate. Would calculating that every time someone votes be a huge strain on the backend?
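As a rough sketch of the idea (names and priors here are my own choices, not anything a site actually uses): because the Beta distribution is conjugate to the Binomial, the posterior update is just incrementing two counters, so the per-vote cost is constant rather than a strain on the backend.

```python
from math import sqrt

def beta_posterior(upvotes, downvotes, prior_a=1.0, prior_b=1.0):
    """With a Beta(prior_a, prior_b) prior on theta, the posterior after
    observing the votes is Beta(prior_a + upvotes, prior_b + downvotes).
    Returns the posterior mean and standard deviation of theta."""
    a = prior_a + upvotes
    b = prior_b + downvotes
    mean = a / (a + b)
    std = sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
    return mean, std

# Same 9:1 up/down ratio, very different confidence:
mean_big, std_big = beta_posterior(90, 10)    # many voters -> narrow posterior
mean_small, std_small = beta_posterior(9, 1)  # few voters -> wide posterior
```

With a uniform Beta(1, 1) prior, both posts have a posterior mean near 0.9, but the 100-voter post has a much smaller standard deviation, which is exactly the extra information a raw vote sum throws away.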
ISTR reading an article about how reddit's "best" sorting worked, and I would have described it roughly like that.
Aha, see http://www.evanmiller.org/how-not-to-sort-by-average-rating.html via https://redditblog.com/2009/10/15/reddits-new-comment-sorting-system/. I, uh, don't actually understand it. It's possible I read the text and made up a thing that seemed like it would do something like what the text sounded like it did.
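For what it's worth, the method in Evan Miller's article (and reddit's "best" sort) is the lower bound of the Wilson score confidence interval for theta: rank each comment by a pessimistic estimate of its upvote probability, so low-sample comments get discounted. A minimal sketch of that formula:

```python
from math import sqrt

def wilson_lower_bound(ups, downs, z=1.96):
    """Lower bound of the Wilson score interval for the true upvote
    fraction, at confidence given by z (1.96 ~ 95%)."""
    n = ups + downs
    if n == 0:
        return 0.0
    phat = ups / n  # observed upvote fraction
    return (phat + z * z / (2 * n)
            - z * sqrt((phat * (1 - phat) + z * z / (4 * n)) / n)) / (1 + z * z / n)

# Two comments with the same 90% upvote ratio, different sample sizes:
score_big = wilson_lower_bound(90, 10)  # ~0.83
score_small = wilson_lower_bound(9, 1)  # ~0.60
```

Note this is a frequentist interval rather than a Beta posterior, but it behaves similarly to the Bayesian scheme proposed above: the small-sample comment ranks lower until more votes come in.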