In grade school, I read a series of books titled Sideways Stories from Wayside School by Louis Sachar, whom you may know as the author of the novel Holes, which was made into a movie in 2003. The series included two books of math problems, Sideways Arithmetic from Wayside School and More Sideways Arithmetic from Wayside School, the latter of which included the following problem (paraphrased):
The students in Mrs. Jewls's class have been given the privilege of voting on the height of the school's new flagpole. She has each of them write down what they think would be the best height for the flagpole. The votes are distributed as follows:
- 1 student votes for 6 feet.
- 1 student votes for 10 feet.
- 7 students vote for 25 feet.
- 1 student votes for 30 feet.
- 2 students vote for 50 feet.
- 2 students vote for 60 feet.
- 1 student votes for 65 feet.
- 3 students vote for 75 feet.
- 1 student votes for 80 feet, 6 inches.
- 4 students vote for 85 feet.
- 1 student votes for 91 feet.
- 5 students vote for 100 feet.
At first, Mrs. Jewls declares 25 feet the winning answer, but one of the students who voted for 100 feet convinces her there should be a runoff between 25 feet and 100 feet. In the runoff, each student votes for the height closest to their original answer. But after that round of voting, one of the students who voted for 85 feet wants their turn, so 85 feet goes up against the winner of the previous round of voting, and the students vote the same way, with each student voting for the height closest to their original answer. Then the same thing happens again with the 50 foot option. And so on, with each number, again and again, "very much like a game of tether ball."
Question: if this process continues until it settles on an answer that can't be beaten by any other answer, how tall will the new flagpole be?
Answer (rot13'd): fvkgl-svir srrg, orpnhfr gung'f gur zrqvna inyhr bs gur bevtvany frg bs ibgrf. Naq abj lbh xabj gur fgbel bs zl svefg rapbhagre jvgu gur zrqvna ibgre gurberz.
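If you want to check the rot13'd answer, the tetherball process is easy to simulate. Below is a minimal Python sketch; the variable and function names are mine, and running it will, of course, spoil the puzzle:

```python
import statistics

# Vote distribution from the problem: height (feet) -> number of students.
# 80.5 stands in for "80 feet, 6 inches".
votes = {6: 1, 10: 1, 25: 7, 30: 1, 50: 2, 60: 2, 65: 1,
         75: 3, 80.5: 1, 85: 4, 91: 1, 100: 5}

def runoff(a, b):
    """Pairwise contest: each student backs whichever option is closer
    to their original answer (equidistant students abstain)."""
    a_total = sum(n for h, n in votes.items() if abs(h - a) < abs(h - b))
    b_total = sum(n for h, n in votes.items() if abs(h - b) < abs(h - a))
    return a if a_total >= b_total else b

def tetherball(order):
    """Challenge the current winner with each option, over and over,
    until no challenger can beat it."""
    winner = order[0]
    changed = True
    while changed:
        changed = False
        for challenger in order:
            new = runoff(winner, challenger)
            if new != winner:
                winner, changed = new, True
    return winner

# Expand the tallies into one ballot per student (29 ballots in all).
ballots = sorted(h for h, n in votes.items() for _ in range(n))

# The stable winner matches the median ballot, whatever order
# the challenges come in.
assert tetherball(list(votes)) == statistics.median(ballots)
```

Because preferences over flagpole heights are single-peaked, the option that survives every pairwise runoff is the Condorcet winner, and with single-peaked preferences that is the median ballot, regardless of the order in which the challenges arrive.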
Why am I telling you this? There's a minor reason and a major reason. The minor reason is that this shows it is possible to explain little-known academic concepts, at least certain ones, in a way that grade schoolers will understand. It's a data point that fits nicely with what Eliezer has written about how to explain things. The major reason, though, is that a month ago I finished my systematic read-through of the sequences, and while I generally agree that they're awesome (perhaps more so than most people; I didn't see the problem with the metaethics sequence), I thought the mini-discussion of political parties and voting was, on reflection, weak and indicative of a broader nerd failure mode.
TLDR (courtesy of lavalamp):
- Politicians probably conform to the median voter's views.
- Most voters are not the median, so most people usually dislike the winning politicians.
- But people dislike the politicians for different reasons.
- Nerds should avoid giving advice that boils down to "behave optimally". Instead, analyze the reasons for the current failure to behave optimally and give more targeted advice.
Summary: People often say that voting is irrational, because the probability of affecting the outcome is so small. But the outcome itself is extremely large when you consider its impact on other people. I estimate that for most people, voting is worth a charitable donation of somewhere between $100 and $1.5 million. For me, the value came out to around $56,000.
Moreover, in swing states the value is much higher, so taking a 10% chance at convincing a friend in a swing state to vote similarly to you is probably worth thousands of expected donation dollars, too.
I find this much more compelling than the typical attempts to justify voting purely in terms of signal value or the resulting sense of pride in fulfilling a civic duty. And voting for selfish reasons is still almost completely worthless, in terms of direct effect. If you're on the way to the polls only to vote for the party that will benefit you the most, you're better off using that time to earn $5 mowing someone's lawn. But if you're even a little altruistic... vote away!
Time for a Fermi estimate
Below is an example Fermi calculation for the value of voting in the USA. Of course, the estimates are all rough and fuzzy, so I'll be conservative, and we can adjust upward based on your opinion.
I'll be estimating the value of voting in marginal expected altruistic dollars, the expected number of dollars being spent in a way that is in line with your altruistic preferences.1 If you don't like measuring the altruistic value of the outcome in dollars, please consider making up your own measure, and keep reading. Perhaps use the number of smiles per year, or number of lives saved. Your measure doesn't have to be total or average utilitarian, either; as long as it's roughly commensurate with the size of the country, it will lead you to a similar conclusion in terms of orders of magnitude.
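To make the shape of the calculation concrete, here is a minimal sketch of such a Fermi estimate. Every input number below is a placeholder assumption of mine, not a figure from this post; plug in your own:

```python
# Rough Fermi sketch of the expected altruistic value of one vote.
# All three inputs are illustrative assumptions, not the post's figures.
p_decisive = 1 / 10_000_000   # assumed chance your vote decides the election
budget_influenced = 4e12      # assumed federal spending over one term, in dollars
frac_better_spent = 0.001     # assumed fraction spent better under your candidate

expected_value = p_decisive * budget_influenced * frac_better_spent
print(f"${expected_value:,.0f} in marginal expected altruistic dollars")
```

With these placeholder inputs the estimate comes out to roughly $400; the post's own inputs land between $100 and $1.5 million, and swing-state probabilities of decisiveness can raise the figure by orders of magnitude.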
Follow-up to: Politics as Charity
Can we think well about courses of action with low probabilities of high payoffs?
Such changes could have enormous effects, but the cost-effectiveness of supporting them is very difficult to quantify, as one needs to determine both the value of the effects and the degree to which your donation increases the probability of the change occurring. Each of these is very difficult to estimate, and since the first is potentially very large and the second very small, it is very challenging to work out which scale will dominate.
This sequence attempts to actually work out a first approximation of an answer to this question, piece by piece. Last time, I discussed the evidence, especially from randomized experiments, that money spent on campaigning can elicit marginal votes quite cheaply. Today, I'll present the state-of-the-art in estimating the chance that those votes will directly swing an election outcome.
Politics is a mind-killer: tribal feelings readily degrade the analytical skill and impartiality of otherwise very sophisticated thinkers, and so discussion of politics (even in a descriptive empirical way, or in meta-level fashion) signals an increased probability of poor analysis. I am not a political partisan and am raising the subject primarily for its illustrative value in thinking about small probabilities of large payoffs.
In the United States and other countries, we elect our leaders. Each individual voter chooses some criteria by which to decide who they vote for, and the aggregate result of all those criteria determines who gets to lead. The public narrative overwhelmingly supports one strategy for deciding between politicians: look up their positions on important and contentious issues, and vote for the one you agree with. Unfortunately, this strategy is wrong, and the result is inferior leadership, polarization into camps and never-ending arguments. Instead, voters should be encouraged to vote based on the qualifications that matter: their intelligence, their rationality, their integrity, and their ability to judge character.
If an issue really is contentious, then a voter without specific inside knowledge should not expect their opinion to be more accurate than chance. If everyone votes based on a few contentious issues, then politicians have a powerful incentive to lie about their stance on those issues. But the real problem is, most of the important things that a politician does have nothing to do with the controversies at all. Whether a budget is good or bad depends on how well its author can distinguish between efficient and inefficient spending, over many small projects and expenditures that will never be reviewed by the voters, and not on the total amount taxed or spent. Whether a regulation is good or bad depends on how well its author can predict the effects and engineer the small details for optimal effect, and not on whether it is more or less strict overall. Whether foreign policies succeed or fail depends on how well the diplomats negotiate, and not on any strategy that could be determined years earlier before the election.
Jane is a connoisseur of imported cheeses and Homo Economicus in good standing, using a causal decision theory that two-boxes on Newcomb's problem. Unfortunately for her, the politically well-organized dairy farmers in her country have managed to get an initiative for increased dairy tariffs on the ballot, which will cost her $20,000. Should she take an hour to vote against the initiative on election day?
She estimates that she has a 1 in 1,000,000 chance of casting the deciding vote, for an expected value of $0.02 from improved policy. However, while Jane may be willing to give her two cents on the subject, the opportunity cost of her time far exceeds the policy benefit, and so it seems she has no reason to vote.
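The arithmetic behind Jane's two cents, spelled out using the example's figures:

```python
# Jane's expected policy benefit from voting, using the example's figures.
tariff_cost = 20_000          # dollars the dairy tariff would cost Jane
p_decisive = 1 / 1_000_000    # her chance of casting the deciding vote

expected_benefit = tariff_cost * p_decisive
print(f"${expected_benefit:.2f}")  # prints "$0.02"
```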
Jane's dilemma is just the standard Paradox of Voting in political science and public choice theory. Voters may still engage in expressive voting to affiliate with certain groups or to signal traits insofar as politics is not about policy, but the instrumental rationality of voting to bring about selfishly preferred policy outcomes starts to look dubious. Thus many of those who say that we rationally ought to vote in hopes of affecting policy focus on altruistic preferences: faced with a tiny probability of casting a decisive vote, but large impacts on enormous numbers of people in the event that we are decisive, we should shut up and multiply, voting if the expected value of benefit to others sufficiently exceeds the cost to ourselves.
Meanwhile, at the Experimental Philosophy blog, Eric Schwitzgebel reports that philosophers overwhelmingly rate voting as very morally good (on a scale of 1 to 9), with voting placing right around donating 10% of one's income to charity. He offers the following explanation:
There has been a considerable amount of discussion scattered around Less Wrong about voting: what software features related to voting should be added or subtracted, what purpose voting should serve, etc. It seems as though it would be useful to have conveniently consolidated information on how people are actually voting, so we know which of the habits we want to encourage or discourage are actually in use, and how prevalent they are.
1. About what percentage of comments do you vote on at all? What percentage of top-level posts?
3. What karma threshold do you use to filter what you see, if any?
4. When you vote on a post, or read it and decide not to vote on it, what features of the post are you occurrently conscious of that influence your decision either way? (Submitter, current post score, length, style, topic, spelling, whatever.) What about comments?
Related to: Well-Kept Gardens Die By Pacifism.
I wrote a script for the Greasemonkey extension for Firefox, implementing less painful downvoting. It inserts a "Vote boo" button in addition to "Vote up" and "Vote down" for each comment. Pressing this button has a 30% chance of downvoting the comment, which on average is equivalent to subtracting 0.3 points of rating. If pressing the button once has no visible effect, don't press it twice: the action has already been performed, resulting in one of the two possible outcomes.
The idea is to lower the level of punishment from downvoting, thus making it easier to downvote merely mediocre comments, not just remarkably bad ones. Systematically downvoting mediocre comments should make their expected rating negative, creating an incentive to focus on making high-quality comments and punishing systematic mediocrity. At the same time, the low penalty for average comments (implemented through stochastic downvoting) still allows people to make them freely, which is essential for supporting a discussion. Contributors may see the positive rating of good comments as currency with which they can buy a limited number of discussion-supporting average comments.
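The mechanic can be sketched as follows. This is a hypothetical Python model of the behavior described above, not the actual Greasemonkey user script:

```python
import random

# Sketch of the "Vote boo" mechanic: each press downvotes with 30%
# probability, so the expected penalty per press is 0.3 points.
P_DOWNVOTE = 0.3

def vote_boo(score, rng=random):
    """Return the comment's new score after one 'Vote boo' press."""
    if rng.random() < P_DOWNVOTE:
        return score - 1
    return score

# Sanity check: over many presses the average penalty approaches 0.3.
rng = random.Random(0)
presses = 100_000
total_penalty = sum(1 - vote_boo(1, rng) for _ in range(presses))
print(total_penalty / presses)  # ~0.3
```

The point of the randomization is that a single press costs the comment 0.3 points in expectation, while the displayed score only ever moves in whole-point steps.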
The "Vote boo" option is not to be taken lightly: one should understand a comment before declaring it mediocre. If you are not sure, don't vote. If a comment is visibly a simple passing remark, or of mediocre quality, press the button.
There has been a great deal of discussion here about the proper methods of voting on comments and on how karma should be assigned. I believe it's finally reached the point where a post is warranted that covers some of the issues involved. (This may be just because I find myself frequently in disagreement with others about it.)
The Automatic Upvote
First, there is the question of whether one should be able to upvote one's own comment. This actually breaks apart into two related concerns:
(1) One is able to upvote one's own comments, and
(2) One gains a point of karma just for posting a comment.
These need not be tied. We could have (2) without (1) by awarding a point of karma for commenting, without changing the comment's score. We could also have (1) without (2) by simply not counting self-upvotes for karma.
I am in favor of (2). The main argument against (2) is that it rewards quantity over quality. The main argument for (2) is that it offers an automatic incentive to post comments; that is, it rewards commenting over silence. As we're community-building, I think the latter incentive is more important than the former. But I'm not sure this is worth arguing further - it serves as a distraction from the benefits of (1).
I am also in favor of (1). As a default, all comments have a base rating of 0. Since one is allowed to vote on one's own comments, and upvoting is the default for one's own comments, this makes comments effectively start at a rating of 1. The argument against this is that it makes more sense for comments to start with a rating of 0, so that someone else liking a comment gives it a positive rating, while someone disliking it gives it a negative rating. I disagree with this assessment.
If I post a comment, it's because it's the best comment I could think of to add to the discussion. I will usually not bother saying something if I don't think it's the sort of thing that I would upvote. When I see someone else's comment that I don't think is very good, I downvote it. Since they already upvoted it, I'm in effect disagreeing that this was something worth saying. The score now reflects this - a score of 0 shows that one person thought it was a worthwhile comment, and one person did not.
Furthermore, if I was not able to vote on my own comments, I would be much more reluctant to upvote. Since I would not be able to upvote my comment, upvoting someone else's comment would suggest that I think their comment is better than my own. But by hypothesis, I thought my comment was nearly the best thing that could be said on the subject; thus, upvotes will be rare.
And so I say that we implement a compromise - (1) and not (2).
What should upvote/downvote mean?
I think it is established pretty well that upvote means "High quality comment" or "I would like to see more comments like this one", while downvote means "Low quality comment" or "I would like to see fewer comments like this one". However, this definition still retains a good bit of ambiguity.
It is too easy to think of upvote and downvote as 'agree' and 'disagree'. Even guarding myself against this behavior, I find the cursor drifting to downvote as soon as I think, "Well, that's obviously wrong." But that's clearly not what the concept is there for. Comments voted up appear higher on the page (in certain views), which allows casual readers to see the best comments and discussions on any particular post. If we use upvote and downvote to mean 'agree' and 'disagree', then this effectively becomes an echo chamber, where the only comments to float to the top are the ones that jibe with the groupthink.
Instead, upvote and downvote should reflect overall quality of a comment. There are several criteria I tend to use to judge a good comment (this list is not all-inclusive):
- Did the comment add any information, or did it just add to the noise? (+)
- Does the comment include references or links to relevant information? (+)
- Does the comment reflect a genuinely held point-of-view that adds to the discussion? (+)
- Is the comment part of a discussion that might lead somewhere interesting? (+)
- Is the comment obvious spam / trolling? (-)
- Is the comment off-topic? (-)
Since we feel the need to voice whether we agree or disagree with comments, but 'I agree' and 'I disagree' comments are noisy, it's been suggested that there should be separate buttons to indicate agreement and disagreement. Thus, someone posting a well-argued on-topic defense of theism can get the upvote and 'disagree', while someone posting an off-topic 'physicalism is true' can get the downvote and 'agree'. Presumably, we'd only count upvotes and downvotes for karma, but we could use 'agree' and 'disagree' for "most controversial" or other views/metrics.
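One way the suggested scheme could be structured, as a hypothetical sketch (the class and method names are mine; nothing like this exists on the site):

```python
from collections import Counter

# Hypothetical two-axis voting: quality votes feed karma, while
# agree/disagree votes feed a separate controversy metric.
class Comment:
    def __init__(self):
        self.votes = Counter()  # keys: 'up', 'down', 'agree', 'disagree'

    def karma(self):
        # Only quality votes count toward karma.
        return self.votes['up'] - self.votes['down']

    def controversy(self):
        # A crude "most controversial" score: the smaller side of the
        # agreement split, so lopsided comments score low.
        return min(self.votes['agree'], self.votes['disagree'])

c = Comment()
for v in ['up', 'up', 'disagree', 'disagree', 'agree', 'down']:
    c.votes[v] += 1
print(c.karma(), c.controversy())  # prints "1 1"
```

Here karma is computed from quality votes alone, while the agree/disagree tallies could drive a "most controversial" sort or similar views.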
Whether votes should require an explanation
It has been suggested that votes, or downvotes specifically, should require an explanation. I disagree with both sentiments. First, requiring explanations for downvotes but not upvotes would bias the voting positively, which would have the effect of rewarding quantity over quality and decrease the impact of downvotes.
But requiring explanations for votes is in general a bad idea. This site is already a burden to keep up with; for those of us who do a lot of voting, writing an explanation for each one would take too much time and effort. Requiring an explanation for every vote would doubtless result in a lot less voting. Also, explaining votes is almost always off-topic, so it adds to the noise here without really contributing to the discussion.
Note Yvain's more personal rationale:
I'm not prepared to write an essay explaining exactly what was wrong with each of them, especially if the original commenter wasn't prepared to take three seconds to write a halfway decent response.
Adding to the burden of those already performing the service of voting unduly penalizes those who are doing good, to the end of appeasing those who are contributing to the noise here.
For reference, some links to relevant posts and sections of comments. I tried to be inclusive, since there have been a lot of discussions about these issues - more relevant ones hopefully near the top. (Please comment if you know of any other relevant discussions)
- whether karma should be the sum of individual post scores, or (perhaps) an average
- the utility of comment karma
- whether one should unselect the self-upvote
- whether Eliezer Yudkowsky gets fewer upvotes than others
- whether karma can be used to gauge rationality
- whether people downvote for disagreeing with groupthink
- whether karma promotes a closed-garden effect
- whether administrators should delete comments entirely
- Lesswrong Antikibitzer: a tool for hiding comment authors and vote counts
ETA: I might concede that this post is possibly off-topic for Less Wrong - but the blog/community site about "Less Wrong" does not exist yet, so this seems like the best place to post it.
ETA2: Public records of upvotes/downvotes might solve some of these problems; discuss.
Not all that surprisingly, there's quite a lot of discussion on LW about questions like
- just what should get voted up or down?
- what conclusions can one reasonably draw from getting downvoted?
- should downvotes (or even upvotes) be accompanied by explanations?
- should the way karma and voting work be changed?
This generally happens in dribs and drabs, typically in response to more specific questions of the form
- Waaaa, how come my supremely insightful comment above is currently sitting at -69?
and therefore tends to clutter up discussions that are meant to be about something else. So maybe it's worth seeing if we can arrive at some sort of consensus about the general issues, at which point maybe we can write that up and refer newcomers to it.
(The outcome may be that we find that there's no consensus to be had. That would be useful information too.)
I'll kick things off with a few unfocused thoughts.
Related to Information Cascades
The post Information Cascades implied that people's votes are being biased by the number of votes already cast. Similarly, some commenters express a perception that higher-status posters are being upvoted too much.
If, like me, you suspect that you might be prone to these biases, you can correct for them by installing the LessWrong anti-kibitzer, which I hacked together yesterday morning. You will need Firefox with the Greasemonkey extension installed. Once you have Greasemonkey installed, clicking on the link to the script will pop up a dialog box asking if you want to enable the script. Once you enable it, a button which you can use to toggle the visibility of author and point-count information should appear in the upper right corner of any page on LessWrong. (On any page you load, the authors and point counts are automatically hidden until you show them.) Let me know if it doesn't work for any of you.
Already, I've had some interesting experiences. There were a few comments that I thought were written by Eliezer that turned out not to be (though perhaps people are copying his writing style). There were also comments that I thought contained good arguments which were written by people I was apparently too quick to dismiss as trolls. What are your experiences?