As Robin Hanson is fond of pointing out, people would often get better answers by taking other people's answers more into account.  See Aumann's Agreement Theorem.

The application is obvious if you're computing an answer for your personal use.  But how do you apply it when voting?

Political debates are tug-of-wars.  Say a bill is being voted on to introduce a 7-day waiting period for handguns.  You might think that you should vote on the merits of a 7-day waiting period.  This isn't what we usually do.  Instead, we've chosen our side on the larger issue (gun control: for or against) ahead of time; and we vote whichever way is pulling in our direction.

To use the tug-of-war analogy:  There's a knot tied in the middle of the rope, and you have some line in the sand where you believe the knot should end up.  But you don't stop pulling when the knot reaches that point; you keep pulling, because the other team is still pulling.  So, if you're anti-gun-control, you vote against the 7-day waiting period, even if you think it would be a good idea, because passing it would move the knot back towards the other side of your line.

Tug-of-war voting makes intuitive sense if you believe that an irrational extremist is usually more politically effective than a reasonable person is.  (It sounds plausible to me.)  If you've watched a debate long enough to see that the "knot" does a bit of a random walk around some equilibrium that's on the other side of your line, it can make sense to vote this way.

How do you apply Aumann's theorem to tug-of-war voting?

I think the answer is that you try to identify which side has more idiots, and vote on the other side.

I was thinking of this because of the current online debate between Arthur Caplan and Craig Venter on DNA privacy.  I don't have a strong opinion about which way to vote, largely because it's nowhere stated clearly what it is that you're voting for or against.

So I can't tell what the right answer is myself.  But I can identify idiots.  Applying Aumann's theorem, I take it on faith that the non-idiot population can eventually work out a good solution to the problem.  My job is to cancel out an idiot.

My impression is that there is a large class of irrational people who are generally "against" biotechnology because they're against evolution or science.  (This doesn't come out in the comments on economist.com, which are surprisingly good for this sort of online debate, and unfortunately don't supply enough idiots to be statistically significant.)  I have enough experience with this group and their opposite number to conclude that they are not counterbalanced by a sufficient number of uncritically pro-science people.

So I vote against the proposition, even though the vague statement "People's DNA sequences are their business, and nobody else's" sounds good to me.  I am picking sides not based on the specific issue at hand, but on what I perceive as the larger tug-of-war; and pulling for the side with fewer idiots.

Do you think this is a good heuristic?

You might break your answer into separate parts for "tug-of-war voting" (which means to choose sides on larger debates rather than on particular issues) and "cancel out an idiot" (which can be used without adopting tug-of-war voting).

EDIT: Really, please do say if your comment refers to "tug-of-war voting" or "canceling out an idiot".  Perhaps I should have broken them into separate posts.

37 comments

Much more common situation: the parties are A and B. A is slightly more idiotic. The right answer is C, which has no candidate and causes both A and B to recoil in horror.

Vote how?

You have 2 options:

  • Sit at home and whine about how stupid people are.
  • Vote B.

You can hope to do something about "how stupid people are", but only in the long term.

In the immediate term, where the election is tomorrow: recognising that it is only a small contribution towards staving off disaster, and that acting to make other choices possible is far more important; disliking both intensely for their corruption and self-serving bias; holding your nose so hard it turns blue, vote B, vote B, please vote B.

(edited following parent edit)

Holding your nose

I'd also spray some deodorant onto the ballot while I'm at it.

Let's abstract this into a simple game:

Imagine that there are 100 agents each playing this game, and all are presented with the same choice at the next iteration:

  • A) Add 500 snargs to the polunk.
  • B) Add 400 snargs to the polunk.
  • C) Add 25 snargs to the polunk.

The polunk currently has 500 snargs in it. Once the polunk has 2,000 snargs in it, each and every agent playing the game will be cast into the outer darkness, where they will be forced to sort Precious Mao buttons for all eternity while ferocious rabid weasels gnaw at their extremities.

It is now time to choose. You know from polling that approximately 25% of the agents will tend to pick A, and approximately 20% of the agents will tend to pick B. The remainder have an equal chance of picking A, B, or C.

So which do you choose: A, B, or C?
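
For concreteness, here is a minimal Monte Carlo sketch of one reading of this game. The comment doesn't say how the 100 choices are aggregated, so the sketch assumes the plurality option is enacted each round, that you are one of the 55 undecided agents, and that ties break alphabetically; the interesting quantity is how often your single vote changes the outcome.

```python
import random

# Assumed rule (not stated in the comment): each round every agent votes
# and the plurality option is enacted. 25 agents always pick A, 20 always
# pick B, and the 54 undecided agents besides you pick uniformly at random.

def other_votes():
    votes = {"A": 25, "B": 20, "C": 0}
    for _ in range(54):  # the other undecided agents
        votes[random.choice("ABC")] += 1
    return votes

def pivotal_rate(my_vote, trials=20_000):
    """Estimate how often my single vote changes the plurality winner."""
    pivotal = 0
    for _ in range(trials):
        votes = other_votes()
        before = max(votes, key=votes.get)  # winner without my vote
        votes[my_vote] += 1
        after = max(votes, key=votes.get)   # winner with my vote
        pivotal += before != after
    return pivotal / trials

for choice in "ABC":
    print(choice, pivotal_rate(choice))
```

On these numbers, A averages about 43 votes, B about 38, and C about 18: a vote for C is essentially never pivotal, while a vote for B occasionally breaks an A/B near-tie, which is exactly the structure of the "hold your nose and vote B" argument above.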

Let's abstract this into a simple game

The game is considerably more complicated and involves concepts such as legitimacy and perceived support for policies.

I really liked Robin Hanson's essay about this, "Policy Tug-O-War":

http://www.overcomingbias.com/2007/05/policy_tugowar.html

Moral: Pull policy ropes sideways!

If you're going to do this, you must research idiocy independently and gather statistics on its specific forms. Do not allow your impressions of where the idiots are and how numerous they are to be formed by the media.

Taking a random sample, even of 1 (as in Heinlein, above), is stochastic, but robust against media bias.

Phil - clever heuristic, canceling idiots... though note that it actually follows directly from a Bayesian expected-value calculation in certain scenarios:

  1. Assume you have no information about the voting issues except who the idiots are and how they vote. Either your prior is that reversed stupidity is intelligence in this domain, or it is not. If it is, then you have clear Bayesian grounds to vote against the idiots. If it is not, then reversed stupidity is either definite stupidity or has zero correlation with the truth. In the first case, reason itself does not work (e.g., a situation in which God confounds the wisdom of the wise, i.e., you're screwed precisely for being rational). In the zero-correlation case, the idiots are noise, and provided you can count the idiots, to be sure multiple of you don't cancel the same idiot, you reduce noise, which is the best you can do.

The doubtful point in this assessment is how you identify "idiots" in a voting situation which, ostensibly, you know nothing else about. In your examples, the information you used to identify the idiots seemed to require some domain knowledge which itself should figure into how you vote. Assuming idiots are "cross-domain incompetent" may be true for worlds like ours, but that needs to be fleshed out a lot more for soundness, I think.
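
The zero-correlation case can be checked with a small simulation. This is a sketch under assumptions of mine, not the comment's: a binary issue with one "right" side, informed voters who are each right with probability 0.6, idiots who vote by coin flip, and cancelers who each pair off against one distinct idiot (i.e., perfect coordination is assumed).

```python
import random

def right_side_wins(informed=50, idiots=100, cancelers=0, p=0.6):
    """One election. Returns True if the right side gets a majority."""
    # Informed voters: each votes for the right side with probability p.
    margin = sum(1 if random.random() < p else -1 for _ in range(informed))
    # Each canceler neutralizes one distinct idiot, so only the
    # uncancelled idiots add coin-flip noise to the margin.
    for _ in range(max(idiots - cancelers, 0)):
        margin += 1 if random.random() < 0.5 else -1
    return margin > 0

def win_rate(trials=50_000, **kwargs):
    return sum(right_side_wins(**kwargs) for _ in range(trials)) / trials

print(win_rate(cancelers=0))    # idiot noise sometimes swamps the informed
print(win_rate(cancelers=100))  # noise removed: the informed majority decides
```

With these numbers the right side wins roughly 80% of the time uncancelled and over 90% with the noise removed; the sketch also makes the coordination caveat concrete, since "each canceler pairs off against a distinct idiot" is doing real work.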

I think you need evidence about what effect non-tug-of-war voting has.

Suppose I support the free ownership of weapons, but think a seven day waiting period is better than none.

If I vote for that waiting period, am I demoralising my fellow gun supporters, and invigorating the gun control types, who will therefore struggle harder for more restrictions? Or invigorating my side, which will make sure it does not get defeated next time? Too little evidence to make a prediction.

Or what if I say: well, seven days is OK, but if they win this, the gun control types will then demand gun licensing, with gun holders needing annual psychiatrists' reports. So I have to tug against seven days, in case something worse comes along.

I would vote for the policy I supported. This has little enough effect on whether that policy gets made into law. I would think the effect on future changes is more negligible.

As a British citizen, I have never been eligible to vote in a referendum. It seems that American propositions are much more common.

Less Wrong SF quote: "The right to bear weapons is the right to be free"- The Weapon Shops of Isher.

"If you are part of a society that votes, then do so. There may be no candidates and no measures you want to vote for, but there are certain to be ones you want to vote against. In case of doubt, vote against. By this rule you will rarely go wrong. If this is too blind for your taste, consult some well-meaning fool (there is always one around) and ask his advice. Then vote the other way. This enables you to be a good citizen (if such is your wish) without spending the enormous amount of time on it that truly intelligent exercise of franchise requires."

--Heinlein


Now I've heard it: definite proof that Heinlein is a nutcase. Here he openly advocates the idea that reversed stupidity is intelligence.

Heinlein's approach is stochastic, but more robust.

Voting to "cancel out an idiot" is possibly acceptable as a first-order approximation, but sorely lacking beyond that.

Even assuming single-issue voting on a question that is completely linear: if you believe that the correct point is X% of the way from A to B, and Y% of the idiots vote towards A (with (100-Y)% towards B), a second-order approximation of rationality would be to vote randomly toward one side or the other in proportion to the difference between X and Y, such that if X and Y are equal you flip a coin.
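
As a sketch, the rule might look like this. The exact mapping from the X-Y gap to a vote probability isn't specified above, so a linear one is assumed:

```python
import random

def second_order_vote(x, y):
    """x: believed correct point, as a percentage of the way from A to B.
    y: percentage of idiots voting towards A (100 - y vote towards B).
    Returns "A" or "B"."""
    # If idiots over-pull towards A relative to where the answer belongs
    # (y > x), lean towards B, and vice versa. When x == y this is a fair
    # coin flip; the linear mapping is an assumption.
    p_b = 0.5 + (y - x) / 200.0   # always lands in [0, 1]
    return "B" if random.random() < p_b else "A"

print(second_order_vote(50, 50))  # coin flip
print(second_order_vote(50, 90))  # leans B, offsetting the idiot surplus on A
```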

Indeed - I've considered similar problems with Less Wrong comment voting. If I see a comment that's rated as a 20 and I think it's more like a 5, I'm tempted to vote it down. But I resist the urge, because I won't look at it again, and there might be 20 people later on who decide to vote it down on its merits, in which case I would want to cancel them out by voting up. So it seems best, when voting isn't one-off and closed, to vote one's conscience.

Is the problem here our inclination to interpret the number of points or karma as a rating in and of itself? As I understand it, that is just a tally of the upvotes and downvotes.

A 20 isn't four times as correct as a 5. It isn't even necessarily perceived as correct by four times as many people since the total number of votes might be larger for the 5 than for the 20.

So if we see a comment rated 20 and think it's more like a 5, we need to correct our thinking, because this rating is not a 20/20 or some other percentage. The difference between 5 and 20 isn't necessarily qualitative. Does that make sense?

Indeed. One of the things I don't like that much about the karma system is that I'd consider 5 upvotes and 0 downvotes to be better than 24 upvotes and 20 downvotes.

Surely, other things equal, your best estimate for future voting is current voting. It's more likely that another 20 will upvote than another 20 downvote. If you're only concerned with the outcome, your best strategy will be to downvote. Of course, you may feel really bad if you downvoted a comment below what you think it deserves, because you were responsible.

That approach would be good if there were a large number of people using this strategy, or if you voted many times on the same issue.

if you voted many times on the same issue.

In this case, moving to Chicago is an option.

This assumes that the debate and the possible solution set lie along a straight line, in which case reversed stupidity is very close to intelligence. In situations where this is strictly the case, this method might not be bad; in markets, if you can manage to buy when the idiots sell and sell when the idiots buy (again, along a straight line of possible values), in my experience you end up doing well, provided you can figure out which end of the rope is which. JGWeissman, I wouldn't worry about overcancellation too much, because the number of idiots is large and the number of people willing to employ heuristics like this is small.

In most situations of this type, the best solutions lie far from the rope, and even the smart people have long since given up doing anything but pulling. If that is not possible, and there is no cost to pulling on the rope, trying to cancel out the idiots is on average likely to be better than doing nothing, but I certainly wouldn't think this is a good primary methodology for making decisions.

I don't think it's a good heuristic, and I don't think you do either. Reversed stupidity is not intelligence, and it's more efficient to tug "sideways".

For issues that are split around 95%-5%, I wouldn't be surprised if the proportion of idiots had very little correlation to the truth of the causes.

The assumption is that you're in a two-choice vote, where there is no way to pull the rope sideways.

Is your advocacy to vote in order to cancel out mindless voters? Or does the heuristic promote voting to cancel out the mindless in general?

I ask because I don't think you can generally distinguish between voting idiots and non-voting idiots in a secret ballot system.

Imagine a less publicized election with low turnout. If the pro-biotechnology group turns out more reliably, there might actually be more mindlessly pro-science voters at the polls, because a large number of anti-science voters stayed home.

If the heuristic dictates voting against idiots in general, then it falls to the aforementioned "reversed stupidity is not intelligence". If the heuristic dictates voting against voting idiots, then you need to have good assumptions about which idiots vote and which idiots stay home. And that's virtually unattainable knowledge.

It dictates voting against idiots in general, and it doesn't reduce to "reversed stupidity is not intelligence" when there are 2 options on the ballot. You are correct that it could fail if the views of voting and non-voting idiots aren't positively correlated.

I'm agnostic to the heuristic you propose, but I disagree with applying it to the metric that you use (being pro- or anti-science). Scientific progress might be slowed by respecting genetic privacy rights, but we could say the same of any privacy rights (or, indeed, many other things). Imagine how much faster sociology and psychology could advance if we knew what everybody does in the privacy of their homes. Surely there are considerations more important than the advancement of science.

Don't know what you mean; being pro- or anti-science is not a metric.

Surely there are considerations more important. But some information is better than no information. It is better, in this case, to use less-important but less-biased information than more-important, more-biased information.

My job is to cancel out an idiot.

How do you coordinate so that many others with the same strategy don't cancel out the same idiot? If you also count everyone who uses this strategy as an idiot, it could work, but it seems difficult to achieve in practice. I think it would be more effective to actually make a judgment of what you would do if you were in charge, and then vote that way.

Part of the purpose of this heuristic is that you can use it when you can't make such a judgement.

The strategy only comes into effect when there are many idiots on one side of a 2-sided issue. Until I become a famous political theorist, it is safe to assume there are more such idiots than people using this strategy.

So, in order to not be counterproductive, the strategy needs an environment in which it will be ineffective? Or are you suggesting that the difference in idiots on the two sides will be larger than the cancelers, but smaller than the cancelers combined with the experts? I think verifying this in a particular situation would be difficult.

On the other hand, if you actually have a position on the issue, you can use strategies that go beyond voting, like trying to persuade people. Even trying to persuade people not to vote because of their own ignorance could be more effective, if you really can't make a good judgment.

Sorry - I can't figure out what you're asking in the 1st paragraph. I agree with your second paragraph.

Consider the following cases:

  1. The difference in the number of idiots on the two sides is greater than the number of cancelers plus the number of experts. The cancelers have not made a difference. The impact is neutral.

  2. The number of cancelers is sufficient to narrow the difference in the number of idiots to smaller than the number of experts (who have presumably achieved an expert consensus). The experts, voting as a block, can sway the election either way. The cancelers have enabled the experts to make the decision. The expected impact is positive (50% chance the experts change the decision, 50% chance the idiots were right anyway).

  3. The number of cancelers is greater than the difference in the number of idiots plus the number of experts. The cancelers have changed the results of the election, without empowering the experts. The expected impact is neutral. (50% chance the new decision is right, 50% it is wrong. It is worse if the strategy convinces you to cancel out the idiots you think are a little more likely to be right. If you are canceling the idiots you think are a little more likely to be wrong, you have other reasons to vote that way.)

Having a reliable positive impact depends on being in situation 2, which, given a small number of experts, seems unlikely unless you are careful to only apply the strategy in this case, which would be a lot of work. I expect other strategies to get better results for the effort.
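
These three cases can be written down directly. A sketch, with assumed simplifications: a binary vote, `idiot_gap` as the idiot surplus on one side, cancelers all voting against that surplus, and experts voting as a block:

```python
def regime(idiot_gap, cancelers, experts):
    """Classify the election into one of the three cases above."""
    remaining = idiot_gap - cancelers   # idiot surplus left after canceling
    if remaining > experts:
        return 1   # idiots still decide: cancelers made no difference
    if remaining >= -experts:
        return 2   # experts are now pivotal: positive expected impact
    return 3       # cancelers overshot and flipped it themselves: neutral

print(regime(idiot_gap=1000, cancelers=50, experts=100))    # -> 1
print(regime(idiot_gap=1000, cancelers=950, experts=100))   # -> 2
print(regime(idiot_gap=1000, cancelers=1200, experts=100))  # -> 3
```

The sketch makes the difficulty visible: the band in which case 2 holds is only two `experts` wide, so with few experts the cancelers must estimate the idiot gap quite precisely to land in it.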

This is an excellent point!

I didn't think so, actually - it sounded to me like the outright fallacy of "reversed stupidity is not intelligence" - but taking your different opinion into account, I've promoted the post.

In a binary proposition, reversing the largest stupidity seems likely to at least be marginally more intelligent than the alternative. Which isn't really saying much, overall.

Stupidity is uncorrelated with truth, not anticorrelated with truth. Reversed stupidity is still uncorrelated with truth.