
A Voting Puzzle, Some Political Science, and a Nerd Failure Mode

88 ChrisHallquist 10 October 2013 02:10AM

In grade school, I read a series of books titled Sideways Stories from Wayside School by Louis Sachar, who you may know as the author of the novel Holes which was made into a movie in 2003. The series included two books of math problems, Sideways Arithmetic from Wayside School and More Sideways Arithmetic from Wayside School, the latter of which included the following problem (paraphrased):

The students in Mrs. Jewls's class have been given the privilege of voting on the height of the school's new flagpole. She has each of them write down what they think would be the best height for the flagpole. The votes are distributed as follows:

  • 1 student votes for 6 feet.
  • 1 student votes for 10 feet.
  • 7 students vote for 25 feet.
  • 1 student votes for 30 feet.
  • 2 students vote for 50 feet.
  • 2 students vote for 60 feet.
  • 1 student votes for 65 feet.
  • 3 students vote for 75 feet.
  • 1 student votes for 80 feet, 6 inches.
  • 4 students vote for 85 feet.
  • 1 student votes for 91 feet.
  • 5 students vote for 100 feet.

At first, Mrs. Jewls declares 25 feet the winning answer, but one of the students who voted for 100 feet convinces her there should be a runoff between 25 feet and 100 feet. In the runoff, each student votes for the height closest to their original answer. But after that round of voting, one of the students who voted for 85 feet wants their turn, so 85 feet goes up against the winner of the previous round of voting, and the students vote the same way, with each student voting for the height closest to their original answer. Then the same thing happens again with the 50 foot option. And so on, with each number, again and again, "very much like a game of tether ball."

Question: if this process continues until it settles on an answer that can't be beaten by any other answer, how tall will the new flagpole be?

Answer (rot13'd): fvkgl-svir srrg, orpnhfr gung'f gur zrqvna inyhr bs gur bevtvany frg bs ibgrf. Naq abj lbh xabj gur fgbel bs zl svefg rapbhagre jvgu gur zrqvna ibgre gurberz.
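
The tetherball process is mechanical enough to simulate. Here is a minimal sketch in Python (the runoff helper and the challenge loop are my framing of the book's process, not anything from the book itself); running it reproduces the rot13 answer above:

```python
from collections import Counter

# The class's votes: height in feet -> number of students (29 students total).
votes = Counter({6: 1, 10: 1, 25: 7, 30: 1, 50: 2, 60: 2, 65: 1,
                 75: 3, 80.5: 1, 85: 4, 91: 1, 100: 5})

def runoff(challenger, incumbent):
    """Pairwise runoff: each student backs the height closest to their own vote."""
    c = sum(n for h, n in votes.items() if abs(h - challenger) < abs(h - incumbent))
    i = sum(n for h, n in votes.items() if abs(h - incumbent) < abs(h - challenger))
    return challenger if c > i else incumbent

# "Very much like a game of tether ball": keep challenging the current
# winner with every option until no option can beat it.
winner, changed = 25, True
while changed:
    changed = False
    for option in votes:
        if runoff(option, winner) != winner:
            winner, changed = option, True
print(winner)
```

With 29 voters whose preference for an option falls off with distance, the option that survives every pairwise runoff is the 15th of the 29 sorted votes, which is exactly what the median voter theorem predicts.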

Why am I telling you this? There's a minor reason and a major reason. The minor reason is that this shows it is possible to explain little-known academic concepts, at least certain ones, in a way that grade schoolers will understand. It's a data point that fits nicely with what Eliezer has written about how to explain things. The major reason, though, is that a month ago I finished my systematic read-through of the sequences, and while I generally agree that they're awesome (perhaps more so than most people; I didn't see the problem with the metaethics sequence), I thought the mini-discussion of political parties and voting was, on reflection, weak and indicative of a broader nerd failure mode.

TLDR (courtesy of lavalamp):

  1. Politicians probably conform to the median voter's views.
  2. Most voters are not the median, so most people usually dislike the winning politicians.
  3. But people dislike the politicians for different reasons.
  4. Nerds should avoid giving advice that boils down to "behave optimally". Instead, analyze the reasons for the current failure to behave optimally and give more targeted advice.

continue reading »

The noncentral fallacy - the worst argument in the world?

157 Yvain 27 August 2012 03:36AM

Related to: Leaky Generalizations, Replace the Symbol With The Substance, Sneaking In Connotations

David Stove once ran a contest to find the Worst Argument In The World, but he awarded the prize to his own entry, and one that shored up his politics to boot. It hardly seems like an objective process.

If he can unilaterally declare a Worst Argument, then so can I. I declare the Worst Argument In The World to be this: "X is in a category whose archetypal member gives us a certain emotional reaction. Therefore, we should apply that emotional reaction to X, even though it is not a central category member."

Call it the Noncentral Fallacy. It sounds dumb when you put it like that. Who even does that, anyway?

It sounds dumb only because we are talking soberly of categories and features. As soon as the argument gets framed in terms of words, it becomes so powerful that somewhere between many and most of the bad arguments in politics, philosophy and culture take some form of the noncentral fallacy. Before we get to those, let's look at a simpler example.

Suppose someone wants to build a statue honoring Martin Luther King Jr. for his nonviolent resistance to racism. An opponent of the statue objects: "But Martin Luther King was a criminal!"

Any historian can confirm this is correct. A criminal is technically someone who breaks the law, and King knowingly broke a law against peaceful anti-segregation protest - hence his famous Letter from Birmingham Jail.

But in this case calling Martin Luther King a criminal is the noncentral fallacy. The archetypal criminal is a mugger or bank robber. He is driven only by greed, preys on the innocent, and weakens the fabric of society. Since we don't like these things, calling someone a "criminal" naturally lowers our opinion of them.

The opponent is saying "Because you don't like criminals, and Martin Luther King is a criminal, you should stop liking Martin Luther King." But King doesn't share the important criminal features of being driven by greed, preying on the innocent, or weakening the fabric of society that made us dislike criminals in the first place. Therefore, even though he is a criminal, there is no reason to dislike King.

This all seems so nice and logical when it's presented in this format. Unfortunately, it's also one hundred percent contrary to instinct: the urge is to respond "Martin Luther King? A criminal? No he wasn't! You take that back!" This is why the noncentral fallacy is so successful: as soon as you respond that way, you've fallen into their trap. Your argument is no longer about whether you should build a statue; it's about whether King was a criminal. Since he was, you have now lost the argument.

Ideally, you should just be able to say "Well, King was the good kind of criminal." But that seems pretty tough as a debating maneuver, and it may be even harder in some of the cases where the noncentral fallacy is commonly used.

continue reading »

Conspiracy Theories as Agency Fictions

30 [deleted] 09 June 2012 03:15PM

Related to: Consider Conspiracies, What causes people to believe in conspiracy theories?

Here I consider in some detail a failure mode that classical rationality often recognizes. Unfortunately nearly all heuristics normally used to detect it seem remarkably vulnerable to misfiring or being exploited by others. I advocate an approach where we try our best to account for the key bias, seeing agency where there is none, while trying to minimize the risk of being tricked into dismissing claims because of boo lights.  

What does calling something a "conspiracy theory" tell us?

What is a conspiracy theory? Explanations that invoke plots orchestrated by covert groups are readily labelled or thought of as such. In the legal sense, a conspiracy is an agreement between persons to mislead or defraud others. This simple story gets complicated because people aren't very clear on what they consider a conspiracy.

To give an example, is explicit negotiation or agreement really necessary to call something a conspiracy? Does silent cooperation in a Prisoner's Dilemma count? What if the players are deceiving themselves into thinking they are really pursuing a different goal, and the resulting cooperation is just a side effect? How could we tell the difference, and would it matter? The latter is especially interesting if one applies the anthropic principle to social attitudes and norms.

The phrase is also a convenient tool for marking an opponent's tale as low status and unworthy of further investigation. It is a boo light easily applied to any explanation that has people acting in something that can be framed as self-interest and that happens to be a few inferential jumps away from the audience. Not only is this use well known; it is arguably the primary meaning of calling an argument a conspiracy theory.

We have plenty of historical examples of high-stakes conspiracies, so we know they can be the right answer. But noting this and putting aside the misuse of the label, people do engage in crafting conspiracy theories where they just aren't needed. Entire communities can fixate on them or fail to call out such bad thinking. Why does this happen? Humans being the social animals that we are, the group dynamics at work probably need an article or sequence of their own. It should suffice for now to point to belief as attire, the bandwagon effect and Robin Hanson's take on status. Let's rather consider the question of why individuals may be biased towards such explanations. Why do they privilege the hypothesis?

When do they seem more likely than they are?

First off, we have a hard time understanding that coordination is hard. Seeing a large payoff available and thinking it easily within reach if "we could just get along" seems like a classic failing. Our pro-social sentiments lead us to downplay such barriers in our future plans. Motivated cognition when assessing the threat potential of perceived enemies or strangers likely shares this problem. Even if we avoid this, we may still be lost, since the second big relevant thing is our tendency to anthropomorphize things that had better not be anthropomorphized. Ours is a paranoid brain, seeing agency in every shadow or strange sound. The cost of a false positive was once reasonably low, while the cost of a false negative was very high.

Our minds are also just plain lazy. We are pretty good at modelling other human minds, and considering just how hard the task really is, we do a pretty remarkable job of it. If you are stuck in relative ignorance on a subject, say the weather, dancing to appease the sky spirits makes sense. After all, the weather is pretty capricious, and angry sky spirits are a model that makes as much or more sense than any other model you know. Unlike some other models, this one is at least cheap to run on your brain! The modern world is remarkably complex. Do we see ghosts in it?

Our Dunbarian minds probably just plain can't get how a society can be that complex and unpredictable without it being "planned" by a cabal of Satan or Heterosexual White Males or the Illuminati (but I repeat myself twice) scheming to make weird things happen in our oblivious small stone age tribe. Learning about useful models helps people escape anthropomorphizing human society or the economy or government. The latter is particularly salient. I think most people slip up occasionally in assuming that, say, something like the United States government can be successfully modelled as a single agent to explain most of its "actions". To make matters worse, it is a common literary device used by pundits.

A mysterious malignant agency or someone keeping a secret playing the role of the villain makes a good story. Humans love stories. It's fun to think in stories. Any real conspiracy revealed will probably be widely publicized. Peter Knight, in his 2003 book, cites historians who have put forward the idea that the United States is something of a home for popular conspiracy theories because so many high-level ones have been undertaken and uncovered since the 1960s. We are more likely to hear about real confirmed conspiracies today than ever before.

Wishful thinking also plays a role. A universe where bad things happen because bad people make them happen is appealing. Getting rid of bad people, even very bad people, is easy compared to all the different things one has to do to make sure bad things don't happen in a universe that doesn't care about us and where really bad things are allowed to happen. Finding bad people whether or not they exist is a problematic tendency. The sad thing is that this may also be how we often manage to coordinate. Do all theories of legitimacy perhaps rest on the same cognitive failings that conspiracy theories do? The difference between a shadowy cabal we need to get rid of and an institution worthy of respect may be just some bad luck.

How this misleads us

Putting aside such wild speculation, what should we take away from this? When do conspiracy theories seem more likely than they are?

  • The phenomenon is unpredictable or can't be modelled very well
  • Models used by others are hard to understand or are very counter-intuitive
  • Thinking about the subject significantly strains cognitive resources
  • The theory explains why bad things happen or why something went wrong
  • The theory requires coordination

When you see these features you probably find the theory more plausible than it is. 

But how many here are likely to accept "conspiracy theories"? Accepting things that actually get called conspiracy theories doesn't fit our tribal attire, so reverse stupidity may be particularly problematic for us on this topic. Being open to considering conspiracy is recommended; just remember to compare how probable it is in relation to other explanations. It is also important to call out people who misuse the tag for rhetorical gain.
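
Concretely, "compare how probable it is in relation to other explanations" is just an odds calculation. A minimal sketch, with every number an illustrative assumption rather than a measurement:

```python
# Illustrative numbers only: a conspiracy that "explains everything" can
# still lose badly to a boring explanation with a higher prior.
prior_odds = 0.01 / 0.99          # deliberate plot vs. error/incentives/coincidence

# The conspiracy predicts the observation almost perfectly; the boring
# hypothesis predicts it only moderately well.
likelihood_ratio = 0.95 / 0.30

posterior_odds = prior_odds * likelihood_ratio
print(round(posterior_odds, 3))   # 0.032 -- still roughly 31:1 against the plot
```

Noticing the features listed above mostly means noticing when they have quietly inflated your likelihood term or your prior.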

This applies to debunking as well. Don't go wildly contrarian. But remember that even things that are tagged conspiracy theories are surprisingly popular. How popular might false theories that avoid that tag be? History shows us we don't have the luxury of hoping that kind of thing just doesn't happen in human societies. When assessing an explanation sharing the key features that make conspiracy theories seem more plausible than they are, compensate as you would with a conspiracy theory. 

But don't listen to me, I'm talking conspiracy theories. 

 


Note: This article started out as a public draft; feedback on other such drafts is always welcome. Special thanks to user Villiam_Bur for his commentary and user copt for proofreading and suggestions. Also thanks to the LessWrong IRC chatroom for last-minute corrections and stylistic tips.

When None Dare Urge Restraint, pt. 2

56 Jay_Schweikert 30 May 2012 03:28PM

In the original When None Dare Urge Restraint post, Eliezer discusses the dangers of the "spiral of hate" that can develop when saying negative things about the Hated Enemy trumps saying accurate things. Specifically, he uses the example of how the 9/11 hijackers were widely criticized as "cowards," even though this vice in particular was surely not among their many flaws. Over this past Memorial Day weekend, however, it seems like the exact mirror-image problem played out in nearly textbook form.

The trouble began when MSNBC host Chris Hayes noted* that he was uncomfortable with how people use the word "hero" to describe those who die in war -- in particular, because he thinks this sort of automatic valor attributed to the war dead makes it easier to justify future wars. And as you might expect, people went crazy in response, calling Hayes's comments "reprehensible and disgusting," something that "commie grad students would say," and that old chestnut, apparently offered without a hint of irony, "unAmerican." If you watch the video, you can tell that Hayes himself is really struggling to make the point, and by the end he definitely knew he was going to get in trouble, as he started backpedaling with a "but maybe I'm wrong about that." And of course, he apologized the very next day, basically stating that it was improper to have "opine[d] about the people who fight our wars, having never dodged a bullet or guarded a post or walked a mile in their boots."

This whole episode struck me as particularly frightening, mostly because Hayes wasn't even offering a criticism. Soldiers in the American military are, of course, an untouchable target, and I would hardly expect any attack on soldiers to be well received, no matter how grounded. But what genuinely surprised me in this case was that Hayes was merely saying "let's not automatically apply the single most valorizing word we have, because that might cause future wars, and thus future war deaths." But apparently anything less than maximum praise was not only incorrect, but offensive.

Of course, there's no shortage of rationality failures in political discourse, and I'm obviously not intending this post as a political statement about any particular war, policy, candidate, etc. But I think this example is worth mentioning, for two main reasons. First, it's just such a textbook example of the exact sort of problem discussed in Eliezer's original post, in a purer form than I can recall seeing since 9/11 itself. I don't imagine many LW members need convincing in this regard, but I do think there's value in being mindful of this sort of problem on the national stage, even if we're not going to start arguing politics ourselves.

But second, I think this episode says something not just about nationalism, but about how people approach death more generally. Of course, we're all familiar with afterlifism/"they're-in-a-better-place"-style rationalizations of death, but labeling a death as "heroic" can be a similar sort of rationalization. If a death is "heroic," then there's at least some kind of silver lining, some sense of justification, if only partial justification. The movie might not be happy, but it can still go on, and there's at least a chance to play inspiring music. So there's an obvious temptation to label death as "heroic" as much as possible -- I'm reminded of how people tried to call the 9/11 victims "heroes," apparently because they had the great courage to work in buildings that were targeted in a terrorist attack.

If a death is just a tragedy, however, you're left with a more painful situation. You have to acknowledge that yes, really, the world isn't fair, and yes, really, thousands of people -- even the Good Guy's soldiers! -- might be dying for no good reason at all. And even for those who don't really believe in an afterlife, facing death on such a large scale without the "heroic" modifier might just be too painful. The obvious problem, of course -- and Hayes's original point -- is that this sort of death-anesthetic makes it all too easy to numb yourself to more death. If you really care about the problem, you have to face the sheer tragedy of it. Sometimes, all you can say is "we shall have to work faster." And I think that lesson's as appropriate on Memorial Day as any other.

*I apologize that this clip is inserted into a rather low-brow attack video. At the time of posting it was the only link on YouTube I could find, and I wanted something accessible.

Schelling fences on slippery slopes

182 Yvain 16 March 2012 11:44PM

Slippery slopes are themselves a slippery concept. Imagine trying to explain them to an alien:

"Well, we right-thinking people are quite sure that the Holocaust happened, so banning Holocaust denial would shut up some crackpots and improve the discourse. But it's one step on the road to things like banning unpopular political positions or religions, and we right-thinking people oppose that, so we won't ban Holocaust denial."

And the alien might well respond: "But you could just ban Holocaust denial, but not ban unpopular political positions or religions. Then you right-thinking people get the thing you want, but not the thing you don't want."

This post is about some of the replies you might give the alien.

Abandoning the Power of Choice

This is the boring one without any philosophical insight that gets mentioned only for completeness' sake. In this reply, giving up a certain point risks losing the ability to decide whether or not to give up other points.

For example, if people gave up the right to privacy and allowed the government to monitor all phone calls, online communications, and public places, then if someone launched a military coup, it would be very difficult to resist them because there would be no way to secretly organize a rebellion. This is also brought up in arguments about gun control a lot.

I'm not sure this is properly thought of as a slippery slope argument at all. It seems to be a more straightforward "Don't give up useful tools for fighting tyranny" argument.

The Legend of Murder-Gandhi

Previously on Less Wrong's The Adventures of Murder-Gandhi: Gandhi is offered a pill that will turn him into an unstoppable murderer. He refuses to take it, because in his current incarnation as a pacifist, he doesn't want others to die, and he knows that would be a consequence of taking the pill. Even if we offered him $1 million to take the pill, his abhorrence of violence would lead him to refuse.

But suppose we offered Gandhi $1 million to take a different pill: one which would decrease his reluctance to murder by 1%. This sounds like a pretty good deal. Even a person with 1% less reluctance to murder than Gandhi is still pretty pacifist and not likely to go killing anybody. And he could donate the money to his favorite charity and perhaps save some lives. Gandhi accepts the offer.

Now we iterate the process: every time Gandhi takes the 1%-more-likely-to-murder-pill, we offer him another $1 million to take the same pill again.

Maybe original Gandhi, upon sober contemplation, would decide to accept $5 million to become 5% less reluctant to murder. Maybe 95% of his original pacifism is the only level at which he can be absolutely sure that he will still pursue his pacifist ideals.

Unfortunately, original Gandhi isn't the one making the choice of whether or not to take the 6th pill. 95%-Gandhi is. And 95%-Gandhi doesn't care quite as much about pacifism as original Gandhi did. He still doesn't want to become a murderer, but it wouldn't be a disaster if he were just 90% as reluctant as original Gandhi, that stuck-up goody-goody.

What if there were a general principle that each Gandhi was comfortable with Gandhis 5% more murderous than himself, but no more? Original Gandhi would start taking the pills, hoping to get down to 95%, but 95%-Gandhi would start taking five more, hoping to get down to 90%, and so on until he's rampaging through the streets of Delhi, killing everything in sight.

Now we're tempted to say Gandhi shouldn't even take the first pill. But this also seems odd. Are we really saying Gandhi shouldn't take what's basically a free million dollars to turn himself into 99%-Gandhi, who might well be nearly indistinguishable in his actions from the original?

Maybe Gandhi's best option is to "fence off" an area of the slippery slope by establishing a Schelling point - an arbitrary point that takes on special value as a dividing line. If he can hold himself to the precommitment, he can maximize his winnings. For example, original Gandhi could swear a mighty oath to take only five pills - or if he didn't trust even his own legendary virtue, he could give all his most valuable possessions to a friend and tell the friend to destroy them if he took more than five pills. This would commit his future self to stick to the 95% boundary (even though that future self is itching to try the same precommitment strategy to stick to its own 90% boundary).
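
The dynamic is easy to see in a toy simulation, using the post's numbers (the function and its fence parameter are my own framing, not anything canonical):

```python
def slide(pacifism=100, tolerance=5, fence=0):
    """Take 1%-pills one at a time; after each pill a new, slightly more
    murderous self makes the next decision."""
    pills = 0
    while pacifism > fence:                    # the Schelling fence (0 = no fence)
        next_level = pacifism - 1
        if next_level < pacifism - tolerance:  # current self disapproves -- but this
            break                              # never fires while tolerance >= 1
        pacifism, pills = next_level, pills + 1
    return pacifism, pills

print(slide())          # (0, 100): no fence, rampaging through the streets of Delhi
print(slide(fence=95))  # (95, 5): the mighty oath stops him at five pills
```

The per-step approval check never triggers, because each new self re-anchors its tolerance; only the fence, fixed once by the original self, can stop the slide.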

Real slippery slopes will resemble this example if, each time we change the rules, we also end up changing our opinion about how the rules should be changed. For example, I think the Catholic Church may be working off a theory of "If we give up this traditional practice, people will lose respect for tradition and want to give up even more traditional practices, and so on."

Slippery Hyperbolic Discounting

One evening, I start playing Sid Meier's Civilization (IV, if you're wondering - V is terrible). I have work tomorrow, so I want to stop and go to sleep by midnight.

At midnight, I consider my alternatives. For the moment, I feel an urge to keep playing Civilization. But I know I'll be miserable tomorrow if I haven't gotten enough sleep. Being a hyperbolic discounter, I value the next ten minutes a lot, but after that the curve becomes pretty flat and maybe I don't value 12:20 much more than I value the next morning at work. Ten minutes' sleep here or there doesn't make any difference. So I say: "I will play Civilization for ten minutes - 'just one more turn' - and then I will go to bed."

Time passes. It is now 12:10. Still being a hyperbolic discounter, I value the next ten minutes a lot, and subsequent times much less. And so I say: I will play until 12:20, ten minutes' sleep here or there not making much difference, and then sleep.

And so on until my empire bestrides the globe and the rising sun peeps through my windows.

This is pretty much the same process described above with Murder-Gandhi except that here the role of the value-changing pill is played by time and my own tendency to discount hyperbolically.
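
To make the reversal concrete, here is a sketch using the standard hyperbolic discount curve V = A / (1 + kt); the utilities and discount constant are made-up numbers chosen purely to illustrate:

```python
def discounted(value, delay_min, k=0.05):
    """Hyperbolic discounting: V = A / (1 + k*t). k = 0.05/min is an illustrative guess."""
    return value / (1 + k * delay_min)

PLAY, RESTED = 10, 100   # assumed utilities: ten more minutes of Civ vs. a rested morning

# Planning at 8 p.m.: midnight play is 240 min away, the rested morning 720 min away.
print(discounted(PLAY, 240), discounted(RESTED, 720))  # 0.77 vs 2.70 -> resolve to stop

# Deciding at midnight: play is immediate, the morning is still 480 min away.
print(discounted(PLAY, 0), discounted(RESTED, 480))    # 10.0 vs 4.0 -> "one more turn"
```

The same two outcomes swap rank purely because of when the question is asked, which is why the precommitment has to be made early in the evening.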

The solution is the same. If I consider the problem early in the evening, I can precommit to midnight as a nice round number that makes a good Schelling point. Then, when deciding whether or not to play after midnight, I can treat my decision not as "Midnight or 12:10" - because 12:10 will always win that particular race - but as "Midnight or abandoning the only credible Schelling point and probably playing all night", which will be sufficient to scare me into turning off the computer.

(if I consider the problem at 12:01, I may be able to precommit to 12:10 if I am especially good at precommitments, but it's not a very natural Schelling point and it might be easier to say something like "as soon as I finish this turn" or "as soon as I discover this technology").

Coalitions of Resistance

Suppose you are a Zoroastrian, along with 1% of the population. In fact, along with Zoroastrianism your country has fifty other small religions, each with 1% of the population. 49% of your countrymen are atheist, and hate religion with a passion.

You hear that the government is considering banning the Taoists, who comprise 1% of the population. You've never liked the Taoists, vile doubters of the light of Ahura Mazda that they are, so you go along with this. When you hear the government wants to ban the Sikhs and Jains, you take the same tack.

But now you are in the unfortunate situation described by Martin Niemöller:

First they came for the socialists, and I did not speak out, because I was not a socialist.
Then they came for the trade unionists, and I did not speak out, because I was not a trade unionist.
Then they came for the Jews, and I did not speak out, because I was not a Jew.
Then they came for me, but we had already abandoned the only defensible Schelling point

With the banned Taoists, Sikhs, and Jains no longer invested in the outcome, the 49% atheist population has enough clout to ban Zoroastrianism and anyone else they want to ban. The better strategy would have been to have all fifty-one small religions form a coalition to defend one another's right to exist. In this toy model, they could have done so in an ecumenical congress, or some other literal strategy meeting.
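
The toy model's arithmetic can be spelled out in a few lines (a sketch; the two voting scenarios are my framing of the story above):

```python
# Toy electorate from the post: 49 atheist voters, 51 religions of 1 voter each.
ATHEISTS, RELIGIONS = 49, 51

def ban_passes(coalition):
    """Majority vote on banning one 1% religion."""
    if coalition:                              # every religion defends every other
        yes, no = ATHEISTS, RELIGIONS
    else:                                      # each religion "goes along", except the target
        yes, no = ATHEISTS + RELIGIONS - 1, 1
    return yes > no

print(ban_passes(coalition=False))  # True: 99 to 1, one sect at a time
print(ban_passes(coalition=True))   # False: 49 to 51, the Schelling fence holds
```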

But in the real world, there aren't fifty-one well-delineated religions. There are billions of people, each with their own set of opinions to defend. It would be impractical for everyone to physically coordinate, so they have to rely on Schelling points.

In the original example with the alien, I cheated by using the phrase "right-thinking people". In reality, figuring out who qualifies to join the Right-Thinking People Club is half the battle, and everyone's likely to have a different opinion on it. So far, the practical solution to the coordination problem, the "only defensible Schelling point", has been to just have everyone agree to defend everyone else without worrying whether they're right-thinking or not, and this is easier than trying to coordinate room for exceptions like Holocaust deniers. Give up on the Holocaust deniers, and no one else can be sure what other Schelling point you've committed to, if any...

...unless they can. In parts of Europe, they've banned Holocaust denial for years and everyone's been totally okay with it. There are also a host of other well-respected exceptions to free speech, like shouting "fire" in a crowded theater. Presumably, these exemptions are protected by tradition, so that they have become new Schelling points there, or else are so obvious that everyone except Holocaust deniers is willing to allow a special Holocaust denial exception without worrying it will impact their own case.

Summary

Slippery slopes legitimately exist wherever a policy not only affects the world directly, but affects people's willingness or ability to oppose future policies. Slippery slopes can sometimes be avoided by establishing a "Schelling fence" - a Schelling point that the various interest groups involved - or yourself across different values and times - make a credible precommitment to defend.

The bias shield

18 PhilGoetz 31 December 2011 05:44PM

A friend asked me to get her Bill O'Reilly's new book Killing Lincoln for Christmas.  I read its reviews on Amazon, and found several that said it wasn't as good as another book about the assassination, Blood on the Moon.  This seemed like a believable conclusion to me.  Killing Lincoln has no footnotes to document any of its claims, and is not in the Ford's Theatre National Park Service bookstore because the NPS decided it was too historically inaccurate to sell.  Nearly 200 books have been written about the Lincoln assassination, including some by professional Lincoln scholars.  So the odds seemed good that at least one of these was better than a book written by a TV talk show host.

But I was wrong.  To many people, this was not a believable conclusion.

(This is not about the irrationality of Fox network fans.  They are just a useful case study.)

continue reading »

A few analogies to illustrate key rationality points

50 kilobug 09 October 2011 01:00PM

Introduction

Due to long inferential distances it's often very difficult to use knowledge or understanding given by rationality in a discussion with someone who isn't versed in the Art (say, a poor fellow who hasn't read the Sequences, or maybe not even Gödel, Escher, Bach!). So I often find myself forced to use analogies, which will necessarily be more-or-less surface analogies; they don't prove anything or give any technical understanding, but they allow someone to get a grasp on a complicated issue in a few minutes.

A tale of chess and politics

Once upon a time, a boat sank and a group of people found themselves isolated on an island. None of them knew the rules of the game of chess, but there was a solar-powered portable chess computer on the boat. A very simple one, with no AI, but one that would enforce the rules. Quickly, the survivors discovered the joy of chess, deducing the rules by trying moves and seeing the computer say "illegal move" or "legal move", seeing it proclaim victory, defeat, or a draw.

So they learned the rules of chess: the movement of the pieces, what "check" and "checkmate" are, how you can promote pawns, ... And they understood the planning and strategy skills required to win the game. So chess became linked to politics; it was the Game, with a capital letter, and every year they would organize a chess tournament, and the winner, the smartest of the community, would become the leader for one year.

One sunny day, a young fellow named Hari, playing with his brother Salvor (yes, I'm an Asimov fan), discovered a new chess move: he discovered he could castle. In one move, he could liberate his rook and protect his king. They kept the discovery secret and used it in the tournament. Winning his games, Hari became the leader.

Soon after, people started to use the power of castling as much as they could. They even sacrificed pieces, even their queens, just to be able to castle fast. Everyone was trying to castle as quickly as possible, and they were losing sight of the final goal, winning, in favor of the intermediate goal, castling.

continue reading »

Probability and Politics

17 CarlShulman 24 November 2010 05:02PM

Follow-up to: Politics as Charity

Can we think well about courses of action with low probabilities of high payoffs?  

Giving What We Can (GWWC), whose members pledge to donate a portion of their income to most efficiently help the global poor, says that evaluating spending on political advocacy is very hard:

Such changes could have enormous effects, but the cost-effectiveness of supporting them is very difficult to quantify as one needs to determine both the value of the effects and the degree to which your donation increases the probability of the change occurring. Each of these is very difficult to estimate and since the first is potentially very large and the second very small [1], it is very challenging to work out which scale will dominate.

This sequence attempts to actually work out a first approximation of an answer to this question, piece by piece. Last time, I discussed the evidence, especially from randomized experiments, that money spent on campaigning can elicit marginal votes quite cheaply. Today, I'll present the state-of-the-art in estimating the chance that those votes will directly swing an election outcome.

Disclaimer

Politics is a mind-killer: tribal feelings readily degrade the analytical skill and impartiality of otherwise very sophisticated thinkers, and so discussion of politics (even in a descriptive empirical way, or in meta-level fashion) signals an increased probability of poor analysis. I am not a political partisan and am raising the subject primarily for its illustrative value in thinking about small probabilities of large payoffs.

continue reading »

Vote Qualifications, Not Issues

10 jimrandomh 26 September 2010 08:26PM

In the United States and other countries, we elect our leaders. Each individual voter chooses some criteria by which to decide who they vote for, and the aggregate result of all those criteria determines who gets to lead. The public narrative overwhelmingly supports one strategy for deciding between politicians: look up their positions on important and contentious issues, and vote for the one you agree with. Unfortunately, this strategy is wrong, and the result is inferior leadership, polarization into camps, and never-ending arguments. Instead, voters should be encouraged to vote based on the qualifications that matter: a candidate's intelligence, rationality, integrity, and ability to judge character.

If an issue really is contentious, then a voter without specific inside knowledge should not expect their opinion to be more accurate than chance. If everyone votes based on a few contentious issues, then politicians have a powerful incentive to lie about their stance on those issues. But the real problem is, most of the important things that a politician does have nothing to do with the controversies at all. Whether a budget is good or bad depends on how well its author can distinguish between efficient and inefficient spending, over many small projects and expenditures that will never be reviewed by the voters, and not on the total amount taxed or spent. Whether a regulation is good or bad depends on how well its author can predict the effects and engineer the small details for optimal effect, and not on whether it is more or less strict overall. Whether foreign policies succeed or fail depends on how well the diplomats negotiate, and not on any strategy that could be determined years earlier before the election.

continue reading »

Politics as Charity

29 CarlShulman 23 September 2010 05:33AM

Related to: Shut up and multiply, Politics is the mind-killer, Pascal's Mugging, The two party swindle, The American system and misleading labels, Policy Tug-of-War

Jane is a connoisseur of imported cheeses and Homo Economicus in good standing, using a causal decision theory that two-boxes on Newcomb's problem. Unfortunately for her, the politically well-organized dairy farmers in her country have managed to get an initiative for increased dairy tariffs on the ballot, which will cost her $20,000. Should she take an hour to vote against the initiative on election day? 

She estimates that she has a 1 in 1,000,000 chance of casting the deciding vote, for an expected value of $0.02 from improved policy. However, while Jane may be willing to give her two cents on the subject, the opportunity cost of her time far exceeds the policy benefit, and so it seems she has no reason to vote.

Jane's dilemma is just the standard Paradox of Voting in political science and public choice theory. Voters may still engage in expressive voting to affiliate with certain groups or to signal traits insofar as politics is not about policy, but the instrumental rationality of voting to bring about selfishly preferred policy outcomes starts to look dubious. Thus many of those who say that we rationally ought to vote in hopes of affecting policy focus on altruistic preferences: faced with a tiny probability of casting a decisive vote, but large impacts on enormous numbers of people in the event that we are decisive, we should shut up and multiply, voting if the expected value of benefit to others sufficiently exceeds the cost to ourselves.
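
The multiplication itself is simple; what changes between the selfish and altruistic cases is the stake. A minimal sketch (the figures in the altruistic case are invented for illustration, not taken from the post or from GWWC):

```python
p_decisive = 1e-6                     # Jane's estimated chance of casting the deciding vote

# Selfish stake: the tariff costs Jane $20,000.
print(p_decisive * 20_000)            # 0.02 -- two cents, far less than an hour of her time

# Altruistic stake: suppose, purely illustratively, the tariff costs
# 10 million consumers an average of $100 each.
print(p_decisive * 10_000_000 * 100)  # 1000.0 -- $1,000 in expected benefit to others
```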

Meanwhile, at the Experimental Philosophy blog, Eric Schwitzgebel reports that philosophers overwhelmingly rate voting as very morally good (on a scale of 1 to 9), with voting placing right around donating 10% of one's income to charity. He offers the following explanation:

continue reading »
