Politics as Charity
Related to: Shut up and multiply, Politics is the mind-killer, Pascal's Mugging, The two party swindle, The American system and misleading labels, Policy Tug-of-War
Jane is a connoisseur of imported cheeses and Homo Economicus in good standing, using a causal decision theory that two-boxes on Newcomb's problem. Unfortunately for her, the politically well-organized dairy farmers in her country have managed to get an initiative for increased dairy tariffs on the ballot, which will cost her $20,000. Should she take an hour to vote against the initiative on election day?
She estimates that she has a 1 in 1,000,000 chance of casting the deciding vote, for an expected value of $0.02 from improved policy. However, while Jane may be willing to give her two cents on the subject, the opportunity cost of her time far exceeds the policy benefit, and so it seems she has no reason to vote.
Jane's dilemma is just the standard Paradox of Voting in political science and public choice theory. Voters may still engage in expressive voting to affiliate with certain groups or to signal traits insofar as politics is not about policy, but the instrumental rationality of voting to bring about selfishly preferred policy outcomes starts to look dubious. Thus many of those who say that we rationally ought to vote in hopes of affecting policy focus on altruistic preferences: faced with a tiny probability of casting a decisive vote, but large impacts on enormous numbers of people in the event that we are decisive, we should shut up and multiply, voting if the expected value of benefit to others sufficiently exceeds the cost to ourselves.
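Jane's arithmetic, and the altruistic "shut up and multiply" variant, can be sketched in a few lines. The selfish numbers come from the article; the population size and per-person benefit in the altruistic case are hypothetical figures chosen purely for illustration.

```python
def expected_value(p_decisive, benefit, cost_of_voting):
    """Expected net value of voting: chance of casting the deciding vote
    times the benefit of the better policy, minus the certain cost."""
    return p_decisive * benefit - cost_of_voting

# Selfish case from the article: 1-in-1,000,000 chance, $20,000 at stake.
selfish = expected_value(1e-6, 20_000, cost_of_voting=0)
print(f"${selfish:.2f}")  # $0.02 -- two cents, before counting the hour spent

# Altruistic case (hypothetical numbers): if the policy affects 10,000,000
# people by an average of $1,000 each, the stakes are $10 billion, and even
# a tiny decisive probability can swamp the cost of an hour of voting.
altruistic = expected_value(1e-6, 10_000_000 * 1_000, cost_of_voting=0)
print(f"${altruistic:,.0f}")  # $10,000 expected benefit to others
```

The structure of the argument is just that the benefit term scales with the number of people affected while the probability of being decisive does not shrink correspondingly.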
Meanwhile, at the Experimental Philosophy blog, Eric Schwitzgebel reports that philosophers overwhelmingly rate voting as very morally good (on a scale of 1 to 9), with voting placing right around donating 10% of one's income to charity, and offers an explanation for the pattern.
Is Rationality Maximization of Expected Value?
Two or three months ago, my trip to Las Vegas made me ponder the following: If all gambles in the casinos have negative expected values, why do people still engage in gambling - especially my friends fairly well-versed in probability/statistics?
Suffice it to say, I still have not answered that question.
On the other hand, this did lead me to ponder more about whether rational behavior always involves making choices with the highest expected (or positive) value - call this Rationality-Expectation (R-E) hypothesis.
Here I'd like to offer some counterexamples that show R-E is clearly false, to me at least. (In hindsight, these look fairly trivial but some commentators on this site speak as if maximizing expectation is somehow constitutive of rational decision making - as I used to. So, it may be interesting for those people at the very least.)
- Suppose someone offers you a (single trial) gamble A in which you stand to gain 100k dollars with probability 0.99 and stand to lose 100M dollars with probability 0.01. Even though the expectation is -901,000 dollars, you should still take the gamble since the probability of winning on a single trial is very high - 0.99 to be exact.
- Suppose someone offers you a (single trial) gamble B in which you stand to lose 100k dollars with probability 0.99 and stand to gain 100M dollars with probability 0.01. Even though the expectation is 901,000 dollars, you should not take the gamble since the probability of losing on a single trial is very high - 0.99 to be exact.
A is a gamble that shows that choices with negative expectation can sometimes lead to a net payoff.
B is a gamble that shows that choices with positive expectation can sometimes lead to net costs.
As I'm sure you've all noticed, expectation is only meaningful in decision-making when the number of trials in question can be large (or more precisely, large enough relative to the variance of the random variable in question). This, I think, is in essence another way of looking at the Weak Law of Large Numbers.
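The two gambles and the role of trial count can be checked directly. This is a minimal sketch (the helper names are mine): the expectations work out to -$901,000 for A and +$901,000 for B, and for gamble B the probability of coming out ahead rises quickly with repetition, since a single $100M win covers up to 1,000 losses of $100k.

```python
def expected_value(outcomes):
    """Expected value of a gamble given as (payoff, probability) pairs."""
    return sum(payoff * p for payoff, p in outcomes)

# Gamble A: +$100k with p = 0.99, -$100M with p = 0.01.
# Gamble B: -$100k with p = 0.99, +$100M with p = 0.01.
A = [(100_000, 0.99), (-100_000_000, 0.01)]
B = [(-100_000, 0.99), (100_000_000, 0.01)]

print(f"{expected_value(A):,.0f}")  # -901,000
print(f"{expected_value(B):,.0f}")  # 901,000

# Repeat gamble B n times. Each win pays +$100M and each loss costs $100k,
# so for n <= 1000 you end up net positive iff you win at least once,
# which happens with probability 1 - 0.99**n.
def p_ahead_after(n):
    assert n <= 1000  # beyond this, one win no longer covers all losses
    return 1 - 0.99 ** n

print(round(p_ahead_after(1), 4))    # a single trial usually loses
print(round(p_ahead_after(100), 4))  # most hundred-trial players are ahead
print(round(p_ahead_after(500), 4))  # losing overall becomes very unlikely
```

This is the "enough trials relative to the variance" point in miniature: the positive expectation of B only translates into a high probability of actual gain once the gamble is repeated.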
In general, most (all? few?) statistical concepts make sense only when we have trials numerous enough relative to the variance of the quantities in question.
This makes me ponder a deeper question, nonetheless.
Does it make sense to speak of probabilities only when you have numerous enough trials? Can we speak of probabilities for singular, non-repeating events?
Sayeth the Girl
Disclaimer: If you are prone to dismissing women's complaints of gender-related problems as the women being whiny, emotionally unstable girls who see sexism where there is none, this post is unlikely to interest you.
For your convenience, links to followup posts: Roko says; orthonormal says; Eliezer says; Yvain says; Wei_Dai says
As far as I can tell, I am the most active female poster on Less Wrong. (AnnaSalamon has higher karma than I, but she hasn't commented on anything for two months now.) There are not many of us. This is usually immaterial. Heck, sometimes people don't even notice in spite of my girly username, my self-introduction, and the fact that I'm now apparently the feminism police of Less Wrong.
My life is not about being a girl. In fact, I'm less preoccupied with feminism and women's special interest issues than most of the women I know, and some of the men. It's not my pet topic. I do not focus on feminist philosophy in school. I took an "Early Modern Women Philosophers" course because I needed the history credit, had room for a suitable class in a semester when one was offered, and heard the teacher was nice; also, I was pretty bored. I wound up doing my midterm paper on Malebranche in that class because we'd covered him to give context to Mary Astell, and he was more interesting than she was. I didn't vote for Hillary Clinton in the primary. Given the choice, I have lots of things I'd rather be doing than ferreting out hidden or less-than-hidden sexism on one of my favorite websites.
Unfortunately, nobody else seems to want to do it either, and I'm not content to leave it undone. I suppose I could abandon the site and leave it even more masculine so the guys could all talk in their own language, unimpeded by stupid chicks being stupidly offended by completely unproblematic things like objectification and just plain jerkitude. I would almost certainly have vacated the site already if feminism were my pet issue, or if I were more easily offended. (In general, I'm very hard to offend. The fact that people here have succeeded in doing so anyway without even, apparently, going out of their way to do it should be a great big red flag that something's up.) If you're wondering why half of the potential audience of the site seems to be conspicuously not here, this may have something to do with it.
Reason is not the only means of overcoming bias
Sometimes the best way to overcome bias is through an emotional appeal. Below I discuss how emotional appeals can be used to overcome the biases behind the identifiable victim effect and the maladaptive resource-hoarding instinct.
Humans are not automatically strategic
Reply to: A "Failure to Evaluate Return-on-Time" Fallacy
Lionhearted writes:
[A] large majority of otherwise smart people spend time doing semi-productive things, when there are massively productive opportunities untapped.
A somewhat silly example: Let's say someone aspires to be a comedian, the best comedian ever, and to make a living doing comedy. He wants nothing else, it is his purpose. And he decides that in order to become a better comedian, he will watch re-runs of the old television cartoon 'Garfield and Friends' that was on TV from 1988 to 1995....
I’m curious as to why.
Why will a randomly chosen eight-year-old fail a calculus test? Because most possible answers are wrong, and there is no force to guide him to the correct answers. (There is no need to postulate a "fear of success"; most ways of writing or not writing on a calculus test constitute failure, and so people, and rocks, fail calculus tests by default.)
Why do most of us, most of the time, choose to "pursue our goals" through routes that are far less effective than the routes we could find if we tried?[1] My guess is that here, as with the calculus test, the main problem is that most courses of action are extremely ineffective, and that there has been no strong evolutionary or cultural force sufficient to focus us on the very narrow behavior patterns that would actually be effective.
Cryonics Questions
Cryonics fills many with disgust, a cognitively dangerous emotion. To test whether a few of your possible cryonics objections are reason-based or disgust-based, I list six non-cryonics questions. Answering yes to any one question indicates that rationally you shouldn't have the corresponding cryonics objection.
1. You have a disease and will soon die unless you get an operation. With the operation you have a non-trivial but far from certain chance of living a long, healthy life. By some crazy coincidence the operation costs exactly as much as cryonics does and the only hospitals capable of performing the operation are next to cryonics facilities. Do you get the operation?
Answering yes to (1) means you shouldn’t object to cryonics because of costs or logistics.
2. You have the same disease as in (1), but now the operation costs far more than you could ever obtain. Fortunately, you have exactly the right qualifications NASA is looking for in a space ship commander. NASA will pay for the operation if in return you captain the ship should you survive the operation. The ship will travel close to the speed of light. The trip will subjectively take you a year, but when you return one hundred years will have passed on Earth. Do you get the operation?
Answering yes to (2) means you shouldn't object to cryonics because of the possibility of waking up in the far future.
Minimum computation and data requirements for consciousness
Consciousness is a difficult question because it is poorly defined and is the subjective experience of the entity experiencing it. Because an individual experiences their own consciousness directly, that experience is always richer and more compelling than the perception of consciousness in any other entity; your own consciousness always seems more "real" and richer than the would-be consciousness of another entity.
Because the experience of consciousness is subjective, we can never "know for sure" that an entity is actually experiencing consciousness. However, certain computational functions must be performed for consciousness to be experienced. I am not attempting to discuss all computational functions that are necessary, just taking a first step at enumerating some of them and considering their implications.
First, an entity must have a "self detector": a pattern-recognition computational structure which it uses to recognize its own state of being an entity and of being the same entity over time. If an entity is unable to recognize itself as an entity, then it can't be conscious that it is an entity. To rephrase Descartes, "I perceive myself to be an entity, therefore I am an entity." It is possible to be an entity and not perceive that one is an entity; this happens in humans, but rarely. Other computational structures may be necessary as well, but without the ability to recognize itself as an entity, an entity cannot be conscious.
Criteria for Rational Political Conversation
Query: by what objective criteria do we determine whether a political decision is rational?
I propose that the key elements -- necessary but not sufficient -- are (where "you" refers collectively to everyone involved in the decisionmaking process):
- you must use only documented reasoning processes:
- use the best known process(es) for a given class of problem
- state clearly which particular process(es) you use
- document any new processes you use
- you must make every reasonable effort to verify that:
- your inputs are reasonably accurate, and
- there are no other reasoning processes which might be better suited to this class of problem, and
- there are no significant flaws in your application of the reasoning processes you are using, and
- there are no significant inputs you are ignoring
If an argument satisfies all of these requirements, it is at least provisionally rational. If it fails any one of them, then it's not rational and needs to be corrected or discarded.
This is not a circular definition (defining "rationality" by referring to "reasonable" things, where "reasonable" depends on people being "rational"); it is more like a recursive algorithm, where large ambiguous problems are split up into smaller and smaller sub-problems until we get to a size where the ambiguity is negligible.
This is not one great moral principle; it is more like a self-modifying working process (subject to rational criticism and therefore improvable over time -- optimization by successive approximation). It is an attempt to apply the processes of science (or at least the same reasoning which arrived at those processes) to political discourse.
So... can we agree on this?
This is a hugely, vastly, mindbogglingly trimmed-down version of what I originally posted. All comments prior to 2010-08-26 20:52 (EDT) refer to that version, which I have reposted here for comparison purposes and for the morbidly curious. (It got voted down to negative 6. Twice.)
Problems in evolutionary psychology
Note: The primary target of the post is not professional, academic evolutionary psychology. Rather, I am primarily cautioning amateurs (such as LW regulars) about some of the caveats involved in (armchair) evpsych and noting the rigor required for good theories. While the post does also serve as a warning to be cautious about sloppy research (or sloppy science journalism) that doesn't seem to be taking these issues into account, I do believe that most of the researchers doing serious evpsych research are quite aware of these issues.
Evolutionary theories get mentioned a lot on this site, and I frequently feel that they are given far more weight than would be warranted. In particular, evolutionary theories about sex differences seem to get mentioned and appealed to as if they had an iron-cast certainty. People also don't hesitate to make up their own evolutionary psychological explanations. To counterbalance this, I present a list of evolutionary psychology-related problems, divided into four rough categories.
Problems in hypothesis generation
Rationalization bias. We know that human minds are very prone to first deciding on a desired outcome, then coming up with a plausible-sounding story of why it must be so. In general, our minds have difficulty noticing faulty reasoning if it leads to the right conclusion. It's easy and tempting to come up with an ad-hoc evolutionary explanation for any behavior, regardless of whether or not it actually has any biological roots.
Over-attributing meaning. Humans also have a strong tendency to attribute meaning to random chance. We might easily come up with explanations that are unnecessarily complex, and try to make everything into an evolved adaptation. For instance, humans tend to avoid thinking about unpleasant thoughts about themselves. A contrived evpsych explanation might be that this is evolved self-deception: by not acknowledging our own faults, it makes it easier for us to deceive others about them. But mental unpleasantness tends to be correlated with harmful experiences: we avoid situations where we'd be afraid, and fear is correlated with danger. It could just as well be that the mechanism for avoiding mental unpleasantness evolved from the mechanism for avoiding physical unpleasantness, and we avoid thinking unpleasant thoughts of ourselves for the same reason why we avoid poking our fingers at hot stoves. (Example courtesy of Anna Salamon.)
Alternative ways of reaching the goal. Eliezer previously gave us the example of the scientists who thought insects would under the right circumstances limit their breeding, but the insects ended up eating their competitors' offspring instead. We can only cover a limited part of the space of all possible routes evolution could take. While "but another hypothesis might explain it better" is admittedly a problem all scientific disciplines face, it is especially acute here, since we have very little knowledge of what life in the EEA was actually like.
Problems in background assumptions
Did a genetic path to the adaptation exist? Evolution works by the rule of immediate advantage: for mutation X to reach fixation, it has to provide an immediate advantage. It's well and good to propose that under specific circumstances, organisms that developed a specific behavior would have gained a fitness advantage. But that, by itself, tells us nothing about how many mutations reaching such a behavior would have required. Nor does it tell us anything about whether all of those intermediate stages actually conferred a fitness benefit on the organism, making it possible for the final form of the adaptation to actually be reached.
Christopher Hitchens and Cryonics
Christopher Hitchens is probably dying of cancer. Hitchens is a well known author, journalist and militant atheist. Having read much of his work I believe he is also a very high IQ rationalist who enjoys being provocative. He has written "I am quietly resolved to resist bodily as best I can, even if only passively, and to seek the most advanced advice."
Hitchens should be extremely receptive to cryonics. Convincing him to sign up would do much for the cryonics movement, in part because he would immediately become our most articulate member.
I have written to him about cryonics, but I suspect he is getting tens of thousands of emails and probably won't ever even read mine. I propose that the Less Wrong community attempt to get Hitchens to at least seriously consider cryonics. We could do this by mass emailing him and by linking to this blogpost.
Here is an article in which he talks about his cancer. His email address is at the end of the article.