What are some other seemingly "irrational" things we do that are in fact rational when we factor in the pleasantness of doing them?
Relevant paper: Lay Rationalism and Inconsistency between Predicted Experience and Decision
Thanks Nick. That paper looks very interesting.
I think Eliezer's using these terms in a more specific sense than you are. For instance, your Person 2 is making an appeal to egalitarianism (in the conventional sense) as an argument for their position; while it still may be invalid, it's not an argument for why the debate should stop, which is what this post is about, if I'm reading it correctly. The appeal to egalitarianism is something like "Both of us have equally valid opinions, so who's to say which of us is right or wrong? Let's agree to disagree." The appeal to personal freedom is "I have a right to my opinion, so by arguing with me, you're infringing on my rights" (I encounter that one depressingly often), "I define my words this one way, so by disputing that, you're infringing on my rights", etc. They're never arguments (even wrong ones) about the actual merit of the views being debated.
Oops, yes, I misread the original post. Thanks for pointing that out.
The items on that list of appeals can also be ranked. According to mainstream US values, "Appeal to egalitarianism" trumps "Appeal to unquestionable authority"; "Appeal to personal freedom" trumps "Appeal to egalitarianism"; and so on. The standard political talk show debate consists of a back-and-forth escalation up this ladder.
For example, in a televised debate on regulation:
Person 1: "The National Bureau of Economic Research published a study showing conclusively that regulation of X is harmful" (authority)
Person 2: "Well, I don't care what the elite economists say; the poor are not getting equal access to X and that is unfair." (egalitarianism)
Person 1: "Sure, it's unequal, but if the government played big brother with X, that would violate our fundamental freedoms." (personal freedom)
Exercising "rational" self-control can be very unpleasant, therefore resulting in disutility.
Example 1: When I buy an interesting-looking book on Amazon, I can either have it shipped to me in 8 days for free, or in 2 days for a few bucks. The naively rational thing to do is to select the free shipping, but you know what? That 8-day wait is more unpleasant than spending a few bucks.
Example 2: When I come home from the grocery store I'm tempted to eat all the tastiest food first. It would be more "emotionally intelligent" to spread it out over the course of the week. But that requires a lot of unpleasant resistance to temptation. Also, the plain food seems more appealing when I'm hungry and it's the only thing in my fridge.
Of course, exercising restraint probably builds willpower, a good thing in the long run. But in some cases we should admit that our willpower is only so elastic, and that the most rational thing to do is to give in to our impulses.
What are some other seemingly "irrational" things we do that are in fact rational when we factor in the pleasantness of doing them?
I think clever people are especially susceptible to the belief that their perceptions are typical. Let's say you can't visualize images in your mind, but your coworker insists that he can. Since you're not a brain scientist, you can't verify whether he's right or whether he's just misinterpreted the question. However, the last few times you had a disagreement with him on a verifiable subject, you were vindicated by the facts, so you can only assume that you are right this time as well. Add to that the fact that people's stated perceptions and preferences are frequently dishonest (because of signaling), and it's very easy to mistrust them.
One useful first step to overcoming this bias is to compare one's results on a test like UVA's Moral Foundations Questionnaire to those of other segments of the population.
However, it's not enough to just learn the facts about how other people perceive the world; sometimes one has to experience them firsthand. I have always been an ambitious high achiever and used to get frustrated and confused by people who were not able to follow through with their goals. However, a few years back I had an adverse reaction to a medication, and experienced for a few hours what depression must be like. From then on, it all made perfect sense.
I wonder if one day it will be possible to alter my brain chemistry safely and temporarily so that I can experience what it is like to perceive the world as a conservative, a liberal, a luddite, a woman, a blue-collar worker, a depression sufferer, a jock, an artist, etc. The impact on my emotional maturity and ability to empathize would be tremendous.
There are a lot of opportunities in the day for something to happen that might prompt you to think "wow, that's one in a thousand", though. It wouldn't have been worth wasting a moment wondering if it was coincidence unless you had some reason to suspect an alternative hypothesis, like that it changed because the mouse moved.
bit that makes no sense deleted
Please take note of the wording: "reject all bias as evil".
That is, lumping all demonstrated instances of bias into a general category of "ugh, I should avoid doing this" is likely to keep us from looking into the interesting adaptive properties of specific biases.
When confronted with a specific bias, the useful thing to do is recognize that it introduces error in particular contexts but may remain adaptive in other contexts. We will then strive to adopt prescriptive approaches, selected according to context, which help correct for observed bias and bring our cognition into line with the desired normative frameworks - which themselves differ from context to context.
I think this disagreement comes down to the definition of "bias", which Wikipedia defines as "a tendency or preference towards a particular perspective, ideology or result, when the tendency interferes with the ability to be impartial, unprejudiced, or objective." If a bias helps you make fewer errors, I would argue it's not a bias.
Maybe it is clearer if we speak of behaviors rather than biases. A given behavior (e.g. tendency to perceive what you were expecting to perceive) may make you more biased in certain contexts, and more rational in others. It might be advantageous to keep this behavior if it helps you more than it hurts you, but to the extent that you can identify the situations where the behavior causes errors, you should try to correct it.
Great audio clip, BTW.
How do people here consume Less Wrong? I just started reading and am looking for a good way to stay on top of posts and comments. Do you periodically check the website? Do you use an RSS feed? (which?) Or something else?
That really shouldn't matter. Your calibration should include the chances of the question being a "trick question". If fewer than 90% of subjects give confidence intervals containing the actual number of employees, they're being overconfident by underestimating the probability that the question has an unexpected answer.
Imagine an experiment where we randomize subjects into two groups. All subjects are given a 20-question quiz that asks them to provide a confidence interval on the temperatures in various cities around the world on various dates in the past year. However, the cities and dates for group 1 are chosen at random, whereas the cities and dates for group 2 are chosen because they were record highs or lows.
This will result in two radically different estimates of overconfidence. The fact that the result of a calibration test depends heavily on the questions being asked should suggest that the methodology is problematic.
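The thought experiment above can be sketched numerically. This is a minimal simulation under assumed numbers (Gaussian daily temperatures, a subject who reports a perfectly calibrated 90% interval); the specific parameters are hypothetical, but the effect is general:

```python
import random

random.seed(0)

# Assumed model: a city's daily temperature is N(MU, SIGMA).
# A well-calibrated subject reports the interval MU ± 1.645*SIGMA,
# which covers 90% of randomly drawn days.
MU, SIGMA = 20.0, 5.0
Z90 = 1.645
lo, hi = MU - Z90 * SIGMA, MU + Z90 * SIGMA

days = [random.gauss(MU, SIGMA) for _ in range(100_000)]

# Group 1: dates chosen at random.
random_days = random.sample(days, 1000)
cov_random = sum(lo <= t <= hi for t in random_days) / len(random_days)

# Group 2: dates chosen because they were record highs or lows
# (here: the most extreme 2.5% of draws on each side).
days_sorted = sorted(days)
extremes = days_sorted[:2500] + days_sorted[-2500:]
cov_extreme = sum(lo <= t <= hi for t in extremes) / len(extremes)

print(f"coverage on random dates: {cov_random:.2f}")   # close to 0.90
print(f"coverage on record dates: {cov_extreme:.2f}")  # far below 0.90
```

The same subject, giving the same intervals, looks calibrated on group 1's quiz and wildly overconfident on group 2's, which is exactly the point: the measured "overconfidence" is a property of the question-selection process, not of the subject.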
What this comes down to is: how do you estimate the probability that a question has an unexpected answer? See this quiz: maybe the quizzer is trying to trick you, maybe he's trying to reverse-trick you, or maybe he just chose his questions at random. It's a meaningless exercise because you're being asked to estimate values from an unknown distribution. The only rational thing to do is guess at random.
People taking a calibration test should first see the answers to a sample of the data set they will be tested on.
-- Talib Kweli (substitute "nature" for "God")