
On the unpopularity of cryonics: life sucks, but at least then you die

73 gwern 29 July 2011 09:06PM

From Mike Darwin's Chronopause, an essay titled "Would You Like Another Plate of This?", discussing people's attitudes to life:

The most important, the most obvious and the most factual reason why cryonics is not more widely accepted is that it fails the “credibility sniff test” in that it makes many critical assumptions which may not be correct... In other words, cryonics is not proven. That is a plenty valid reason for rejecting any costly procedure; dying people do this kind of thing every day for medical procedures which are proven, but which have a very low rate of success and (or) a very high misery quotient. Some (few) people have survived metastatic head/neck cancer – the film critic Roger Ebert is an example (Figure 1). However, the vast majority of patients who undergo radical neck surgery for cancer die anyway. For the kind and extent of cancer Ebert had, the long term survival rate (>5 years) is ~5% following radical neck dissection and ancillary therapy: usually radiation and chemotherapy. This is thus a proven procedure – it works – and yet the vast majority of patients refuse it.

continue reading »

How to enjoy being wrong

20 lincolnquirk 27 July 2011 05:48AM

Related to: Reasoning Isn't About Logic, It's About Arguing; It is OK to Publicly Make a Mistake and Change Your Mind.

Examples of being wrong

A year ago, in arguments or in thought, I would often:

  • avoid criticizing my own thought processes or decisions when discussing why my startup failed
  • overstate my expertise on a topic (how to design a program written in assembly language), then have to quickly justify a position and defend it based on limited knowledge and cached thoughts, rather than admitting "I don't know"
  • defend a position (whether doing an MBA is worthwhile) based on the "common wisdom" of a group I identify with, without any actual knowledge, or having thought through it at all
  • defend a position (whether a piece of artwork was good or bad) because of a desire for internal consistency (I argued it was good once, so felt I had to justify that position)
  • defend a political or philosophical position (libertarianism) which seemed attractive, based on cached thoughts or cached selves rather than actual reasoning
  • defend a position ("cashiers like it when I fish for coins to make a round amount of change"), hear a very convincing argument for its opposite ("it takes up their time, other customers are waiting, and they're better at making change than you"), but continue arguing for the original position. In this scenario, I actually updated -- thereafter, I didn't fish for coins in my wallet anymore -- but still didn't admit it in the original argument.
  • defend a policy ("I should avoid albacore tuna") even when the basis for that policy (mercury risk) has been countered by factual evidence (in this case, the amount of mercury per can is so small that you would need 10 cans per week to start reading on the scale).
  • provide evidence for a proposition ("I am getting better at poker") where I actually thought it was just luck, but wanted to believe the proposition
  • attempt to justify a weird action in terms of reasons that "made logical sense" when someone asked "why did you do that?", rather than admitting that I didn't know why I made the choice, or examining myself to find out why.

Now, I very rarely get into these sorts of situations. If I do, I state out loud: "Oh, I'm rationalizing," or perhaps "You're right," abort that line of thinking, and retreat to analyzing reasons why I emitted such a wrong statement.

We rationalize because we don't like admitting we're wrong. (Is this obvious? Do I need to cite it?) One possible evo-psych explanation: rationalization is an adaptation which improved fitness by making it easier for tribal humans to bring others around to their point of view.

Over the last year, I've self-modified to mostly not mind being wrong, and in some cases even enjoy being wrong. I still often start to rationalize, and in some cases get partway through the thought, before noticing the opportunity to correct the error. But when I notice that opportunity, I take it, and get a flood of positive feedback and self-satisfaction as I update my models.

continue reading »

Secrets of the eliminati

93 Yvain 20 July 2011 10:15AM

Anyone who does not believe mental states are ontologically fundamental - ie anyone who denies the reality of something like a soul - has two choices about where to go next. They can try reducing mental states to smaller components, or they can stop talking about them entirely.

In a utility-maximizing AI, mental states can be reduced to smaller components. The AI will have goals, and those goals, upon closer examination, will be lines in a computer program.

But the blue-minimizing robot's "goal" isn't even a line in its program. There's nothing that looks remotely like a goal in its programming, and goals appear only when you make rough generalizations from its behavior in limited cases.
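To make the contrast concrete, here's a minimal sketch in Python (the function names are hypothetical, not code from either post): the utility maximizer has a component you can point to and call a goal, while the blue-minimizing robot has only a reflex.

    def utility_maximizing_agent(possible_actions, predict_outcome, utility):
        # The "goal" is an explicit part of the program: a utility function
        # the agent consults when deciding what to do.
        return max(possible_actions, key=lambda action: utility(predict_outcome(action)))

    def blue_minimizing_robot(camera_frame, is_blue, fire_laser_at):
        # No goal anywhere in the program, just a reflex. "It wants to minimize
        # blue" is a generalization an observer draws from watching its behavior.
        for pixel in camera_frame:
            if is_blue(pixel):
                fire_laser_at(pixel)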

Philosophers are still very much arguing about whether this applies to humans; the two schools call themselves reductionists and eliminativists (with a third school of wishy-washy half-and-half people calling themselves revisionists). Reductionists want to reduce things like goals and preferences to the appropriate neurons in the brain; eliminativists want to prove that humans, like the blue-minimizing robot, don't have anything of the sort until you start looking at high level abstractions.

continue reading »

Connectionism: Modeling the mind with neural networks

39 Yvain 19 July 2011 01:16AM

For about a century, people have known that the brain is made up of neurons which connect to one another and perform computations through electrochemical transmission. For about half a century, people have known enough about computers to realize that the brain doesn't look much like one but still computes pretty well regardless. How?

Spreading Activation was one of the first models of mental computation. In this theory, you can imagine the brain as a bunch of nodes in a graph with labels like "Warlord", "Mongolia", "Barbarian", "Genghis Khan" and "Salmon". Each node has certain connections to the others; when two nodes get activated around the same time, the connection between them strengthens. When someone asks a question like "Who was that barbaric Mongol warlord, again?" it activates the nodes "warlord", "barbarian", and "Mongol". The activation spreads to all the nodes connected to these, activating them too, and the most strongly activated node will be the one that's closely connected to all three - the barbaric Mongol warlord in question, Genghis Khan. All the while, "salmon", which has no connection to any of these concepts, just sits on its own not being activated. This fits with experience: if someone asks us about barbaric Mongol warlords, the name "Genghis Khan" pops into our brain like magic, while we continue to not think about salmon if we weren't thinking about them before.
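As a toy illustration, a network like this can be sketched in a few lines of Python (the node names and connection weights are invented for the example, not taken from any real model):

    # Hypothetical connection weights between concept nodes.
    connections = {
        "warlord":      {"Genghis Khan": 0.9, "barbarian": 0.4, "Mongolia": 0.3},
        "barbarian":    {"Genghis Khan": 0.8, "warlord": 0.4},
        "Mongolia":     {"Genghis Khan": 0.9, "warlord": 0.3},
        "Genghis Khan": {"warlord": 0.9, "barbarian": 0.8, "Mongolia": 0.9},
        "salmon":       {},  # connected to none of the others
    }

    def spread(cues, connections):
        """Activate the cue nodes and spread activation one step along the links."""
        activation = {node: 0.0 for node in connections}
        for cue in cues:
            activation[cue] += 1.0
            for neighbor, weight in connections[cue].items():
                activation[neighbor] += weight
        return activation

    activation = spread(["warlord", "barbarian", "Mongolia"], connections)
    print(max(activation, key=activation.get))  # "Genghis Khan"; "salmon" stays at zero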

Bark leash bone wag puppy fetch. If the word "dog" is now running through your head, you may be a victim of spreading activation, as were participants in something called a Deese-Roediger-McDermott experiment, who, when asked to quickly memorize a list of words like those and then tested on their retention several minutes later, were more likely to "remember" "dog" than any of the words actually on the list.

So this does seem attractive, and it does avoid the folk psychology concept of a "belief". The spreading activation network above was able to successfully answer a question without any representation of propositional statements like "Genghis Khan was a barbaric Mongol warlord." And one could get really enthusiastic about this and try to apply it to motivation. Maybe we have nodes like "Hunger", "Food", "McDonalds", and "*GET IN CAR, DRIVE TO MCDONALDS*". The stomach could send a burst of activation to "Hunger", which in turn activates the closely related "Food", which in turn activates the closely related "McDonalds", which in turn activates the closely related "*GET IN CAR, DRIVE TO MCDONALDS*", and then before you know it you're ordering a Big Mac.

But when you try to implement this on a computer, you don't get very far. Although it can perform certain very basic computations, it has trouble correcting itself, handling anything too complicated (the question "name one person who is *not* a barbaric Mongol warlord" would still return "Genghis Khan" on our toy spreading activation network), or making good choices (you can convince the toy network McDonalds is your best dining choice just by saying its name a lot; the network doesn't care about food quality, prices, or anything else).

This simple spreading activation model also crashes up against modern neuroscience research, which mostly contradicts the idea of a "grandmother cell", ie a single neuron that represents a single concept like your grandmother. Mysteriously, all concepts seem to be represented everywhere at once - Karl Lashley found you can remove any part of a rat's cortex without significantly damaging a specific memory, proving the memory was nonlocalized. How can this be?

Computer research into neural nets developed a model that could answer these and other objections, transforming the immature spreading activation model into full-blown connectionism.

continue reading »

The limits of introspection

56 Yvain 16 July 2011 09:00PM

Related to: Inferring Our Desires

The last post in this series suggested that we make up goals and preferences for other people as we go along, but ended with the suggestion that we do the same for ourselves. This deserves some evidence.

One of the most famous sets of investigations into this issue was Nisbett and Wilson's Verbal Reports on Mental Processes, the discovery of which I owe to another Less Wronger even though I can't remember who. The abstract says it all:

When people attempt to report on their cognitive processes, that is, on the processes mediating the effects of a stimulus on a response, they do not do so on the basis of any true introspection. Instead, their reports are based on a priori, implicit causal theories, or judgments about the extent to which a particular stimulus is a plausible cause of a given response. This suggests that though people may not be able to observe directly their cognitive processes, they will sometimes be able to report accurately about them. Accurate reports will occur when influential stimuli are salient and are plausible causes of the responses they produce, and will not occur when stimuli are not salient or are not plausible causes.

In short, people guess, and sometimes they get lucky. But where's the evidence?

Nisbett & Schachter, 1966. People were asked to get electric shocks to see how much shock they could stand (I myself would have waited to see if one of those see-how-much-free-candy-you'll-eat studies from the post last week was still open). Half the subjects were also given a placebo pill which they were told would cause heart palpitations, tremors, and breathing irregularities - the main problems people report when they get shocked. The hypothesis: people who took the pill would attribute much of the unpleasantness of the shock to the pill instead, and so tolerate more shock. This occurred right on schedule: people who took the pill tolerated four times as strong a shock as controls. When asked why they did so well, the twelve subjects in the experimental group came up with fabricated reasons; one example given was "I played with radios as a child, so I'm used to electricity." Only three of twelve subjects made a connection between the pill and their shock tolerance; when the researchers revealed the deception and their hypothesis, most subjects said it was an interesting idea and probably explained the other subjects, but it hadn't affected them personally.

Zimbardo et al, 1965. Participants in this experiment were probably pleased to learn there were no electric shocks involved, right up until the point where the researchers told them they had to eat bugs. In one condition, a friendly and polite researcher made the request; in another, a surly and arrogant researcher asked. Everyone ate the bug (experimenters can be pretty convincing), but only the group accosted by the unpleasant researcher claimed to have liked it. This confirmed the team's hypothesis: the nice-researcher group would know why they ate the bug - to please their new best friend - but the mean-researcher group would either have to admit it was because they're pushovers, or explain it by saying they liked eating bugs. When asked after the experiment why they were so willing to eat the bug, they said things like "Oh, it's just one bug, it's no big deal." When presented with the idea of cognitive dissonance, they once again agreed it was an interesting idea that probably affected some of the other subjects but of course not them.

Maier, 1931. Subjects were placed in a room with several interesting tools and asked to come up with as many solutions as possible to a puzzle about tying two cords together. One end of each cord was tied to the ceiling, and when the subject was holding on to one cord they couldn't reach the other. A few solutions were obvious, such as tying an extension cord to each, but the experiment involved a more complicated solution - tying a weight to a cord and using it as a pendulum to bring it into reach of the other. Subjects were generally unable to come up with this idea on their own in any reasonable amount of time, but when the experimenter, supposedly in the process of observing the subject, "accidentally" brushed up against one cord and set it swinging, most subjects were able to develop the solution within 45 seconds. However, when the experimenter asked immediately afterwards how they came up with the pendulum idea, the subjects were completely unable to recognize the experimenter's movement as the cue, and instead came up with completely unrelated ideas and invented thought processes, some rather complicated. After what the study calls "persistent probing", less than a third of the subjects mentioned the role of the experimenter.

Latane & Darley, 1970. This is the famous "bystander effect", where people are less likely to help when there are others present. The researchers asked subjects in bystander effect studies what factors influenced their decision not to help; the subjects gave many, but didn't mention the presence of other people.

Nisbett & Wilson, 1977. Subjects were primed with lists of words all relating to an unlisted word (eg "ocean" and "moon" to elicit "tide"), and then asked a question, one possible answer to which involved the unlisted word (eg "What's your favorite detergent?" "Tide!"). The experimenters confirmed that many more people who had been primed with the lists gave the unlisted answer than control subjects (eg more people who had memorized "ocean" and "moon" gave Tide as their favorite detergent). Then they asked subjects why they had chosen their answer, and the subjects generally gave totally unrelated responses (eg "I love the color of the Tide box" or "My mother uses Tide"). When the experiment was explained to subjects, only a third admitted that the words might have affected their answer; the rest kept insisting that Tide was really their favorite. Then they repeated the process with several other words and questions, continuing to ask if the word lists influenced answer choice. The subjects' answers were effectively random - sometimes they believed the words didn't affect them when statistically they probably did, other times they believed the words did affect them when statistically they probably didn't.

Nisbett & Wilson, 1977. Subjects in a department store were asked to evaluate different articles of clothing in a line. As usually happens in this sort of task, people disproportionately chose the rightmost object (four times as often as the leftmost), no matter which object was on the right; this is technically referred to as a "position effect". The customers were asked to justify their choices and were happy to do so based on different qualities of the fabric et cetera; none said their choice had anything to do with position, and the experimenters dryly mention that when they asked the subjects if this was a possibility, "virtually all subjects denied it, usually with a worried glance at the interviewer suggesting they felt that they...were dealing with a madman".

Nisbett & Wilson, 1977. Subjects watched a video of a teacher with a foreign accent. In one group, the video showed the teacher acting kindly toward his students; in the other, it showed the teacher being strict and unfair. Subjects were asked to rate how much they liked the teacher, and also how much they liked his appearance and accent, which were the same across both groups. Because of the halo effect, students who saw the teacher acting nice thought he was attractive with a charming accent; people who saw the teacher acting mean thought he was ugly with a harsh accent. Then subjects were asked whether how much they liked the teacher had affected how much they liked the appearance and accent. They generally denied any halo effect, and in fact often insisted that part of the reason they hated the teacher so much was his awful clothes and annoying accent - the same clothes and accent which the nice-teacher group said were part of the reason they liked him so much!

There are about twice as many studies listed in the review article itself, but the trend is probably getting pretty clear. In some studies, like the bug-eating experiment, people perform behaviors and, when asked why they performed the behavior, guess wrong. Their true reasons for the behavior are unclear to them. In others, like the clothes position study, people make a choice, and when asked what preferences caused the choice, guess wrong. Again, their true reasons are unclear to them.

Nisbett and Wilson add that when they ask people to predict how they would react to the situations in their experiments, people "make predictions that in every case were similar to the erroneous reports given by the actual subjects." In the bystander effect experiment, outsiders predict the presence or absence of others wouldn't affect their ability to help, and subjects claim (wrongly) that the presence or absence of others didn't affect their ability to help.

In fact, it goes further than this. In the word-priming study (remember? The one with Tide detergent?) Nisbett and Wilson asked outsiders to predict which sets of words would change answers to which questions (would hearing "ocean" and "moon" make you pick Tide as your favorite detergent? Would hearing "Thanksgiving" make you pick Turkey as a vacation destination?). The outsiders' guesses correlated not at all with which words genuinely changed answers, but very much with which words the subjects guessed had changed their answers. Perhaps the subjects' answers looked a lot like the outsiders' answers because both were engaged in the same process: guessing blindly.

These studies suggest that people do not have introspective awareness of the processes that generate their behavior. They guess their preferences, justifications, and beliefs by inferring the most plausible rationale for their observed behavior, but are unable to make these guesses qualitatively better than outside observers. This supports the view presented in the last few posts: that mental processes are the results of opaque preferences, and that our own "introspected" goals and preferences are a product of the same machinery that infers goals and preferences in others in order to predict their behavior.

Basics of Animal Reinforcement

45 Yvain 05 July 2011 08:42PM

Behaviorism historically began with Pavlov's studies into classical conditioning. When dogs see food they naturally salivate. When Pavlov rang a bell before giving the dogs food, the dogs learned to associate the bell with the food and salivate even after they merely heard the bell. When Pavlov rang the bell a few times without providing food, the dogs stopped salivating, but when he added the food again it only took a single trial before the dogs "remembered" their previously conditioned salivation response1.

So much for classical conditioning. The real excitement starts with operant conditioning. Classical conditioning can only activate reflexive actions like salivation or sexual arousal; operant conditioning can produce entirely new behaviors and is most associated with the idea of "reinforcement learning".

Serious research into operant conditioning began with B.F. Skinner's work on pigeons. Stick a pigeon in a box with a lever and some associated machinery (a "Skinner box"2). The pigeon wanders around, does various things, and eventually hits the lever. Delicious sugar water squirts out. The pigeon continues wandering about and eventually hits the lever again. Another squirt of delicious sugar water. Eventually it percolates into its tiny pigeon brain that maybe pushing this lever makes sugar water squirt out. It starts pushing the lever more and more, each push continuing to convince it that yes, this is a good idea.

Consider a second, less lucky pigeon. It, too, wanders about in a box and eventually finds a lever. It pushes the lever and gets an electric shock. Eh, maybe it was a fluke. It pushes the lever again and gets another electric shock. It starts thinking "Maybe I should stop pressing that lever." The pigeon continues wandering about the box doing anything and everything other than pushing the shock lever.

The basic concept of operant conditioning is that an animal will repeat behaviors that give it reward, but avoid behaviors that give it punishment3.
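As a very rough sketch of that idea in code (a toy model with made-up numbers, not anything from the behaviorist literature), you can picture the pigeon keeping a propensity for each behavior that gets nudged up by reward and down by punishment:

    import random

    # Toy pigeon: the propensities and learning rate are arbitrary illustrative values.
    propensity = {"press lever": 1.0, "wander": 1.0, "peck floor": 1.0}

    def choose_action():
        actions = list(propensity)
        return random.choices(actions, weights=[propensity[a] for a in actions])[0]

    def reinforce(action, reward):
        # reward > 0 for sugar water, reward < 0 for an electric shock
        propensity[action] = max(0.1, propensity[action] + 0.5 * reward)

    for _ in range(100):
        action = choose_action()
        reinforce(action, 1.0 if action == "press lever" else 0.0)

    print(propensity)  # "press lever" climbs well above the other behaviors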

Skinner distinguished between primary reinforcers and secondary reinforcers. A primary reinforcer is hard-coded: for example, food and sex are hard-coded rewards, pain and loud noises are hard-coded punishments. A primary reinforcer can be linked to a secondary reinforcer by classical conditioning. For example, if a clicker is clicked just before giving a dog a treat, the clicker itself will eventually become a way to reward the dog (as long as you don't use the unpaired clicker long enough for the conditioning to suffer extinction!)

Probably Skinner's most famous work on operant conditioning was his study of reinforcement schedules: that is, if pushing the lever only gives you reward some of the time, how obsessed will you become with pushing the lever?

Consider two basic types of reward: interval, in which pushing the lever gives a reward only once every t seconds - and ratio, in which pushing the lever gives a reward only once every x pushes.

Put a pigeon in a box with a lever programmed to only give rewards once an hour, and the pigeon will wise up pretty quickly. It may not have a perfect biological clock, but after somewhere around an hour, it will start pressing until it gets the reward and then give up for another hour or so. If it doesn't get its reward after an hour, the behavior will go extinct pretty quickly; it realizes the deal is off.

Put a pigeon in a box with a lever programmed to give one reward every one hundred presses, and again it will wise up. It will start pressing more on the lever when the reward is close (pigeons are better counters than you'd think!) and ease off after it obtains the reward. Again, if it doesn't get its reward after about a hundred presses, the behavior will become extinct pretty quickly.

To these two basic schedules of fixed reinforcement, Skinner added variable reinforcement: essentially the same but with a random factor built in. Instead of giving a reward once an hour, the pigeon may get a reward in a randomly chosen time between 30 and 90 minutes. Or instead of giving a reward every hundred presses, it might take somewhere between 50 and 150.
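For concreteness, here's one way the ratio schedules might be sketched in Python (the thresholds are arbitrary examples, not Skinner's numbers); interval schedules would look the same except keyed to elapsed time rather than press counts:

    import random

    class FixedRatio:
        """Reward every x-th lever press."""
        def __init__(self, x):
            self.x, self.presses = x, 0
        def press(self):
            self.presses += 1
            if self.presses >= self.x:
                self.presses = 0
                return True
            return False

    class VariableRatio(FixedRatio):
        """Reward after a random number of presses, redrawn after each reward."""
        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi
            super().__init__(random.randint(lo, hi))
        def press(self):
            rewarded = super().press()
            if rewarded:
                self.x = random.randint(self.lo, self.hi)  # next requirement is unpredictable
            return rewarded

    schedule = VariableRatio(50, 150)
    rewards = sum(schedule.press() for _ in range(10_000))
    print(rewards)  # roughly a hundred rewards, but never at predictable spacings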

Put a pigeon in a box on a variable interval schedule, and you'll get constant lever presses and good resistance to extinction.

Put a pigeon in a box with a variable ratio schedule and you get a situation one of my professors unscientifically but accurately described as "pure evil". The pigeon will become obsessed with pecking as much as possible, and really you can stop giving rewards at all after a while and the pigeon will never wise up.

Skinner was not the first person to place an animal in front of a lever that delivered reinforcement based on a variable ratio schedule. That honor goes to Charles Fey, inventor of the slot machine.

So it looks like some of this stuff has relevance for humans as well4. Tomorrow: more freshman psychology lecture material. Hooray!



FOOTNOTES

1. Of course, it's not really psychology unless you can think of an unethical yet hilarious application, so I refer you to Plaud and Martini's study in which slides of erotic stimuli (naked women) were paired with slides of non-erotic stimuli (penny jars) to give male experimental subjects a penny jar fetish; this supports a theory that uses chance pairing of sexual and non-sexual stimuli to explain normal fetish formation.

2. The bizarre rumor that B.F. Skinner raised his daughter in a Skinner box is completely false. The rumor that he marketed a child-rearing device called an "Heir Conditioner" is, remarkably, true.

3. In technical literature, behaviorists actually use four terms: positive reinforcement, positive punishment, negative reinforcement, and negative punishment. This is really confusing: "negative reinforcement" is actually a type of reward, behavior like going near wasps is "punished" even though we usually use "punishment" to mean deliberate human action, and all four terms can be summed up under the category "reinforcement" even though reinforcement is also sometimes used to mean "reward as opposed to punishment". I'm going to try to simplify things here by using "positive reinforcement" as a synonym for "reward" and "negative reinforcement" as a synonym for "punishment", same way the rest of the non-academic world does it.

4. Also relevant: checking HP:MoR for updates is variable interval reinforcement. You never know when an update's coming, but it doesn't come faster the more times you reload fanfiction.net. As predicted, even when Eliezer goes weeks without updating, the behavior continues to persist.

Overcoming suffering: Emotional acceptance

38 Kaj_Sotala 29 May 2011 10:57AM

Follow-up to: Suffering as attention-allocational conflict.

In many cases, it may be possible to end an attention-allocational conflict by looking at the content of the conflict and resolving it. However, there are also many cases where this simply won't work. If you're afraid of public speaking, say, the "I don't want to do this" signal is going to keep repeating itself regardless of how you try to resolve the conflict. Instead, you have to treat the conflict in a non-content-focused way.

In a nutshell, this is just the map-territory distinction as applied to emotions. Your emotions have evolved as a feedback and attention control mechanism: their purpose is to modify your behavior. If you're afraid of a dog, this is a fact about you, not about the dog. Nothing in the world is inherently scary, bad or good. Furthermore, emotions aren't inherently good or bad either, unless we choose to treat them as such.

We all know this, right? But we don't consistently apply it to our thinking about emotions. In particular, this has two major implications:

1. You are not the world: It's always alright to feel good. Whether you're feeling good or bad won't change the state of the world: the world is only changed by the actual actions you take. You're never obligated to feel bad, or guilty, or ashamed. In particular, since you can only influence the world through your actions, you will accomplish more and be happier if your emotions are tied to your actions, not states of the world.
2. Emotional acceptance: At the same time, "negative" emotions are not something to suppress or flinch away from. They're a feedback mechanism which imprints lessons directly into your automatic behavior (your elephant). With your subconsciousness having been trained to act better in the future, your conscious mind is free to concentrate on other things. If the feedback system is broken and teaching you bad lessons, then you should act to correct it. But if the pain is about some real mistake or real loss you suffered, then you should welcome it.

Internalizing these lessons can have some very powerful effects. I've been making very good progress on consistently feeling better after starting to train myself to think like this. But some LW posters are even farther along; witness Will Ryan:

continue reading »

Suffering as attention-allocational conflict

49 Kaj_Sotala 18 May 2011 03:12PM

I previously characterized Michael Vassar's theory on suffering as follows: "Pain is not suffering. Pain is just an attention signal. Suffering is when one neural system tells you to pay attention, and another says it doesn't want the state of the world to be like this." While not too far off the mark, it turns out this wasn't what he actually said. Instead, he said that suffering is a conflict between two (or more) attention-allocation mechanisms in the brain.

I have been successful at using this different framing to reduce the amount of suffering I feel. The method goes like this. First, I notice that I'm experiencing something that could be called suffering. Next, I ask, what kind of an attention-allocational conflict is going on? I consider the answer, attend to the conflict, resolve it, and then I no longer suffer.

An example is probably in order, so here goes. Last Friday, there was a Helsinki meetup with Patri Friedman present. I had organized the meetup, and wanted to go. Unfortunately, I already had other obligations for that day, ones I couldn't back out from. One evening, I felt considerable frustration over this.

Noticing my frustration, I asked: what attention-allocational conflict is this? It quickly became obvious that two systems were fighting it out:

* The Meet-Up System was trying to convey the message: “Hey, this is a rare opportunity to network with a smart, high-status individual and discuss his ideas with other smart people. You really should attend.”
* The Prior Obligation System responded with the message: “You've already previously agreed to go somewhere else. You know it'll be fun, and besides, several people are expecting you to go. Not going bears an unacceptable social cost, not to mention screwing over the other people's plans.”

Now, I wouldn't have needed to consciously reflect on the messages to be aware of them. It was hard to not be aware of them: it felt like my consciousness was in a constant crossfire, with both systems bombarding it with their respective messages.

But there's an important insight here, one which I originally picked up from PJ Eby. If a mental subsystem is trying to tell you something important, then it will persist in doing so until it's properly acknowledged. Trying to push away the message means it has not been properly addressed and acknowledged, meaning the subsystem has to continue repeating it.

continue reading »

The Cognitive Costs to Doing Things

39 lionhearted 02 May 2011 09:13AM

What's the mental burden of trying to do something? What's it cost? What price are you going to pay if you try to do something out in the world?

I think that by figuring out what the usual costs to doing things are, we can reduce the costs and otherwise structure our lives so that it's easier to reach our goals.

When I sat down to identify cognitive costs, I found seven. There might be more. Let's get started -

Activation Energy - As covered in more detail in this post, starting an activity seems to take a larger amount of willpower and other resources than keeping going with it. Required activation energy can be adjusted over time - making something into a routine lowers the activation energy to do it. Things like having poorly defined next steps increase the activation energy required to get started. This is a major hurdle for a lot of people in a lot of disciplines - just getting started.

Opportunity cost - We're all familiar with general opportunity cost. When you're doing one thing, you're not doing something else. You have limited time. But there also seems to be a cognitive cost to this - a natural second guessing of choices by taking one path and not another. This is the sort of thing covered by Barry Schwartz in his Paradox of Choice work (there's some faulty thought/omissions in PoC, but it's overall valuable). It's also why basically every significant military work ever has said you don't want to put the enemy in a position where their only way out is through you - Sun Tzu argued for always leaving a way for the enemy to escape, which splits their focus and options. Hernan Cortes famously burned the boats behind him. When you're doing something, your mind is subtly aware of and bothered by the other things you're not doing. This is a significant cost.

Inertia - Eliezer Yudkowsky wrote that humans are "Adaptation-Executers, not Fitness-Maximizers." He was speaking in terms of large scale evolution, but this is also true of our day to day affairs. Whatever personal adaptations and routines we've gotten into, we tend to perpetuate. Usually people do not break these routines unless a drastic event happens. Very few people self-scrutinize and do drastic things without an external event happening.

The difference between activation energy and inertia is that you can want to do something, but be having a hard time getting started - that's activation energy. Whereas inertia suggests you'll keep doing what you've been doing, and largely turn your mind off. Breaking out of inertia takes serious energy and tends to make people uncomfortable. They usually only do it if something else makes them more uncomfortable (or, very rarely, when they get incredibly inspired).

Ego/willpower depletion - The Wikipedia article on ego depletion is pretty good. Basically, a lot of recent research shows that by doing something that takes significant willpower your "battery" of willpower gets drained some, and it becomes harder to do other high-will-required tasks. From Wikipedia: "In an illustrative experiment on ego depletion, participants who controlled themselves by trying not to laugh while watching a comedian did worse on a later task that required self-control compared to participants who did not have to control their laughter while watching the video." I'd strongly recommend you do some reading on this topic if you haven't - Roy Baumeister has written some excellent papers on it. The pattern holds pretty firm - when someone resists, say, eating a snack they want, it makes it harder for them to focus and persist in doing rote work later.

Neurosis/fear/etc - Almost all humans are naturally more risk averse than gain-inclined. This seems to have been selected for evolutionarily. We also tend to become afraid far in excess of what we should for certain kinds of activities - especially ones that risk social embarrassment.

I never realized how strong these forces were until I tried to break free of them - whenever I got a strong negative reaction from someone to my writing, it made it considerably harder to write pieces that I thought would be popular later. Basic things like writing titles that would make a post spread, or polishing the first paragraph and last sentence - it's like my mind was putting "this will generate criticism" on the "con" side of the pro/con list, and it was... frightening's not quite the right word, but something like that.

Some tasks can be legitimately said to be "neurosis-inducing" - that means, you start getting more neurotic when you ponder and start doing them. Things that are almost guaranteed to generate criticism or risk rejection frequently do this. Anything that risks compromising a person's self image can be neurosis inducing too.

Altering of hormonal balance - A far too frequently ignored cost. A lot of activities will change your hormonal balance for the better or worse. Entering into conflict-like situations can and does increase adrenalin and cortisol and other stress hormones. Then you face adrenalin withdrawal and crash later. Of course, we basically are biochemistry, so significant changing of hormonal balance affects a lot of our body - immune system, respiration, digestion, etc. A lot of people are kind of peripherally aware of this, but there hasn't been much discussion about the hormonal-altering costs of a lot of activities.

Maintenance costs from the idea re-emerging in your thoughts - Another under-appreciated cognitive cost is the maintenance cost in your thoughts from an idea recurring, especially when the full cycle isn't complete. In Getting Things Done, David Allen talks about how "open loops" are "anything that's not where it's supposed to be." These re-emerge in our thoughts periodically, often at inopportune times, consuming thought and energy. That's fine if the topic is exceedingly pleasant, but if it's not, it can wear you out. Completing an activity seems to reduce the maintenance cost (though not completely). An example would be not having filled out your taxes yet - it emerges in your thoughts at random times, derailing other thought. And it's usually not pleasant.

Taking on any project, initiative, business, or change can generate these maintenance costs from thoughts re-emerging.

Conclusion

I identified these seven as the mental/cognitive costs to trying to do something -

  1. Activation Energy
  2. Opportunity cost
  3. Inertia
  4. Ego/willpower depletion
  5. Neurosis/fear/etc
  6. Altering of hormonal balance
  7. Maintenance costs from the idea re-emerging in your thoughts

I think we can reduce some of these costs by planning our tasks, work lives, social lives, and environment intelligently. Others are good just to be aware of, so we know what's going on when we start to drag or are having a hard time. Thoughts on other costs, or ways to reduce these, are very welcome.

Fun and Games with Cognitive Biases

62 Cosmos 18 February 2011 08:38PM

You may have heard about IARPA's Sirius Program, which is a proposal to develop serious games that would teach intelligence analysts to recognize and correct their cognitive biases.  The intelligence community has a long history of interest in debiasing, and even produced a rationality handbook based on internal CIA publications from the 70's and 80's.  Creating games which would systematically improve our thinking skills has enormous potential, and I would highly encourage the LW community to consider this as a potential way forward to encourage rationality more broadly.

While developing these particular games will require thought and programming, the proposal did inspire the NYC LW community to play a game of our own.  Using a list of cognitive biases, we broke up into groups of no more than four, and spent five minutes discussing each bias with regard to three questions:

  1. How do we recognize it?
  2. How do we correct it?
  3. How do we use its existence to help us win?

The Sirius Program specifically targets Confirmation Bias, Fundamental Attribution Error, Bias Blind Spot, Anchoring Bias, Representativeness Bias, and Projection Bias.  To this list, I also decided to add the Planning Fallacy, the Availability Heuristic, Hindsight Bias, the Halo Effect, Confabulation, and the Overconfidence Effect.  We did this Pomodoro style: six rounds of five minutes, a quick break, and another six rounds, followed by a longer break and then a group discussion of the exercise.

Results of this exercise are posted below the fold.  I encourage you to try the exercise for yourself before looking at our answers.

continue reading »
