Physical and Mental Behavior

48 Yvain 10 July 2011 08:20PM

B.F. Skinner called thoughts "mental behavior". He believed they could be rewarded and punished just like physical behavior, and that they increased or declined in frequency accordingly.

Sadly, psychology has not yet advanced to the point where we can give people electric shocks for thinking things, so the rewards and punishments that shape thoughts must be purely internal. A thought or intention that causes good feelings gets reinforced and prospers; one that causes bad feelings gets punished and dies out.

(Roko has already discussed this in Ugh Fields; so much as thinking about an unpleasant task is unpleasant; therefore most people do not think about unpleasant tasks and end up delaying them or avoiding them completely. If you haven't already read that post, it does a very good job of making reinforcement of thoughts make sense.)

A while back, D_Malik published a great big List Of Things One Could Do To Become Awesome.  As David_Gerard replied, the list was itself a small feat of awesome. I expect a couple of people started on some of the more awesome-sounding entries, then gave up after a few minutes and never thought about it again. Why?

When I was younger, I used to come up with plans to become awesome in some unlikely way. Maybe I'd hear someone speaking Swahili, and I would think "I should learn Swahili," and then I would segue into daydreams of being with a group of friends, and someone would ask if any of us spoke any foreign languages, and I would say I was fluent in Swahili, and they would all react with shock and tell me I must be lying, and then a Kenyan person would wander by, and I'd have a conversation with them in Swahili, and they'd say that I was the first American they'd ever met who was really fluent in Swahili, and then all my friends would be awed and decide I was the best person ever, and...

...and the point is that the thought of learning Swahili is pleasant, in the same easy-to-visualize but useless way that an extra bedroom for Grandma is pleasant. And the intention to learn Swahili is also pleasant, because it will lead to all those pleasant things.  And so, by reinforcement of mental behavior, I continue thinking about and intending to learn Swahili.

Now consider the behavior of studying Swahili. I've never done so, but I imagine it involves a lot of long nights hunched over books of Swahili grammar. Since I am not one of the lucky people who enjoys learning languages for their own sake, this will be an unpleasant task. And rewards will be few and far between: outside my fantasies, my friends don't just get together and ask what languages we know while random Kenyans are walking by.

In fact, it's even worse than this, because I don't exactly make the decision to study Swahili in aggregate, but only in the form of whether to study Swahili each time I get the chance. If I have the opportunity to study Swahili for an hour, this provides no clear reward - an hour's studying or not isn't going to make much difference to whether I can impress my friends by chatting with a Kenyan - but it will still be unpleasant to spend an hour going over boring Swahili grammar. And time discounting makes me value my hour today much more than I value some hypothetical opportunity to impress people months down the line; Ainslie shows quite clearly that I will always be better off postponing my study until later.

So the behavior of actually learning Swahili is thankless and unpleasant and very likely doesn't happen at all.

Thinking about studying Swahili is positively reinforced; actually studying Swahili is punished. The natural and obvious result is that I intend to study Swahili, but don't.

The problem is that for some reason, some crazy people expect the reinforcement of thoughts to correspond to the reinforcement of the object of those thoughts. Maybe it's that old idea of "preference": I have a preference for studying Swahili, so I should satisfy that preference, right? But there's nothing in my brain automatically connecting this node over here called "intend to study Swahili" to this node over here called "study Swahili"; any association between them has to be learned the hard way.

We can describe this hard way in terms of reinforcement learning: after intending to learn Swahili but not doing so, I feel stupid. This unpleasant feeling propagates back to its cause, the behavior of intending to learn Swahili, and punishes it. Later, when I start thinking it might be neat to learn Mongolian on a whim, this generalizes to behavior that has previously been punished, so I avoid it (in anthropomorphic terms, I "expect" to fail at learning Mongolian and to feel stupid later, so I avoid doing so).
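This credit-assignment story can be sketched as a toy model. Everything below is my own illustrative simplification with made-up numbers, not anything from the post or from formal reinforcement learning theory:

```python
# Toy model: thoughts are behaviors whose strength rises after pleasant
# outcomes and falls after unpleasant ones. All numbers are invented.
strengths = {"intend to learn a language": 0.5}  # arbitrary baseline

def reinforce(behavior, reward, lr=0.3):
    """Nudge a behavior's strength up (reward > 0) or down (reward < 0)."""
    strengths[behavior] = strengths.get(behavior, 0.5) + lr * reward

# Daydreaming about Swahili feels good, so the intention is strengthened...
reinforce("intend to learn a language", +1.0)

# ...but twice noticing I never actually studied feels stupid, which
# propagates back and weakens the whole "intend to learn a language" category.
reinforce("intend to learn a language", -1.0)
reinforce("intend to learn a language", -1.0)

# A new whim (Mongolian) falls under the same weakened category, so it starts
# out below baseline: the anthropomorphic "I expect to fail at this."
print(strengths["intend to learn a language"])  # about 0.2, down from 0.5
```

The generalization step is the key design assumption here: the brain is modeled as reinforcing a broad category of intention, so punishing one failed plan suppresses similar future plans.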

I didn't learn this the first time, and I doubt most other people do either. And it's a tough thing to calibrate, because if you overdo the punishment, then you never try to do anything difficult ever again.

In any case, the lesson is that thoughts and intentions get reinforced separately from actions, and although you can eventually learn to connect intentions to actions, you should never take the connection for granted.

Wanting vs. Liking Revisited

34 Yvain 09 July 2011 08:54PM

In Are Wireheads Happy? I discussed the difference between wanting something and liking something. More recently, Luke went deeper into some of the science in his post Not for the Sake of Pleasure Alone.

In the comments of the original post, cousin_it asked a good question: why implement a mind with two forms of motivation? What, exactly, are "wanting" and "liking" in mind design terms?

Tim Tyler and Furcas both gave interesting responses, but I think the problem has a clear answer from a reinforcement learning perspective (warning: formal research on the subject does not take this view and sticks to the "two different systems of different evolutionary design" theory). "Liking" is how positive reinforcement feels from the inside; "wanting" is how the motivation to do something feels from the inside. Things that are positively reinforced generally motivate you to do more of them, so liking and wanting often co-occur. With more knowledge of reinforcement, we can begin to explore why they might differ.

CONTEXT OF REINFORCEMENT

Reinforcement learning doesn't just connect single stimuli to responses. It connects stimuli in a context to responses. Munching popcorn at a movie might be pleasant; munching popcorn at a funeral will get you stern looks at best.

In fact, lots of people eat popcorn at a movie theater and almost nowhere else. Imagine them, walking into that movie theater and thinking "You know, I should have some popcorn now", maybe even having a strong desire for popcorn that overrides the diet they're on - and yet these same people could walk into, I don't know, a used car dealership and that urge would be completely gone.

These people have probably eaten popcorn at a movie theater before and liked it. Instead of generalizing to "eat popcorn", their brain learned the lesson "eat popcorn at movie theaters". Part of this no doubt has to do with the easy availability of popcorn there, but another part probably has to do with context-dependent reinforcement.

I like pizza. When I eat pizza, and get rewarded for eating pizza, it's usually after smelling the pizza first. The smell of pizza becomes a powerful stimulus for the behavior of eating pizza, and I want pizza much more after smelling it, even though how much I like pizza remains constant. I've never had pizza at breakfast, and in fact the context of breakfast is directly competing with my normal stimuli for eating pizza; therefore, no matter how much I like pizza, I have no desire to eat pizza for breakfast. If I did have pizza for breakfast, though, I'd probably like it.
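One way to picture context-dependence is to attach learned values to (context, action) pairs rather than to actions alone. This is only a sketch with invented numbers, not the formal model from the research:

```python
# Learned values attach to (context, action) pairs, not bare actions.
Q = {}

def update(context, action, reward, lr=0.5):
    """Move the (context, action) value partway toward the observed reward."""
    key = (context, action)
    Q[key] = Q.get(key, 0.0) + lr * (reward - Q.get(key, 0.0))

# Years of enjoying popcorn at the movies train only that one pair:
for _ in range(10):
    update("movie theater", "eat popcorn", reward=1.0)

# "Wanting" popcorn is strong in the trained context and absent elsewhere,
# even though "liking" popcorn (the reward itself) is the same in both.
at_movies = Q.get(("movie theater", "eat popcorn"), 0.0)       # close to 1.0
at_dealership = Q.get(("car dealership", "eat popcorn"), 0.0)  # 0.0, never trained
```

On this picture, the dieter's sudden theater craving and the missing breakfast-pizza urge are the same phenomenon: the value lookup simply finds nothing under the untrained context.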

INTERMITTENT REINFORCEMENT

If an activity is intermittently reinforced - occasional rewards spread among more common neutral stimuli or even small punishments - it may be motivating but unpleasant.

Imagine a beginning golfer. He gets bogeys or double bogeys on each hole, and is constantly kicking himself, thinking that if only he'd used one club instead of the other, he might have gotten that one. After each game, he can't believe that after all his practice, he's still this bad. But every so often, he does get a par or a birdie, and thinks he's finally got the hang of things, right until he fails to repeat it on the next hole, or the hole after that.

This is a variable response schedule, Skinner's most addictive form of delivering reinforcement. The golfer may keep playing, maybe because he constantly thinks he's on the verge of figuring out how to improve his game, but he might not like it. The same is true for gamblers, who think the next pull of the slot machine might be the jackpot (and who falsely believe they can discover a secret in the game that will change their luck); they don't like sitting around losing money, but they may stick with it so that they don't leave right before they reach the point where their luck changes.
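A rough simulation, my own construction with invented numbers, shows how an intermittent schedule can be motivating but unpleasant: the average outcome is negative, yet the occasional large win keeps a recency-weighted estimate of the activity's worth above the quit threshold, so responding never stops:

```python
def simulate(n_responses, reward_every=20, win=15.0, loss=-1.0,
             lr=0.2, quit_below=-1.0):
    """Respond repeatedly; quit if the felt worth of the activity drops
    below quit_below. A large win arrives on every 20th response."""
    strength, played = 0.0, 0
    for t in range(1, n_responses + 1):
        if strength < quit_below:
            break  # the activity finally feels hopeless
        outcome = win if t % reward_every == 0 else loss
        strength += lr * (outcome - strength)  # recency-weighted average
        played += 1
    return played, strength

played, strength = simulate(200)
mean_outcome = (15.0 - 19 * 1.0) / 20  # -0.2: losing on average ("unpleasant")
# played == 200: the rare wins keep felt worth high enough that the
# behavior persists indefinitely ("motivating").
```

The recency-weighted average is the design choice doing the work: right after a jackpot the activity feels great regardless of the long-run average, which is one way to read the gambler who stays so as not to leave just before his luck changes.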

SMALL-SCALE DISCOUNT RATES

Even if we like something, we may not want to do it because it involves pain at the second or sub-second level.

Eliezer discusses the choice between reading a mediocre book and a good book:

You may read a mediocre book for an hour, instead of a good book, because if you first spent a few minutes to search your library to obtain a better book, that would be an immediate cost - not that searching your library is all that unpleasant, but you'd have to pay an immediate activation cost to do that instead of taking the path of least resistance and grabbing the first thing in front of you.  It's a hyperbolically discounted tradeoff that you make without realizing it, because the cost you're refusing to pay isn't commensurate enough with the payoff you're forgoing to be salient as an explicit tradeoff.

In this case, you like the good book, but you want to keep reading the mediocre book. If it's cheating to start our hypothetical subject off reading the mediocre book, consider the difference between a book of one-liner jokes and a really great novel. The book of one-liners you can open to a random page and start being immediately amused (reinforced). The great novel you've got to pick up, get into, develop sympathies for the characters, figure out what the heck lomillialor or a Tiste Andii is, and then a few pages in you're thinking "This is a pretty good book". Even though you realize you'll like the novel, the fear of those few pages could make you still want to read the joke book. And since hyperbolic discounting overcounts reward or punishment in the next few seconds, it may seem like a net punishment to make the change.

SUMMARY

This deals yet another blow to the concept of me having "preferences". How much do I want popcorn? That depends very much on whether I'm at a movie theater or a used car dealership. If I browse Reddit for half an hour because it would be too much work to spend ten seconds traveling to the living room to pick up the book I'm really enjoying, do I "prefer" browsing to reading? Which has higher utility? If I hate every second I'm at the slot machines, but I keep at them anyway so I don't miss the jackpot, am I a gambling addict, or just a person who enjoys winning jackpots and is willing to do what it takes?

In cases like these, the language of preference and utility is not very useful. My anticipation of reward is constraining my behavior, and different factors are promoting different behaviors in an unstable way, but trying to extract "preferences" from the situation is trying to oversimplify a complex situation.

The Cognitive Costs to Doing Things

39 lionhearted 02 May 2011 09:13AM

What's the mental burden of trying to do something? What's it cost? What price are you going to pay if you try to do something out in the world?

I think that by figuring out what the usual costs to doing things are, we can reduce the costs and otherwise structure our lives so that it's easier to reach our goals.

When I sat down to identify cognitive costs, I found seven. There might be more. Let's get started -

Activation Energy - As covered in more detail in this post, starting an activity seems to take a larger amount of willpower and other resources than keeping going with it. Required activation energy can be adjusted over time - making something into a routine lowers the activation energy to do it. Things like having poorly defined next steps increase the activation energy required to get started. This is a major hurdle for a lot of people in a lot of disciplines - just getting started.

Opportunity cost - We're all familiar with general opportunity cost. When you're doing one thing, you're not doing something else. You have limited time. But there also seems to be a cognitive cost to this - a natural second-guessing of choices by taking one path and not another. This is the sort of thing covered by Barry Schwartz in his Paradox of Choice work (there's some faulty thought/omissions in PoC, but it's overall valuable). It's also why basically every significant military work ever has said you don't want to put the enemy in a position where their only way out is through you - Sun Tzu argued for always leaving a way for the enemy to escape, which splits their focus and options. Hernan Cortes famously burned the boats behind him. When you're doing something, your mind is subtly aware of and bothered by the other things you're not doing. This is a significant cost.

Inertia - Eliezer Yudkowsky wrote that humans are "Adaptation-Executers, not Fitness-Maximizers." He was speaking in terms of large scale evolution, but this is also true of our day to day affairs. Whatever personal adaptations and routines we've gotten into, we tend to perpetuate. Usually people do not break these routines unless a drastic event happens. Very few people self-scrutinize and do drastic things without an external event happening.

The difference between activation energy and inertia is that you can want to do something, but be having a hard time getting started - that's activation energy. Whereas inertia suggests you'll keep doing what you've been doing, and largely turn your mind off. Breaking out of inertia takes serious energy and tends to make people uncomfortable. They usually only do it if something else makes them more uncomfortable (or, very rarely, when they get incredibly inspired).

Ego/willpower depletion - The Wikipedia article on ego depletion is pretty good. Basically, a lot of recent research shows that by doing something that takes significant willpower your "battery" of willpower gets drained some, and it becomes harder to do other high-will-required tasks. From Wikipedia: " In an illustrative experiment on ego depletion, participants who controlled themselves by trying not to laugh while watching a comedian did worse on a later task that required self-control compared to participants who did not have to control their laughter while watching the video." I'd strongly recommend you do some reading on this topic if you haven't - Roy Baumeister has written some excellent papers on it. The pattern holds pretty firm - when someone resists, say, eating a snack they want, it makes it harder for them to focus and persist doing rote work later.

Neurosis/fear/etc - Almost all humans are naturally more risk averse than gain-inclined. This seems to have been selected for evolutionarily. We also tend to become afraid far in excess of what we should for certain kinds of activities - especially ones that risk social embarrassment.

I never realized how strong these forces were until I tried to break free of them - whenever I got a strong negative reaction from someone to my writing, it made it considerably harder to write pieces that I thought would be popular later. Basic things like writing titles that would make a post spread, or polishing the first paragraph and last sentence - it's like my mind was weighing on the "con" side of pro/con that it would generate criticism, and it was... frightening's not quite the right word, but something like that.

Some tasks can be legitimately said to be "neurosis-inducing" - that means, you start getting more neurotic when you ponder and start doing them. Things that are almost guaranteed to generate criticism or risk rejection frequently do this. Anything that risks compromising a person's self image can be neurosis inducing too.

Altering of hormonal balance - A far too frequently ignored cost. A lot of activities will change your hormonal balance for the better or worse. Entering into conflict-like situations can and does increase adrenalin and cortisol and other stress hormones. Then you face adrenalin withdrawal and crash later. Of course, we basically are biochemistry, so significant changing of hormonal balance affects a lot of our body - immune system, respiration, digestion, etc. A lot of people are aware of this kind of peripherally, but there hasn't been much discussion about the hormonal-altering costs of a lot of activities.

Maintenance costs from the idea re-emerging in your thoughts - Another under-appreciated cognitive cost is maintenance costs in your thoughts from an idea recurring, especially when the full cycle isn't complete. In Getting Things Done, David Allen talks about how "open loops" are "anything that's not where it's supposed to be." These re-emerge in our thoughts periodically, often at inopportune times, consuming thought and energy. That's fine if the topic is exceedingly pleasant, but if it's not, it can wear you out. Completing an activity seems to reduce the maintenance cost (though not completely). An example would be not having filed your taxes yet - it emerges in your thoughts at random times, derailing other thought. And it's usually not pleasant.

Taking on any project, initiative, business, or change can generate these maintenance costs from thoughts re-emerging.

Conclusion

I identified these seven as the mental/cognitive costs to trying to do something -

 

  1. Activation Energy
  2. Opportunity cost
  3. Inertia
  4. Ego/willpower depletion
  5. Neurosis/fear/etc
  6. Altering of hormonal balance
  7. Maintenance costs from the idea re-emerging in your thoughts

 

I think we can reduce some of these costs by planning our tasks, work lives, social lives, and environment intelligently. Others it's good simply to be aware of, so we know what's happening when we start to drag or are having a hard time. Thoughts on other costs, or ways to reduce these, are very welcome.

How I Lost 100 Pounds Using TDT

70 Zvi 14 March 2011 03:50PM

Background Information: Ingredients of Timeless Decision Theory

Alternate Approaches Include: Self-empathy as a source of “willpower”, Applied Picoeconomics, Akrasia, hyperbolic discounting, and picoeconomics, Akrasia Tactics Review

Standard Disclaimer: Beware of Other-Optimizing

Timeless Decision Theory (or TDT) allowed me to succeed in gaining control over when and how much I ate in a way that previous attempts at precommitment had repeatedly failed to do. I did so well before I was formally exposed to the concept of TDT, but once I clicked on TDT I understood that I had effectively been using it. That click came from reading Eliezer’s shortest summary of TDT, which was:

The one-sentence version is:  Choose as though controlling the logical output of the abstract computation you implement, including the output of all other instantiations and simulations of that computation

You can find more here, but my recommendation, at least at first, is to stick with the one-sentence version. It is as simple as it can be, but no simpler.

Utilizing TDT gave me several key abilities that I previously lacked. The most important was realizing that what I chose now would be the same choice I would make at other times under the same circumstances. This allowed me to compare having the benefits now to paying the costs now, as opposed to paying costs now for future benefits later. This ability allowed me to overcome hyperbolic discounting. The other key ability was that it freed me from the need to explicitly stop in advance to make precommitments each time I wanted to alter my instinctive behavior. Instead, it became automatic to make decisions in terms of which rules would be best to follow.


Working hurts less than procrastinating, we fear the twinge of starting

142 Eliezer_Yudkowsky 02 January 2011 12:15AM

When you procrastinate, you're probably not procrastinating because of the pain of working.

How do I know this?  Because on a moment-to-moment basis, being in the middle of doing the work is usually less painful than being in the middle of procrastinating.

(Bolded because it's true, important, and nearly impossible to get your brain to remember - even though a few moments of reflection should convince you that it's true.)

So what is our brain flinching away from, if not the pain of doing the work?

I think it's flinching away from the pain of the decision to do the work - the momentary, immediate pain of (1) disengaging yourself from the (probably very small) flow of reinforcement that you're getting from reading a random unimportant Internet article, and (2) paying the energy cost for a prefrontal override to exert control of your own behavior and begin working.

Thanks to hyperbolic discounting (i.e., weighting values in inverse proportion to their temporal distance) the instant pain of disengaging from an Internet article and paying a prefrontal override cost, can outweigh the slightly more distant (minutes in the future, rather than seconds) pain of continuing to procrastinate, which is, once again, usually more painful than being in the middle of doing the work.
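With invented numbers, this claim can be made concrete. Hyperbolic weighting divides a consequence's value by (1 + k × delay), so a small pain right now can outweigh a larger pain a minute away:

```python
def felt(pain, delay_seconds, k=0.5):
    """Hyperbolically discounted (dis)value: pain / (1 + k * delay)."""
    return pain / (1 + k * delay_seconds)

# Assumed magnitudes: starting work costs 2 units of pain immediately
# (disengaging plus the prefrontal override); staying in the procrastination
# loop costs 6 units, but that pain sits a minute into the future.
pain_of_starting = felt(-2.0, 0)          # -2.0, felt at full strength
pain_of_procrastinating = felt(-6.0, 60)  # about -0.19 after discounting

# The brain flinches from the option that *feels* worse right now, which is
# starting, even though procrastinating hurts more in undiscounted terms.
```

The specific pain values and k are assumptions chosen for illustration; the point is only that any hyperbolic curve steep enough at zero produces this preference reversal.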

I think that hyperbolic discounting is far more ubiquitous as a failure mode than I once realized, because it's not just for commensurate-seeming tradeoffs like smoking a cigarette in a minute versus dying of lung cancer later.


Optimizing Fuzzies And Utilons: The Altruism Chip Jar

95 orthonormal 01 January 2011 06:53PM

Related: Purchase Fuzzies and Utilons Separately

We genuinely want to do good in the world; but also, we want to feel as if we're doing good, via heuristics that have been hammered into our brains over the course of our social evolution. The interaction between these impulses (in areas like scope insensitivity, refusal to quantify sacred values, etc.) can lead to massive diminution of charitable impact, and can also suck the fun out of the whole process. Even if it's much better to write a big check at the end of the year to the charity with the greatest expected impact than it is to take off work every Thursday afternoon and volunteer at the pet pound, it sure doesn't feel as rewarding. And of course, we're very good at finding excuses to stop doing costly things that don't feel rewarding, or at least to put them off.

But if there's one thing I've learned here, it's that lamenting our irrationality should wait until one's properly searched for a good hack. And I think I've found one.

Not just that, but I've tested it out for you already.

This summer, I had just gone through the usual experience of being asked for money for a nice but inefficient cause, turning them down, and feeling a bit bad about it. I made a mental note to donate some money to a more efficient cause, but worried that I'd forget about it; it's too much work to make a bunch of small donations over the year (plus, if done by credit card, the fees take a bigger cut that way) and there's no way I'd remember that day at the end of the year.

Unless, that is, I found some way to keep track of it.

So I made up several jars with the names of charities I found efficient (SIAI and VillageReach) and kept a bunch of poker chips near them. Starting then, whenever I felt like doing a good deed (and especially if I'd passed up an opportunity to do a less efficient one), I'd take a chip of an appropriate value and toss it in the jar of my choice. I have to say, this gave me much more in the way of warm fuzzies than if I'd just waited and made up a number at the end of the year.

And now I've added up and made my contributions: $1,370 to SIAI and $566 to VillageReach.


Reference Points

32 lionhearted 17 November 2010 08:09AM

I just spent some time reading Thomas Schelling's "Choice and Consequence" and I heartily recommend it. Here's a Google books link to the chapter I was reading, "The Intimate Contest for Self Command."

It's fascinating, and if you like LessWrong, rationality, understanding things, decision theories, figuring people and the world out - well, then I think you'd like Schelling. Actually, you'll probably be amazed with how much of his stuff you're already familiar with - he really established a heck of a lot of modern thinking on game theory.

Allow me to depart from Schelling a moment, and talk of Sam Snyder. He's a very intelligent guy who has lots of intelligent thoughts. Here's a link to his website - there's massive amounts of data and references there, so I'd recommend you just skim his site if you go visit until you find something interesting. You'll probably find something interesting pretty quickly.

I got a chance to have a conversation with him a while back, and we covered immense amounts of ground. He introduced me to a concept I've been thinking about nonstop since learning it from him - reference points.

Now, he explained it very eloquently, and I'm afraid I'm going to mangle and not do justice to his explanation. But to make a long story really short, your reference points affect your motivation a lot.

An example would help.

What does the average person think about when he thinks of running? He thinks of huffing, puffing, being tired and sore, having a hard time getting going, looking fat in workout clothes and being embarrassed at being out of shape. A lot of people try running at some point in their life, and most people don't keep doing it.

On the other hand, what does a regular runner think of? He thinks of the "runner's high" and gliding across the pavement, enjoying a great run, and feeling like a million bucks afterwards.

Since that conversation, I've been trying to change my reference points. For instance, if I feel like I'd like some fried food, I try not to imagine/reference eating the salty greased food. Yes, eating french fries and a grilled chicken sandwich will be salty and fatty and delicious. It's a superstimulus; we're not really evolved to handle that stuff appropriately.

So when most people think of the McChicken Sandwich, large fry, large drink, they think about the grease and salt and sugar and how good it'll taste.

I still like that stuff. In fact, since I quit a lot of vices, sometimes I crave even harder for the few I have left. But I was able to cut my junk food consumption way down by changing my reference point. When I start to have a desire for that sort of food, I think about how my stomach and energy levels are going to feel 90 minutes after eating it. That answer is - not too good. So I go out to a local restaurant and order plain chicken, rice, and vegetables, and I feel good later.


Anti-Akrasia Reprise

5 dreeves 16 November 2010 11:16AM

A year and a half ago I wrote a LessWrong post on anti-akrasia that generated some great discussion. Here's an extended version of that post:  messymatters.com/akrasia

And here's an abstract:

The key to beating akrasia (i.e., procrastination, addiction, and other self-defeating behavior) is constraining your future self -- removing your ability to make decisions under the influence of immediate consequences. When a decision involves some consequences that are immediate and some that are distant, humans irrationally (no amount of future discounting can account for it) over-weight the immediate consequences. To be rational you need to make the decision at a time when all the consequences are distant. And to make your future self actually stick to that decision, you need to enter into a binding commitment. Ironically, you can do that by imposing an immediate penalty, by making the distant consequences immediate. Now your impulsive future self will make the decision with all the consequences immediate and presumably make the same decision as your dispassionate current self who makes the decision when all the consequences are distant. I argue that real-world commitment devices, even the popular stickK.com, don't fully achieve this and I introduce Beeminder as a tool that does.
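The abstract's prescription can be sketched numerically. All quantities below are invented for illustration; hyperbolic weighting is value / (1 + k × delay):

```python
def felt(value, delay_days, k=1.0):
    """Hyperbolically discounted value."""
    return value / (1 + k * delay_days)

WORK_PAYOFF, TEMPTATION = 10.0, 3.0

# Deciding a week in advance (all consequences distant): working wins.
work_far, slack_far = felt(WORK_PAYOFF, 8), felt(TEMPTATION, 7)  # 1.11 vs 0.38

# Deciding in the moment (temptation immediate): the preference reverses.
work_now, slack_now = felt(WORK_PAYOFF, 7), felt(TEMPTATION, 0)  # 1.25 vs 3.0

# A binding commitment attaches an immediate penalty to slacking, making the
# distant consequence immediate; now the impulsive self agrees with the
# dispassionate one.
PENALTY = -5.0
slack_committed = slack_now + felt(PENALTY, 0)  # 3.0 - 5.0 = -2.0
```

The penalty size is the one free parameter: it just has to be large enough that the akratic option loses even at zero delay, which is what a binding commitment device like Beeminder is meant to guarantee.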

(Also related is this LessWrong post from last month, though I disagree with the second half of it.)

My new claim is that akrasia is simply irrationality in the face of immediate consequences.  It's not about willpower nor is it about a compromise between multiple selves.  Your true self is the one that is deciding what to do when all the consequences are distant.  To beat akrasia, make sure that's the self that's calling the shots.

And although I'm using the multiple selves / sub-agents terminology, I think it's really just a rhetorical device.  There are not multiple selves in any real sense.  It's just the one true you whose decision-making is sometimes distorted in the presence of immediate consequences, which act like a drug.

Self-empathy as a source of "willpower"

51 Academian 26 October 2010 02:20PM

tl;dr: Dynamic consistency is a better term for "willpower" because its meaning is robust to changes in how we think consistent behavior actually manages to happen. One can boost consistency by fostering interactions between mutually inconsistent sub-agents to help them better empathize with each other.

Despite the common use of the term, I don't think of my "willpower" as an expendable resource, and mostly it just doesn't feel like one. Let's imagine Bob, who is somewhat overweight, likes to eat cake, and wants to lose weight to be more generically attractive and healthy. Bob often plans not to eat cake, but changes his mind, and then regrets it, and then decides he should indulge himself sometimes, and then decides that's just an excuse-meme, etc. Economists and veteran LessWrong readers know this oscillation between value systems is called dynamic inconsistency (q.v. Wikipedia). We can think of Bob as oscillating between being two different idealized agents living in the same body: a WorthIt agent, and a NotWorthIt agent.

The feeling of NotWorthIt-Bob's (in)ability to control WorthIt-Bob is likely to be called "(lack of) willpower", at least by NotWorthIt-Bob, and maybe even by WorthIt-Bob. But I find the framing and language of "willpower" fairly unhelpful. Instead, I think NotWorthIt-Bob and WorthIt-Bob just aren't communicating well enough. They try to ignore each other's relevance, but if they could both be present at the same time and actually talk about it, like two people in a healthy relationship, maybe they'd figure something out. I'm talking about self-empathy here, which is opposite to self-sympathy: relating to emotions of yours that you are not immediately feeling. Haven't you noticed you're better at convincing people to change their minds when you actually empathize with their position during the conversation? The same applies to convincing yourself.

Don't ask "Do I have willpower?", but "Am I a dynamically consistent team?"


Activation Costs

29 lionhearted 25 October 2010 09:30PM

Enter Wikipedia:

In chemistry, activation energy is a term introduced in 1889 by the Swedish scientist Svante Arrhenius, that is defined as the energy that must be overcome in order for a chemical reaction to occur.

In this article, I propose that:

  • Every action you take has an activation cost (perhaps zero)
  • These costs vary from person to person
  • These costs can change over time
  • Activation costs explain a lot of akrasia

After proposing that, I'd like to explore:

  • Factors that increase activation costs
  • Factors that decrease activation costs

Every action a person takes has an activation cost. The activation cost of a consistent, deeply embedded habit is zero. It happens almost automatically. The activation cost of exercising, for most people in the United States, is fairly high, and most people are inconsistent about exercising. However, there are people who - every single day - begin by putting their running shoes on and running. Their activation cost to running is effectively zero.

These costs vary from person to person. In the daily running example above, the activation cost to the runner is low. The runner simply starts running in the morning. For most people, it's higher for a variety of reasons we'll get to in a moment. The running example is fairly obvious, but you'll also see phenomena like a neat person saying to a sloppy one, "Why don't you clean your desk? ... just f'ing do it, man." Assuming the messy person indeed wants to have a clean desk, then it's likely the messy person has a higher activation cost to cleaning his desk. (He could also have less energy/willpower.)

