Before I was very involved in the Less Wrong community, I heard that Eliezer was looking for people to sit with him while he worked, to increase his writing productivity. I knew that he was doing important work in the world, and figured that this was the sort of contribution to improving humanity that I would like to make: one within the set of things that would be easy and enjoyable for me.
So I got hold of him and offered to come and sit with him, and did that once a week for about a year. As anticipated, it worked marvelously. I found it easy to sit and not talk, just getting my own work done. Eventually I became a beta reader for his "Bayes for Everyone Else," which is really great and which greatly improved my ability to estimate probabilities. (Eliezer is still perfecting this work and has not yet released it, but you can find the older version here.)
In addition to learning the basics of Bayes from doing this, I also learned how powerful it is to have someone just to sit quietly with you to co-work on a regular schedule.
I’ve experimented with similar things since then, such as making skype dates with a friend to watch informational videos together. This worked for a while, until my friend got busy. I have two other recurring chat dates with friends to do dual n-back together, and those have worked quite well and are still going.
A client of mine, Mqrius, is working on his Master’s thesis and has found that the only way he has been able to overcome his akrasia so far is by co-working with a friend. Unfortunately, his friend does not have as much time to co-work as he’d like, so we decided to spend Mqrius’s counseling session today writing this Less Wrong post, to help him and other people in the community who want to co-work over skype connect with each other. This will probably be of much higher value to him, and to others with similar difficulties, than the next best thing we could have done with the time.
I encourage anyone who is interested in co-working, watching informational videos together, or any other social productivity experiments that can be done over skype or chat, to coordinate in the comments. For this to work best, I recommend being as specific as possible about the ideal co-working partner for you, in addition to noting if you are open to general co-working.
If you are specific, you are much more likely to succeed in finding a good co-working partner. While it's possible you might screen someone out, it's more likely that you will get the attention of your ideal co-working partner, who otherwise would have glossed over your comment.
Here is my specific pitch for Mqrius:
If you are working on a thesis, especially if it’s related to nanotechnology like his, and you think you are likely to be similarly motivated by co-working, please comment or contact him about setting up an initial skype trial run. His ideal scenario is to find 2-3 people to co-work with him, for a total of about 20 hours of co-working time per week. He would like to find people who are dependable about showing up for appointments they have made and who will keep a recurring schedule with him at least until he gets his thesis done. He’d like to try an initial 4-hour co-working block as an experiment with interested parties. Please comment below if you are interested.
[Mqrius and I have predictions going about whether or not he will actually get a co-working partner who is working on a nanotech paper out of this; if others want to post predictions in the comments, this is encouraged. It's a good practice for reducing hindsight bias.]
A virtual co-working space has been created and is currently live; discussion and a link to the room can be found here.
Note: This was originally written in relation to this rather scary comment of lukeprog's on value drift. I'm now less certain that operant conditioning is a significant cause of value drift (leaning towards near/far type explanations), but I decided to share my thoughts on the topic of policy design anyway.
Several years ago, I had a reddit problem. I'd check reddit instead of working on important stuff. The more I browsed the site, the shorter my attention span got. The shorter my attention span got, the harder it was for me to find things that were enjoyable to read. Instead of being rejuvenating, I found reddit to be addictive, unsatisfying, and frustrating. Every time I thought to myself that I really should stop, there was always just one more thing to click on.
So I installed LeechBlock and blocked reddit at all hours. That worked really well... for a while.
Occasionally I wanted to dig up something I remembered seeing on reddit. (This wasn't always bad--in some cases I was looking up something related to stuff I was working on.) I tried a few different policies for dealing with this. All of them basically amounted to inconveniencing myself in some way or another whenever I wanted to dig something up.
After a few weeks, I no longer felt the urge to check reddit compulsively. And after a few months, I hardly even remembered what it was like to be an addict.
However, my inconvenience barriers were still present, and they were, well, inconvenient. It really was pretty annoying to make an entry in my notebook describing what I was visiting for and start up a different browser just to check something. I figured I could always turn LeechBlock on again if necessary, so I removed my self-imposed barriers. And slid back into addiction.
After a while, I got sick of being addicted again and decided to do something about it (again). Interestingly, I forgot my earlier thought that I could just turn LeechBlock on again easily. Instead, thinking about LeechBlock made me feel hopeless because it seemed like it ultimately hadn't worked. But I did try it again, and the entire cycle then finished repeating itself: I got un-addicted, I removed LeechBlock, I got re-addicted.
This may seem like a surprising lack of self-awareness. All I can say is: Every second my brain gathers tons of sensory data and discards the vast majority of it. Narratives like the one you're reading right now don't get constructed on the fly automatically. Maybe if I had been following orthonormal's advice of keeping and monitoring a record of life changes attempted, I would've thought to try something different.
Part of the sequence: The Science of Winning at Life
After three months of practice, I now use a single algorithm to beat procrastination most of the times I face it.1 It probably won't work for you quite like it did for me, but it's the best advice on motivation I've got, and it's a major reason I'm known for having the "gets shit done" property. There are reasons to hope that we can eventually break the chain of akrasia; maybe this post is one baby step in the right direction.
How to Beat Procrastination explained our best current general theory of procrastination, called "temporal motivation theory" (TMT). As an exercise in practical advice backed by deep theories, this post explains the process I use to beat procrastination — a process implied by TMT.
As a reminder, here's a rough sketch of how motivation works according to TMT:
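Roughly, in the form Steel usually gives it (the exact notation here is an approximation, not a reproduction of the original figure):

```latex
\text{Motivation} = \frac{\text{Expectancy} \times \text{Value}}{\text{Impulsiveness} \times \text{Delay}}
```

Higher expectancy or value pushes motivation up; higher impulsiveness or a longer delay pushes it down.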
Or, as Piers Steel summarizes:
Decrease the certainty or the size of a task's reward — its expectancy or its value — and you are unlikely to pursue its completion with any vigor. Increase the delay for the task's reward and our susceptibility to delay — impulsiveness — and motivation also dips.
Of course, my motivation system is more complex than that. P.J. Eby likens TMT (as a guide for beating procrastination) to the "fuel, air, ignition, and compression" plan for starting your car: it might be true, but a more useful theory would include details and mechanism.
That's a fair criticism. Just as an fMRI captures the "big picture" of brain function at low resolution, TMT captures the big picture of motivation. This big picture helps us see where we need to work at the gears-and-circuits level, so we can become the goal-directed consequentialists we'd like to be.
So, I'll share my four-step algorithm below, and tackle the gears-and-circuits level in later posts.
So what you probably mean is, "I intend to do school to improve my chances on the market". But this statement is still false, unless it is also true that "I intend to improve my chances on the market". Do you, in actual fact, intend to improve your chances on the market?
I expect not. Rather, I expect that your motivation is to appear to be the sort of person who you think you would be if you were ambitiously attempting to improve your chances on the market... which is not really motivating enough to actually DO the work. However, by persistently trying to do so, and presenting yourself with enough suffering at your failure to do it, you get to feel as if you are that sort of person without having to actually do the work. This is actually a pretty optimal solution to the problem, if you think about it. (Or rather, if you DON'T think about it!) -- PJ Eby
I have become convinced that problems of this kind are the number one problem humanity has. I'm also pretty sure that most people here, no matter how much they've been reading about signaling, still fail to appreciate the magnitude of the problem.
Here are two major screw-ups and one narrowly averted screw-up that I've been guilty of. See if you can find the pattern.
- When I began my university studies back in 2006, I felt strongly motivated to do something about Singularity matters. I genuinely believed that this was the most important thing facing humanity, and that it needed to be urgently taken care of. So in order to become able to contribute, I tried to study as much as possible. I had had troubles with procrastination, and so, in what has to be one of the most idiotic and ill-thought-out acts of self-sabotage possible, I taught myself to feel guilty whenever I was relaxing and not working. Combine an inability to properly relax with an attempted course load that was twice the university's recommended pace, and you can guess the results: after a year or two, I had an extended burnout that I still haven't fully recovered from. I ended up completing my Bachelor's degree in five years, which is the official target time for doing both your Bachelor's and your Master's.
- A few years later, I became one of the founding members of the Finnish Pirate Party, and on the basis of some writings the others thought were pretty good, got myself elected as the spokesman. Unfortunately – and as I should have known before taking up the post – I was a pretty bad choice for this job. I'm good at expressing myself in writing, and when I have the time to think. I hate talking with strangers on the phone, find it distracting to look people in the eyes when I'm talking with them, and have a tendency to start a sentence over two or three times before hitting on a formulation I like. I'm also bad at thinking quickly on my feet and coming up with snappy answers in live conversation. The spokesman task involved things like giving quick statements to reporters ten seconds after I'd been woken up by their phone call, and live interviews where I had to reply to criticisms so foreign to my thinking that they would never have occurred to me naturally. I was pretty terrible at the job, and finally delegated most of it to other people until my term ran out – though not before I'd already done noticeable damage to our cause.
- Last year, I was a Visiting Fellow at the Singularity Institute. At one point, I ended up helping Eliezer in writing his book. Mostly this involved me just sitting next to him and making sure he did get writing done while I surfed the Internet or played a computer game. Occasionally I would offer some suggestion if asked. Although I did not actually do much, the multitasking required still made me unable to spend this time productively myself, and for some reason it always left me tired the next day. I felt somewhat unhappy with this, in that I felt I was doing something that anyone could do. Eventually Anna Salamon pointed out to me that maybe this was something that I was more capable of doing than others, exactly because so many people would feel that "anyone" could do this and thus would prefer to do something else.
It may not be immediately obvious, but all three examples have something in common. In each case, I thought I was working for a particular goal (become capable of doing useful Singularity work, advance the cause of a political party, do useful Singularity work). But as soon as I set that goal, my brain automatically and invisibly re-interpreted it as the goal of doing something that gave the impression of doing prestigious work for a cause (spending all my waking time working, being the spokesman of a political party, writing papers or doing something else few others could do). "Prestigious work" could also be translated as "work that really convinces others that you are doing something valuable for a cause".
Shortly before the Summit, Alexandros posted a short discussion post wondering whether rationality training might cause akrasia by prompting folks to make more decisions using deliberate, conscious, "system II" reasoning (instead of rapid, automatic, "system I" heuristics) and, thereby, causing decision fatigue.
This conjecture sounded interesting to me, and I'd wondered similar things myself, so I put up a poll to gather data.
In 2009 I first described here on LessWrong a tool that Bethany Soule and I made to force ourselves to do things that otherwise fell victim to akrasia ("How a pathological procrastinator can lose weight"). We got an outpouring of encouragement and enthusiasm from the LessWrong community, which helped inspire us to quit our day jobs and turn this into a real startup: Beeminder (the me-binder!).
We've added everyone who got on the waitlist with invite code LESSWRONG and we're getting close to public launch so I wanted to invite any other LessWrong folks to get a beta account first: http://beeminder.com/secretsignup (no wait this time!)
(UPDATE: Beeminder is open to the public.)
It's definitely not for everyone since a big part of it is commitment contracts. But if you like the concept of stickK.com (forcing yourself to reach a goal via a monetary commitment contract) then we think you'll adore Beeminder.
StickK is just about the contracts -- Beeminder links it to your data. That has some big advantages:
1. You don't have to know what you're committing to when you commit, which sounds completely (oxy)moronic but what we mean is that you're committing to keeping your datapoints on a "yellow brick road" which you have control over as you go. You commit to something general like "work out more" or "lose weight" and then decide as you go what that means based on your data.
Frequently, we decide on a goal, and then we are ineffective in working towards this goal, due to factors wholly within our control. Failure modes include giving up, losing interest, procrastination, akrasia, and failure to evaluate return on time. In all these cases it seems that if our motivation were higher, the problem would not exist. Call the problem of finding the motivation to effectively pursue one's goals, the problem of motivation. This is a common failure of instrumental rationality which has been discussed from numerous different angles on LessWrong.
I wish to introduce another approach to the problem of motivation, which to my knowledge has not yet been discussed on LessWrong. This approach is summarized in the following paragraph:
We do not know what we value. Therefore, we choose goals that are not in harmony with our values. The problem of motivation is often caused by our goals not being in harmony with our values. Therefore, many cases of the problem of motivation can be solved by discovering what you value, and carrying out goals that conform to your values.
B.F. Skinner called thoughts "mental behavior". He believed they could be rewarded and punished just like physical behavior, and that they increased or declined in frequency accordingly.
Sadly, psychology has not yet advanced to the point where we can give people electric shocks for thinking things, so the sort of rewards and punishments that reinforce thoughts must be purely internal reinforcement. A thought or intention that causes good feelings gets reinforced and prospers; one that causes bad feelings gets punished and dies out.
(Roko has already discussed this in Ugh Fields; so much as thinking about an unpleasant task is unpleasant; therefore most people do not think about unpleasant tasks and end up delaying them or avoiding them completely. If you haven't already read that post, it does a very good job of making reinforcement of thoughts make sense.)
A while back, D_Malik published a great big List Of Things One Could Do To Become Awesome. As David_Gerard replied, the list was itself a small feat of awesome. I expect a couple of people started on some of the more awesome-sounding entries, then gave up after a few minutes and never thought about it again. Why?
When I was younger, I used to come up with plans to become awesome in some unlikely way. Maybe I'd hear someone speaking Swahili, and I would think "I should learn Swahili," and then I would segue into daydreams of being with a group of friends, and someone would ask if any of us spoke any foreign languages, and I would say I was fluent in Swahili, and they would all react with shock and tell me I must be lying, and then a Kenyan person would wander by, and I'd have a conversation with them in Swahili, and they'd say that I was the first American they'd ever met who was really fluent in Swahili, and then all my friends would be awed and decide I was the best person ever, and...
...and the point is that the thought of learning Swahili is pleasant, in the same easy-to-visualize but useless way that an extra bedroom for Grandma is pleasant. And the intention to learn Swahili is also pleasant, because it will lead to all those pleasant things. And so, by reinforcement of mental behavior, I continue thinking about and intending to learn Swahili.
Now consider the behavior of studying Swahili. I've never done so, but I imagine it involves a lot of long nights hunched over books of Swahili grammar. Since I am not one of the lucky people who enjoys learning languages for their own sake, this will be an unpleasant task. And rewards will be few and far between: outside my fantasies, my friends don't just get together and ask what languages we know while random Kenyans are walking by.
In fact, it's even worse than this, because I don't exactly make the decision to study Swahili in aggregate, but only in the form of whether to study Swahili each time I get the chance. If I have the opportunity to study Swahili for an hour, this provides no clear reward - an hour's studying or not isn't going to make much difference to whether I can impress my friends by chatting with a Kenyan - but it will still be unpleasant to spend an hour going over boring Swahili grammar. And time discounting makes me value my hour today much more than I value some hypothetical opportunity to impress people months down the line; Ainslie shows quite clearly that I will always be better off postponing my study until later.
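To make the discounting argument concrete, here is a minimal sketch of Ainslie-style hyperbolic discounting. All the specific numbers (the unpleasantness of an hour's study, the value of the eventual payoff, the discount rate k) are made up for illustration:

```python
def hyperbolic_value(amount, delay, k=1.0):
    """Ainslie/Mazur hyperbolic discounting: perceived value falls off
    as 1 / (1 + k * delay) rather than exponentially."""
    return amount / (1 + k * delay)

# An hour of boring grammar drills costs, say, 10 units of unpleasantness
# right now (delay 0), while the payoff of fluency is worth 100 units but
# sits roughly 180 days away.
cost_now = hyperbolic_value(10, 0)        # 10.0
reward_later = hyperbolic_value(100, 180)

# From today's vantage point the immediate cost dwarfs the distant reward,
# so postponing the hour of study looks like the better deal.
assert cost_now > reward_later
```

Because the same comparison holds again tomorrow, and the day after, "study later" wins every single day, which is exactly the pattern Ainslie describes.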
So the behavior of actually learning Swahili is thankless and unpleasant and very likely doesn't happen at all.
Thinking about studying Swahili is positively reinforced, actually studying Swahili is negatively reinforced. The natural and obvious result is that I intend to study Swahili, but don't.
The problem is that for some reason, some crazy people expect the reinforcement of thoughts to correspond to the reinforcement of the object of those thoughts. Maybe it's that old idea of "preference": I have a preference for studying Swahili, so I should satisfy that preference, right? But there's nothing in my brain automatically connecting this node over here called "intend to study Swahili" to this node over here called "study Swahili"; any association between them has to be learned the hard way.
We can describe this hard way in terms of reinforcement learning: after intending to learn Swahili but not doing so, I feel stupid. This unpleasant feeling propagates back to its cause, the behavior of intending to learn Swahili, and negatively reinforces it. Later, when I start thinking it might be neat to learn Mongolian on a whim, this generalizes to behavior that has previously been negatively reinforced, so I avoid it (in anthropomorphic terms, I "expect" to fail at learning Mongolian and to feel stupid later, so I avoid doing so).
I didn't learn this the first time, and I doubt most other people do either. And it's a tough problem to call, because if you overdo the negative reinforcement, then you never try to do anything difficult ever again.
In any case, the lesson is that thoughts and intentions get reinforced separately from actions, and although you can eventually learn to connect intentions to actions, you should never take the connection for granted.
In Are Wireheads Happy? I discussed the difference between wanting something and liking something. More recently, Luke went deeper into some of the science in his post Not for the Sake of Pleasure Alone.
In the comments of the original post, cousin_it asked a good question: why implement a mind with two forms of motivation? What, exactly, are "wanting" and "liking" in mind design terms?
Tim Tyler and Furcas both gave interesting responses, but I think the problem has a clear answer in a reinforcement learning perspective (warning: formal research on the subject does not take this view and sticks to the "two different systems of different evolutionary design" theory). "Liking" is how positive reinforcement feels from the inside; "wanting" is how the motivation to do something feels from the inside. Things that are positively reinforced generally motivate you to do more of them, so liking and wanting often co-occur. With more knowledge of reinforcement, we can begin to explore why they might differ.
CONTEXT OF REINFORCEMENT
Reinforcement learning doesn't just connect single stimuli to responses. It connects stimuli in a context to responses. Munching popcorn at a movie might be pleasant; munching popcorn at a funeral will get you stern looks at best.
In fact, lots of people eat popcorn at a movie theater and almost nowhere else. Imagine them, walking into that movie theater and thinking "You know, I should have some popcorn now", maybe even having a strong desire for popcorn that overrides the diet they're on - and yet these same people could walk into, I don't know, a used car dealership and that urge would be completely gone.
These people have probably eaten popcorn at a movie theater before and liked it. Instead of generalizing to "eat popcorn", their brain learned the lesson "eat popcorn at movie theaters". Part of this no doubt has to do with the easy availability of popcorn there, but another part probably has to do with context-dependent reinforcement.
I like pizza. When I eat pizza, and get rewarded for eating pizza, it's usually after smelling the pizza first. The smell of pizza becomes a powerful stimulus for the behavior of eating pizza, and I want pizza much more after smelling it, even though how much I like pizza remains constant. I've never had pizza at breakfast, and in fact the context of breakfast is directly competing with my normal stimuli for eating pizza; therefore, no matter how much I like pizza, I have no desire to eat pizza for breakfast. If I did have pizza for breakfast, though, I'd probably like it.
If an activity is intermittently reinforced, with occasional rewards spread among more common neutral stimuli or even small punishments, it may be motivating but unpleasant.
Imagine a beginning golfer. He gets bogeys or double bogeys on each hole, and is constantly kicking himself, thinking that if only he'd used one club instead of the other, he might have gotten that one. After each game, he can't believe that after all his practice, he's still this bad. But every so often, he does get a par or a birdie, and thinks he's finally got the hang of things, right until he fails to repeat it on the next hole, or the hole after that.
This is a variable response schedule, Skinner's most addictive form of delivering reinforcement. The golfer may keep playing, maybe because he constantly thinks he's on the verge of figuring out how to improve his game, but he might not like it. The same is true for gamblers, who think the next pull of the slot machine might be the jackpot (and who falsely believe they can discover a secret in the game that will change their luck); they don't like sitting around losing money, but they may stick with it so that they don't leave right before they reach the point where their luck changes.
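A variable-ratio schedule is easy to simulate. In this small sketch the 5% win probability, the pull count, and the seed are all arbitrary choices for illustration:

```python
import random

def variable_ratio_pulls(n_pulls, p_win=0.05, seed=0):
    """Simulate a variable-ratio schedule: every response has the same
    fixed probability of reward, so wins arrive after unpredictable gaps."""
    rng = random.Random(seed)
    return [rng.random() < p_win for _ in range(n_pulls)]

pulls = variable_ratio_pulls(200)
wins = sum(pulls)
win_indices = [i for i, won in enumerate(pulls) if won]
# Wins are rare and the gaps between them are irregular - precisely the
# unpredictability that makes this schedule so resistant to extinction.
```

Unlike a fixed schedule, there is no point at which the subject can conclude "no reward is coming"; the very next response might always be the one that pays off.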
SMALL-SCALE DISCOUNT RATES
Even if we like something, we may not want to do it because it involves pain at the second or sub-second level.
Eliezer discusses the choice between reading a mediocre book and a good book:
You may read a mediocre book for an hour, instead of a good book, because if you first spent a few minutes to search your library to obtain a better book, that would be an immediate cost - not that searching your library is all that unpleasant, but you'd have to pay an immediate activation cost to do that instead of taking the path of least resistance and grabbing the first thing in front of you. It's a hyperbolically discounted tradeoff that you make without realizing it, because the cost you're refusing to pay isn't commensurate enough with the payoff you're forgoing to be salient as an explicit tradeoff.
In this case, you like the good book, but you want to keep reading the mediocre book. If it's cheating to start our hypothetical subject off reading the mediocre book, consider the difference between a book of one-liner jokes and a really great novel. The book of one-liners you can open to a random page and start being immediately amused (reinforced). The great novel you've got to pick up, get into, develop sympathies for the characters, figure out what the heck lomillialor or a Tiste Andii is, and then a few pages in you're thinking "This is a pretty good book". The fear of those few pages could make you realize you'll like the novel, but still want to read the joke book. And since hyperbolic discounting overcounts reward or punishment in the next few seconds, it may seem like a net punishment to make the change.
This deals yet another blow to the concept of me having "preferences". How much do I want popcorn? That depends very much on whether I'm at a movie theater or a used car dealership. If I browse Reddit for half an hour because it would be too much work to spend ten seconds traveling to the living room to pick up the book I'm really enjoying, do I "prefer" browsing to reading? Which has higher utility? If I hate every second I'm at the slot machines, but I keep at them anyway so I don't miss the jackpot, am I a gambling addict, or just a person who enjoys winning jackpots and is willing to do what it takes?
In cases like these, the language of preference and utility is not very useful. My anticipation of reward is constraining my behavior, and different factors are promoting different behaviors in an unstable way, but trying to extract "preferences" from the situation is trying to oversimplify a complex situation.
What's the mental burden of trying to do something? What does it cost? What price are you going to pay if you try to do something out in the world?
I think that by figuring out what the usual costs to doing things are, we can reduce the costs and otherwise structure our lives so that it's easier to reach our goals.
When I sat down to identify cognitive costs, I found seven. There might be more. Let's get started -
Activation Energy - As covered in more detail in this post, starting an activity seems to take a larger amount of willpower and other resources than keeping going with it. Required activation energy can be adjusted over time - making something into a routine lowers the activation energy needed to do it. Things like having poorly defined next steps increase the activation energy required to get started. This is a major hurdle for a lot of people in a lot of disciplines - just getting started.
Opportunity cost - We're all familiar with general opportunity cost. When you're doing one thing, you're not doing something else. You have limited time. But there also seems to be a cognitive cost to this - a natural second guessing of choices by taking one path and not another. This is the sort of thing covered by Barry Schwartz in his Paradox of Choice work (there's some faulty thought/omissions in PoC, but it's overall valuable). It's also why basically every significant military work ever has said you don't want to put the enemy in a position where their only way out is through you - Sun Tzu argued for always leaving the enemy a way to escape, which splits their focus and options. Hernan Cortes famously burned the boats behind him. When you're doing something, your mind is subtly aware of and bothered by the other things you're not doing. This is a significant cost.
Inertia - Eliezer Yudkowsky wrote that humans are "Adaptation-Executers, not Fitness-Maximizers." He was speaking in terms of large scale evolution, but this is also true of our day to day affairs. Whatever personal adaptations and routines we've gotten into, we tend to perpetuate. Usually people do not break these routines unless a drastic event happens. Very few people self-scrutinize and do drastic things without an external event happening.
The difference between activation energy and inertia is that you can want to do something, but be having a hard time getting started - that's activation energy. Whereas inertia suggests you'll keep doing what you've been doing, and largely turn your mind off. Breaking out of inertia takes serious energy and tends to make people uncomfortable. They usually only do it if something else makes them more uncomfortable (or, very rarely, when they get incredibly inspired).
Ego/willpower depletion - The Wikipedia article on ego depletion is pretty good. Basically, a lot of recent research shows that by doing something that takes significant willpower your "battery" of willpower gets drained some, and it becomes harder to do other high-will-required tasks. From Wikipedia: " In an illustrative experiment on ego depletion, participants who controlled themselves by trying not to laugh while watching a comedian did worse on a later task that required self-control compared to participants who did not have to control their laughter while watching the video." I'd strongly recommend you do some reading on this topic if you haven't - Roy Baumeister has written some excellent papers on it. The pattern holds pretty firm - when someone resists, say, eating a snack they want, it makes it harder for them to focus and persist doing rote work later.
Neurosis/fear/etc - Almost all humans are naturally more risk averse than gain-inclined. This seems to have been selected for evolutionarily. We also tend to become afraid far in excess of what we should for certain kinds of activities - especially ones that risk social embarrassment.
I never realized how strong these forces were until I tried to break free of them - whenever I got a strong negative reaction from someone to my writing, it made it considerably harder to write pieces that I thought would be popular later. Basic things like writing titles that would make a post spread, or polishing the first paragraph and last sentence - it's like my mind was weighing on the "con" side of pro/con that it would generate criticism, and it was... frightening's not quite the right word, but something like that.
Some tasks can be legitimately said to be "neurosis-inducing" - that means, you start getting more neurotic when you ponder and start doing them. Things that are almost guaranteed to generate criticism or risk rejection frequently do this. Anything that risks compromising a person's self image can be neurosis inducing too.
Altering of hormonal balance - A far too frequently ignored cost. A lot of activities will change your hormonal balance for the better or worse. Entering into conflict-like situations can and does increase adrenalin and cortisol and other stress hormones. Then you face adrenalin withdrawal and crash later. Of course, we basically are biochemistry, so significant changing of hormonal balance affects a lot of our body - immune system, respiration, digestion, etc. A lot of people are aware of this kind of peripherally, but there hasn't been much discussion about the hormonal-altering costs of a lot of activities.
Maintenance costs from the idea re-emerging in your thoughts - Another under-appreciated cognitive cost is the maintenance cost of an idea recurring in your thoughts, especially when the full cycle isn't complete. In Getting Things Done, David Allen talks about how "open loops" are "anything that's not where it's supposed to be." These re-emerge in our thoughts periodically, often at inopportune times, consuming thought and energy. That's fine if the topic is exceedingly pleasant, but if it's not, it can wear you out. Completing an activity seems to reduce the maintenance cost (though not completely). An example would be not having filed your taxes yet - it emerges in your thoughts at random times, derailing other thought. And it's usually not pleasant.
Taking on any project, initiative, business, or change can generate these maintenance costs from thoughts re-emerging.
Conclusion

I identified these seven mental/cognitive costs of trying to do something -

- Activation Energy
- Opportunity cost
- Inertia
- Ego/willpower depletion
- Neurosis/fear/etc
- Altering of hormonal balance
- Maintenance costs from the idea re-emerging in your thoughts
I think we can reduce some of these costs by planning our tasks, work lives, social lives, and environment intelligently. Others of them it's good to just be aware of so we know when we start to drag or are having a hard time. Thoughts on other costs, or ways to reduce these are very welcome.