The Cognitive Costs to Doing Things
What's the mental burden of trying to do something? What's it cost? What price are you going to pay if you try to do something out in the world?
I think that by figuring out what the usual costs to doing things are, we can reduce the costs and otherwise structure our lives so that it's easier to reach our goals.
When I sat down to identify cognitive costs, I found seven. There might be more. Let's get started -
Activation Energy - As covered in more detail in this post, starting an activity seems to take a larger amount of willpower and other resources than keeping going with it. Required activation energy can be adjusted over time - making something into a routine lowers the activation energy to do it. Things like having poorly defined next steps increase the activation energy required to get started. This is a major hurdle for a lot of people in a lot of disciplines - just getting started.
Opportunity cost - We're all familiar with general opportunity cost. When you're doing one thing, you're not doing something else. You have limited time. But there also seems to be a cognitive cost to this - a natural second-guessing of choices that comes from taking one path and not another. This is the sort of thing covered by Barry Schwartz in his Paradox of Choice work (there's some faulty thought/omissions in PoC, but it's overall valuable). It's also why basically every significant military work ever has said you don't want to put the enemy in a position where their only way out is through you - Sun Tzu argued for always leaving the enemy a way to escape, which splits their focus and options. Hernan Cortes famously burned the boats behind him. When you're doing something, your mind is subtly aware of and bothered by the other things you're not doing. This is a significant cost.
Inertia - Eliezer Yudkowsky wrote that humans are "Adaptation-Executers, not Fitness-Maximizers." He was speaking in terms of large scale evolution, but this is also true of our day to day affairs. Whatever personal adaptations and routines we've gotten into, we tend to perpetuate. Usually people do not break these routines unless a drastic event happens. Very few people self-scrutinize and do drastic things without an external event happening.
The difference between activation energy and inertia is that you can want to do something, but be having a hard time getting started - that's activation energy. Whereas inertia suggests you'll keep doing what you've been doing, and largely turn your mind off. Breaking out of inertia takes serious energy and tends to make people uncomfortable. They usually only do it if something else makes them more uncomfortable (or, very rarely, when they get incredibly inspired).
Ego/willpower depletion - The Wikipedia article on ego depletion is pretty good. Basically, a lot of recent research shows that when you do something that takes significant willpower, your "battery" of willpower gets drained somewhat, and it becomes harder to do other high-willpower tasks. From Wikipedia: "In an illustrative experiment on ego depletion, participants who controlled themselves by trying not to laugh while watching a comedian did worse on a later task that required self-control compared to participants who did not have to control their laughter while watching the video." I'd strongly recommend you do some reading on this topic if you haven't - Roy Baumeister has written some excellent papers on it. The pattern holds pretty firm - when someone resists, say, eating a snack they want, it becomes harder for them to focus and persist at rote work later.
Neurosis/fear/etc - Almost all humans are naturally more risk averse than gain-inclined. This seems to have been selected for evolutionarily. We also tend to become afraid far in excess of what we should for certain kinds of activities - especially ones that risk social embarrassment.
I never realized how strong these forces were until I tried to break free of them - whenever I got a strong negative reaction from someone to my writing, it made it considerably harder to write pieces later that I thought would be popular. Basic things like writing titles that would make a post spread, or polishing the first paragraph and last sentence - it's like my mind was weighing, on the "con" side of the pro/con ledger, that it would generate criticism, and it was... frightening's not quite the right word, but something like that.
Some tasks can legitimately be called "neurosis-inducing" - meaning you start getting more neurotic as you ponder and begin doing them. Things that are almost guaranteed to generate criticism or risk rejection frequently do this. Anything that risks compromising a person's self-image can be neurosis-inducing too.
Altering of hormonal balance - A far too frequently ignored cost. A lot of activities will change your hormonal balance for the better or worse. Entering into conflict-like situations can and does increase adrenaline, cortisol, and other stress hormones. Then you face adrenaline withdrawal and crash later. Of course, we basically are biochemistry, so significantly changing hormonal balance affects a lot of our body - immune system, respiration, digestion, etc. A lot of people are peripherally aware of this, but there hasn't been much discussion about the hormone-altering costs of a lot of activities.
Maintenance costs from the idea re-emerging in your thoughts - Another under-appreciated cognitive cost is the maintenance cost of an idea recurring in your thoughts, especially when the full cycle isn't complete. In Getting Things Done, David Allen talks about how "open loops" are "anything that's not where it's supposed to be." These re-emerge in our thoughts periodically, often at inopportune times, consuming thought and energy. That's fine if the topic is exceedingly pleasant, but if it's not, it can wear you out. Completing an activity seems to reduce the maintenance cost (though not completely). An example would be not having filed your taxes yet - it emerges in your thoughts at random times, derailing other thought. And it's usually not pleasant.
Taking on any project, initiative, business, or change can generate these maintenance costs from thoughts re-emerging.
Conclusion
I identified these seven as the mental/cognitive costs to trying to do something -
- Activation Energy
- Opportunity cost
- Inertia
- Ego/willpower depletion
- Neurosis/fear/etc
- Altering of hormonal balance
- Maintenance costs from the idea re-emerging in your thoughts
I think we can reduce some of these costs by planning our tasks, work lives, social lives, and environment intelligently. For others, it's enough just to be aware of them, so we know what's happening when we start to drag or have a hard time. Thoughts on other costs, or ways to reduce these, are very welcome.
Convincing Arguments Aren’t Necessarily Correct – They’re Merely Convincing
I've been studying a lot of finance lately, and it strikes me that it's a field that requires a very high degree of rationality and ability to cut through the noise to get to correct arguments.
What's nice about investing especially, though, is that it has a very similar utility curve for all players. People have slightly different goals in terms of finance and investing, but generally speaking, people are measuring utility in terms of financial return. There are some differences in time preferences and risk tolerance, but generally speaking, we can sort the winning strategies from the losing ones over time. There's a fairly clear and objective standard for what worked and what didn't, which could make it a very helpful field for the aspiring rationalist to study and learn from.
I originally wrote this post, "Convincing Arguments Aren’t Necessarily Correct – They’re Merely Convincing" for my blog, so the tone is more colloquial than you'd normally see on LessWrong, and the audience is slightly different. A friend of mine suggested I post it up here too as it might be interesting to the LW crowd, so here we go -
Defecting by Accident - A Flaw Common to Analytical People
Related to: Rationalists Should Win, Why Our Kind Can't Cooperate, Can Humanism Match Religion's Output?, Humans Are Not Automatically Strategic, Paul Graham's "Why Nerds Are Unpopular"
The "Prisoner's Dilemma" refers to a game theory problem developed in the 1950s. Two prisoners are taken and interrogated separately. If either of them confesses and betrays the other person - "defecting" - they'll receive a reduced sentence, and their partner will get a greater sentence. However, if both defect, then they'll both receive higher sentences than if neither of them confessed.
This brings the prisoner to a strange problem. The best solution individually is to defect. But if both take the individually best solution, then they'll be worst off overall. This has wide ranging implications for international relations, negotiation, politics, and many other fields.
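To make the payoff structure concrete, here's a minimal sketch in Python. The sentence lengths are illustrative numbers I've assumed for this example, not values from the post:

```python
# Classic Prisoner's Dilemma payoff matrix (sentence lengths in years,
# chosen as illustrative assumptions). Lower is better for the prisoner.
# Each entry maps (my_move, partner_move) -> (my_sentence, partner_sentence).
SENTENCES = {
    ("cooperate", "cooperate"): (1, 1),   # both stay silent: short sentences
    ("cooperate", "defect"):    (10, 0),  # I stay silent, partner betrays me
    ("defect",    "cooperate"): (0, 10),  # I betray, partner stays silent
    ("defect",    "defect"):    (5, 5),   # both betray: worse than mutual silence
}

def best_individual_move(partner_move):
    """Pick the move that minimizes my own sentence, given the partner's move."""
    return min(("cooperate", "defect"),
               key=lambda my_move: SENTENCES[(my_move, partner_move)][0])

# Whatever the partner does, defecting gives me a shorter sentence...
assert best_individual_move("cooperate") == "defect"
assert best_individual_move("defect") == "defect"
```

Whichever move the partner makes, defecting strictly shortens your own sentence - yet the (defect, defect) outcome of 5 years each is worse for both than the (cooperate, cooperate) outcome of 1 year each. That tension is the dilemma.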
Members of LessWrong are incredibly smart people who tend to like game theory, and to debate, explore, and try to understand problems like this. But does knowing game theory actually make you more effective in real life?
I think the answer is yes, with a caveat - you need the basic social skills to implement your game theory solution. The worst-case scenario in an interrogation would be to "defect by accident" - meaning you blurt out something stupid because you didn't think it through before speaking. This might result in you and your partner both receiving higher sentences... a very bad situation. Game theory doesn't even come into play until basic skill conditions are met, so that you can actually execute whatever plan you come up with.
The Purpose of This Post: I think many smart people "defect" by accident. I don't mean in serious situations like a police investigation. I mean in casual, everyday situations, where they tweak and upset people around them by accident, due to a lack of reflection on desired outcomes.
Rationalists should win. Defecting by accident frequently results in losing. Let's examine this phenomenon, and ideally work to improve it.
Contents Of This Post
- I'll define "defecting by accident."
- I'll explain a common outcome of defecting by accident.
- I'll give some recent, mild examples of accidental defections.
- I'll give examples of how to turn accidental defections into cooperation.
- I'll give some examples of how this can make you more successful at your goals.
- I'll list some books I recommend if you decide to learn more on the topic.
"Nahh, that wouldn't work"
After having it recommended to me for the fifth time, I finally read through Harry Potter and the Methods of Rationality. It didn't seem like it'd be interesting to me, but I was really mistaken. It's fantastic.
One thing I noticed is that Harry threatens people a lot. My initial reaction was, "Nahh, that wouldn't work."
That reaction wasn't to scrutinize my own experience. It wasn't to do a Google search to see if there's literature available. It wasn't to ask a few friends what their experiences were like and compare them.
After further thought, I came to a realization - almost every time I've threatened someone (which is rarely), it's worked. Now, I'm kind of tempted to write that off as "well, I had the moral high ground in each of those cases" - but:
1. Harry usually or always has the moral high ground when he threatens people in MOR.
2. I don't have any personal anecdotes or data about threatening people from a non-moral high ground, but history provides a number of examples, and the threats often work.
This gets me to thinking - "Huh, why did I write that off so fast as not accurate?" And I think the answer is because I don't want the world to work like that. I don't want threatening people to be an effective way of communicating.
It's just... not a nice idea.
And then I stop, and think. The world is as it is, not as I think it ought to be.
And going further, this makes me consider all the times I've tried to explain something I understood to someone, but where they didn't like the answer. Saying things like, "People don't care about your product features, they care about what benefit they'll derive in their own life... your engineering here is impressive, but 99% of people don't care that you just did an amazing engineering feat for the first time in history if you can't explain the benefit to them."
Of course, highly technical people hate that, and tend not to adjust.
Or explaining to someone how clothing is a tool that changes people's perceptions of you, and by studying the basics of fashion and aesthetics, you can achieve more of your aims in life. Yes, it shouldn't be like that in an ideal world. But we're not in that ideal world - fashion and aesthetics matter and people react to it.
I used to rebel against that until I wised up, studied a little fashion and aesthetics, and started dressing to produce outcomes. So I ask: what's my goal here? Okay, what kind of first impression furthers that goal? Okay, what kind of clothing helps make that first impression?
Then I wear that clothing.
And yet, when confronted with something I don't like - I dismiss it out of hand, without even considering my own past experiences. I think this is incredibly common. "Nahh, that wouldn't work" - because the person doesn't want to live in a world where it would work.
Reference Points
I just spent some time reading Thomas Schelling's "Choice and Consequence" and I heartily recommend it. Here's a Google Books link to the chapter I was reading, "The Intimate Contest for Self-Command."
It's fascinating, and if you like LessWrong, rationality, understanding things, decision theories, figuring people and the world out - well, then I think you'd like Schelling. Actually, you'll probably be amazed by how much of his stuff you're already familiar with - he really established a heck of a lot of modern thinking on game theory.
Allow me to depart from Schelling a moment, and talk of Sam Snyder. He's a very intelligent guy who has lots of intelligent thoughts. Here's a link to his website - there's massive amounts of data and references there, so I'd recommend you just skim his site if you go visit until you find something interesting. You'll probably find something interesting pretty quickly.
I got a chance to have a conversation with him a while back, and we covered immense amounts of ground. He introduced me to a concept I've been thinking about nonstop since learning it from him - reference points.
Now, he explained it very eloquently, and I'm afraid I'm going to mangle and not do justice to his explanation. But to make a long story really short, your reference points affect your motivation a lot.
An example would help.
What does the average person think of when he thinks of running? He thinks of huffing, puffing, being tired and sore, having a hard time getting going, looking fat in workout clothes, and being embarrassed at being out of shape. A lot of people try running at some point in their life, and most don't keep doing it.
On the other hand, what does a regular runner think of? He thinks of the "runner's high" and gliding across the pavement, enjoying a great run, and feeling like a million bucks afterwards.
Since that conversation, I've been trying to change my reference points. For instance, if I feel like I'd like some fried food, I try not to imagine/reference eating the salty, greasy food. Yes, eating french fries and a grilled chicken sandwich will be salty and fatty and delicious. It's a superstimulus; we're not really evolved to handle that stuff appropriately.
So when most people think of the McChicken Sandwich, large fry, large drink, they think about the grease and salt and sugar and how good it'll taste.
I still like that stuff. In fact, since I quit a lot of vices, sometimes I crave even harder for the few I have left. But I was able to cut my junk food consumption way down by changing my reference point. When I start to have a desire for that sort of food, I think about how my stomach and energy levels are going to feel 90 minutes after eating it. That answer is - not too good. So I go out to a local restaurant and order plain chicken, rice, and vegetables, and I feel good later.
Activation Costs
Enter Wikipedia:
In chemistry, activation energy is a term introduced in 1889 by the Swedish scientist Svante Arrhenius, that is defined as the energy that must be overcome in order for a chemical reaction to occur.
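For the curious, the chemical intuition comes from the Arrhenius equation. This is the standard textbook form, not part of the quote above:

```latex
k = A \, e^{-E_a / (R T)}
```

where $k$ is the reaction rate constant, $A$ the pre-exponential factor, $E_a$ the activation energy, $R$ the gas constant, and $T$ the temperature. The higher the activation energy, the exponentially rarer the reaction - which is the intuition the rest of this post borrows.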
In this article, I propose that:
- Every action you take has an activation cost (perhaps zero)
- These costs vary from person to person
- These costs can change over time
- Activation costs explain a lot of akrasia
After proposing that, I'd like to explore:
- Factors that increase activation costs
- Factors that decrease activation costs
Every action a person takes has an activation cost. The activation cost of a consistent, deeply embedded habit is zero. It happens almost automatically. The activation cost for most people in the United States to exercising is fairly high, and most people are inconsistent about exercising. However, there are people who - every single day - begin by putting their running shoes on and running. Their activation cost to running is effectively zero.
These costs vary from person to person. In the daily running example above, the activation cost to the runner is low. The runner simply starts running in the morning. For most people, it's higher for a variety of reasons we'll get to in a moment. The running example is fairly obvious, but you'll also see phenomena like a neat person saying to a sloppy one, "Why don't you clean your desk? ... just f'ing do it, man." Assuming the messy person indeed wants to have a clean desk, then it's likely the messy person has a higher activation cost to cleaning his desk. (He could also have less energy/willpower.)
The Problem With Trolley Problems
A trolley problem is a device used increasingly often in philosophy to get at people's beliefs and debate them. Here's an example from Wikipedia:
As before, a trolley is hurtling down a track towards five people. You are on a bridge under which it will pass, and you can stop it by dropping a heavy weight in front of it. As it happens, there is a very fat man next to you - your only way to stop the trolley is to push him over the bridge and onto the track, killing him to save five. Should you proceed?
I believe trolley problems are fundamentally flawed - at best a waste of time, and at worst a source of really sloppy thinking. Here are four reasons why:
1. It assumes perfect information about outcomes.
2. It ignores the global secondary effects that local choices create.
3. It ignores real human nature - which would be to freeze and be indecisive.
4. It usually gives you two choices and no alternatives, and in real life, there are always alternatives.
First, trolley problems contain perfect information about outcomes - which is rarely the case in real life. In real life, you're making choices based on imperfect information. You don't know what would happen for sure as a result of your actions.
Second, everything creates secondary effects. If putting people involuntarily in harm's way to save others was an acceptable result, suddenly we'd all have to be really careful in any emergency. Imagine living in a world where anyone would be comfortable ending your life to save other people nearby - you'd have to not only be constantly checking your surroundings, but also constantly on guard against do-gooders willing to push you onto the tracks.
Third, it ignores human nature. Human nature is to freeze up when bad things happen unless you're explicitly trained to react. In real life, most people would freeze or panic instead of react. In order to get over that, first responders, soldiers, medics, police, firefighters go through training. That training includes dealing with questionable circumstances and how to evaluate them, so you don't have a society where your trained personnel act randomly in emergencies.
Fourth, it gives you two choices and no alternatives. I firmly reject this - I think there's almost always alternative ways to get there from here if you open your mind to it. Once you start thinking that your only choice is to push the one guy in front of the trolley or to stand there doing nothing, your mind is closed to all other alternatives.
At best, this means trolley problems are just a harmless waste of time. But I think they're not just a harmless waste of time.
I think "trolley problem" type thinking is commonly used in real life to advocate and justify bad policy.
Here's how it goes:
Activist says, "We've got to take from this rich fat cat and give it to these poor people, or the poor people will starve and die. If you take the money, the fat cat will buy less cars and yachts, and the poor people will become much more successful and happy."
You'll see all the flaws I described above in that statement.
First, it assumes perfect information. The activist says that taking more money will lead to fewer yachts and cars - useless consumption. He doesn't consider that people might first cut their charity budget, or their investment budget, or something else. Higher-tax jurisdictions, like Northern Europe, have very low levels of charitable giving. They also have relatively low levels of capital investment.
Second, it ignores secondary effects. The activist assumes he can milk the cow and the cow won't mind. In reality, people start spending their time on minimizing their tax burden instead of doing productive work. It ripples through society.
Third, it ignores human nature. Saying "the fat cat won't miss it" is false - everyone is loss averse.
Fourth, the biggest problem of all, it gives two choices and no alternatives. "Tax the fat cat, or the poor people starve" - is there no other way to encourage charitable giving? Could we give charity visas where anyone giving $500,000 in philanthropy to the poor can get fast-track residency into the USA? Could we give larger tax breaks to people who choose to take care of distant relatives as a dependent? Are there other ways? Once the debate gets constrained to, "We must do this, or starvation is the result" you've got problems.
And I think that these poor quality thoughts on policy are a direct descendant of trolley problems. It's the same line of thinking - perfect information, ignores secondary effects, ignores human nature, and gives two choices while leaving no other alternatives. That's not real life. That's sloppy thinking.
Edit: This is being very poorly received so far... well, it was quickly voted up to +3, and now it's down to -2, which means controversial but generally negative reception.
Do people disagree? I understand trolley problems are an established part of critical thinking on philosophy, however, I think they're flawed and I wanted to highlight those flaws.
The best counterargument I see right now is that the value of a trolley problem is it reduces everything to just the moral decision. That's an interesting point, however, I think you could come up with better hypotheticals that don't suffer from this flaw. Or perhaps the particular politics example isn't popular? You can substitute in similar arguments for prohibition of alcohol, and perhaps I ought to have done that to make it less controversial. In any event, I welcome discussion and disagreement.
Questions for you: I think that trolley problems contain perfect information about outcomes in advance of them happening, ignore secondary effects, ignore human nature, and impose artificially false constraints. Do you agree with that part? I think that's pretty much fact. Now, I think that's bad. Agree/disagree there? Okay, finally, I think this kind of thinking seeps over into politics, and it's likewise bad there. Agree/disagree? I know this is a bit of a controversial argument since trolley problems are common in philosophy, but I'd encourage you to have a think on what I wrote and agree, disagree, and otherwise discuss.
Collecting and hoarding crap, useless information
I am realizing something that many, many intelligent people are guilty of - collecting and hoarding and accumulating crap, useless information. This is dangerous, because it feels like you're doing something useful, but you're not.
However, speaking personally - once I decide to start focusing and researching something systematically to get better at it, it gets harder to do. For instance, I taught myself statistics mostly using baseball stats. It was a fun, easy, harmless context to learn statistics.
I read lots of history and historical fiction. I read up lots on business and entrepreneurship. This is easy and fun and enjoyable.
But then, when I decide to really hone in, it becomes much harder. For instance, I'm doing some casual research on the history of insurgencies and asymmetrical warfare. This is the kind of thing I'd read all the time for fun, but now that I'm working on it systematically, it becomes a lot harder.
Likewise business and entrepreneurship - I read lots and lots on technology, financing, market research, marketing, etc. But now that I'm really nailing down one aspect for my next business, it becomes almost strenuous to work on that.
It's like... collecting and hoarding useless, unfocused information is for us what collecting and hoarding a bunch of useless consumer shit is for most people. I'd reckon that people who hang out here are smarter with money and less into buying junk, but, at least for me, I'm spending a lot of my time buying junk information.
Alright, back to reading about Tiananmen Square and Rome/Carthage and the Tet Offensive, and nailing down the buying criteria and budgets of the market I want to be in. Why is it so much easier to focus and collect crap mentally than to do it systematically on meaningful topics? Do you do this? I seriously doubt I'm the only one...
Steps to Achievement: The Pitfalls, Costs, Requirements, and Timelines
Reply to: Humans Are Not Automatically Strategic
In "Humans Are Not Automatically Strategic," Anna Salamon outlined some ways that people could take action to be more successful and achieve goals, but do not:
But there are clearly also heuristics that would be useful to goal-achievement (or that would be part of what it means to “have goals” at all) that we do not automatically carry out. We do not automatically:
- (a) Ask ourselves what we’re trying to achieve;
- (b) Ask ourselves how we could tell if we achieved it (“what does it look like to be a good comedian?”) and how we can track progress;
- (c) Find ourselves strongly, intrinsically curious about information that would help us achieve our goal;
- (d) Gather that information (e.g., by asking how folks commonly achieve our goal, or similar goals, or by tallying which strategies have and haven’t worked for us in the past);
- (e) Systematically test many different conjectures for how to achieve the goals, including methods that aren’t habitual for us, while tracking which ones do and don’t work;
- (f) Focus most of the energy that *isn’t* going into systematic exploration, on the methods that work best;
- (g) Make sure that our "goal" is really our goal, that we coherently want it and are not constrained by fears or by uncertainty as to whether it is worth the effort, and that we have thought through any questions and decisions in advance so they won't continually sap our energies;
- (h) Use environmental cues and social contexts to bolster our motivation, so we can keep working effectively in the face of intermittent frustrations, or temptations based in hyperbolic discounting;
.... or carry out any number of other useful techniques. Instead, we mostly just do things.
I believe that's a fantastic list of achievement/victory heuristics. Some of these are difficult to do, though. Let's look to make this into a practical, actionable sort of document. I believe the steps outlined above can be broadly grouped; I've done so with some minor rephrasing to put them in the first person plural -
A "Failure to Evaluate Return-on-Time" Fallacy
I don't have a good name for this fallacy, but I hope to work it out with everyone here through thinking and discussion.
It goes like this: a large majority of otherwise smart people spend time doing semi-productive things, when there are massively productive opportunities untapped.
A somewhat silly example: Let's say someone aspires to be a comedian, the best comedian ever, and to make a living doing comedy. He wants nothing else, it is his purpose. And he decides that in order to become a better comedian, he will watch re-runs of the old television cartoon 'Garfield and Friends' that was on TV from 1988 to 1995.