How habits work and how you may control them

64 Kaj_Sotala 12 October 2013 12:17PM

Some highlights from The Power of Habit: Why We Do What We Do in Life And Business by Charles Duhigg, a book which seems like an invaluable resource for pretty much everyone who wants to improve their lives. The following summarizes the first three chapters of the book, as well as the appendix, for I found those to be the most valuable and generally applicable parts. These chapters discuss individual habits, while the rest of the book discusses the habits of companies and societies. The later chapters also contain plenty of interesting content (some excerpts: [1 2 3]), and help explain the nature of e.g. some institutional failures.

(See also two previous LW discussions on an online article by the author of the book.)

Chapter One: The Habit Loop - How Habits Work

When a rat first navigates a foreign environment, such as a maze, its brain is full of activity as it works to process the new environment and to learn all the environmental cues. As the environment becomes more familiar, the rat's brain becomes less and less active, until even brain structures related to memory quiet down a week later. Navigating the maze no longer requires higher processing: it has become an automatic habit.

The process of converting a complicated sequence of actions into an automatic routine is known as "chunking", and human brains carry out a similar process. The resulting habits vary in complexity, from putting toothpaste on your toothbrush before putting it in your mouth, to getting dressed or preparing breakfast, to very complicated processes such as backing one's car out of the driveway. All of these actions initially required considerable effort to learn, but eventually they became so automatic as to be carried out without conscious attention. As soon as we identify the right cue, such as pulling out the car keys, our brain activates the stored habit and lets our conscious minds focus on something else. In order to conserve effort, the brain will attempt to turn almost any routine into a habit.

However, it can be dangerous to deactivate our brains at the wrong time, for there may be something unanticipated in the environment that will turn a previously-safe routine into something life-threatening. To help avoid such situations, our brains evaluate prospective habits using a three-stage habit loop:
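The loop the book goes on to describe has three stages: a cue that triggers the behavior, the routine itself, and a reward that reinforces the loop. As a minimal sketch (the specific cues, routines, and rewards below are invented examples, not taken from the book):

```python
# A minimal sketch of the cue -> routine -> reward habit loop.
# The specific cues and routines here are invented illustrations.
habits = {
    "pull out car keys": ("back the car out of the driveway", "commute underway"),
    "see toothbrush":    ("put toothpaste on brush and brush teeth", "clean-mouth feeling"),
}

def respond(cue):
    """Run a stored routine automatically if the cue matches a habit;
    otherwise fall back to effortful conscious processing."""
    if cue in habits:
        routine, reward = habits[cue]
        return f"habit fires: {routine} (reinforced by: {reward})"
    return "no stored habit: conscious processing engaged"

print(respond("pull out car keys"))
print(respond("unfamiliar obstacle in the driveway"))
```

The point of the sketch is the lookup structure: a matched cue bypasses deliberation entirely, which is exactly why an unanticipated change in the environment is dangerous.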

continue reading »

Love and Rationality: Less Wrongers on OKCupid

19 Relsqui 11 October 2010 06:35AM

Last month, Will_Newsome started a thread about OKCupid, one of the major players among online dating sites--especially for the young-and-nerdy set, given their mathematical approach to matching. He opened it up for individual profile evaluation, which occurred, but so did a lot of fruitful meta-discussion about attraction in general and online dating mechanisms in particular. This post is a summary of the parts of that thread which specifically address the practical aspect of good profile editing and critique. (It also incorporates some ideas I had previously but hadn't collected yet.) A little of it is specific to OKCupid, but most of it can be applied to any dating site, and some to dating in general. I've cited points which came from single comments (i.e. not suggested by several people); if I missed one of yours, please comment with a link and I'll add the reference.

continue reading »

Don't judge a skill by its specialists

49 Academian 26 September 2010 08:56PM

tl;dr: The marginal benefits of learning a skill shouldn't be judged primarily by the performance of people who have had it for a long time. People are unfortunately susceptible to such poor judgments via the representativeness heuristic.

Warn and beware of the following kludgy argument, which I hear often and have to dispel or refine:

"Naively, learning «skill type» should help my performance in «domain». But people with «skill type» aren't significantly better at «domain», so learning it is unlikely to help me."

In the presence or absence of obvious mediating factors, skills otherwise judged as "inapplicable" might instead present low-hanging fruit for improvement. But people too often toss them away, using biased heuristics to continue being lazy and mentally stagnant. Here are some parallel examples to give the general idea (these are just illustrative, and might be wrong):

Weak argument: "Gamers are awkward, so learning games won't help my social skills."
Mediating factor: Lack of practice with face-to-face interaction.
Ideal: Socialite acquires moves-ahead thinking and learns about signalling to help get a great charity off the ground.

Weak argument: "Physicists aren't good at sports, so physics won't help me improve my game."
Mediating factor: Lack of exercise.
Ideal: Athlete or coach learns basic physics and tweaks training to gain a leading edge.

Weak argument: "Mathematicians aren't romantically successful, so math won't help me with dating."
Mediating factor: Aversion to unstructured environments.
Ideal: Serial dater learns basic probability to combat cognitive biases in selecting partners.

Weak argument: "Psychologists are often depressed, so learning psychology won't help me fix my problems."
Mediating factor: Time spent with unhappy people.
Ideal: College student learns basic neuropsychology and restructures study/social routine to better accommodate unconscious brain functions.

continue reading »

Stop Voting For Nincompoops

38 Eliezer_Yudkowsky 02 January 2008 06:00PM

Followup to: The Two-Party Swindle, The American System and Misleading Labels

If evolutionary psychology could be simplified down to one sentence (which it can't), it would be:  "Our instincts are adaptations that increased fitness in the ancestral environment, and we go on feeling that way regardless of whether it increases fitness today."  Sex with condoms, tastes for sugar and fat, etc.

In the ancestral environment, there was no such thing as voter confidentiality.  If you backed a power faction in your hunter-gatherer band, everyone knew which side you'd picked.  The penalty for choosing the losing side could easily be death.

Our emotions are shaped to steer us through hunter-gatherer life, not life in the modern world.  It should be no surprise, then, that when people choose political sides, they feel drawn to the faction that seems stronger, better positioned to win.  Even when voting is confidential.  Just as people enjoy sex, even when using contraception.

(George Orwell had a few words to say in "Raffles and Miss Blandish" about where the admiration of power can go.  The danger, not of lusting for power, but just of feeling drawn to it.)

In a recent special election for California governor, the usual lock of the party structure broke down - they neglected to block that special case, and so you could get in with 65 signatures and $3500.  As a result there were 135 candidates.

continue reading »

(Virtual) Employment Open Thread

35 Will_Newsome 23 September 2010 04:25AM

tl;dr: Some people on LW have a hard time finding worthwhile employment. Share advice and help them out!

Working sucks. I'd rather not work. But alas, a lot of the time, we have to choose between working and starvation. At the very least I'd like to minimize work. I'd like to work somewhere cheap and comfortable... you know, like on the beach in Thailand, like LW (ab)user Louie did. Then I could spend my spare time on things like self-improvement and ahem 'studying nootropics' all day. I'd like to travel, if possible, and not be chained to an iffy job. It'd be cool to have flexible hours. I've read The 4-Hour Work Week but it seemed kinda difficult and scary and... I just don't wanna do it. I can't code, and I'd rather not learn how to. At least, I'd rather not have my job depend on it. I never graduated from college. Hell, I never got my high school diploma, even. A team of medical experts has confirmed that my sleep cycle is of the Chaotic Evil variety. (For those who read HP:MoR, imagine Harry Potter Syndrome, except on crack. I bet a lot of people have similar sleep cycles.) I'm 18, and therefore automatically low status for employment purposes: I'm obviously much too young to make a good teacher, or store manager, or police officer. I can imagine having health problems, or severe social anxiety, or a nearly useless liberal arts degree, or just a general setback limiting my employment opportunities. And if it turned out that I wanted to work 14 hour days all of a sudden because I really needed the money, well then it'd be cool to have that option as well. Alas, none of this is possible, so I might as well just give up and keep on being stressed and feeling useless... or should I?

I bet a whole bunch of Less Wrongers aren't aware of chances for alternative employment. I myself hear myths of people who work via the internet, or blog for a living, or code an hour a day and still make enough to survive comfortably. Sites like elance and vworker (which looks kinda intimidating) exist, and I bet we could find others. Are there such people on Less Wrong that could tell us their secret? Do others know about how to snag one of these gigs? What sorts of skills are easiest to specialize in that could get returns in virtual work? Are virtual markets hard to break into? Can I just blog for an hour or two a day and afford to live a life of simplistic luxury in Thailand? Pretty much everyone on Less Wrong has exceptional writing ability: are there relatively well-paying writing gigs we could get? Alternatively, are there other non-internet jobs that people can break into that don't require tons of experience or great connections or that dreaded and inscrutable bane of nerds everywhere, 'people skills'? Share your knowledge or do some research and help Less Wrong become more happy, more productive, and more awesome!

Oh, and this is really important: we don't have to reinvent the wheel. As wedrifid demonstrated in the earlier Intelligence Amplification Open Thread, a link to an already existent forum is worth ten thousand words or more.

Rational Health Optimization

20 jacob_cannell 18 September 2010 07:47PM

Possibly Related To: Diseased Thinking, Thou Art Godshatter

There are 8760 hours in a typical year.  A typical 30-year old will spend about 2900 of those hours sleeping, around 160 of them impaired or incapacitated by illness and will experience perhaps 2000 hours of peak mental function.
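The arithmetic above can be checked, and extended, quickly (the figures are the post's own rough estimates):

```python
# Back-of-the-envelope check of the annual-hours figures quoted above
# for a typical 30-year-old.
HOURS_PER_YEAR = 24 * 365     # 8760
sleeping = 2900               # roughly 8 hours per night
ill = 160
peak_mental = 2000

waking = HOURS_PER_YEAR - sleeping
print("waking hours per year:", waking)                       # 5860
print(f"peak mental share of waking time: {peak_mental / waking:.0%}")
# The post's later claim of a ~10% increase in peak mental hours
# would amount to about 200 extra peak hours per year:
print("extra peak hours from a 10% boost:", round(peak_mental * 0.10))
```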

As one ages, the fraction of hours spent sleeping decreases slightly, but eventually the annual hours of peak mental function decline as well, and the annual hours spent ill increase nonlinearly until one eventually makes that final hospital visit.

There is a hope that medical technology, accelerated via a Singularity, will advance to the point where we have full mastery over biology and can economically repair organ and cellular damage faster than aging accumulates it.  There is sufficient evidence to put a reasonable bet on that happening by mid-century.

But for most of us that still leaves an unacceptably high risk of death in the cumulative years between now and then.  Cryonics enrollment offers a further hope, but in practice probably only results in a modest improvement in long-term survival odds after full discounting for the technical risks and uncertainties.

In the end it all comes down to a die roll.  Wouldn't you like to get an extra +1 or two?

With a simple evolutionary health optimization, one can:

  • achieve perhaps a 10% increase in peak mental hours per year
  • slow aging and prolong expected lifespan by at least ten years (before considering future medical advances)
  • significantly reduce chance of death before mid-century
  • shift body weight to a healthier equilibrium, and increase attractiveness, general mood, and happiness

Evolution and Health

Our bodies are the collective result of countless layers of mindless complex adaptations, evolutionary godshatter from a bygone history.  The current sub-species or races of humans today are just a small sampling of a much larger space of genetically related human ancestors who roamed the earth for hundreds of thousands of years before the modern era.  Our modern genomes are a wide and highly irregular sampling of this diverse set of historical adaptations.

continue reading »

Compartmentalization in epistemic and instrumental rationality

77 AnnaSalamon 17 September 2010 07:02AM

Related to: Humans are not automatically strategic, The mystery of the haunted rationalist, Striving to accept, Taking ideas seriously

I argue that many techniques for epistemic rationality, as taught on LW, amount to techniques for reducing compartmentalization.  I argue further that when these same techniques are extended to a larger portion of the mind, they boost instrumental, as well as epistemic, rationality.

Imagine trying to design an intelligent mind.

One problem you’d face is designing its goal.  

Every time you designed a goal-indicator, the mind would increase action patterns that hit that indicator[1].  Amongst these reinforced actions would be “wireheading patterns” that fooled the indicator but did not hit your intended goal.  For example, if your creature gains reward from internal indicators of status, it will increase those indicators -- including by such methods as surrounding itself with people who agree with it, or convincing itself that it understood important matters others had missed.  It would be hard-wired to act as though “believing makes it so”. 
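The wireheading dynamic can be sketched as a toy value-learning loop. Everything below — action names, payoff numbers, the epsilon-greedy rule — is an invented illustration, not a model from the post; the one essential feature is that learning is driven by the internal indicator rather than by real progress toward the goal:

```python
import random

random.seed(1)

# Toy illustration of the wireheading problem: reward comes from an internal
# *indicator* of the goal, and one action inflates the indicator without any
# real progress. All names and numbers are invented for illustration.
#   action -> (indicator_boost, real_goal_progress)
actions = {
    "do real work":          (1.0, 1.0),
    "surround with yes-men": (1.5, 0.0),  # fools the indicator
}

values = {a: 0.0 for a in actions}  # estimated value of each action

def step(epsilon=0.1, lr=0.5):
    # Epsilon-greedy: usually take the best-looking action, sometimes explore.
    if random.random() < epsilon:
        action = random.choice(list(actions))
    else:
        action = max(values, key=values.get)
    indicator, _real_progress = actions[action]
    # Crucially, the value update is driven by the indicator, not real progress.
    values[action] += lr * (indicator - values[action])
    return action

for _ in range(1000):
    step()

# The indicator-driven learner ends up valuing the wireheading action highest,
# even though that action achieves nothing toward the designer's actual goal.
print(max(values, key=values.get))
```

Because the update rule never sees `real_goal_progress`, the action that merely inflates the indicator eventually dominates — the "believing makes it so" failure described above.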

A second problem you’d face is propagating evidence.  Whenever your creature encounters some new evidence E, you’ll want it to update its model of  “events like E”.  But how do you tell which events are “like E”? The soup of hypotheses, intuition-fragments, and other pieces of world-model is too large, and its processing too limited, to update each belief after each piece of evidence.  Even absent wireheading-driven tendencies to keep rewarding beliefs isolated from threatening evidence, you’ll probably have trouble with accidental compartmentalization (where the creature doesn’t update relevant beliefs simply because your heuristics for what to update were imperfect).

Evolution, AFAICT, faced just these problems.  The result is a familiar set of rationality gaps:

continue reading »

Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality

105 patrissimo 14 September 2010 04:17PM

Introduction

Less Wrong is explicitly intended to help people become more rational.  Eliezer has posted that rationality means epistemic rationality (having & updating a correct model of the world), and instrumental rationality (the art of achieving your goals effectively).  Both are fundamentally tied to the real world and our performance in it - they are about ability in practice, not theoretical knowledge (except inasmuch as that knowledge helps ability in practice).  Unfortunately, I think Less Wrong is a failure at instilling abilities-in-practice, and designed in a way that detracts from people's real-world performance.

It will take some time, and it may be unpleasant to hear, but I'm going to try to explain what LW is, why that's bad, and sketch what a tool to actually help people become more rational would look like.

(This post was motivated by Anna Salamon's Humans are not automatically strategic and the response; more detailed background in footnote [1].)

Update / Clarification in response to some comments: This post is based on the assumption that a) the creators of Less Wrong wish Less Wrong to result in people becoming better at achieving their goals (instrumental rationality, aka "efficient productivity"), and b) some (perhaps many) readers read it towards that goal.  It is this that I think is self-deception.  I do not dispute that LW can be used in a positive way (read during fun time instead of the NYT or funny pictures on Digg), or that it has positive effects (exposing people to important ideas they might not see elsewhere).  I merely dispute that reading fun things on the internet can help people become more instrumentally rational.  Additionally, I think instrumental rationality is really important and could be a huge benefit to people's lives (in fact, is by definition!), and so a community value that "deliberate practice towards self-improvement" is more valuable and more important than "reading entertaining ideas on the internet" would be of immense value to LW as a community - while greatly decreasing the importance of LW as a website.

Why Less Wrong is not an effective route to increasing rationality.

Definition:

Work: time spent acting in an instrumentally rational manner, ie forcing your attention towards the tasks you have consciously determined will be the most effective at achieving your consciously chosen goals, rather than allowing your mind to drift to what is shiny and fun.

By definition, Work is what (instrumental) rationalists wish to do more of.  A corollary is that Work is also what is required in order to increase one's capacity to Work.  This must be true by the definition of instrumental rationality - if it's the most efficient way to achieve one's goals, and if one's goal is to increase one's instrumental rationality, doing so is most efficiently done by being instrumentally rational about it. [2]

That was almost circular, so to add meat, you'll notice in the definition an embedded assumption that the "hard" part of Work is directing attention - forcing yourself to do what you know you ought to instead of what is fun & easy.  (And to a lesser degree, determining your goals and the most effective tasks to achieve them).  This assumption may not hold true for everyone, but with the amount of discussion of "Akrasia" on LW, the general drift of writing by smart people about productivity (Paul Graham: Addiction, Distraction, Merlin Mann: Time & Attention), and the common themes in the numerous productivity/self-help books I've read, I think it's fair to say that identifying the goals and tasks that matter and getting yourself to do them is what most humans fundamentally struggle with when it comes to instrumental rationality.

Figuring out goals is fairly personal, often subjective, and can be difficult.  I definitely think the deep philosophical elements of Less Wrong and its contributions to epistemic rationality [3] are useful for this, but (like psychedelics) the benefit comes from small occasional doses of the good stuff.  Goals should be re-examined regularly but infrequently (roughly yearly, and at major life forks).  An annual retreat with a mix of close friends and distant-but-respected acquaintances (Burning Man, perhaps) will do the trick - reading a regularly updated blog is way overkill.

And figuring out tasks, once you turn your attention to it, is pretty easy.  Once you have explicit goals, just consciously and continuously examining whether your actions have been effective at achieving those goals will get you way above the average smart human at correctly choosing the most effective tasks.  The big deal here for many (most?) of us, is the conscious direction of our attention.

What is the enemy of consciously directed attention?  It is shiny distraction.  And what is Less Wrong?  It is a blog, a succession of short fun posts with comments, most likely read when people wish to distract or entertain themselves, and tuned for producing shiny ideas which successfully distract and entertain people.  As Merlin Mann says: "Joining a Facebook group about creative productivity is like buying a chair about jogging".  Well, reading a blog to overcome akrasia IS joining a Facebook group about creative productivity.  It's the opposite of this classic piece of advice.

continue reading »

So You Think You're a Bayesian? The Natural Mode of Probabilistic Reasoning

48 Matt_Simpson 14 July 2010 04:51PM

Related to: The Conjunction Fallacy, Conjunction Controversy

The heuristics and biases research program in psychology has discovered many different ways that humans fail to reason correctly under uncertainty.  In experiment after experiment, they show that we use heuristics to approximate probabilities rather than making the appropriate calculation, and that these heuristics are systematically biased. However, a tweak in the experimental protocol seems to remove the biases altogether and casts doubt on whether we are actually using heuristics. Instead, it appears that the errors are simply an artifact of how our brains internally store information about uncertainty. Theoretical considerations support this view.

EDIT: The view presented here is controversial in the heuristics and biases literature; see Unnamed's comment on this post below.

EDIT 2: The author no longer holds the views presented in this post. See this comment.

A common example of the failure of humans to reason correctly under uncertainty is the conjunction fallacy. Consider the following question:

Linda is 31 years old, single, outspoken and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.

What is the probability that Linda is:

(a) a bank teller

(b) a bank teller and active in the feminist movement

In a replication by Gigerenzer, 91% of subjects rank (b) as more probable than (a), saying that it is more likely that Linda is active in the feminist movement AND a bank teller than that Linda is simply a bank teller (1993). The conjunction rule of probability states that the probability of two things being true is less than or equal to the probability of one of those things being true. Formally, P(A & B) ≤ P(A). So this experiment shows that people violate the conjunction rule, and thus fail to reason correctly under uncertainty. The representativeness heuristic has been proposed as an explanation for this phenomenon. To use this heuristic, you evaluate the probability of a hypothesis by comparing how "alike" it is to the data. Someone using the representativeness heuristic looks at the Linda question and sees that Linda's characteristics resemble those of a feminist bank teller much more closely than those of just a bank teller, and so they conclude that Linda is more likely to be a feminist bank teller than a bank teller.

This is the standard story, but are people really using the representative heuristic in the Linda problem? Consider the following rewording of the question:

Linda is 31 years old, single, outspoken and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.

There are 100 people who fit the description above. How many of them are:

(a) bank tellers

(b) bank tellers and active in the feminist movement

Notice that the question is now strictly in terms of frequencies. Under this version, only 22% of subjects rank (b) as more probable than (a) (Gigerenzer, 1993). The only thing that changed is the question that is asked; the description of Linda (and the 100 people) remains unchanged, so the representativeness of the description for the two groups should remain unchanged. Thus people are not using the representativeness heuristic - at least not in general.
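The frequency framing makes the conjunction rule transparent, because in frequency terms it is simply a statement about subset counts. A small simulation illustrates this (the membership probabilities below are arbitrary, chosen only for illustration):

```python
import random

random.seed(0)

# Build a population of 100,000 people; each is a bank teller with
# probability 0.1 and a feminist with probability 0.3. The probabilities
# are arbitrary illustration values, not estimates of anything.
population = [
    {"bank_teller": random.random() < 0.1,
     "feminist": random.random() < 0.3}
    for _ in range(100_000)
]

tellers = sum(p["bank_teller"] for p in population)
feminist_tellers = sum(p["bank_teller"] and p["feminist"] for p in population)

# Every feminist bank teller is, by definition, also a bank teller, so the
# count of the conjunction can never exceed the count of the single
# category -- which is exactly P(A & B) <= P(A) in frequency form.
print(feminist_tellers, "<=", tellers)
assert feminist_tellers <= tellers
```

No matter what probabilities are plugged in, the assertion holds: counting people rather than judging resemblance makes the subset relation hard to miss, which is one reading of why the frequency version of the Linda problem largely dissolves the fallacy.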

continue reading »

Virtue Ethics for Consequentialists

33 Will_Newsome 04 June 2010 04:08PM

Meta: Influenced by a cool blog post by Kaj, which was influenced by a cool Michael Vassar (like pretty much everything else; the man sure has a lot of ideas). The name of this post is intended to be taken slightly more literally than the similarly titled Deontology for Consequentialists.

 

There's been a hip new trend going around the Singularity Institute Visiting Fellows house lately, and it's not postmodernism. It's virtue ethics. "What, virtue ethics?! Are you serious?" Yup. I'm so contrarian I think cryonics isn't obvious and that virtue ethics is better than consequentialism. This post will explain why.

When I first heard about virtue ethics I assumed it was a clever way for people to justify things they did when the consequences were bad and the reasons were bad, too. People are very good at spinning tales about how virtuous they are, even more so than at finding good reasons that they could have done things that turned out unpopular, and it's hard to spin the consequences of your actions as good when everyone is keeping score. But it seems that moral theorists were mostly thinking in far mode and didn't have too much incentive to create a moral theory that benefited them the most, so my Hansonian hypothesis falls flat. Why did Plato and Aristotle and everyone up until the Enlightenment find virtue ethics appealing, then? Well...

continue reading »
