Anti-Akrasia Reprise

5 dreeves 16 November 2010 11:16AM

A year and a half ago I wrote a LessWrong post on anti-akrasia that generated some great discussion. Here's an extended version of that post:  messymatters.com/akrasia

And here's an abstract:

The key to beating akrasia (i.e., procrastination, addiction, and other self-defeating behavior) is constraining your future self -- removing your ability to make decisions under the influence of immediate consequences. When a decision involves some consequences that are immediate and some that are distant, humans irrationally over-weight the immediate ones (no consistent rate of future discounting can account for it). To be rational, you need to make the decision at a time when all the consequences are distant. And to make your future self actually stick to that decision, you need to enter into a binding commitment. Ironically, you can do that by imposing an immediate penalty, i.e., by making the distant consequences immediate. Your impulsive future self then faces a decision in which all the consequences are immediate, and presumably makes the same decision as your dispassionate current self, who decided when all the consequences were distant. I argue that real-world commitment devices, even the popular stickK.com, don't fully achieve this, and I introduce Beeminder as a tool that does.

(Also related is this LessWrong post from last month, though I disagree with the second half of it.)

My new claim is that akrasia is simply irrationality in the face of immediate consequences. It's not about willpower, nor is it about a compromise between multiple selves. Your true self is the one that is deciding what to do when all the consequences are distant. To beat akrasia, make sure that's the self that's calling the shots.

And although I'm using the multiple selves / sub-agents terminology, I think it's really just a rhetorical device.  There are not multiple selves in any real sense.  It's just the one true you whose decision-making is sometimes distorted in the presence of immediate consequences, which act like a drug.
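
To make the distortion concrete, here is a minimal sketch in Python of the standard hyperbolic-discounting account of such preference reversals (the model and the numbers are illustrative assumptions, not anything from the post or from Beeminder): while both options are distant, the chooser prefers the larger-later reward, but once the smaller reward becomes imminent the preference flips, whereas a consistent exponential discounter never flips.

```python
# Illustrative sketch: preference reversal under hyperbolic discounting,
# contrasted with time-consistent exponential discounting.

def hyperbolic_value(amount, delay_days, k=1.0):
    """Perceived value of a reward `delay_days` in the future (hyperbolic)."""
    return amount / (1 + k * delay_days)

def exponential_value(amount, delay_days, daily_discount=0.9):
    """Perceived value under a consistent (exponential) discount rate."""
    return amount * daily_discount ** delay_days

SMALL_SOON = 10   # e.g. the immediate pleasure of skipping the gym
LARGE_LATER = 15  # e.g. the health payoff, arriving 3 days later

for days_until_choice in (10, 0):
    h_small = hyperbolic_value(SMALL_SOON, days_until_choice)
    h_large = hyperbolic_value(LARGE_LATER, days_until_choice + 3)
    e_small = exponential_value(SMALL_SOON, days_until_choice)
    e_large = exponential_value(LARGE_LATER, days_until_choice + 3)
    print(f"{days_until_choice:2d} days out: "
          f"hyperbolic prefers {'LARGE-LATER' if h_large > h_small else 'SMALL-SOON'}, "
          f"exponential prefers {'LARGE-LATER' if e_large > e_small else 'SMALL-SOON'}")
```

A commitment device that imposes an immediate penalty effectively subtracts value from the SMALL_SOON option at decision time, so even the hyperbolic chooser ends up making the choice the distant self endorsed.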

Self-empathy as a source of "willpower"

51 Academian 26 October 2010 02:20PM

tl;dr: Dynamic consistency is a better term for "willpower" because its meaning is robust to changes in how we think consistent behavior actually manages to happen. One can boost consistency by fostering interactions between mutually inconsistent sub-agents to help them better empathize with each other.

Despite the common use of the term, I don't think of my "willpower" as an expendable resource, and mostly it just doesn't feel like one. Let's imagine Bob, who is somewhat overweight, likes to eat cake, and wants to lose weight to be more generically attractive and healthy. Bob often plans not to eat cake, but changes his mind, and then regrets it, and then decides he should indulge himself sometimes, and then decides that's just an excuse-meme, etc. Economists and veteran LessWrong readers know this oscillation between value systems is called dynamic inconsistency (q.v. Wikipedia). We can think of Bob as oscillating between being two different idealized agents living in the same body: a WorthIt agent, and a NotWorthIt agent.

The feeling of NotWorthIt-Bob's (in)ability to control WorthIt-Bob is likely to be called "(lack of) willpower", at least by NotWorthIt-Bob, and maybe even by WorthIt-Bob. But I find the framing and language of "willpower" fairly unhelpful. Instead, I think NotWorthIt-Bob and WorthIt-Bob just aren't communicating well enough. They try to ignore each other's relevance, but if they could both be present at the same time and actually talk about it, like two people in a healthy relationship, maybe they'd figure something out. I'm talking about self-empathy here, which is the opposite of self-sympathy: relating to emotions of yours that you are not immediately feeling. Haven't you noticed you're better at convincing people to change their minds when you actually empathize with their position during the conversation? The same applies to convincing yourself.

Don't ask "Do I have willpower?", but "Am I a dynamically consistent team?"

continue reading »

Willpower: not a limited resource?

26 Jess_Riedel 25 October 2010 12:06PM

The Stanford Report has a university press release about a recent paper [subscription required] in Psychological Science. The paper is available for free from the website of one of the authors.

The gist is that they find evidence against the (currently fashionable) hypothesis that willpower is an expendable resource.  Here is the leader:

Veronika Job, Carol S. Dweck, and Gregory M. Walton
Stanford University


Abstract:

Much recent research suggests that willpower—the capacity to exert self-control—is a limited resource that is depleted after exertion. We propose that whether depletion takes place or not depends on a person’s belief about whether willpower is a limited resource. Study 1 found that individual differences in lay theories about willpower moderate ego-depletion effects: People who viewed the capacity for self-control as not limited did not show diminished self-control after a depleting experience. Study 2 replicated the effect, manipulating lay theories about willpower. Study 3 addressed questions about the mechanism underlying the effect. Study 4, a longitudinal field study, found that theories about willpower predict change in eating behavior, procrastination, and self-regulated goal striving in depleting circumstances. Taken together, the findings suggest that reduced self-control after a depleting task or during demanding periods may reflect people’s beliefs about the availability of willpower rather than true resource depletion.

(HT: Brashman, as posted on HackerNews.)

Human performance, psychometry, and baseball statistics

24 Craig_Heldreth 15 October 2010 01:13PM

I. Performance levels and age

Human ambition for achievement in modest measure gives meaning to our lives, unless one is an existentialist pessimist like Schopenhauer, who taught that life, with all its suffering and cruelty, simply should not be. Psychologists study our achievements under a number of different descriptions--testing for IQ, motivation, creativity, and other traits. As part of my current career transition, I have been examining my own goals closely, and have recently read a fair amount on these topics, for which the quality of the evidence varies widely.

A useful body of numerical data on the subject of human performance is the collection of Major League Baseball player statistics--the batting averages, numbers of home runs, runs batted in, and slugging percentages--of the many thousands of participants over the hundred years during which detailed records have been kept and studied by the players, journalists, and fans of the sport. The advantage of examining issues like these through baseball statistics is the enormous sample size of accurately measured and archived data.

The current senior authority in this field is Bill James, who now works for the Boston Red Sox; for the first twenty-five years of his activity as a baseball statistician James was not employed by any of the teams. It took him a long time to find a hearing for his views on the inside of the industry, although the fans started buying his books as soon as he began writing them.

In one of the early editions of his Baseball Abstract, James discussed the biggest fallacies that managers and executives held regarding the achievements of baseball players. He was adamant about the most obvious misunderstood fact of player performance: it is sharply peaked at age 27 and decreases rapidly, so rapidly that only the very best players are still useful at the age of 35. He was able to observe only one executive who seemed to intuit this--a man whose sole management strategy was to trade everybody over the age of 30 for the best available player under the age of 30 he could acquire.
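
As a rough illustration of how such a peak can be read off a large table of seasonal records, here is a toy sketch in Python. The rows, the choice of slugging percentage as the rate statistic, and the simple group-by-age averaging are hypothetical illustrations, not James's actual method:

```python
from collections import defaultdict

# Toy sketch: estimate a peak-performance age by averaging a rate stat by age.
# The (player, age, slugging) rows below are invented; a real analysis would
# use thousands of player-seasons.
seasons = [
    ("player_a", 25, 0.460), ("player_a", 27, 0.510), ("player_a", 31, 0.430),
    ("player_b", 26, 0.480), ("player_b", 27, 0.495), ("player_b", 33, 0.400),
    ("player_c", 24, 0.440), ("player_c", 28, 0.470), ("player_c", 35, 0.380),
]

by_age = defaultdict(list)
for _, age, slugging in seasons:
    by_age[age].append(slugging)

aging_curve = {age: sum(vals) / len(vals) for age, vals in by_age.items()}
peak_age = max(aging_curve, key=aging_curve.get)
print(f"peak age in this toy sample: {peak_age}")
```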

continue reading »

Conflicts Between Mental Subagents: Expanding Wei Dai's Master-Slave Model

46 Yvain 04 August 2010 09:16AM

Related to: Alien Parasite Technical Guy, A Master-Slave Model of Human Preferences

In Alien Parasite Technical Guy, Phil Goetz argues that mental conflicts can be explained as a conscious mind (the "alien parasite") trying to take over from an unsuspecting unconscious.

Last year, Wei Dai presented a model (the master-slave model) with some major points of departure from Phil's: in particular, the conscious mind was a special-purpose subroutine and the unconscious had a pretty good idea what it was doing. But Wei said at the beginning that his model ignored akrasia.

I want to propose an expansion and slight amendment of Wei's model so it includes akrasia and some other features of human behavior. Starting with the signaling theory implicit in Wei's writing, I'll move on to show why optimizing for signaling ability would produce behaviors like self-signaling and akrasia, speculate on why the same model would also promote some of the cognitive biases discussed here, and finish with even more speculative links between a wide range of conscious-unconscious conflicts.

The Signaling Theory of Consciousness

This model begins with the signaling theory of consciousness. In the signaling theory, the conscious mind is the psychological equivalent of a public relations agency. The mind-at-large (hereafter called U for “unconscious” and similar to Wei's “master”) has the socially unacceptable primate drives you would expect of a fitness-maximizing agent: sex, status, and survival. These are unsuitable for polite society, where only socially admirable values like true love, compassion, and honor are likely to win you friends and supporters. U could lie and claim to support the admirable values, but most people are terrible liars and society would probably notice.

So you wall off a little area of your mind (hereafter called C for “conscious” and similar to Wei's “slave”) and convince it that it has only admirable goals. C is allowed access to the speech centers. Now if anyone asks you what you value, C answers "Only admirable things like compassion and honor, of course!" and no one detects a lie because the part of the mind that's moving your mouth isn't lying.

This is a useful model because it replicates three observed features of the real world: people say they have admirable goals, they honestly believe on introspection that they have admirable goals, and yet they tend to pursue more selfish goals. But so far, it doesn't answer the most important question: why do people sometimes pursue their admirable goals and sometimes not?

continue reading »

The Threat of Cryonics

36 lsparrish 03 August 2010 07:57PM

It is obvious that many people find cryonics threatening. Most of the arguments encountered in debates on the topic are not calculated to persuade on objective grounds, but function as curiosity-stoppers. Here are some common examples:

  • Elevated burden of proof. As if cryonics demands more than a small amount of evidence to be worth trying.
  • Elevated cost expectation. Thinking that cryonics is (and could only ever be) affordable only for the very rich.
  • Unresearched suspicions regarding the ethics and business practices of cryonics organizations.
  • Sudden certainty that earth-shattering catastrophes are just around the corner.
  • Assuming the worst about the moral attitudes of humanity's descendants towards cryonics patients.
  • Associations with prescientific mummification, or sci-fi that handwaves the technical difficulties.

The question is: what causes this sensation that cryonics is a threat? What does it specifically threaten?

continue reading »

Alien parasite technical guy

61 PhilGoetz 27 July 2010 04:51PM

Custers & Aarts have a paper in the July 2 Science called "The Unconscious Will: How the pursuit of goals operates outside of conscious awareness".  It reviews work indicating that people's brains make decisions and set goals without the brains' "owners" ever being consciously aware of them.

A famous early study is Libet et al. 1983, which claimed to find signals being sent to the fingers before people were aware of deciding to move them.  This is a dubious study; it assumes that our perception of time is accurate, whereas in fact our brains shuffle our percept timeline around in our heads before presenting it to us, in order to provide us with a sequence of events that is useful to us (see Dennett's Consciousness Explained).  Also, Trevena & Miller repeated the test, additionally looking at cases where people did not move their fingers, and found that the signal measured by Libet et al. could not predict whether the fingers would move.

Fortunately, the flaws of Libet et al. were not discovered before it spawned many studies showing that unconscious priming of concepts related to goals causes people to spend more effort pursuing those goals; and those are what Custers & Aarts review.  In brief: if you expose someone, even via subliminal messages, to pictures, words, etc. closely connected to some goals and not to others, they will work harder towards those goals without being aware of it.

continue reading »

So You Think You're a Bayesian? The Natural Mode of Probabilistic Reasoning

48 Matt_Simpson 14 July 2010 04:51PM

Related to: The Conjunction Fallacy, Conjunction Controversy

The heuristics and biases research program in psychology has discovered many different ways that humans fail to reason correctly under uncertainty.  In experiment after experiment, researchers show that we use heuristics to approximate probabilities rather than making the appropriate calculation, and that these heuristics are systematically biased. However, a tweak in the experimental protocol seems to remove the biases altogether and cast doubt on whether we are actually using heuristics at all. Instead, it appears that the errors are simply an artifact of how our brains internally store information about uncertainty. Theoretical considerations support this view.

EDIT: The view presented here is controversial in the heuristics and biases literature; see Unnamed's comment on this post below.

EDIT 2: The author no longer holds the views presented in this post. See this comment.

A common example of the failure of humans to reason correctly under uncertainty is the conjunction fallacy. Consider the following question:

Linda is 31 years old, single, outspoken and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.

What is the probability that Linda is:

(a) a bank teller

(b) a bank teller and active in the feminist movement

In a replication by Gigerenzer (1993), 91% of subjects ranked (b) as more probable than (a), saying that it is more likely that Linda is active in the feminist movement AND a bank teller than that Linda is simply a bank teller. The conjunction rule of probability states that the probability of two things both being true is less than or equal to the probability of either one of them being true. Formally, P(A & B) ≤ P(A). So this experiment shows that people violate the conjunction rule, and thus fail to reason correctly under uncertainty. The representative heuristic has been proposed as an explanation for this phenomenon: you evaluate the probability of a hypothesis by judging how "alike" it is to the data. Someone using the representative heuristic looks at the Linda question, sees that Linda's characteristics resemble those of a feminist bank teller much more closely than those of a plain bank teller, and so concludes that Linda is more likely to be a feminist bank teller than a bank teller.

This is the standard story, but are people really using the representative heuristic in the Linda problem? Consider the following rewording of the question:

Linda is 31 years old, single, outspoken and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.

There are 100 people who fit the description above. How many of them are:

(a) bank tellers

(b) bank tellers and active in the feminist movement

Notice that the question is now strictly in terms of frequencies. Under this version, only 22% of subjects rank (b) as more probable than (a) (Gigerenzer, 1993). The only thing that changed is the question that is asked; the description of Linda (and the 100 people) remains unchanged, so the representativeness of the description for the two groups should remain unchanged. Thus people are not using the representative heuristic - at least not in general.
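
One way to see why the frequency wording helps is that it turns the conjunction rule into simple counting: in any group of 100 people, those who are bank tellers and feminists are a subset of those who are bank tellers, so (b) can never outnumber (a). A minimal sketch in Python, with invented base rates rather than Gigerenzer's data:

```python
import random

# Illustrative sketch: in a frequency framing, the conjunction rule
# P(A & B) <= P(A) is just the fact that a subset can't outnumber its superset.
random.seed(0)

population = []
for _ in range(100):
    bank_teller = random.random() < 0.05  # hypothetical base rate
    feminist = random.random() < 0.30     # hypothetical base rate
    population.append((bank_teller, feminist))

tellers = sum(1 for teller, _ in population if teller)
feminist_tellers = sum(1 for teller, fem in population if teller and fem)

assert feminist_tellers <= tellers  # holds for any base rates whatsoever
print(f"bank tellers: {tellers}, feminist bank tellers: {feminist_tellers}")
```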

continue reading »

Some Thoughts Are Too Dangerous For Brains to Think

15 WrongBot 13 July 2010 04:44AM
[EDIT - While I still support the general premise argued for in this post, the examples provided were fairly terrible. I won't delete this post because the comments contain some interesting and valuable discussions, but please bear in mind that this is not even close to the most convincing argument for my point.]
A great deal of the theory behind improving computer and network security involves the definition and creation of "trusted systems", pieces of hardware or software that can be relied upon because the input they receive is entirely under the control of the user. (In some cases, this may instead be the system administrator, manufacturer, programmer, or any other single entity with an interest in the system.) The only way to protect a system from being compromised by untrusted input is to ensure that no possible input can cause harm, which requires either a robust filtering system or strict limits on what kinds of input are accepted: a blacklist or a whitelist, roughly.
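
As a rough illustration of the blacklist/whitelist distinction, here is a minimal sketch in Python; the patterns and inputs are invented for the example and are not from the post:

```python
import re

# Illustrative sketch: blacklist vs. whitelist filtering of untrusted input.
BLACKLIST = ("<script", "javascript:", "drop table")  # known-bad patterns
WHITELIST = re.compile(r"^[A-Za-z0-9_\- ]{1,64}$")    # explicitly allowed shape

def blacklist_ok(user_input: str) -> bool:
    """Blacklist: allow anything that doesn't match a known attack."""
    lowered = user_input.lower()
    return not any(bad in lowered for bad in BLACKLIST)

def whitelist_ok(user_input: str) -> bool:
    """Whitelist: allow only input of an explicitly permitted form."""
    return bool(WHITELIST.match(user_input))

benign = "alice_smith"
novel_attack = '<img src=x onerror="alert(1)">'  # not on the blacklist

print(blacklist_ok(benign), whitelist_ok(benign))              # True True
print(blacklist_ok(novel_attack), whitelist_ok(novel_attack))  # True False
```

The toy point: a blacklist only blocks attacks someone has already anticipated, while a whitelist bounds what can get in at all -- which is roughly the property our brains lack.
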
One of the downsides of having a brain designed by a blind idiot is that said idiot hasn’t done a terribly good job with limiting input or anything resembling “robust filtering”. Hence that whole bias thing. A consequence of this is that your brain is not a trusted system, which itself has consequences that go much, much deeper than a bunch of misapplied heuristics. (And those are bad enough on their own!)
In discussions of the AI-Box Experiment I’ve seen, there has been plenty of outrage, dismay, and incredulity directed towards the underlying claim: that a sufficiently intelligent being can hack a human via a text-only channel. But whether or not this is the case (and it seems likely), that vulnerability is trivial compared to a machine that is completely integrated with your consciousness and can manipulate it, at will, towards its own ends and without your awareness.
Your brain cannot be trusted. It is not safe. You must be careful with what you put into it, because it will decide the output, not you.
continue reading »

A Rational Identity

31 Kaj_Sotala 12 July 2010 10:59PM

How facts backfire (previous discussion) discusses the phenomenon where correcting people's mistaken beliefs about political issues doesn't actually make them change their minds. In fact, telling them the truth about things might even reinforce their opinions and entrench them even more firmly in their previous views. "The general idea is that it’s absolutely threatening to admit you’re wrong", says one of the researchers quoted in the article.

This should come as no surprise to the people here. But the interesting bit is that the article suggests a way to make people evaluate information in a less biased manner. They mention that one's willingness to accept contrary information is related to one's self-esteem: Nyhan worked on one study in which he showed that people who were given a self-affirmation exercise were more likely to consider new information than people who had not. In other words, if you feel good about yourself, you’ll listen — and if you feel insecure or threatened, you won’t.

I suspect that the beliefs that are the hardest to change, even if the person has generally good self-esteem, are those which are central to their identity. If someone's identity is built around capitalism being evil, or socialism being evil, then any arguments about the benefits of the opposite economic system are going to fall on deaf ears. Not only will that color their view of the world, but it's likely that they're deriving a large part of their self-esteem from that identity. Say something that challenges the assumptions built into their identity, and you're attacking their self-esteem.

Keith Stanovich tells us that simply being intelligent isn't enough to avoid bias. Intelligent people might be better at correcting for bias, but there's no strong correlation between intelligence and the disposition to actually correct for your own biases. Building on his theory, we can assume that threatening opinions will push even non-analytical people into thinking critically, but non-threatening ones won't. Stanovich believes that spreading awareness of biases might be enough to help a lot of people, and to some degree it might. But we also know about the tendency to only use your awareness of bias to attack arguments you don't like. In the same way that telling people facts about politics sometimes only polarizes opinions, telling people about biases might similarly only polarize the debate, as everyone comes to think their opposition is hopelessly deluded and biased.

So we need to create a new thinking disposition, not just for actively attacking the perceived threats, but for critically evaluating your opinions. That's hard. And I've found for a number of years now that the main reason I try to actively re-evaluate my opinions and update them as necessary is because doing so is part of my identity. I pride myself on not holding onto ideology and for changing my beliefs when it feels like they should be changed. Admitting that somebody else is right and I am wrong does admittedly hurt, but it also feels good that I was able to do so despite the pain. And when I'm in a group where everyone seems to agree about something as self-evident, it frequently works as a warning sign that makes me question the group consensus. Part of the reason why I do that is that I enjoy the feeling of knowing that I'm actively on guard against my mind just adopting whatever belief happens to be fashionable in the group I'm in.

It seems to me that if we want to actually raise the sanity waterline and make people evaluate things critically, and not just conform to different groups than is the norm, a crucial part of that is getting people to adopt an identity of critical thinking. This way, the concept of identity ceases to be something that makes rational thinking harder and starts to actively aid it. I don't really know how one can effectively promote a new kind of identity, but we should probably take lessons from marketers and other people who appeal strongly to emotions. You don't usually pick your identity based on logical arguments. (On the upside, this provides a valuable hint to the question of how to raise rationalist children.)
