Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Akrasia as a collective action problem

Post author: fortyeridania 07 December 2010 03:44PM

Related to: Self-empathy as a source of "willpower" and some comments.

It has been mentioned before that akrasia might be modeled as the result of inner conflict. I think this analogy is great, and would like to propose a refinement.1

Here's the mental conflict theory of akrasia, as I understand it:

Though Maud appears to external observers (such as us) to be a single self, she is in fact a kind of team. Maud's mind is composed of sub-agents, each of whom would like to pursue its own interests. Maybe when Maud goes to bed, she sets the alarm for 6 AM. When it buzzes the next morning, she hits the snooze...again and again and again. To explain this odd behavior, we invoke the idea that BedtimeMaud is not the same person as MorningMaud. In particular, BedtimeMaud is a person who likes to get up early, while MorningMaud is that bully BedtimeMaud's poor victim. The point is that the various decision-makers that inhabit her brain are not always chasing the same ball. The subagents that compose the mind might not be mutually antagonistic; they're just not very empathetic to each other.

I like to think of this situation as a collective action problem akin to those we find in political science and economics. What we have is a misalignment of costs and benefits. If Maud rises at 6, then MorningMaud bears the whole cost of this decision, while a different Maud, or set of Mauds, enjoys the benefits. The costs are concentrated in MorningMaud's lap, while the benefits are dispersed among many Mauds throughout the day. Thus Maud sleeps in.

Put differently, MorningMaud's behavior produces a negative externality: she enjoys the whole benefit of sleeping in, but the rest of the day's Mauds bear the costs.
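The misalignment can be made concrete with a toy payoff table. All numbers (and the extra subagent names beyond those in the post) are invented purely for illustration:

```python
# Toy model of the cost/benefit misalignment described above.
# Utility change for each "Maud" if she rises at 6 instead of sleeping in.
payoffs_if_rising = {
    "MorningMaud": -5,   # bears the whole, concentrated cost of getting up
    "NoonMaud":    +2,   # the benefits are dispersed across the day's Mauds
    "EveningMaud": +2,
    "BedtimeMaud": +2,
}

# Summed over the whole team, rising is the better choice.
total = sum(payoffs_if_rising.values())

# But MorningMaud decides alone, and she only weighs her own payoff.
morning_only = payoffs_if_rising["MorningMaud"]

print(total)         # +1: rising is efficient for Maud as a whole
print(morning_only)  # -5: so Maud sleeps in
```

The sketch just restates the post's argument in numbers: because the decision is made by the subagent holding the concentrated cost, the efficient outcome for the team is not chosen.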

So, how can we get MorningMaud to lie in the bed she makes, as it were, and get a more efficient outcome?

We can:

  • Legislate. Maud tirelessly tells herself to be less lazy and exerts willpower to get the job done. This is analogous to direct, blanket government action (such as banning coal) in response to a negative externality (such as once-verdant, now barren hillsides). But it's expensive, and it doesn't always work.
  • Negotiate. Maud rewards herself when she gets up on time by taking a hot shower right away or eating a nice breakfast (the latter has a cost borne by MoneyMaud); or she allows herself to sleep in once a week. If MorningMaud follows through, then this one's a winner. Maybe this is analogous to Coasian bargaining?
  • Deputize. Maud enlists her friend Traci to hold her feet to the fire. Or she signs up on stickK, Egonomics, or some similar site.

The analogy's not perfect. (I can't see a way to fit in Pigovian taxes.)

But is it a fruitful analogy? Is it more than just renaming the key terms of the subagent theory--could one use welfare economics to improve one's own dynamic consistency?

1. I got this idea partly from a slip, possibly Freudian (I think I said "externality" instead of "akrasia"), and partly from this page on the Egonomics website.

Comments (4)

Comment author: Miller 11 December 2010 08:21:11PM 1 point

In the end, we don't really believe that there are independent agents in the brain, at least not operating at anything near the robust level of a mini-person subject to negotiation or legislation.

Competition at a low level is fundamental. Perceiving an ambiguous item like a Necker cube probably results in one group of neurons arguing one way, another group arguing a different way, and some process of arbitration. But we shouldn't impute capacities to these neuron groups that don't exist, e.g. FrontFacingMiller vs. BackFacingMiller.

Introspection and observation certainly show that our moods, our level of wakefulness, etc. influence our thoughts, behaviors, and motivations. However, this is a complicated system of knobs and dials being turned to adjust a jangling automaton, and it's probably best to focus on the details of what is known biochemically or through experimentation and operate at that level.

So, for instance, motivation is related to dopamine levels. So, what raises dopamine? Coffee, sex, novelty, perhaps. Have trouble getting up in the morning despite having decided that the snooze-button game was useless? Get a bed with an ejection mechanism, or that alarm clock with wheels that takes off like a Roomba. Try something, monitor the results and side effects closely, and modify.

Comment author: Manfred 07 December 2010 06:36:34PM 0 points

Well, if you treated each entity like it was conscious and had its own separate conditioning, you could punish or reward it and it would eventually get the drift. Or it wouldn't, and then wouldn't you feel silly.

But I think I mentioned somewhere that even if that managed to be how we worked, it would still get hella complicated. Ah yes.

Comment author: fortyeridania 07 December 2010 11:08:08PM 1 point

Thanks for the link.

Yes, you could reward and punish recalcitrant subagents separately from other subagents. Is this an example of what you're talking about?

or she allows herself to sleep in once a week

In practice, it might be hard to target the relevant subagent and keep the reward/punishment confined in that subagent's domain. Other than occasionally allowing more indulgence, I'm not sure how to do it. Any ideas?

Comment author: Manfred 08 December 2010 07:25:02PM 0 points

Is this an example of what you're talking about?

That would be something like "satiating" them, i.e. a different model of how these things work.

Conditioning would be allowing yourself to sleep in tomorrow only if you wake up on time today, or intentionally depriving yourself of sleep the day after you sleep through the alarm. This assumes that your subagents are actually conditionable, which is implied if you treat them as if they're conscious. But I'm not at all convinced that's the case - this is just a thought experiment.
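The conditioning rule proposed here is simple enough to write down as a one-step decision rule (a minimal sketch; the function name and string labels are my own invention, only the if/then logic comes from the comment):

```python
# Operant-conditioning rule from the comment above: a sleep-in tomorrow
# is earned only by waking on time today; sleeping through the alarm
# costs sleep the following day.
def tomorrows_consequence(woke_on_time_today: bool) -> str:
    if woke_on_time_today:
        return "may sleep in"      # reward the subagent that complied
    return "sleep deprivation"     # punish the one that hit snooze

print(tomorrows_consequence(True))   # may sleep in
print(tomorrows_consequence(False))  # sleep deprivation
```

Note that, as the comment itself concedes, encoding the rule is the easy part; whether any subagent is actually shaped by it is the open question.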

In practice, it might be hard to target the relevant subagent and keep the reward/punishment confined in that subagent's domain.

Given the unrealistic "they're like individuals" model, operant conditioning should work fine - yes, you'll also punish the other agents who like sleep, but by definition you're giving the most punishment to the ones who were most responsible for you sleeping in.

But I think our brains would work a lot differently if we had conscious subagents running around in them. The evidence I can think of points to, at a minimum, these subagents being really stupid, which I think favors the hypothesis that we're really just us; we follow a list of rules rather than always being rational.