So what do you do if you actually have hit global coordination failure amongst your interests and find yourself without the ability to commit to future actions? Like, you think you're just going to play video games instead of doing your homework anyway, so you don't sign up for the class in the first place because you know you're not going to put the work in to actually pass it.
Option 1: Fail with abandon. Spiral into depression.
Option 2: Fortunately, you only have to threaten yourself with enough failure to motivate coordination. You're allowed to try to get your life back in shape, so long as the recovery process is slow and painful enough that the threat remains credible! Be careful, though: if you're trying to "put yourself together" by reconstructing treaty-type negotiations, you can't get too good at snapping back into shape, or you threaten your ability to threaten yourself!
Option 3: Well, that was pretty bad. Maybe try fusion-type coordination this time? You know, not self-punish? Build healthy habits, don't limit yourself by what you think you can't do, maybe get therapy or at least listen to audiobooks about cognitive behavioral therapy -- whatever you can manage.
Option 4: You may already be a winner. Perhaps you were 100% right to avoid taking classes. Keep following your intrinsic motivation. Stop justifying it in terms of what you can't do. Stop feeling guilty about it. Just ask yourself what you really want to do in any situation, and do it. You may be afraid that this will turn you into a sociopath or a slob, but if you really fear those things, you're already motivated not to do them, at some level. Some people have found that giving up on the "shoulds" causes that motivation to re-surface, so that things start falling into place. Maybe try it for a time-boxed period, like a week.
A potential explanation I think is implicit in Ziz's writing: the software for doing coordination within ourselves and externally is reused. External pressures can shape your software to be of a certain form; for instance, culture can write itself into people so that they find some ideas/patterns basically unthinkable.
So, one possibility is that fusion is indeed superior for self-coordination, but requires a software change that is difficult to make and can have significant costs to your ability to engage in treaties externally. Increased Mana allows you to offset some of the costs, but not all; some interactions are just pretty direct attempts to check that you've installed the appropriate mental malware.
Habit formation and willpower training are great, but you need to do them every day. To complement your post, here are some things that can help even if you do them just once:
1) Sign up for a sports class
2) Ditch your smartphone
3) Take a few minutes to lower your expectations
Thanks for this thought-provoking essay. I really appreciate posts that take robustly useful concepts (e.g. from game theory), apply them to unusual contexts (here a different level of behaviour -- not a group, but an individual), and see what comes out. This is an especially great example of helping synthesise others' ideas in the community. I liked it on my first read, and will come back to this. For these reasons I've curated it.
This whole post could have been shorter.
There is a self-coordination strategy that looks something like one part of yourself taking over. At least that gets shit done. There is another coordination strategy that looks like a consensus among parts of a congruent self. They both work, but probably for different people.
Pick your method. I pick congruent self.
Also something about unpreferential game theory equilibrium states.
I agree with pushing for short posts, but I really want some gears to the theory as well. I did include an abstract which was basically the same as your summary. What I wanted to do was point at how it is fairly confusing that the fusion paradigm exists, given the breadth of thinking that points in the direction of treaty-style coordination. For my purposes, the post you describe is something I've heard articulated before (by Ziz and others) and not really what I'm getting at. I guess I didn't make that clear in the title/abstract -- I'll think about editing a bit.
Did I need to go into all three domain-specific cases (property rights, side-taking hypothesis, Ainslie's willpower model) after briefly reviewing Schelling? Probably not. I do think each one adds something.
Putting it that way, the short post for my purposes would have been an open-ended question:
"You know Ainslee's willpower model, and how it's like tit-for-tat? You know the stuff Ziz and Nate talk about, which doesn't seem like tit-for-tat? How could that possibly work??"
But it seems like a lot more needs to be said to explain in what way the Ziz+Nate model is not like tit-for-tat.
Yeah. My own immediate impression was that a) the post was a bit rambly (even given its goals), but b) I was very grateful for someone finally doing a fairly in-depth dig into how these paradigms contrast, what the gears are, and explaining them in enough detail that someone who naturally gravitates towards treaties can understand fusion (or vice versa, although I think it's less common for fusion people to be confused about treaties).
https://docs.google.com/document/d/165gF52jGhNn82F2uLsng7tiH3uRGRD_anx8Gh1HQMhY/edit?usp=sharing
Lots of things wrong with the post. If those were all solved, more might be visible.
Doing a bunch of line editing on the post is very nice of you, but it also comes off as possibly passive-aggressive in the context of you not having said anything nice about the post... most of the edit suggestions just seem helpful, but I'm left feeling like your goal is to prove that the post is bad rather than to improve it (especially since you say "If those were all solved, more might be visible" rather than something encouraging).
All I'm saying is I'm a bit weirded out. Maybe I'm mis-reading bluntness as hostility.
Anyway, I'll probably try and incorporate some of the suggested edits soon.
In game theory, assumptions of rationality imply that any "solution" of a game must be an equilibrium.* However, most games have many equilibria, and realistic agents don't always know which equilibrium they are in. Certain equilibrium strategies, such as tit-for-tat in iterated prisoner's dilemma, can also be seen in this broader context as coordination strategies: adopting them teaches others to adopt them, because you punish anyone playing some other strategy. In a narrow sense, these strategies solve both the game itself and the equilibrium selection problem. (Technically, such strategies are the evolutionarily stable ones.)
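To make the tit-for-tat mechanism concrete, here is a minimal sketch of the iterated prisoner's dilemma (my own illustration, not from any of the sources below; the payoff numbers are the standard textbook values):

```python
# Minimal iterated prisoner's dilemma. Tit-for-tat punishes defection and
# rewards cooperation, which pushes opponents toward cooperating with it.
# Payoffs are the standard textbook values (T=5, R=3, P=1, S=0).

PAYOFFS = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strat_a, strat_b, rounds=10):
    seen_by_a, seen_by_b = [], []  # each side's record of the other's moves
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(seen_by_a), strat_b(seen_by_b)
        score_a += PAYOFFS[(a, b)]
        score_b += PAYOFFS[(b, a)]
        seen_by_a.append(b)
        seen_by_b.append(a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): stable mutual cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): defection punished from round 2 on
```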
I want to make an informal point about two very different ways this can work out in real life: coordination strategies in which it feels like everyone is fighting to pull the system in different directions but it all cancels out, vs situations where it feels like the coordination strategy is your friend because it saves everyone effort. I believe the second case exists, but it is rather puzzling in terms of the existing literature.
*Different rationality assumptions give different equilibrium concepts; Nash equilibria are the most popular. Correlated equilibria are the second most popular and somewhat more relevant to the discussion here, but I won't get into enough technical details for it to matter.
This post made possible by discussions with Steve Rayhawk, Harmanas Chopra, Jennifer RM, Anna Salamon, and Andrew Critch. Added some edits proposed by Elo.
Schelling Negotiations
Schelling discussed agents solving the equilibrium-selection problem by choosing points which other agents are most likely to choose based on prominence; the term Schelling point was coined to describe such likely equilibria. The classic examples revolve around agents who cannot communicate with one another (highlighting the need for guesswork about each other's behavior), but adding the ability to communicate does not eliminate the equilibrium-selection problem. Our community tends to use the term 'Schelling fence' for the analogous concept when open negotiation is involved, though my impression is that the economics literature uses 'Schelling point' for this case as well.
In A Positive Account of Property Rights, David Friedman** explains the emergence of property rights through the negotiation of Schelling fences. Negotiators want as many resources as possible, but also want to minimize the costs of conflict. If there were nothing special about any one patch of land, then both sides could always demand a little more land -- there's no good stopping-point for the bargaining. However, natural divisions in the land such as rivers can serve as Schelling fences. Once such a solution has been proposed, neither side wants to demand a little more for themselves, because breaking the Schelling fence opens up the door for the other person to do the same.
Even if the territory is featureless, Schelling fences can be constructed by abstract reasoning: an even split, for example, is simpler than the alternatives.
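As a toy illustration (my own sketch, not from Friedman or Schelling), consider a demand game over 100 units of land: every pair of demands summing to the total is an equilibrium, which is exactly why bare rationality can't pick one and prominence has to.

```python
# Toy "divide the territory" demand game (illustrative numbers, not from
# the sources above). Each side demands a share of 100 units; incompatible
# demands trigger a conflict cost for both.

CONFLICT_COST = 20

def payoffs(demand_a, demand_b, total=100):
    if demand_a + demand_b <= total:
        return demand_a, demand_b
    return -CONFLICT_COST, -CONFLICT_COST

# Every exact split (x, 100 - x) is a Nash equilibrium: demanding more
# triggers conflict, demanding less just gives land away. So the game
# itself doesn't say where the fence goes.
for x in (10, 50, 90):
    print((x, 100 - x), payoffs(x, 100 - x))

# Non-communicating players must guess the same x. Only the even split
# has no equally-distinguished neighbor to bargain toward, which is what
# makes it the Schelling point on a featureless territory.
```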
The coordination problem does re-assert itself at a higher level: gerrymandering conceptual categories. Is the river or the rocky ridge the more natural dividing line? What really counts as the "border" of the rocky ridge, exactly? Participants will tend to engage in motivated cognition to justify whichever potential Schelling fence is a better fit for them.
The side-taking hypothesis of morality, discussed in The Side-Taking Hypothesis for Moral Judgement by Peter DeScioli and A Solution to the Mysteries of Morality by DeScioli and Kurzban, gives a similar account of where our moral intuitions come from. This time, rather than just thinking about two people negotiating a conflict about property rights, we think about the bystanders. Bystanders may get caught up in a conflict, as the two contestants call on their allies for support. People may have their own interests in the outcome, but they also prefer to end up on the winning side rather than the losing side. So, everyone is trying to predict which side others will take, in order to choose sides. This creates an equilibrium-selection situation, so simple rules about right and wrong can dominate complicated social calculations. (As before, complex social considerations come back due to the possibility of gerrymandering the concepts which make up the Schelling points.)
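Here's a toy version of that dynamic (my own sketch, not a model from the papers): bystanders mainly want to end up with the majority, so a public bright-line rule that everyone can apply beats private guessing.

```python
# Toy side-taking model (illustrative, not from DeScioli & Kurzban).
# Bystanders want to be on the winning (majority) side. With no shared
# rule they guess independently (modeled here as a coin flip); with a
# public bright-line rule they can all predict each other and converge.

import random

random.seed(0)
N = 101  # odd, so there is always a strict majority

def choose_sides(rule_verdict=None):
    if rule_verdict is not None:
        return [rule_verdict] * N  # everyone follows the public rule
    return [random.choice("AB") for _ in range(N)]  # independent guesses

for rule in (None, "A"):
    sides = choose_sides(rule)
    majority = max("AB", key=sides.count)
    print(f"rule={rule}: {sides.count(majority)}/{N} land on the winning side")
```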
Ziz's blog has good discussions of social order and justice in a Schelling-type framework, and coins the term Schelling reach to quantify a population's coordination power (how complex can one's reasoning be and still settle on the same Schelling point as others?). We can also understand language as an equilibrium-selection game: when you try to say something, you have to balance the various plausible interpretations which your audience might place on the various options of language at your disposal. People will gerrymander word meanings to make their preferred arguments more compelling. Ziz discusses consequences of this in DRM'd Ontology.
Willpower as Self-Coordination
Now let's relate this to arguments you have within yourself, via Ainslie's explanation of willpower in Breakdown of Will. Ainslie gives evidence that humans have systematically different preferences at different points in time. Moods and drives make different things desirable. Easy pleasure gets more tempting as it gets nearer. You set an alarm at night, thinking it would be good to get up early and get more done; come morning, you prefer to shut it off and sleep in.
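The model behind this is Ainslie's hyperbolic discounting: a reward of size A at delay D is valued at roughly A / (1 + kD). Hyperbolas drawn from different reward times cross, so the smaller, sooner reward overtakes the larger, later one as it gets close. A quick numeric sketch (the amounts, times, and k here are made up for illustration):

```python
# Hyperbolic discounting: value = amount / (1 + k * delay).
# Discount curves from different reward times cross, producing Ainslie's
# preference reversals. All numbers below are made-up illustrations.

def value(amount, delay, k=1.0):
    return amount / (1 + k * delay)

SLEEP_IN = 4.0       # smaller reward, available at hour 8 (alarm time)
EARLY_START = 10.0   # larger reward, available at hour 10

for now, label in ((0, "at bedtime"), (8, "when the alarm rings")):
    v_sleep = value(SLEEP_IN, 8 - now)
    v_early = value(EARLY_START, 10 - now)
    choice = "sleep in" if v_sleep > v_early else "get up"
    print(f"{label}: sleep={v_sleep:.2f}, early={v_early:.2f} -> {choice}")
# at bedtime: sleep=0.44, early=0.91 -> get up
# when the alarm rings: sleep=4.00, early=3.33 -> sleep in
```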
Ainslie suggests that we view this as a negotiation between instances of you across time. He calls these "interests" to avoid constantly sounding like he's talking about multiple personality disorder. Ainslie's definition of willpower is successful coordination between these interests via bright-line rules. Each interest knows that if it breaks the rules, it breaks your ability to coordinate with yourself; although each interest has somewhat different goals, the threat of global inability to coordinate is great enough to balance against almost any temptation.
This is why willpower feels sort of like a top-down imposition: some interests are blackmailing other interests with the threat of global discoordination, to make yourself do something which you don't want to do right now but which fits with your concept of "what you want to do".
One problem with this is that you have to use simple rules. Why? It's a lot like the Schelling negotiations discussed earlier. The interests have to coordinate on an equilibrium, which means it must be simple enough that there aren't plausible alternatives to bargain for. (Although, interests may try to bend even the simplest rules. There's a special Schelling-art to calculating excuses. "This is a special occasion, I can break my diet just this once!")
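In repeated-game terms (my framing with made-up numbers, not Ainslie's notation), each interest runs a grim-trigger calculation: a one-shot temptation payoff against the discounted stream of coordinated payoffs it would forfeit by breaking the rule.

```python
# Grim-trigger arithmetic for self-coordination (illustrative numbers).
# Breaking a bright-line rule pays once; if rule-breaking wrecks future
# coordination, the discounted stream of coordinated payoffs can outweigh
# almost any temptation -- provided the interests weigh the future heavily.

def keeps_rule(temptation, coop, discoord, discount):
    """True if the one-shot gain from breaking the rule is smaller than
    the discounted future value of staying coordinated."""
    future_loss = (discount / (1 - discount)) * (coop - discoord)
    return temptation - coop < future_loss

print(keeps_rule(temptation=10, coop=3, discoord=1, discount=0.9))  # True: patient self holds the line
print(keeps_rule(temptation=10, coop=3, discoord=1, discount=0.5))  # False: impatient self gives in
```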
Anna Salamon has described these systems of rules as ego structures rather than willpower (private communication). I really like thinking of it this way, and had written a post draft about it, but it wasn't written very well so I may or may not post it.
A second problem is that in order to threaten yourself with a global coordination failure, you have to be willing to follow through. This isn't such a problem with humans, because habit-building works this way already: it isn't very plausible that you'll be able to build healthy habits in the future if you keep breaking them in the present. However, greater threats provide greater incentives. This makes some people engage in more directly self-punishing behavior if they don't live up to their own standards, such as mentally beating themselves up and feeling awful about things, depriving themselves of other pleasures, etc.
Two Coordination Styles
Ziz strongly advises against Ainslie's willpower strategy, calling it self-blackmail. Nate Soares seems to do the same in the replacing guilt sequence. Most of Ziz's blog is about an alternative technique called fusion. (I don't recommend just reading that link; read Ziz from the beginning. The posts before the fusion posts are prerequisites.) Nate similarly spends most of his blog explaining how to do better. Both of them use self-coordination strategies which have aspects of Ainslie's approach: they view themselves as made up of sub-agents, and explicitly think about the coordination of those sub-agents. However, the flavor is much more like building up trust between sub-agents than blackmail. Other people in the Bay Area rationality community also seem to advocate similar approaches, particularly Andrew Critch (in in-person conversations). It seems like the people who do this have more Getting Stuff Done power. But how could this work? Schelling's framework seems rather compelling. Is there some way around it?
So, I've finally got to the point I promised to make at the beginning: it seems like there are two different sorts of coordination strategies. I don't have any better terminology lined up, so as per Schelling-nature, I'll borrow Ziz's: treaties vs fusion.
How is this possible? What are they doing differently? If I buy Ainslie's psychological model even approximately, this seems rather difficult to explain.
Here are several ideas:
**David Friedman is an expert in the economic analysis of law, and draws striking relationships between what rules are economically efficient, what's intuitively just, and what's used in practice. His book Law's Order contains more in this direction.