This is a very useful concept to have. I do this sort of thing a lot both at work and at home. It feels like a mirror image of Trigger-Action Plans - using automation to break a chain instead of to form a chain.
My solution for meetings is that every morning I set up alarms on my smartphone for all of that day's meetings, each 10 minutes before the meeting starts. My smartphone allows any number of alarms. I check my calendar for today's meetings right before the morning daily meeting (for which I have a repeating alarm set up).
Thanks for this new technique! I'll try to apply it next time I have this kind of problem.
For your failed example about distractions, isn't the issue that the chain is too specific? That is, if the chain is instead pomodoro -> bored -> surfing the net -> distracted, then the weakest link is again interesting, because you can just forbid yourself access to the internet.
Mark Forster, the best productivity author I've read (even though I don't agree with all of his ideas), makes a similar point in one of his books. He doesn't frame it as finding the weakest link in the negative chain, though, but as identifying the break in the positive chain that normally makes you act correctly, in cases where you usually get it right, e.g. if you usually show up on time for meetings but sometimes fail to. You have to think through what actually happened this time that made it go wrong: exactly where did the chain break, why, and what can you do to avert it? E.g. the traffic is usually OK but this time was bad, so you should always check the traffic in advance, or aim to arrive early.
This is a rationality technique I’ve been experimenting with. Thank you to Jack Ryan, Thomas Kwa, Sydney Von Arx, Noa Nabeshima, and Kyle Scott for helping me refine the method.
Algorithm
Examples
Many of these examples do not involve me, but I wrote them all in the first person for readability.
Example: messenger
Example: distracted while exercising
Example: sleep
Example: Facebook
Example: morning phone usage
Failed example: sleep
The problem with this example is that the causal chain involved very complicated processes, so even though one could try to construct a causal chain leading to the undesirable event, there was still enough opacity to make it difficult to locate an intervention.
Lesson: sometimes causal chains involve nodes like mental states, which are extremely difficult to get more specific about.
Failed example: being late to meetings
Warning: slightly fictional, but I am confident this general pattern exists.
This example might not be a failed example if trying harder actually ends up working. In my experience, however, resolving to try harder almost never results in changing the state of the world. It might work initially but will likely end up lapsing. In this case, the example fails because the causal chain does not reflect the true dynamics at play. A better causal chain might look like: meeting coming up -> don’t realize until one minute before -> am late, which might suggest an intervention of increasing the amount of forewarning one’s calendar gives.
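The forewarning intervention can be made concrete with a small sketch. This is purely illustrative, not the post's actual setup: the meeting times and the 10-minute lead are assumed values, and in practice you would simply configure reminder lead time in your calendar app.

```python
from datetime import datetime, timedelta

def reminder_times(meeting_starts, lead=timedelta(minutes=10)):
    """Compute an alarm time `lead` before each meeting start."""
    return [start - lead for start in meeting_starts]

if __name__ == "__main__":
    # Hypothetical meetings for one day.
    meetings = [datetime(2024, 3, 4, 14, 0), datetime(2024, 3, 4, 16, 30)]
    for alarm in reminder_times(meetings):
        print(alarm.strftime("%H:%M"))
```

The point of the sketch is that the intervention targets the "don't realize until one minute before" link, not the "am late" outcome itself.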
Lesson: inaccurate causal chains make interventions ineffective.
Failed example: distractions
This example fails because there are many, many paths to getting distracted. Writing down a single causal chain will almost certainly fail to capture the complete picture. Thus, even if the chain-as-written seems to suggest an easy intervention, in practice that intervention will only shift reality onto a new chain. This is akin to a programmer adding if/then statements to their code so it passes specific test cases. Focusing on the feeling of boredom might be a productive next step, but it is more difficult.
Lesson: sometimes the easiest link to break is also the most redundant.
Virtue: Specificity
Many problems disappear when you get specific enough. For example, Murphyjitsu can point out specific failure modes for your plan; pre-hindsight might tell me that my plan to “book a plane ticket” will fail because I have an aversion to the clunky interface and will constantly delay the task. Most of the time, however, it is very difficult to be specific.
A problem with many rationality techniques, however, is that they benefit from specificity but do not make it easy to be specific, because they ask you to consider hypotheticals. Murphyjitsu asks you to imagine that your plan failed. Goal factoring asks you to speculate on the possible goals an action might serve. TAPs work better the more specific they are, but still ask you to come up with a specific TAP on your own. Noticing asks you to notice… what, exactly?
Chain breaking has the benefit that the bulk of the work is writing down a causal chain for something that has actually happened, and it is much easier to be specific about things that actually happen. Imagine a keyboard. How many letters are between Y and P? Now look at your keyboard. How many? It’s much easier with the keyboard right in front of you; you can just count!
Similarly, while it might be difficult to imagine the failure of a plan you haven’t yet carried out, it should be easy to recall something that has happened to you many times. For instance, I find it very easy to imagine myself eating too many chocolate almonds; I just have to recall what happened 25 minutes ago. I was using my computer and got slightly hungry. Since there was a jar of chocolate almonds conveniently within reach, I mindlessly grabbed it. I then proceeded to eat multiple handfuls.
Jerry Cleaver said: “What does you in is not failure to apply some high-level, intricate, complicated technique. It’s overlooking the basics. Not keeping your eye on the ball.” It’s much easier to keep your eye on the ball when it’s right in front of you.
Limitations
As demonstrated by the failed examples, this technique has limitations.
Some chains cannot be specified
When attempting to determine the chain of events that leads to an outcome, you will sometimes find that the chain involves forces you do not have a white-box understanding of. For example, sometimes I might get distracted “because I’m tired” or fail to exercise because “I didn’t have enough energy.” It is unclear what “tiredness” or “energy” actually mean, let alone how to decrease the one or increase the other.
In these situations, it is very difficult to get more specific. One way around this is to specify other parts of the causal chain and intervene there. For example, tired -> distraction could be further specified as tired -> go on Reddit -> distracted, suggesting an intervention of blocking Reddit. If you can’t identify an intervention, further specifying any part of your causal chain is likely to be useful.
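As a concrete sketch of the “block Reddit” intervention: one low-tech approach is adding hosts-file entries that route a distracting domain to localhost. The snippet below only generates the lines; the domain list is an example, actually installing the lines requires admin rights, and a dedicated blocker app is usually more robust.

```python
def hosts_entries(blocked_domains):
    """Return /etc/hosts lines routing each blocked domain to localhost."""
    lines = []
    for domain in blocked_domains:
        lines.append(f"127.0.0.1 {domain}")
        lines.append(f"127.0.0.1 www.{domain}")
    return lines

if __name__ == "__main__":
    # Example blocklist; append these lines to /etc/hosts (with admin rights).
    print("\n".join(hosts_entries(["reddit.com"])))
```

The design choice mirrors the technique: rather than fighting the vague “tired” node, you break the specific, mechanical link (opening Reddit) that the refined chain exposes.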
However, there will be some situations where the true causal chain is almost entirely controlled by vague concepts. In these situations, chain breaking might not have any insights to offer. One possibility is to combine it with post-hoc noticing: you realize you’ve been on Reddit for 30 minutes and try to say/write/think everything you can remember about the previous 30 minutes, like “I felt a vague interest in what was going on in the world” and “there was no clear stopping point, so I never stopped.” The hope is that diligent practice will shorten the lag time until you’re able to notice the relevant phenomenon in real time.
False narratives
One benefit of chain breaking is the ease of specificity; one flaw is hindsight bias. When attempting to describe why something happened to you in the past, it’s easy to generate fake explanations. For instance, I might be late for a meeting and conclude that I forgot about it, and that my memory was poor because I drink too much alcohol. [2] A more realistic example might be the adoption of bird augury, which may have arisen from false causal narratives linking various bird sightings to increased rice yield.
In general, it’s much easier to convince yourself that you have understood something than to develop an actual understanding. The truth is likely that most phenomena are not even predictable in retrospect. Asking yourself “what happened?” and getting the correct answer is a non-trivial feat. For example, yesterday my body gave out before I was able to complete my exercise routine. What happened? I am not sure. Maybe I ate poorly or slept poorly. One particularly tempting narrative is that I overexerted myself on Monday and hadn’t recovered yet, but I suspect that narrative contains less truth than falsehood.
For this reason, I recommend using chain breaking on events that have happened many times in the past. If something has happened to you many times, you will have a much better sense of the various dynamics at play. For example, I had gotten distracted while exercising ~7 times before I pinned down my phone as the actual cause. Other plausible narratives included just needing longer recovery times or exercising on too low energy.
Links are replaceable
When thinking about various goals, I find that a useful question to ask oneself is “how far is this goal from the default outcome?” If my goal is to not get into a car crash, since I don’t make a habit of getting into car crashes, this goal is pretty close to the default. However, if my goal is to write 10,000 words in a day, since I usually write much less than 10,000 words in a day, this goal is pretty far from the default. One way of thinking about systemization is as making achieving various goals closer to the default outcome.
Chain breaking is far more useful when it’s trying to answer the question of “why did the default outcome not happen?” as opposed to “why did I fail to prevent the default?” For instance, if I usually don’t exercise, the question “why didn’t I exercise?” is likely to have many more answers than if I do usually exercise. If I usually turn in assignments on time, me turning in an assignment late is much more likely to contain a specific causal reason for the lateness than if lateness was the default.
Default outcomes usually happen because of many redundant causal links. Outcomes that differ from the default usually happen for specific causal reasons. Breaking a redundant causal chain is much harder than breaking a unique one: trying to prevent yourself from pulling all-nighters is much easier than trying to prevent yourself from sleeping.
There is some tension between wanting to focus on non-default outcomes and also wanting to focus on things that have happened many times. Balancing this tension means choosing a problem that you understand well enough to solve, that happens frequently enough to make a solution useful, and that is far enough from being a robust default to be tractable. For instance, about once a year I get really unproductive for a week or two. Solving this problem would be great, but I don’t understand it well enough to ensure a robust solution. On the other hand, I lose about 30 minutes every day to small transition costs between tasks and work/break cycles. It would be great if I could reclaim this time, but the loss happens so robustly that it is not very tractable.
I advise caution about solving problems by preventing yourself from getting what you want. These sorts of strategies work for me because I actually do want to exercise, but sometimes just forget; blocking my phone helps me remember. ↩︎
This is fictional. Coming up with an example of this proved too difficult. Wittgenstein: “If there were a verb meaning ‘to believe falsely,’ it would not have any significant first person, present indicative.” ↩︎