Aaron Swartz's post Theory of Change suggests that working backwards like this can generally be a good strategy, even for long-term goals.
That's a great post! Five-paragraph summary:
Imagine you want to decrease the size of the defense budget. The typical way you might approach this is to look around at the things you know how to do and do them on the issue of decreasing the defense budget. So, if you have a blog, you might write a blog post about why the defense budget should be decreased and tell your friends about it on Facebook and Twitter. If you’re a professional writer, you might write a book on the subject. If you’re an academic, you might publish some papers. Let’s call this strategy a “theory of action”: you work forwards from what you know how to do to try to find things you can do that will accomplish your goal.
A theory of change is the opposite of a theory of action — it works backwards from the goal, in concrete steps, to figure out what you can do to achieve it. To develop a theory of change, you need to start at the end and repeatedly ask yourself, “Concretely, how does one achieve that?” A decrease in the defense budget: how does one achieve that? Yes, you.
AUDIENCE MEMBER: Congress passes a new budget with a smaller authorization for defense next year. [...] you get a majority of the House and Senate to vote for it and the President to sign it.
Great, great — so how do you get them to do that? Now we have to think about what motivates politicians to support something. [...] if you have a politician with a given set of beliefs, how do you convince them that cutting the defense budget advances those beliefs? [...] you need to find people the politicians trust and get them to convince the politicians. [...] we can continue down this road for a while — figuring out who politicians trust, figuring out how to persuade them, figuring out how to get them to, in turn, persuade the politicians, etc. [...] You can see that this can take quite a while.
It’s not easy. It could take a while before you get to a concrete action that you can take. But do you see how this is entirely crucial if you want to be effective? Now maybe if you’re only writing a blog post, it’s not worth it. Not everything we do has to be maximally effective. But DC is filled with organizations that spend millions of dollars each year and have hardly even begun to think about these questions. I’m not saying their money is totally wasted — it certainly has some positive impacts — but it could do so much more if the people in charge thought, concretely, about how it was supposed to accomplish their goals.
Hmm, I view those as medium-term goals, where this method can be quite effective - but note even there the threat of lost purposes. Decreasing the defense budget is likely a proxy for another, deeper, goal. If you build an organization and program dedicated to decreasing the defense budget and it turns out that other paths would have been more effective, you may find yourself constrained by the actions you've already taken.
On the other hand, many of the early actions you might take do build up the sort of generalized advantage you might be able to use even if another scenario proves to be more relevant - but once you get far enough down the chain you risk overcommitting to one particular route or subproblem.
I don't think it's just a matter of long vs. short term that makes or breaks backwards chaining--it's more a matter of the backwards branching factor.
For chess, this is enormous--you can't disjunctively consider every possible mate, and for each possible mate there are too many immediate predecessors to extract useful information from. You can try to break the mates into categories and reason about those, but the details are so important here that you're unlikely to get any insights more useful than "removing the opponent's pieces while keeping mine is a good idea".
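To put rough numbers on that branching-factor point, here's a quick Python sketch -- the mate count and the backward branching factor are purely made-up assumptions for illustration, not real chess statistics:

```python
# A minimal sketch, with purely illustrative numbers (not real chess statistics),
# of why backward chaining explodes when the goal set and the backward
# branching factor are both large.

def positions_to_consider(num_goal_states, backward_branching, depth):
    """Rough count of positions reached by chaining back `depth` plies
    from every goal state, assuming `backward_branching` predecessors each."""
    return num_goal_states * backward_branching ** depth

num_mates = 10**6          # stand-in for "every possible mate", not a real count
predecessors_per_pos = 50  # assumed backward branching factor
for depth in (1, 2, 3, 4):
    print(depth, positions_to_consider(num_mates, predecessors_per_pos, depth))
# By depth 4 this is already ~6 * 10**12 positions -- far more than a human
# (or a naive search) can usefully enumerate.
```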
Fighting a war is a bit better--since you mention Imperial Japan in another comment, let's sketch their thought process. (I might garble some details, but I think it'll work for our purposes.) Their end goal was roughly that Western powers not break up the Japanese Empire. Ways this might happen: a) Western powers are diplomatically convinced not to intervene. b) Japan uses some sort of deterrent threat to convince Western powers not to intervene. c) Japan's land forces can fight off any attempted attack on their empire. d) Japan controls the seas, so foreign powers can't deliver strong attacks. This is a short enough list that you can consider them one by one, and close enough to exhaustive to make the exercise have some value. Choosing the latter pretty much means abandoning a clean backward chain, which you should be willing to do, but the backwards chain has already done a lot for you! And it's possible that with the US's various advantages, a decisive battle was the only way to get even a decent chance at a war win, in which case the paths to victory do converge there and Japan was right to backwards chain from that, even if it didn't work out in the end.
As for defense budgets, you might consider that we're backwards chaining on the question "How to make the world better on a grand scale?" You might get a few options: a) Reduce poverty, b) cure diseases, c) prevent wars, d) mitigate existential risk. Probably not exhaustive, but again, this short list contains enough of the solution space to make the exercise worthwhile. Looking into c), you might group wars into categories and decide that "US-initiated invasions" is a large category that could be solved all at once, much more easily than, say, "religious civil wars". And from there, you could very well end up thinking about the defense budget.
I don't think you need to overcommit when doing backchaining. I don't think Aaron Swartz had any problem with overcommitting.
That's not what led to his death; rather, it was a miscalculation of the political forces.
The ripple effects from Aaron Swartz's actions are immense.
I think backchaining becomes much more powerful when done on several different timescales.
To use the chess example, on the baseline timescale the game looks like a sequence of moves and positions. Zooming out to a higher timescale, the game looks more like strategic maneuvering (e.g. what is the material balance, is the queenside/kingside strong or weak, is the king in a vulnerable or safe position, etc.). So, on the higher timescales, you think in terms of gaining strategic advantages instead of making specific moves. But you can still think of it as backchaining.
To use another example, suppose you're working on a software product. Your ultimate goal is (for example) selling your company for a certain sum of money. You backchained from that, and the current subgoal in the chain is releasing a new version of the product. Zooming in to a lower timescale, you backchain from releasing a new version and the current subgoal is fixing a particular bug. Zooming in even further, you backchain from fixing the bug and the current subgoal is adding a few lines of code that emit some useful data to the log file. Et cetera. But what you don't do is try to plan the release of next year's version in detail: you don't zoom in to a low timescale on subgoals that are too far along the high-timescale chain.
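As a toy sketch of that multi-timescale idea (the goal names are hypothetical, just echoing the example above), you might represent each timescale's chain and only expand the nearest subgoal at each level:

```python
# Hypothetical multi-timescale backchain (goal names are made up, echoing the
# example above). Each level's chain is coarse; we only zoom in on the
# *nearest* subgoal at each level, leaving later subgoals unexpanded.
chains = {
    "sell the company":  ["release v2.0", "release v3.0 next year"],
    "release v2.0":      ["fix the crash bug", "update the docs"],
    "fix the crash bug": ["add logging around the failing code path"],
}

def zoom_in(goal, depth=0):
    """Print the plan, expanding only the nearest subgoal at each timescale."""
    print("  " * depth + goal)
    subgoals = chains.get(goal, [])
    for i, subgoal in enumerate(subgoals):
        if i == 0:
            zoom_in(subgoal, depth + 1)  # nearest subgoal: zoom in further
        else:
            print("  " * (depth + 1) + subgoal + "  (left coarse for now)")

zoom_in("sell the company")
# sell the company
#   release v2.0
#     fix the crash bug
#       add logging around the failing code path
#     update the docs  (left coarse for now)
#   release v3.0 next year  (left coarse for now)
```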
On the other hand, it's often important to think in terms of several disjunctive / branching scenarios rather than a single linear chain.
I'm appreciative of short/sweet posts describing important concepts succinctly (though I do think it might be good to tag this sort of post as "summarizing/explaining existing concepts").
(As far as I can tell there is very little, if any, difference between this and backward induction in game theory, but I've heard several in the community call this "backchaining" recently and so use that term here.)
They're different. Backward induction starts with all possible end states, and then looks at all 2nd-to-last states and "solves" those by looking at what action is best; then looks at all possible 3rd-to-last states and "solves" those by choosing the action leading to the best 2nd-to-last state, and so on until you've solved the entire game.
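For concreteness, here's a minimal backward-induction sketch in Python over a tiny two-player game tree (the structure and payoffs are invented purely for illustration). The recursion solves the last decisions first and propagates the results upward, which is the same computation as the layer-by-layer description above:

```python
# Minimal backward induction on a tiny made-up game tree. Leaves hold
# payoffs for (player 0, player 1); internal nodes record whose turn it is.
# The recursion bottoms out at the end states and solves each decision
# assuming the already-solved values of the states that follow it.
tree = {
    "root": {"player": 0, "children": ["L", "R"]},
    "L":    {"player": 1, "children": ["LL", "LR"]},
    "R":    {"player": 1, "children": ["RL", "RR"]},
    "LL": {"payoff": (3, 1)},
    "LR": {"payoff": (0, 2)},
    "RL": {"payoff": (2, 2)},
    "RR": {"payoff": (1, 0)},
}

def solve(node):
    """Return the payoff pair reached under optimal play from `node`."""
    info = tree[node]
    if "payoff" in info:
        return info["payoff"]
    # The player to move picks the child whose solved payoff is best for them.
    results = [solve(child) for child in info["children"]]
    return max(results, key=lambda payoff: payoff[info["player"]])

print(solve("root"))  # (2, 2): player 1 would answer L with LR, so player 0 plays R.
```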
This seems like a case of generalizing from one example. Chess is a toy scenario. The easiest way to make a chess AI yields an AI that's good at chess but not much else. And there are any number of toy scenarios we could construct in which backchaining is or isn't useful. To give an example of a toy scenario where backchaining is clearly useful, consider the case of trying to find the shortest path between two points on a graph. If you do a bidirectional search, searching from both the starting point and the ending point simultaneously, you can cut your algorithm's exponent in half.
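Here's a minimal sketch of that bidirectional-search point (the graph is made up for illustration). Each frontier only has to reach roughly half the total depth, so the work is on the order of 2 * b^(d/2) instead of b^d:

```python
from collections import deque

def bidirectional_bfs(graph, start, goal):
    """Shortest path length via two BFS frontiers that meet in the middle.
    Each frontier only needs to reach about half the total depth."""
    if start == goal:
        return 0
    dist_s, dist_g = {start: 0}, {goal: 0}
    q_s, q_g = deque([start]), deque([goal])
    while q_s and q_g:
        # Alternate: expand each frontier by exactly one layer per pass.
        for q, dist, other in ((q_s, dist_s, dist_g), (q_g, dist_g, dist_s)):
            for _ in range(len(q)):
                node = q.popleft()
                for nbr in graph.get(node, []):
                    if nbr in other:                      # the frontiers have met
                        return dist[node] + 1 + other[nbr]
                    if nbr not in dist:
                        dist[nbr] = dist[node] + 1
                        q.append(nbr)
    return None  # no path

# Tiny made-up undirected graph (each edge listed in both directions).
graph = {
    "A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
    "D": ["B", "C", "E"], "E": ["D"],
}
print(bidirectional_bfs(graph, "A", "E"))  # 3  (A -> B -> D -> E)
```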
And even for chess, I think you are stating your case too strongly. Suppose I've never played chess before and I'm sitting down at a chess board for the first time. My odds of winning will improve if I know what the word "checkmate" means and I have various examples of how checkmate can happen. My odds of winning will further improve if I have tried out various endgame positions and I know whether e.g. it's possible to win with a king and two knights. Perhaps an experienced player is familiar with chess endgames and knows from memory which piece combinations can win. This might represent a case where the player has reached the point of diminishing returns from backchaining (cf. your "you have a reasonable sense of where you're going" premise), but that doesn't mean their study of endgames was useless.
For a quick example of a real-world scenario where I suspect backchaining is useful, consider Nick Bostrom's existential risks paper. I think, in general, looking at real-world scenarios is a more useful way to address this question. For example, in chess, if all the knights on the board have been captured, the king-and-two-knights scenario is one I can definitively rule out. Knowledge of that sort of endgame is no longer useful. But it's hard to definitively rule out any of the risks on Bostrom's list.
A fairly similar statement holds for practically all game-playing algorithms; in this respect, chess differs little from go, even though the algorithms used to play each tend to be quite different. However, the story changes when we move to AI planning algorithms more generally; backchaining is common for planning.
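As a toy illustration of what backward chaining looks like as a planning algorithm (a made-up goal-regression sketch with an invented domain, not any particular planner's API), you chain back from the goal through each action's preconditions and then emit the steps in executable order:

```python
# Toy goal-regression sketch (made-up domain, not a real planner's API).
# Each achievable fact lists the preconditions that must hold first; we
# chain backwards from the goal, then emit the steps in executable order.
actions = {
    "tea_made":      ["water_boiled", "teabag_in_cup"],
    "water_boiled":  ["kettle_filled"],
    "kettle_filled": [],
    "teabag_in_cup": [],
}

def plan_for(goal, ordered=None):
    """Backchain from `goal`: plan all preconditions, then the goal itself."""
    ordered = [] if ordered is None else ordered
    if goal in ordered:
        return ordered  # already planned for
    for precondition in actions.get(goal, []):
        plan_for(precondition, ordered)
    ordered.append(goal)
    return ordered

print(plan_for("tea_made"))
# ['kettle_filled', 'water_boiled', 'teabag_in_cup', 'tea_made']
```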
I think studying AI algorithms for these sorts of things is, generally, quite informative with respect to what types of reasoning you can expect to work well. Especially if you then practice the skill yourself and watch for what kinds of reasoning you're doing, and when they're effective.
There are many other situations where this sort of thing applies, some with far more serious consequences than a game of chess.
For instance, during World War II the Imperial Japanese Navy had a strategy called Kantai Kessen ("naval fleet decisive battle", often referred to simply as the Decisive Battle Doctrine), which was essentially a big backchain of this sort.
Reasoning that a naval war between Japan and the United States would culminate in a decisive battle between the fleets, and that winning this battle would win the war (as it had for the Japanese with the Battle of Tsushima against the Russians in 1905), Japanese strategists designed a war plan that focused heavily on putting themselves into a strong position to initiate such a decisive battle, chaining back from this all the way to the level of what types of ships to build.
However, this reasoning backfired. The Japanese fixation on concentrating forces for a major battle led them to ignore elements of the war that could have given them an advantage. For instance, Japan never had a serious anti-commerce raiding strategy on either offense or defense; their submarines were focused on whittling down the enemy fleet in preparation for a final battle, and they neglected attacks on US shipping while inadequately defending their own shipping from similar methods.
By contrast, while the United States had begun the war with similar "decisive battle" ideas (these were quite in vogue thanks to Mahan's influence), they were ironically forced to come up with a new strategy following heavy losses at Pearl Harbor. Their "island hopping" strategy focused on building incremental advantages and didn't rely on staging a specific battle until circumstances presented that as the best option - and indeed proved far more effective.
Now, there are of course other factors at work here - the US had very relevant industrial and commerce advantages, for instance - but this does seem a non-toy example where focusing on chaining backwards from a desired end point too far in the future led to serious strategic errors.
Hm, I can't say I find this example very convincing either. In Bostrom's paper, he identifies many different ways in which the human species could go extinct. If the Japanese thought the same way Bostrom did, they would have brainstormed many different scenarios under which they could lose the war. Their failure to do so represents a lack of lateral thinking, which seems orthogonal to the forward chain vs backward chain thing. Lack of lateral thinking can come up during forward chaining too, if you don't fully explore your options (e.g. spending all of your time thinking about air power and none of your time thinking about sea power).
Anyway, I suspect a balance of both forward and back chaining is best. Backchaining is good for understanding which factors are actually important. Sometimes it's not the ones you think would give you "generalized advantage". For example, during the Vietnam War, the Tet Offensive was a military loss for the North, so a naive notion of "generalized advantage" might have indicated it was a bad idea. But it ended up being what allowed them to win the war in the long run due to its psychological effect on the American public. If the US military had backchained and tried to brainstorm all of the scenarios under which the South could lose the war ("murphyjitsu"), they might have realized at a certain point that demoralization of the American public was one of the few remaining ways for them to lose. Further backchaining, through thinking like the enemy and trying to generate maximally demoralizing attack scenarios, might have suggested the idea of a surprise attack during the Lunar New Year truce period.
I'd expect our intuitions about "generalized advantage" to be least reliable in domains where we have little experience, such as future technologies that haven't been developed yet. But I think backchaining can be useful in other scenarios as well--e.g. if my goal is to be President, I could look at the resume of every President at the time they were elected, and try to figure out what elements they had in common and how they were positioned right before the start of their successful run.
Even without strong guidelines for when to apply backchaining vs "forward chaining", it seems like one can get a boost in decision making from just realizing that there are multiple options.
Good post! I have several questions:
Recently, some have been discussing "backchaining" as a strategic planning technique. In brief, this technique involves selecting a target outcome, then chaining backwards from there to determine what actions you should take; in other words, rather than starting with your current position and extrapolating forward, you start with the desired position and extrapolate back. (As far as I can tell there is very little, if any, difference between this and backward induction in game theory, but I've heard several in the community call this "backchaining" recently and so use that term here.)
For instance, you might say "Okay, to get people on board with this project we'll need to give a presentation. To make a presentation we'll need a slide deck. To make a slide deck we'll need to get the relevant metrics. Therefore, let's go get the relevant metrics."
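That chain is simple enough to write down directly -- a minimal sketch in Python, using just the goal names from the example above:

```python
# The chain from the example above, written as "goal -> what it requires".
requires = {
    "people on board": "presentation",
    "presentation":    "slide deck",
    "slide deck":      "relevant metrics",
}

def backchain(goal):
    """Walk backwards from the goal, then reverse to get the order to act in."""
    chain = [goal]
    while chain[-1] in requires:
        chain.append(requires[chain[-1]])
    return list(reversed(chain))

print(backchain("people on board"))
# ['relevant metrics', 'slide deck', 'presentation', 'people on board']
# i.e. "Therefore, let's go get the relevant metrics."
```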
Backchaining can be useful in that it allows you to ensure that your actions are cutting through to the objective. However, there are some scenarios where backchaining isn't really an appropriate technique - namely ones where the objective is long-term in nature or involves too many unknown unknowns.
One example that might prove fruitful to investigate is that of chess. In chess, backchaining from desired positions can be useful. However, humans simply can't establish long enough chains for this to be a practical tool in long-term planning. It would be absurd to say "All right, this game I'm going to go for a king-and-rook mate on the 'a' file against my opponent's king alone. To do that, I'll need to use the principle of zugzwang to force my opponent's king to move into an unsafe position. To do that, I'll need to..."
Instead, one usually focuses on accumulating generalized advantage in the opening and middlegame, and once sufficient advantage has been established one can then transition into planning for specific endgame positions. This approach is much more effective - fixating on too particular a scenario constrains your options, so instead you just focus on building an advantage.
A similar thing is true in other domains. While backchaining can be a useful technique for accomplishing projects with relatively concrete and legible goals, it starts to falter and even become counterproductive when aimed at a target that's too far in the future. It's easy for this type of reasoning to lead to overcommitting to specific paths that may or may not be the best ones to take - better to instead focus on building generalized advantage, at least insofar as you have a reasonable sense of where you're going.