Recently, some have been discussing "backchaining" as a strategic planning technique. In brief, the technique involves selecting a target outcome and then chaining backward from it to determine what actions to take; in other words, rather than starting from your current position and extrapolating forward, you start from the desired position and extrapolate back. (As far as I can tell there is very little, if any, difference between this and backward induction in game theory, but I've heard several people in the community call this "backchaining" recently, so I use that term here.)
For instance, you might say "Okay, to get people on board with this project we'll need to give a presentation. To make a presentation we'll need a slide deck. To make a slide deck we'll need to get the relevant metrics. Therefore, let's go get the relevant metrics."
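Put as code, the idea is just a depth-first walk over prerequisites. Here's a minimal sketch in Python; the goal names and the prerequisite map are hypothetical, purely for illustration:

```python
# A minimal sketch of backchaining as prerequisite-chasing. The goals and
# the prerequisite map below are made up, purely for illustration.

PREREQUISITES = {
    "get buy-in":        ["give presentation"],
    "give presentation": ["make slide deck"],
    "make slide deck":   ["gather metrics"],
    "gather metrics":    [],  # no prerequisites: something we can do right now
}

def backchain(goal, prereqs):
    """Chain backward from the goal, returning actions in the order to take them."""
    plan = []
    def visit(step):
        for dep in prereqs.get(step, []):  # first handle everything this step needs
            visit(dep)
        if step not in plan:
            plan.append(step)
    visit(goal)
    return plan

print(backchain("get buy-in", PREREQUISITES))
# ['gather metrics', 'make slide deck', 'give presentation', 'get buy-in']
```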
Backchaining is useful in that it helps ensure your actions are actually cutting through to the objective. However, there are some scenarios where backchaining isn't an appropriate technique - namely ones where the objective is long-term in nature or involves too many unknown unknowns.
One example that might prove fruitful to investigate is that of chess. In chess, backchaining from desired positions can be useful. However, humans simply can't establish long enough chains for this to be a practical tool in long-term planning. It would be absurd to say "All right, this game I'm going to go for a king-and-rook mate on the 'a' file against my opponent's king alone. To do that, I'll need to use the principle of zugzwang to force my opponent's king to move into an unsafe position. To do that, I'll need to..."
Instead, one usually focuses on accumulating generalized advantage in the opening and middlegame, and once sufficient advantage has been established one can transition into planning for specific endgame positions. This approach is much more effective - fixating too early on a particular scenario constrains your options, whereas building advantage keeps them open.
A similar thing is true in other domains. While backchaining can be a useful technique for accomplishing projects with relatively concrete and legible goals, it starts to falter, and even becomes counterproductive, when aimed at a target too far in the future. This type of reasoning easily leads to overcommitting to specific paths that may not be the best ones available - better to focus instead on building generalized advantage, at least insofar as you have a reasonable sense of where you're going.
This seems like a case of generalizing from one example. Chess is a toy scenario. The easiest way to make a chess AI yields an AI that's good at chess but not much else. And there are any number of toy scenarios we could construct in which backchaining is or isn't useful. To give an example of a toy scenario where backchaining is clearly useful, consider trying to find the shortest path between two points on a graph. If you do a bidirectional search, searching from the starting point and the ending point simultaneously, you can roughly halve the exponent in your algorithm's running time: with branching factor b and path length d, the two half-depth searches touch on the order of b^(d/2) nodes each, rather than the b^d of a single forward search.
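To make that concrete, here's a minimal sketch of bidirectional breadth-first search on an unweighted graph; the adjacency-list representation and the toy graph at the end are my own assumptions for illustration. Each side expands one full BFS layer at a time, and the search stops once the two frontiers meet:

```python
from collections import deque

def bidirectional_search(graph, start, goal):
    """Length of the shortest path in an unweighted graph, searched from both ends."""
    if start == goal:
        return 0
    # Each distance map doubles as the visited set for its direction.
    dist = {start: {start: 0}, goal: {goal: 0}}
    frontier = {start: deque([start]), goal: deque([goal])}

    while frontier[start] and frontier[goal]:
        # Expand whichever frontier is smaller, keeping the two searches balanced.
        side = start if len(frontier[start]) <= len(frontier[goal]) else goal
        other = goal if side == start else start
        meetings = []
        for _ in range(len(frontier[side])):  # one full BFS layer
            node = frontier[side].popleft()
            for nbr in graph.get(node, []):
                if nbr in dist[other]:  # the two searches have met at nbr
                    meetings.append(dist[side][node] + 1 + dist[other][nbr])
                elif nbr not in dist[side]:
                    dist[side][nbr] = dist[side][node] + 1
                    frontier[side].append(nbr)
        if meetings:  # finish the layer, then take the best meeting point
            return min(meetings)
    return None  # no path between start and goal

# Toy graph: a-b-c-d-e in a line, so the shortest path from a to e has length 4.
graph = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"],
         "d": ["c", "e"], "e": ["d"]}
print(bidirectional_search(graph, "a", "e"))  # 4
```

(Finishing the layer before returning, rather than stopping at the first meeting, is what guarantees the answer is actually the shortest path.)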
And even for chess, I think you are stating your case too strongly. Suppose I've never played chess before and I'm sitting down at a chessboard for the first time. My odds of winning will improve if I know what the word "checkmate" means and have various examples of how checkmate can happen. My odds will further improve if I have tried out various endgame positions and know whether, e.g., it's possible to win with a king and two knights. Perhaps an experienced player is already familiar with chess endgames and knows from memory which piece combinations can win. That might represent a case where the player has reached the point of diminishing returns from backchaining (cf. your "you have a reasonable sense of where you're going" premise), but it doesn't mean their study of endgames was useless.
I think that, in general, looking at real-world scenarios is a more useful way to address this question. For a quick example of one where I suspect backchaining is useful, consider Nick Bostrom's existential risks paper. In chess, if all the knights on the board have been captured, I can definitively rule out the king-and-two-knights endgame, and knowledge of that endgame is no longer useful. But it's hard to definitively rule out any of the risks on Bostrom's list.
Interesting suggestion! Is there a starting point you would recommend for this sort of study?