Yes, participating in it did kinda feel like that!
I remember that in at least one document, people actually came up with the meta-strategy you mention at the end and wrote it out explicitly in the doc!
Personally, I felt that beyond just organizing the information, it helped when the steps to solving the problem were broken down into self-contained subproblems, none of which required understanding the full problem. This wasn't always possible (e.g., locating a bug often requires understanding both the problem and the implementation), but when it happened, I think it really helped with progress.
I actually participated in the original Relay experiment described here as one of the players, and this seems like a good place to share my experience with it.
The format of the iterative collaboration with different people felt a lot like collaborating with myself across time, when I might drop a problem to think about something else and then come back to it. If I haven't taken the right notes on what I tried and (crucially) what I should do next, I have to pay a tax in time and mental resources to figure out what I did.
It was interesting to experience this here: the combination of the time limit and the collaboration with other people meant that I felt a lot of pressure each time to quickly figure out what was going on.
More concretely, for some problems I could understand the problem quickly enough and follow along with the solution so far. For others, I would be greeted with a 10-page Google Doc documenting everything that had been tried, with no clear next step. In a few cases, it took me 10 minutes just to read through the full problem and the attempted approaches, and I didn't even have time to rephrase it.
In the cases where I felt I made the most progress, there was a clear next step (or a list of possible steps) that I could implement. For instance, for one of the prime-number problems, a clear intermediate step was that we needed a fast primality test, and that was a feasible 10-minute implementation that didn't require understanding the full problem.
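To give a sense of the size of that kind of subtask: something like a Miller-Rabin test is about what fits in that time budget. This isn't the actual code we wrote during the Relay (I don't have it), just a Python sketch of the flavor of a self-contained 10-minute step:

```python
def is_prime(n: int) -> bool:
    """Miller-Rabin primality test; with this fixed witness set it is
    deterministic for all 64-bit n. A sketch, not the Relay's actual code."""
    if n < 2:
        return False
    small_primes = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in small_primes:
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in small_primes:  # these witnesses suffice for any 64-bit n
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False  # witness a proves n composite
    return True
```

Something like `is_prime(2**61 - 1)` returns True near-instantly, and a fresh player can write and sanity-check this without knowing why the broader problem needs it.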
I still think a lot about my experience in the Relay, as it has affected how I document my progress on projects so that I can pick them back up faster next time.
It seems that people are focusing a lot on the visualization as a tool for removing biases, rather than as a tool for mapping them. Visualizations can have value as a summary tool rather than as a way to logically constrain thinking.
Some examples of such visualizations:
These kinds of visualizations give you a different way to look at the problem, one that may appeal to a different sense. I can already see value in that as a way to summarize biased thought.
That said, I do agree with the comments that the diagram could perhaps provide a bit more constraint. Going off of abramdemski's comment above, I think coloring or styling the lines by the type of reasoning involved would be useful. For instance, in your examples, you could tag the lines with "future prediction" for the planning fallacy example, or something like "attribute inference" for the Bayesian inference example and maybe the undistributed middle example. By disambiguating between these types in the diagram, you can add rules about the input needed to correct a biased inference: a "future prediction" line without an "outside view" box would be highly suspect.
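To make that concrete, here's a toy sketch (purely hypothetical; the node labels, edge kinds, and rule table are mine, not from the post) of how typed lines could support a mechanical rule check:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    inputs: list[str] = field(default_factory=list)  # labels of attached boxes

@dataclass
class Edge:
    source: str
    target: str
    kind: str  # reasoning type, e.g. "future prediction"

# Hypothetical rule set: each reasoning type requires a corrective input box.
REQUIRED_INPUT = {"future prediction": "outside view"}

def suspect_edges(nodes: dict[str, Node], edges: list[Edge]) -> list[Edge]:
    """Return edges whose reasoning type is missing its corrective input."""
    flagged = []
    for e in edges:
        needed = REQUIRED_INPUT.get(e.kind)
        if needed is not None and needed not in nodes[e.target].inputs:
            flagged.append(e)
    return flagged

nodes = {"estimate": Node("estimate"), "deadline": Node("deadline")}
edges = [Edge("estimate", "deadline", "future prediction")]
print(suspect_edges(nodes, edges))  # flagged: no "outside view" input attached
```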
I agree that at first glance it may seem like advertising, but it differs in quite a few ways:
Really, I see nothing wrong with offering rational advising on a site that aims to improve human rationality.
In what year will more than 30% of US adults own a device whose primary interaction mode is augmented reality (such as Google Glass or the rumored Apple AR glasses)?