Really enjoyed using these safety and risk visualizations in this context!
It was serendipitous when a safety blogger I follow posted this:
https://safetyrisk.net/the-risk-matrix-myth/
I'd recommend reading his posts and watching his videos on this topic.
The balance of new capability (aka innovation) vs. safety (reducing known risks) is an important dynamic in any team that builds. I haven't found a way to build perfect safety without severely limiting all new capability. The second-best option is to have a good ability to react when you do find problems.
There is a lot to be said for this as a way to avoid errors, common biases, or 'blunders.' This is probably why 'pair programming' works the way it does.
It makes me wonder if it is more important to have these examples in the moment of practice rather than before it. The space is so large that selecting based on the real problem in front of you helps avoid wasting time.
One way I've found helpful is to use a deck of cards with questions or provocations (e.g. Oblique Strategies, Trigger Cards, etc.). It can help to have a related set of questions for things that should be considered but rarely are. However, provocations that are unrelated can still produce interesting results.
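For what it's worth, here is a minimal sketch of that kind of randomized draw; the prompts are placeholders I made up, not actual Oblique Strategies or Trigger Cards:

```python
import random

# Placeholder provocations; a real deck (Oblique Strategies, Trigger Cards, etc.)
# would have its own prompts, related or deliberately unrelated to the problem.
DECK = [
    "What would the simplest version look like?",
    "Which constraint could you remove entirely?",
    "Who is harmed if this works exactly as intended?",
    "What would you do if you had to ship it today?",
]

def draw(deck, n=1, seed=None):
    """Draw n provocations at random, without replacement."""
    rng = random.Random(seed)
    return rng.sample(deck, k=min(n, len(deck)))

if __name__ == "__main__":
    for prompt in draw(DECK, n=2):
        print(prompt)
```

The point is less the code than the mechanism: the draw is random, so you can't steer toward the questions you already favor.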
Another possibility is to do characteristic matching of the problem (or solution), as in TRIZ or what is outlined in this paper:
It reminds me a bit of a scenario planning matrix where you use a main issue as the center point of the 2x2. The two axes are the value and the shadow. You could then work through it as a team via private ideation of each quadrant.
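As a rough sketch of the structure, assuming the axes are just the two ends of "value" and "shadow" (the labels here are hypothetical, not from any particular framework):

```python
from itertools import product

# Hypothetical example: the main issue sits at the center of the 2x2.
# One axis is the value (what you hope to gain), the other is the shadow
# (the downside or cost you'd rather not look at).
value_axis = ("value realized", "value missed")
shadow_axis = ("shadow surfaces", "shadow stays hidden")

# Each quadrant becomes a prompt for private ideation before sharing as a team.
for value, shadow in product(value_axis, shadow_axis):
    print(f"Quadrant: {value} / {shadow} -> what does this future look like?")
```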
Are you looking to learn them, or to consider them while actively doing something? I've found randomization card decks helpful when there are a lot of options. They let you explore a bit more than you would have otherwise, and they don't depend on your ability to recall the options or on not being biased for or against certain ones.
I talk about it from an ideation POV here:
https://interaction19.ixda.org/program/talk-using-randomness-to-break-down-biases-chris-butler/
I hadn't heard of that before. Thanks!
I'm excited to try this out in both strong and weak forms.
There are parallels to getting to the crux of something in design and product discovery research, where it is called Why Laddering. I have used it when trying to understand the reasons behind a particular person's problem or need. If someone starts too specific, it is a great way to step back from the solutions they had preconceived before knowing the real problem (or whether there even is one).
It also attempts to get to a higher level in a system of needs.
Are there ever times when double crux has resulted in narrowing with each crux? Or do the cruxes generally become more general?
There is the reverse as well, called How Laddering, which tries to find solutions for more general problems.
It sounds like the 'reverse double crux' would be to find a new, common solution after a common crux has been found.
You should check out Vaughn Tan's new work on "not knowing." I think his framing of uncertainty about possible actions, possible outcomes, the linkage of actions to outcomes, and the value of outcomes could be a way to think about these vague goals.
https://vaughntan.org/notknowing
I've been joining his Interintellect conversations and they have been really great:
https://interintellect.com/series/thinking-about-not-knowing/