I recently read this post that discusses the author’s experience leading military decision-making exercises for ROTC cadets during the pandemic. I’m going to briefly describe what those exercises looked like, and how the principles could be adapted to teach software engineering effectively.
To quote from that article, Tactical Decision Games (TDGs) are “deceptively simple military decision-making exercises usually consisting of no more than a map and a few paragraphs of text describing a situation. Students are placed in the role of the commander of a unit with a mission and a specified set of resources. You have some information about the enemy, but not as much as you’d like. Then something unexpected happens, upending the situation and requiring you to come up with a new action plan on the spot. Then, after issuing your new orders, you must explain your assessment of the new situation and the rationale behind your decision.”
So: you are given some initial information and make your best guess about what to do based on it. Then more information arrives, and you have to come up with something new. That sounded a lot to me like watching software requirements change over time, and getting to see whether your initial data structures and code shape hold up to the changes.
To adapt these exercises to software engineering, you could imagine a programmer being given a set of requirements and implementing code to meet them. Then the requirements change: they have to add or modify functionality, which lays bare any maintainability problems in the original design. Run enough games like this, possibly paired with feedback on which designs would have been easy to refactor or change, and programmers will learn how to write more maintainable code. I think this would be an excellent addition to any college “computer science” curriculum that wants to teach software engineering, and it would deliver a lot of value for the effort it takes.
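To make the format concrete, here is a minimal sketch of what one round of such a game might look like. The scenario, requirements, and names below are invented for illustration, not taken from the post:

```python
# A hypothetical round of a software TDG.

# Round 1 requirement: total a shopping cart with a flat 10% sales tax.
def cart_total(prices: list[float]) -> float:
    """First pass: hard-codes the tax policy into the arithmetic."""
    return sum(prices) * 1.10

# Round 2 twist (revealed only after round 1 is "done"): groceries are
# tax-exempt, and tax rates now vary by item category. The hard-coded
# 1.10 forces a rewrite; a design that treated the tax policy as data
# would have absorbed the change instead.
TAX_RATES = {"general": 0.10, "grocery": 0.00}  # category -> rate

def cart_total_v2(items: list[tuple[float, str]]) -> float:
    """Each item is a (price, category) pair; policy lives in a table."""
    return sum(price * (1.0 + TAX_RATES[category]) for price, category in items)

if __name__ == "__main__":
    print(cart_total([10.0, 20.0]))                               # ~33.0
    print(cart_total_v2([(10.0, "general"), (20.0, "grocery")]))  # ~31.0
```

As in the military version, the learning happens in the debrief after the twist, when the student explains why their first design did or didn’t survive the change.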
This describes a high-severity operational event (network outage, crash bug, etc.) pretty well. The call leader is organizing teams and devs to focus on different parts of diagnosis and mitigation, and there are tons of unknowns, with both relevant and irrelevant information coming in all the time.
Many companies/teams do dry-runs of such things, but on-the-job training is most effective. The senior people guide the junior people through it as it happens, and after a few repetitions, the junior people (or the ones who display maturity and aptitude) become the leads.
For "regular" software engineering, I'd rather not encourage the idea that short, distinct scenarios are representative of the important parts. Making something that's maintainable and extensible over many years isn't something that can be trained in small bites.
I think this describes many internships or onboarding projects for developers. My general opinion is that, when nobody's shooting at you, it's best to do this on real software, rather than training simulations. The best simulation is reality itself.