We have at least one person who was part of Caltech culture for a while, so that is probably where we got it.
I am happy to have helped cause this post to exist.
Our first iteration actually had questions extremely similar to yours; I believe we had a reworded version of each of them.
I don't have a good idea of what made them work, because I only started participating after they'd started to decline. There was a lot of socializing that dragged down discussions, but even when we limited socializing I didn't notice any improvement.
Personally, I'm skeptical of the entire endeavour. People claimed lots of positive effects, but as soon as I tried to measure them, the effects disappeared. I kind of suspect that people notice lots of possibilities during weekly review and feel like they've accomplished something, but then don't actually follow through.
However, I think it's pretty plausible that a useful weekly review structure exists, so I plan to keep testing different structures.
I don't know whether you have any data on your weekly reviews (maybe how often you change a behavior as a result?), but if you do, I'd be very interested.
Habitica is an app for tracking habits and tasks, designed to mimic an RPG, including the part where players form parties.
(If you want the full list, I'd need an email address so I can share the relevant spreadsheet.)
I might have represented these as more social than I intended? 3/8 (sprints, worksheets, and Sphex) are not inherently group-focused. We tend to do them in groups, but I've tried all three on my own and seen approximately the same results.
Still, now that you point it out, I think we (meaning the Boston community) are biasing ourselves towards generating group interventions. Will try a few rounds of generating purely individual interventions/techniques.
I think it's better to think about it on the question level:
(Worth noting that we only recorded data towards the end of Sphex's life, because that was when I started organizing it, and I care a lot about gathering data.)
I've personally replaced Sphex with a set of check-ins spaced to occur at a reasonable rate ("When was the last time you made a git commit?", "When was the last time you read a machine learning paper?", etc.). The general idea of "create check-ins" might work for other people, but the questions themselves are probably too specific to me to be useful.
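For concreteness, here's a rough sketch of the spacing idea in Python; the questions, intervals, and the `due_check_ins` helper are all illustrative, not part of any tool I actually use. You could just as easily do the same thing in a spreadsheet or a reminder app.

```python
from datetime import date, timedelta

# Illustrative check-ins with rough spacing; the questions and intervals
# here are examples, not a prescription.
CHECK_INS = {
    "When was the last time you made a git commit?": timedelta(days=7),
    "When was the last time you read a machine learning paper?": timedelta(days=14),
}

# Last date each question was answered satisfactorily (placeholder values).
last_checked = {question: date(2020, 1, 1) for question in CHECK_INS}

def due_check_ins(today=None):
    """Return the questions whose spacing interval has elapsed."""
    today = today or date.today()
    return [question for question, interval in CHECK_INS.items()
            if today - last_checked[question] >= interval]

if __name__ == "__main__":
    for question in due_check_ins():
        print(question)
```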
Maybe worth saying: I think of all of these as instrumental rationality outputs. They aren't meant to make people more rational, they're ideas a rational person might come up with in order to accomplish other goals.
From a software engineering perspective, your first founder is completely correct. The second you have something that runs, you want to show it to users, because they're part of your feedback loop. You want to see how someone who knows nothing about your system will interact with it, whether they'll be able to use the interface, what new bugs they'll turn up with their fiddling, etc.
I wonder if the market-research use of an MVP and the software-engineering use have been conflated because they're both "the first time you show the product to users".
Joke's on you, I can just edit the CSS.