Just a reminder that almost all real-world interactions have elements of adversarial, cooperative, and random-distribution games. If you over-simplify to focus on just one simple analysis, you're going to end up with cider in your ear.
I don't disagree entirely: there are elements of each in most situations. But I do think there are many cases where either cooperative or adversarial effects dominate, which is most of what you care about when making rough predictions.
One can probably model this effect in a control theory setting as a type of linear or non-linear feedback, including sign and delay. Come to think of it, there is probably research already out there.
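As a rough illustration of that idea, here is a toy discrete-time feedback model (my own sketch, not anything from the post): the world's response to a change arrives after a delay and either reinforces the effect (collaborative, positive sign) or counteracts it (adversarial, negative sign). The gain and delay values are arbitrary assumptions.

```python
def simulate(sign, gain=0.3, delay=2, steps=20, initial_effect=1.0):
    """Effect size over time under delayed feedback.

    sign=+1 models a collaborative context (others build on the change),
    sign=-1 an adversarial one (others work to counter it).
    """
    effect = [initial_effect]
    for t in range(1, steps):
        # The feedback acts on the effect as it was `delay` steps ago.
        past = effect[t - delay] if t >= delay else 0.0
        effect.append(effect[-1] + sign * gain * past)
    return effect

collaborative = simulate(sign=+1)  # compounds: each step builds on the last
adversarial = simulate(sign=-1)    # erodes: the response cancels the gain
```

Even this crude model shows the qualitative behaviors described below: the collaborative trajectory grows well past the initial effect, while the adversarial one decays toward zero (with some oscillation, a standard artifact of delayed negative feedback).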
I wonder if adding the concept of network effects to collaborative, and that of closed/private/proprietary to adversarial, helps in imagining the reaction to the change.
You can often make pretty clear predictions about what sort of effect something would have in the world as it exists today, but when you start to take into account how the world might change in response it gets very difficult. One tool here is thinking about whether you mostly expect collaborative or adversarial effects.
In a collaborative context other people respond to your change by building on it, and it typically has a larger effect than you'd naively expect. For example, say you create an icon to communicate an idea in your program, other people start using it, and it becomes the standard representation for the concept. While an A/B test might have been only mildly positive, or even negative, after the world adapted to the new icon it would have become very clearly the right symbol to use.
In an adversarial context other people respond to your change by countering it, and it typically has a smaller effect. You figure out a way to detect bot traffic, but the bot operators are very motivated to shift their behavior to undo your work.
Some contexts are also neutral, where both collaborative and adversarial effects are minimal or nonexistent. A baby toy design doesn't get better or worse with the generations.
Some examples:
Diseases are adversarial: bacteria evolve antibiotic resistance, new variants evade immunity. Even if something looks good in initial testing, that advantage will erode over time. An interesting exception here is cancer, which (almost always) has to evolve anew in each patient, and so cancer treatments don't become less effective over time (in new patients).
Adopting standards is collaborative: each computer with USB increased demand for devices that spoke USB and vice-versa. Someone thinking "let's not bother with USB, it's not that widely supported yet" would have been ignoring collaborative effects.
Same with building on a platform: the more iPhones sold the more people wanted apps, and the more apps there were the more people wanted iPhones.
Interaction with nature is generally adversarial: cockatoos figure out how to get into trash, weeds evolve herbicide resistance.
A new kind of ad has both kinds of effects: adversarial in that people notice novel ads more at first but will learn to tune them out, and collaborative in that advertisers haven't yet learned how to make the most of the format.
This sort of effect is especially important to think through when considering rolling something out based on the results of an experiment. You've almost certainly not run the experiment long enough to see how the world will adjust, but you can use these patterns to predict whether the long-term results will be stronger or weaker than you've measured.