Very useful - note that in many cases, this is intentional (or semi-intentional). The proposer of the change is labeling it an experiment, but doesn't actually have any intention to learn anything or to roll back the change under any foreseeable circumstances.
Your point is valid, but it's a mistake to think it can be fixed by trying to formalize the "experiment". In many cases, when "experiment" is just a euphemism for "implement without discussion", it'll be more effective to just disagree with the policy (if you do) than to object to the "experimental setup".
Management at a company decides that it's going to "experiment with"...
Yeah, in this case you've already lost, because the managers will be the ones doing the evaluation, so you can bet that if they want to do X, they will declare X a success regardless of anything that happened in reality.
To make it more legit, they may even give everyone a questionnaire like "Did the open space increase your productivity?" Note that the only options are "increased" and "not increased"; there is no "decreased", so you can only be neutral or positive about the whole thing. Furthermore, if all the managers choose "increased", that alone is enough to conclude that "50% of employees answered that their productivity increased".
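For illustration only, here is a minimal sketch (in Python, with entirely made-up numbers; the groups and counts are hypothetical, not taken from the comment above) of how a questionnaire that offers only "increased" and "not increased" can produce a rosy headline even when most respondents were worse off:

```python
# Hypothetical survey results: managers endorse their own change, while most
# other employees actually lost productivity.
true_effects = {
    "managers": {"increased": 10, "decreased": 0},
    "everyone_else": {"increased": 6, "decreased": 24},
}

# The questionnaire has no "decreased" option, so every "decreased" answer
# gets recorded as merely "not increased".
survey_counts = {"increased": 0, "not increased": 0}
for group, effects in true_effects.items():
    survey_counts["increased"] += effects["increased"]
    survey_counts["not increased"] += effects["decreased"]

total = sum(survey_counts.values())
share_increased = survey_counts["increased"] / total
print(f"Headline: {share_increased:.0%} of employees reported increased productivity")
# -> Headline: 40% of employees reported increased productivity
# Technically true, yet 24 of the 40 respondents were worse off.
```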
Indeed, these aren't controlled experiments at all, but sometimes they are also not policy-sneaking. Sometimes "experimenting with" is just being used in place of "trying out" to frame the policy implementation. At that point, the decision to try has already been made (not to assess whether trying is a good idea; it's already been endorsed as such), and presumably the conditions for going back to the original version are: 1) it leads to obviously bad results on the criteria "management" was looking at to motivate the change in the first place, or 2) it leads to complaints among the underlings.
The degree of skepticism, then, really just depends on your prior for whether the change will be effective, just like anything else. Whether there should have been more robust discussion depends either on the polarity of those priors (imagine a boardroom where someone raises the change and no one really objects vs. one where another person suggests forming an exploratory committee to discuss it further), or on whether you believe more people should have been included in the discussion ("you changed the bull pen without asking any of the bulls?!"). It has little to do with the fact that it was labeled an experiment, since again, it's likely being used as business-speak rather than as a premeditated ploy. I would love to have data on that, though: do people who specifically refer to experimentation when they could just use a simpler word tend to use it innocuously, or in a sneaky way?
"trying out" also has the implication of reversibility, and should get (some of) the same criticism as "experiment" as a policy weasel-word. The degree of skepticism depends on your prior for the effect of the change, and also for the motivation of the proposer, and also of whether there is actually a path to measurement (decision to reverse) and reversal (implementation of reversal).
In the examples given, they seem very likely to be "policy-sneaking".
Similar thoughts. How one organisation defines "experiment" may differ from another, as may how the employees themselves interpret it (business speak vs. weasel word). There are also the factors of company values and culture, which provide the guardrails for what "experimentation" (along with other hefty words such as "productivity": depth of work vs. breadth vs. quality and so on) means to them specifically. Assuming the employees have bought into those values for the most part (which is hopefully why they became employees in the first instance), maybe there's an implied, unique understanding of these terms.
Such broad concepts are often used to frame employee satisfaction surveys, and might appear to policy-sneak, but it's easy to miss seemingly unimportant definitional particulars from multiple angles. That's not to say that sneaking doesn't occur, and values can definitely be lost in translation, especially if management only takes a top-down approach and stakes the goalposts but doesn't elicit, receive, or adequately respond to feedback. The ability to metricise arises from wrangling with the devil in the details, and not every company takes the time to do so.
Yeah, I should point out that not all cases of experiments without evaluation are "sneaking" by any means -- sometimes one might have a well-intentioned idea for a change and just not go about testing it very systematically. However, in some ways the negative consequences can be similar.
This seems true in some cases, but very false in others. In particular:
Your observation seems most true when there's institutional inertia, so that calling something an experiment can be a useful pretext.
I think you are making an important point.
A relevant follow-up question could be: What makes it more (or less) likely that a group or an organisation does make plans to evaluate the results of an experiment?
Some ideas:
Culture. It would be helpful to have (or create) a culture that doesn't see a "failed" experiment as a failure but as an important learning opportunity.
Intellectual humility. Running a true experiment (and not just calling one's plan an "experiment") requires accepting that one has less certainty about how the world works.
A few thoughts on this:
However, as you say in your comment, oftentimes such things are introduced more as an implementation of a belief system about what is "right" than as a curious investigation into what is more effective. It often takes utterly disastrous results before reverting the change becomes likely.
This seems obvious to me, at least in the specific context that you mention (a community-level change). But the post is clear, short, and to the point, so it will surely help some readers.
Military strategy is probably more careful than science in how many contingencies it expects, yet how few of them turn out to be accurate. Legitimately, science is more careful to actually try to do things it thinks have merit, while wars are fought off of conjecture and often just error. I think you can find a great number of amazing examples of both careful planning and completely spurious decision making, often done on purpose to discredit another leader, within war. von Falkenhayn's memoir of the Great War is a wonderful example.
He also brings up an absolutely excellent point, that is often under-considered.
"Whether the solution proved itself in actual life depended, to be sure, as in all things in this imperfect world, primarily upon the men who had to put these principles in practice."
Offtopic: why is the editor so hard to use right now? I don't remember quote blocks being this fiddly.
It appears you only used soft line breaks for every paragraph, making your comment de facto one big paragraph. That will make blockquotes a lot harder. As in most text editors, you get paragraph breaks by pressing Enter, and soft line breaks by pressing Shift-Enter. Sadly, some places on the internet (like IM apps) have conditioned me to often use Shift-Enter in places where it doesn't make sense, because Enter also submits the message, so my guess is you were caught by those habits and were using Shift-Enter in the above?
Sometimes, people propose "experiments" with new norms, policies, etc. that don't have any real means of evaluating whether or not the policy actually succeeded.
This should be viewed with deep skepticism -- it often seems to me that such an "experiment" isn't really an experiment at all, but rather a means of sneaking a policy in by implying that it will be rolled back if it doesn't work, while also making no real provision for evaluating whether it will be successful or not.
In the worst cases, the process of running the "experiment" can involve taking measures that prevent it from ever being meaningfully evaluated!
Here are some examples of the sorts of thing I mean:
Now, I'm not saying that one should have to register a formal plan for evaluation with timelines, metrics, etc. for any new change being made or program you want to try out -- but you should have at least some idea of what it would look like for the experiment to succeed and what it would look like for it to fail, and for changes that are enough of a shakeup, more formal or established metrics might well be justified.