I think this is an important issue because it's not obvious that speech needs to be as free as possible; it's at least possible to think of situations where free speech would seemingly be a bad thing. For example, suppose agent A knows that agent B will act in a way that is good or bad according to A's utility function, depending on B's belief in statement S. Then A is motivated to present B with only the information that causes B to place whatever probability on S maximizes A's utility. And if B's goals work against A's goals, A might have an incentive to manipulate B or feed it as much false information as possible.
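A minimal sketch of this dynamic, assuming B updates honestly (by Bayes' rule) on whatever evidence it is shown; all the priors, likelihoods, and payoffs below are made-up illustrations, not from any source:

```python
# Toy model: agent A selectively reveals one piece of evidence to steer
# agent B's belief in statement S. All numbers are hypothetical.

def bayes_update(prior, p_e_given_s, p_e_given_not_s):
    """B's posterior P(S | e), assuming B updates honestly on shown evidence."""
    joint_s = prior * p_e_given_s
    return joint_s / (joint_s + (1 - prior) * p_e_given_not_s)

def a_utility(b_belief_in_s):
    """A's (hypothetical) payoff as a function of B's belief; here A
    simply benefits the more B doubts S."""
    return 1.0 - b_belief_in_s

# Evidence A could reveal, as (P(e | S), P(e | not S)) pairs.
evidence_pool = {
    "supports_s":   (0.9, 0.2),
    "undermines_s": (0.1, 0.7),
    "neutral":      (0.5, 0.5),
}

b_prior = 0.5
# A reveals whichever piece of evidence maximizes A's own utility.
best = max(evidence_pool,
           key=lambda e: a_utility(bayes_update(b_prior, *evidence_pool[e])))
print(best)  # -> "undermines_s": A shows only what pushes B's belief down
```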
More generally, if you know how an agent processes information, you might be motivated to control the flow of information to that agent. This becomes a more viable and worthwhile option the more powerful the agent gets and the more information it has access to (and control over). And I don't think this applies only to superintelligent agents; it would apply just as well to human groups and organizations with varying incentives.
Of course, within a group of agents that share the same goals, the agents should be incentivized to share information accurately and truthfully (although which information gets shared between which agents would presumably still be controlled).
But as society becomes increasingly polarized, with people clustering around heavily opposed ideologies, we should expect to see much more speech blocking and many more attempts to restrict or control access to information in general. The question is: if we value free access to high-quality information, how do we act against this trend?
There are situations where the social-control component of speech is not zero-sum. When you rally your allies to fight an enemy for control of a single resource, yes, that is zero-sum. But when you coordinate with your group to explore a new environment, to play, or to create something new together, the interaction is positive-sum; basically any win/win situation is.
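To make the contrast concrete, here are two toy 2x2 payoff matrices (all action names and numbers invented for illustration):

```python
# Entries are (payoff_A, payoff_B) for each pair of actions.

# Fighting over a single resource: one side's gain is the other's loss,
# so every outcome's total payoff is the same (here, zero).
zero_sum = {
    ("grab", "grab"):   (0, 0),
    ("grab", "yield"):  (1, -1),
    ("yield", "grab"):  (-1, 1),
    ("yield", "yield"): (0, 0),
}

# Jointly exploring a new environment: coordinating creates extra value,
# so the total payoff varies and joint wins are possible.
positive_sum = {
    ("explore", "explore"): (2, 2),
    ("explore", "stay"):    (0, 1),
    ("stay", "explore"):    (1, 0),
    ("stay", "stay"):       (1, 1),
}

def total_surplus(game):
    """Sum of both players' payoffs for each outcome."""
    return {outcome: a + b for outcome, (a, b) in game.items()}

print(total_surplus(zero_sum))      # every outcome sums to 0
print(total_surplus(positive_sum))  # sums range from 1 to 4
```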
I find the metaphor plausible. Let's see if I understand where you're coming from.
I've been looking into predecision processes as a means of figuring out where human decision-making systematically goes wrong. One such process is hypothesis generation. I found an interesting result in this paper: the researchers compared the hypothesis sets generated by individuals, natural groups, and synthetic groups. In this study, a synthetic group's hypothesis set is agglomerated from the hypothesis sets of individuals who never interact socially. They found that natural groups generate more hypotheses than individuals, and that synthetic groups generate more hypotheses than either. It appears that social interaction somehow reduces the number of alternatives a group considers, relative to the sum of what its members would have considered on their own.
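As I understand it, the synthetic-group construction is just the union of the members' individually generated hypothesis sets; a sketch (the member names and hypotheses are invented for illustration):

```python
# Each person generates hypotheses alone, with no social interaction.
individuals = {
    "p1": {"h1", "h2", "h3"},
    "p2": {"h2", "h4"},
    "p3": {"h1", "h5", "h6"},
}

# The synthetic group's hypothesis set is agglomerated as the union.
synthetic_group = set().union(*individuals.values())
print(len(synthetic_group))  # 6 distinct hypotheses from 3 people

# The finding: a natural (interacting) group of these same people would
# typically produce a smaller set than this union.
```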
Now, this could just be biased information search: one person poses a hypothesis aloud, and the alternatives then become less available to the entire group. But information search itself could be mediated by motivational factors. For instance, if I rewrite that as "one high-status person poses a hypothesis aloud...", it becomes a hypothesis about both biased information search and a zero-sum social-control component. It does seem worth noting that biased search is currently a sufficient explanation by itself, so we might prefer it on grounds of parsimony, but at this level of the world model, things often seem to be multiply determined.
Importantly, creating synthetic groups doesn't look like punishing memetic warfare or social control at all; it looks like preventing it altogether. This seems like an intervention that would be difficult to generate if you thought about the problem in the usual way.