Oh geez.
You are surprised? But obviously, any reply giving the sort of examples the original post asked for will, by its nature, raise contention.
Richard, responses of that form stopped because it takes a long time to explain.
That may have been your reason, but that does not imply that it's everyone else's reason -- any more than your distaste for alcohol is a reason for you to disbelieve other people's enjoyment of it.
All along, I suspect, people were using the "mutual information" criterion to determine whether something "has a model" of something else
This is flatly at variance with the uses of "model" I listed, drawn from OB/LW, and the way the word is defined in every book on model-based control. The only time people try to redefine "X is a model of Y" to mean "X has mutual information with Y" is when someone points out that systems of the sort that I described do not contain models. For some reason, people need to believe that those systems work by means of models, despite the clear lack of them, and immediately redefine the word as necessary to be able to say that. But having redefined the word, they are saying something different.
"X has mutual information with Y" is not a technical explanation of an informal concept labelled "model". It is a completely different concept. The concept of a model, as I and everyone else outside these threads uses it, is very clear, unambiguous, and far narrower than mere mutual information. Vladimir Nesov objected to the word "correspondence" as vague; but if you want a technical elaboration of that, look in the direction of "isomorphism", not "mutual information".
And I don't think this is just an issue of arguing definitions. There's a broader issue about whether you can helpfully carve conceptspace in a way that captures Richard's definition of "model" but excludes things that "merely" have mutual information.
Well, you have my answer to that. Conceptspace is carved along one line called "model", and along another line called "mutual information". Both lines matter, both have their uses, and they are in very different places. You want to erase the former or move it to coincide with the latter, but I have seen no argument for doing this.
If you want to take this on, it is no small mountain that I would have to see climbed. What it would take would be a radical reconstruction of control theory based on the concept of mutual information which eschews the word "model" altogether (because it's taken, and there is already a perfectly good term for mutual information: "mutual information"), and which can be used directly for the design of control systems that are provably as good or better than those designed by existing techniques, both model-based and non-model-based. It should explain the real reason why those more primitive methods of design work (or don't work, when they don't), and provide better ways of making better designs.
Something like what Jaynes did for statistics. This is at least the level of isshokenmei -- all-out effort. (ETA: no, one level higher: "extraordinary effort".)
I do not know if this is possible. Certainly, it has not been done. When I've looked for information-theoretic or Bayesian analyses of control, I have found nothing substantial. Of course, I'm aware of the use of Bayesian techniques within control theory, such as Kalman filters; but that is Bayes inside control theory, and I am asking for the reverse inclusion: control theory derived from information-theoretic foundations. That is the substantial issue here.
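For readers who haven't met one, the Kalman filter mentioned above is the standard example of Bayesian machinery used inside control theory: a recursive Gaussian posterior over a hidden state. Here is a minimal scalar version (all constants invented for illustration) of the kind of thing I mean by Bayes appearing within control theory, as opposed to control theory being rebuilt on it:

```python
import random

random.seed(1)
q, r = 0.01, 0.25          # process and measurement noise variances (made up)
x_true, x_est, p = 0.0, 0.0, 1.0   # true state, estimate, posterior variance

for _ in range(200):
    x_true += random.gauss(0, q ** 0.5)     # true state drifts (random walk)
    z = x_true + random.gauss(0, r ** 0.5)  # noisy measurement
    p += q                                  # predict: prior variance grows
    k = p / (p + r)                         # Kalman gain
    x_est += k * (z - x_est)                # update: shift toward measurement
    p *= (1 - k)                            # posterior variance shrinks

# Posterior variance settles well below the measurement noise r,
# and the estimate tracks the drifting true state.
print(round(p, 3), round(abs(x_est - x_true), 3))
```

Each loop iteration is exactly a Bayesian update under Gaussian assumptions; the gain k is just the weight the posterior gives to the new evidence.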
All along, I suspect, people were using the "mutual information" criterion to determine whether something "has a model" of something else
This is flatly at variance with the uses of "model" I listed, drawn from OB/LW, and the way the word is defined in every book on model-based control.
No, you just asserted that people were using "model" in your sense in some posts you cited; there was nothing clear in any of the examples that implied they meant it in your sense rather than mine. And you didn't quote from any b...
This article is a deliberate meta-troll. To be successful I need your trolling cooperation. Now hear me out.
In The Strangest Thing An AI Could Tell You, Eliezer talks about anosognosics, who have one of their arms paralyzed and, most interestingly, are in absolute denial of this - in spite of overwhelming evidence that the arm is paralyzed, they just keep producing new rationalizations proving it isn't.
Doesn't that sound like someone else we know? Yes, religious people! In spite of heaps of empirical evidence against the existence of their particular flavour of the supernatural, the internal inconsistency of their beliefs, and perfectly plausible alternative explanations being well known, something between 90% and 98% of humans believe in a supernatural world, and are in a state of absolute denial not too dissimilar to that of the anosognosics. Perhaps billions of people throughout history have even been willing to die for their absurd beliefs.
We are mostly atheists here - we happen not to share this particular delusion. But please consider the outside view for a moment: how likely is it that, unlike almost everyone else, we have no such delusions of our own - beliefs we hold in absolute denial of the truth, in spite of mounting heaps of evidence?
If a delusion is of the kind that all of us share, we won't be able to find it without building an AI. But we might well have some group-specific ones - that's not too unlikely, given that we're a small and self-selected group.
What I want you to do is try to trigger the absolute denial macro in your fellow rationalists. Is there anything that you consider proven beyond any possibility of doubt, by both empirical evidence and pure logic, and yet saying it triggers an automatic stream of rationalizations in other people? Yes, I am pretty much asking you to troll, but it's a good kind of trolling, and I cannot think of any other way to find our delusions.