To the extent that an agent is predictable, it must:
- be observable, and
- have a knowable internal structure.
The first implies that the predictor has collected data emitted by the agent.
The second implies both that the agent has an internal structure and that the predictor has the capacity to represent that structure.
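A minimal sketch of these two conditions (a toy illustration, all names hypothetical): an agent whose behaviour is driven by a hidden state, and a predictor that both collects the agent's emitted data and carries its own copy of the agent's state machine.

```python
# Toy agent with hidden internal structure: a two-state machine.
class Agent:
    def __init__(self):
        self.mood = "calm"          # hidden internal state

    def act(self, stimulus):
        # internal structure: state transitions drive behaviour
        if stimulus == "provoke":
            self.mood = "angry"
        return "snap" if self.mood == "angry" else "chat"


# Condition 1 (observability): the predictor only sees emitted data.
agent = Agent()
observations = [(s, agent.act(s)) for s in ["greet", "provoke", "greet"]]

# Condition 2 (representable internal structure): the predictor keeps
# its own copy of the agent's state machine and runs it forward.
class StructuralPredictor:
    def __init__(self):
        self.mood = "calm"          # predictor's representation of the hidden state

    def predict(self, stimulus):
        if stimulus == "provoke":
            self.mood = "angry"
        return "snap" if self.mood == "angry" else "chat"


predictor = StructuralPredictor()
print([predictor.predict(s) for s, _ in observations])  # matches the observed acts
```

A predictor that only tabulated (stimulus, act) pairs would fail here: the same stimulus produces different acts depending on the hidden mood, so prediction requires some representation of that internal state.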
In general, people do not have the capacity to explicitly represent other people very well. People are unpredictable to each other. This is what makes us free. When somebody is utterly predictable to us, their rigidity is a sign of weakness or stupidity: they are following a simple algorithm.
We are able to model the internal structure of worms with available computing power.
As we build more and more powerful predictive systems, we can ask: is our internal structure, in principle, knowable by such a machine?
(x-posted to digifesto)
I'm not clear on the distinction you're drawing. Can you give a concrete example? Of course, you could have a causal model of the internals which was wrong but gave the same answers as the right one, for the observations you are able to make. But it is not clear how a causal model of what you will see when you interact with the agent could fail to be a causal model, accurate or otherwise, of the agent's internals.
I don't know how cars work, but almost nothing my car does can surprise me. Only unusual one-off problems require help from somebody who knows the internal structure.
But cars are designed to be usable by laypeople, so this is maybe an unfair example.