Wes_W comments on prediction and capacity to represent - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I'm not clear on the distinction you're drawing. Can you give a concrete example? Of course, you could have a causal model of the internals which was wrong but gave the same answers as the right one, for the observations you are able to make. But it is not clear how a causal model of what you will see when you interact with the agent could fail to be a causal model, accurate or otherwise, of the agent's internals.
I don't know how cars work, but almost nothing my car does can surprise me. Only unusual one-off problems require help from somebody who knows the internal structure.
But cars are designed to be usable by laypeople, so this is maybe an unfair example.
You don't know anything about how cars work?
I have a model of what inputs produce what outputs ("pressing on the gas pedal makes the engine go; not changing the oil every few months makes things break"), but I do not have a causal model of the internals of the system.
At best I can make understandish-sounding noises about engines, but I could not build or repair one, nor even identify all but the most obvious parts.