I'm not clear on the distinction you're drawing. Can you give a concrete example?
I don't know how cars work, but almost nothing my car does can surprise me. Only unusual one-off problems require help from somebody who knows the internal structure.
But cars are designed to be usable by laypeople, so this is maybe an unfair example.
Don't focus on internal knowledge vs. black-box prediction; instead, think about model complexity, and how large our constructed model has to be in order to predict correctly.
A human may be its own best model, meaning that perfect prediction requires a model at least as complex as the thing itself. Or the internals may contain a bunch of redundancy and inefficiency, in which case it's possible to create a perfect model of behavior and interaction that's smaller than the human itself.
If we build the predictive model from sufficient observation and black-box techniques, we might be able to build a smaller model that is perfectly representative, or we might not. If we build it solely from internal observation and replication, we're only ever going to get down to the same complexity as the original.
I include hybrid approaches (using internal and external observations to build models that don't operate identically to the original mechanisms) in the first category: that's still black-box thinking, in that you use all available information to model input/output behavior without blindly replicating the internal structure.
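To make the compression point concrete, here's a toy sketch (my own construction, not from the discussion above): a "system" whose internals are redundant, and a smaller model fitted purely from observed input/output pairs that nonetheless predicts it perfectly.

```python
def redundant_system(x: int) -> int:
    """A system with wasteful internals: many steps that mostly cancel out."""
    acc = 0
    for _ in range(3):
        acc += x          # acc = 3x after the loop
    acc += 10
    acc -= 9              # net effect of all internals: 3x + 1
    return acc

# Black-box approach: observe a few (input, output) pairs...
samples = [(x, redundant_system(x)) for x in (0, 1)]

# ...and fit the smallest hypothesis class that explains them (here, affine).
(x0, y0), (x1, y1) = samples
slope = (y1 - y0) // (x1 - x0)
intercept = y0 - slope * x0

def model(x: int) -> int:
    """Compressed model: far simpler than the system's internals."""
    return slope * x + intercept

# The small model matches the system on inputs it never observed.
assert all(model(x) == redundant_system(x) for x in range(-100, 100))
```

The compression only works because the internals contained redundancy; if the system's behavior were already maximally complex, no smaller perfect predictor would exist, which is the dichotomy described above.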
This seems correct to me. Thank you.