To the extent that an agent is predictable, it must:

  • be observable, and
  • have a knowable internal structure

The first implies that the predictor has collected data emitted by the agent.

The second implies that the agent has an internal structure and that the predictor has the capacity to represent that structure.

In general, we can say that people do not have the capacity to explicitly represent other people very well. People are unpredictable to each other. This is what makes us free. When somebody is utterly predictable to us, their rigidity is a sign of weakness or stupidity. They are following a simple algorithm.

We are able to model the internal structure of worms with available computing power.

As we build more and more powerful predictive systems, we can ask: is our internal structure in principle knowable by this powerful machine?

(x-posted to digifesto)

Really? I suppose it depends on what you mean by an agent, but I can know that birds will migrate at certain times of the year while knowing nothing about their insides.

Do you think it is something external to the birds that makes them migrate?

Generalizing from that approach you can sink into the dark swamps of behaviorism.

[This comment is no longer endorsed by its author]

Generalizing from that approach you can sink into the dark swamps of behaviorism.

It's possible to predict the behavior of black boxes without knowing anything about their internal structure.
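
For a minimal sketch of what that can look like (the `black_box` function below is a made-up stand-in agent, not anything from the original discussion; the predictor only ever sees its inputs and outputs):

```python
import numpy as np

def black_box(x):
    """Some agent with hidden internals; the predictor only sees (x, y) pairs."""
    return 2.0 * x + np.sin(x)

# Observe the box's behavior without opening it.
xs = np.linspace(0, 10, 200)
ys = black_box(xs)

# Fit a purely statistical model (polynomial regression) to the observations.
coeffs = np.polyfit(xs, ys, deg=7)
model = np.poly1d(coeffs)

# The fitted model predicts the box's behavior well,
# despite encoding nothing about its internal mechanism.
x_new = 5.5
print(black_box(x_new), model(x_new))
```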

In general, we can say that people do not have the capacity to explicitly represent other people very well. People are unpredictable to each other. This is what makes us free. When somebody is utterly predictable to us, their rigidity is a sign of weakness or stupidity.

That says a lot more about your personal values than about the general human condition. Many people want romantic partners who understand them, and they don't associate this desire with either party being weak or stupid.

We are able to model the internal structure of worms with available computing power.

What do you mean by that sentence? It's trivially true, because you can model anything. You can model cows as spherical bodies. We can model human behavior as well. Neither our models of worms nor our models of humans are perfect. The worm models might be a bit better at predicting worm behavior, but they are still not perfect.

It's possible to predict the behavior of black boxes without knowing anything about their internal structure.

Elaborate?

That says a lot more about your personal values than about the general human condition.

I suppose you are right.

The worm models might be a bit better at predicting worm behavior, but they are still not perfect.

They are significantly closer to being perfect than our models of humans. I think you are right in pointing out that where you draw the line is somewhat arbitrary. But the point is the variation along the continuum.

Internal structure is about causality; prediction just needs a good statistical model of the observations.

prediction just needs a good statistical model of the observations.

Only if you're never going to interact with the agent. Once you do, you're making interventions and a causal model is required.
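
A toy illustration of the difference (all variables here are made up for the sketch: a hidden common cause `z` drives both the agent's visible behavior `x` and the outcome `y`, and `x` itself does nothing):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hidden common cause: z drives both x and y; x has no effect on y.
z = rng.normal(size=n)
x = z + 0.1 * rng.normal(size=n)
y = z + 0.1 * rng.normal(size=n)

# A purely statistical model of the observations: regress y on x.
slope = np.cov(x, y)[0, 1] / np.var(x)
print(f"observational slope ~ {slope:.2f}")       # close to 1

# Now intervene: set x ourselves, independent of z, i.e. do(x).
x_int = rng.normal(size=n)                        # our intervention
y_int = z + 0.1 * rng.normal(size=n)              # y ignores x entirely
slope_int = np.cov(x_int, y_int)[0, 1] / np.var(x_int)
print(f"interventional slope ~ {slope_int:.2f}")  # close to 0
```

The purely statistical model is a fine predictor for as long as you only watch; the moment you intervene and set x yourself, it fails completely. Distinguishing those two cases is what a causal model buys you.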

A causal model of what inputs produce what outputs, but no causal model of how the internals of the system work.

I'm not clear on the distinction you're drawing. Can you give a concrete example? Of course, you could have a causal model of the internals which was wrong but gave the same answers as the right one for the observations you are able to make. But it is not clear how a causal model of what you will see when you interact with the agent could fail to be a causal model, accurate or otherwise, of the agent's internals.

I'm not clear on the distinction you're drawing. Can you give a concrete example?

I don't know how cars work, but almost nothing my car does can surprise me. Only unusual one-off problems require help from somebody who knows the internal structure.

But cars are designed to be usable by laypeople, so this is maybe an unfair example.

You don't know anything about how cars work?

I have a model of what inputs produce what outputs ("pressing on the gas pedal makes the engine go; not changing the oil every few months makes things break"). I do not have a causal model of the internals of the system.

At best I can make understandish-sounding noises about engines, but I could not build or repair one, nor even identify all but the most obvious parts.

The thread that starts this discussion speaks about the importance of modelling internals for predictions.

In drug research, a company usually searches for a molecule that binds some protein that does something in a specific pathway. Even if your clinical trials demonstrate that the drug works and helps with the illness you want to treat, you haven't demonstrated that it works via the pathway you targeted. It might work because of off-target interactions.

This is an example of the sort I described: the model is wrong, but by chance made a right prediction. An incorrect model of internal mechanisms is still a model of internal mechanisms. The possibility of getting lucky is a poor thing on which to base a claim that modelling internal mechanisms is unnecessary.

The possibility of getting lucky is a poor thing on which to base a claim that modelling internal mechanisms is unnecessary.

Given failure rates of >90%, getting a drug through clinical trials is always "getting lucky".

The issue depends on how many successful drugs are successful due to understanding of the pathways and how many are successful because of luck and good empirical measurement of the drugs' effects.

I personally think that medicine would be improved if we rerouted capital currently spent on trying to understand pathways into researching better ways of empirical measurement.

Don't focus on internal knowledge vs. black-box prediction; instead, think about model complexity and how big our constructed model has to be in order to predict correctly.

A human may be its own best model, meaning that perfect prediction requires a model at least as complex as the thing itself. Or the internals may contain a bunch of redundancy and inefficiency, in which case it's possible to create a perfect model of behavior and interaction that's smaller than the human itself.

If we build the predictive model from sufficient observation and black-box techniques, we might be able to build a smaller model that is perfectly representative, or we might not. If we build it solely from internal observation and replication, we're only ever going to get down to the same complexity as the original.

I include hybrid approaches (using internal and external observations to build models that don't operate identically to the original mechanisms) in the first category: that's still black-box thinking, using all available information to model input/output without blindly following the internal structure.
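
A crude sketch of the redundancy case (the "agent" here is a hypothetical lookup table, chosen so that its behavior is perfectly compressible):

```python
# An "agent" whose internals are a big, redundant lookup table,
# but whose behavior admits a far smaller perfect model.

# The agent's internals: 10,000 stored stimulus -> response entries.
internals = {i: (3 * i + 7) % 10 for i in range(10_000)}

def agent(stimulus):
    return internals[stimulus]

# Black-box model built from observation: one rule, not 10,000 entries.
def model(stimulus):
    return (3 * stimulus + 7) % 10

# The small model matches the agent's behavior exactly.
assert all(agent(s) == model(s) for s in range(10_000))
print("perfect prediction with a model far smaller than the internals")
```

If the table's entries had instead been incompressible random noise, no model smaller than the table itself could predict it perfectly, which is the "a human may be its own best model" case.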

This seems correct to me. Thank you.

black-box the shit out of it. any "agent" is a black box. we don't need to know the internals to predict its actions. indeed, we don't know how the internals of a worm actually work.

Are we measuring ourselves up against the power of a machine or measuring machines up against the power of ourselves...?

Sometimes a black-box approach works, sometimes not. Through neuroscience we are learning many things about how the mind works, and about the varieties of human minds, that the old black-box behaviorist approach never came close to. In the stock market, technical investors (vs. the Warren Buffett types) are the black-boxers. Sometimes they are going along very nicely when they encounter a big exception to what they think they know, and their strategy crashes. Ideally we'd like to know the structure of a thing, but black-box analysis can play a big role when the thing is for some reason opaque.