Prediction cannot solve causal problems.
"ML person thinks AI is about what ML people care about. News at 11."
I don't think he said an AI is not a world-optimizer. He's saying "What you can identify in intelligence...", and this is absolutely true. An intelligent optimizer needs a world-model (a predictor) in order to work.
"What you can identify in intelligence is it can predict what is going to happen in the world" made me realize that there's a big conceptual split in the culture between intelligence and action. Intelligence and action aren't the same thing, but the culture almost has them in opposition.
As an outsider I kind of get the impression that there is a bit of looking-under-the-streetlamp syndrome going on here, where world-modelling is assumed to be the most (or only) important feature because that's what we can currently do well. I got the same impression seeing Jeff Hawkins speak at a conference recently.
It is interesting that his view of AI is apparently that of a prediction tool [...] rather than of a world optimizer.
If you can predict well enough, you can pass the Turing test - with a little training data.
This is not very surprising, given his background in handwriting and image recognition.
Could you elaborate on the connections between image recognition / interpretation and prediction? For this reply, it's fine to be only roughly accurate. (In case an inability to be sufficiently rigorous is what prevented you from sketching the connection.)
...naively, I think of intelligence as, say, an ability to identify and solve problems. Is LeCun saying perhaps that this is equal to prediction, or not as important as prediction, or that he's more interested in working on the latter?
I concur. Prediction really is all there is to intelligence.
If a program could predict what I am going to type in here, it would be as intelligent as I am. At least in this domain. It could post instead of me.
But the same goes for every other domain. To predict every action of an intelligent agent is to be as intelligent as that agent is.
I don't see a case where this symmetry breaks down.
EDIT: But this is an old idea. Decades old, nothing very new.
the data comes from the territory, but we assume the map is correct.
You don't need any assumptions about the model to get observational data. Well, you need some to recognize what you are looking at, but you certainly don't need to assume the correctness of a causal model.
no longer purely a prediction model as everyone in the ML field understands it
We may be having some terminology problems. Normally I call a "prediction model" anything that outputs testable forecasts about the future. Causal models are a subset of prediction models.

Within the context of this thread I understand "prediction model" as a model which outputs forecasts and which does not depend on simulating the mechanics of the underlying process. It seems you're thinking of "pure prediction models" as something akin to "technical" models in finance which look at price history, only at price history, and nothing but the price history. So a "pure prediction model" would be to you something like a neural network into which you dump a lot of more or less raw data, but where you do not tweak the NN structure to reflect your understanding of how the underlying process works.
Yes, I would agree that a prediction model cannot talk about counterfactuals. However I would not agree that a prediction model can't successfully forecast on the basis of inputs it never saw before.
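The gap between prediction and counterfactuals can be made concrete with a toy simulation (entirely hypothetical; the variable names and probabilities are mine). A hidden confounder U drives both an action X and an outcome Y, while X has no effect on Y at all. A pure prediction model trained on observational data learns P(Y|X=1), but the causal question asks P(Y|do(X=1)), where the intervention cuts the U→X link, and the two answers differ:

```python
import random

random.seed(0)

def sample(intervene_x=None):
    # Hidden confounder U drives both X and Y.
    u = random.random() < 0.5
    # Normally X just copies U; do(X=x) severs the U -> X link.
    x = u if intervene_x is None else intervene_x
    # Y depends only on U, never on X.
    y = random.random() < (0.9 if u else 0.1)
    return x, y

N = 100_000
# Observational: condition on X=1, as a pure predictor would.
obs = [y for x, y in (sample() for _ in range(N)) if x]
# Interventional: force X=1 regardless of U.
do = [y for x, y in (sample(intervene_x=True) for _ in range(N))]

print(f"P(Y=1 | X=1)     ~ {sum(obs) / len(obs):.2f}")  # near 0.90
print(f"P(Y=1 | do(X=1)) ~ {sum(do) / len(do):.2f}")    # near 0.50
```

Conditioning on X=1 selects the U=1 cases, so the predictor reports roughly 0.9; intervening leaves U random, so the true effect of taking the action is roughly 0.5. The predictor's forecast is perfectly accurate on observational data and still wrong as a guide to action.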
So are you willing to take me up on my offer of solving causal problems with a prediction algorithm?
Good prediction algorithms are domain-specific. I am not defending an assertion that you can get some kind of a Universal Problem Solver out of ML techniques.
Yann LeCun, now of Facebook, was interviewed by The Register. It is interesting that his view of AI is apparently that of a prediction tool:
"In some ways you could say intelligence is all about prediction," he explained. "What you can identify in intelligence is it can predict what is going to happen in the world with more accuracy and more time horizon than others."
rather than of a world optimizer. This is not very surprising, given his background in handwriting and image recognition. This "AI as intelligence augmentation" view appears to be prevalent among AI researchers in general.