# LINK: AI Researcher Yann LeCun on AI function

Yann LeCun, now of Facebook, was interviewed by The Register. It is interesting that his view of AI is apparently that of a prediction tool:

"In some ways you could say intelligence is all about prediction," he explained. "What you can identify in intelligence is it can predict what is going to happen in the world with more accuracy and more time horizon than others."

rather than of a world optimizer. This is not very surprising, given his background in handwriting and image recognition. This "AI as intelligence augmentation" view appears to be prevalent among AI researchers in general.

## Comments (80)

*11 points [-]

Prediction cannot solve causal problems.

"ML person thinks AI is about what ML people care about. News at 11."

Ilya, I don't think it is very fair for you to bludgeon people with terminology / appeals to authority (as you do later in a couple of the sub-threads to this comment) especially given that causality is a somewhat niche subfield of machine learning. I.e. I think many people in machine learning would disagree with the implicit assumptions in the claim "probabilistic models cannot capture causal information". I realize that this is true by definition under the definitions preferred by causality researchers, but the assumption here seems to be that it's more natural to make causality an ontologically fundamental aspect of the model, whereas it's far from clear to me that this is the most natural thing to do (i.e. you can imagine learning about causality as a feature of the environment). In essence, you are asserting that "do" is an ontologically fundamental notion, but I personally think of it as a notion that just happens to be important enough to many of the prediction tasks we care about that we hard-code it as a feature of the model, and supply the causal information by hand. I suspect the people you argue with below have similar intuitions but lack the terminology to express them to your satisfaction.

I'll freely admit that I'm not an expert on causality in particular, so perhaps some of what I say above is off-base. But if I'm also below the bar for respectful discourse then your target audience is small indeed.

*3 points [-]

[ Upvoted. ]

If anyone felt I was uncivil to them in any subthread, I hereby apologize here.

I am not sure causality is a subfield of ML in the sense that I don't think many ML people care about causality. I think causal inference is a subfield of stats (lots of talks with the word "causal" at this year's JSM). I think it's weird that stats and ML are different fields, but that's a separate discussion.

I think it is possible to formalize causality without talking about interventions as Pearl et al. think of them; for example, people in reinforcement learning do this. But if you start to worry about e.g. time-varying confounders, and you are not using interventions, you will either get stuff wrong, or have to reinvent interventions again. Which would be silly -- so just learn about the Neyman/Rubin model and graphs. It's the formalism that handles all the "gotchas" correctly. (In fact, until interventionists came along, people didn't even have the math to realize that time-varying confounders are a "gotcha" that needs special handling!)

By the way, the only reason I am harping on time-varying confounders is because it is a historically important case that I can explain with a 4 node example. There are lots of other, more complicated "gotchas," of course.

Interventions seem to pop up/get reinvented in seemingly weird places, like the pi constant:

http://infostructuralist.wordpress.com/2010/09/23/directed-stochastic-kernels-and-causal-interventions/

In channels with feedback (thus causality arises!)

http://www.adaptiveagents.org/bayesian_control_rule

http://en.wikipedia.org/wiki/Thompson_sampling

In multi-armed bandit problems (which are related to longitudinal studies in causal inference).
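For concreteness, Thompson sampling can be sketched in a few lines. The two arms, their payout probabilities, and the round count below are made up for illustration:

```python
import random

def thompson_sampling(true_probs, rounds, seed=0):
    """Minimal Thompson sampling for Bernoulli bandits.

    Each arm keeps a Beta(successes+1, failures+1) posterior; on every
    round we sample a plausible payout rate from each posterior, pull
    the arm whose sample is largest, and update that arm's counts.
    """
    rng = random.Random(seed)
    successes = [0] * len(true_probs)
    failures = [0] * len(true_probs)
    pulls = [0] * len(true_probs)
    for _ in range(rounds):
        samples = [rng.betavariate(s + 1, f + 1)
                   for s, f in zip(successes, failures)]
        arm = samples.index(max(samples))
        reward = 1 if rng.random() < true_probs[arm] else 0
        successes[arm] += reward
        failures[arm] += 1 - reward
        pulls[arm] += 1
    return pulls

# With a clear gap between arms, play concentrates on the better arm.
pulls = thompson_sampling([0.2, 0.8], rounds=2000)
```

The connection to causal inference is that each pull is an intervention: the algorithm only ever sees outcomes under do(pull arm), never a purely observational record.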

http://en.wikipedia.org/wiki/Kaplan%E2%80%93Meier_estimator

http://missingdata.lshtm.ac.uk/index.php?option=com_content&view=article&id=76:missing-at-random-mar&catid=40:missingness-mechanisms&Itemid=96

In handling missing data (can view "missingness" as a causal property). Note the phrasing in the second link: "given the observed data, the missingness mechanism does not depend on the unobserved data." This is precisely the "no unobserved confounders" assumption in causal inference. Not surprisingly, the correction is the same as in causal inference.

Also in figuring out what the dimension of a statistical hidden variable DAG model is. For example, if A, B, C, D are binary, and U, W are unrestricted, then the dimension of the model

{ p(a,b,c,d) = \sum_{u,w} p(a,b,c,d,u,w) | p(a,b,c,d,u,w) factorizes wrt A -> B -> C -> D, A <- U -> C, B <- W -> D }

is 13, not 15, which is weird, but there is an intervention-inspired explanation for why.

I don't think you can get something for nothing. You will need causal assumptions somewhere.

*2 points [-]Thanks Ilya, that was a lot of useful context and I wasn't aware that causality was more in stats than ML. For the record, I think that causality is super-interesting and cool, I hope that I didn't sound too negative by calling it "niche" (I would have described e.g. Bayesian nonparametrics, which I used to do research in, the same way, although perhaps it's unfair to lump in causality with nonparametric Bayes, since the former has a much more distinguished history).

I agree with pretty much everything you say above, although I'm still confused about "you will need causal assumptions somewhere". If I could somehow actually do inference under the Solomonoff prior, do you think that some notion of causality would not pop out? I'd understand if you didn't want to take the time to explain it to me; I've had this conversation with 2 other causality people already and am still not quite sure I understand what is meant by "you need causal assumptions to get causal inferences". (Note I already agree that this is true in the context of graphical models, i.e. you can't distinguish between X->Y and X<-Y without do(X) or some similar information.)

*2 points [-]

Graphical models are only a "thing" because our brain dedicates lots of processing to vision, so, for instance, we immediately understand complicated conditional independence statements if expressed in the visual form of d-separation. In some sense, graphs in the context of graphical models do not really add any extra information mathematically that wasn't already encoded even without graphs.

Given this, I am not sure there really is a context for graphical models separate from the context of "variables and their relationships". What you are saying above is that we seem to need "something extra" to be able to tell the direction of causality in a two variable system. (For example, in an additive noise model you can do this: http://machinelearning.wustl.edu/mlpapers/paper_files/ShimizuHHK06.pdf)
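The additive-noise idea from that paper can be sketched roughly as follows: with a linear relation and non-Gaussian noise, the regression residual is independent of the input only in the true causal direction. The data, coefficients, and the crude dependence score below (correlation of squares, standing in for a proper independence test) are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# True model: X -> Y, linear with non-Gaussian (uniform) noise.
x = rng.uniform(-1.0, 1.0, n)
y = 2.0 * x + rng.uniform(-0.5, 0.5, n)

def dependence_after_regression(a, b):
    """Regress b on a by OLS and return a crude dependence score
    between the input and the residual. OLS forces corr(a, resid)
    to zero by construction, so we look at second moments instead;
    near-zero means 'independent-looking'."""
    slope, intercept = np.polyfit(a, b, 1)
    resid = b - (slope * a + intercept)
    ac = a - a.mean()
    rc = resid - resid.mean()
    return abs(np.corrcoef(ac**2, rc**2)[0, 1])

forward = dependence_after_regression(x, y)   # fit Y on X (true direction)
backward = dependence_after_regression(y, x)  # fit X on Y (wrong direction)
```

In the forward direction the residual is just the noise, which is independent of X, so the score is near zero; in the backward direction the residual is structurally tied to Y and the score is visibly larger. A real implementation would use a proper independence test (e.g. HSIC) rather than this toy score.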

I think the "no causes in -- no causes out" principle is more general than that, though. For example, if we had a three variable case, with variables A, B, C, where A is marginally independent of B, but no other independences hold, then the only faithful graphical explanation for this model is:

A -> C <- B

It seems that, unlike the previous case, here there is no causal ambiguity -- A points to C, and B points to C. However, since the only information you inserted into the procedure which gave you this graph is the information about conditional independences, all you are getting out is a graphical description of a conditional independence model (that is, a Bayesian network, or a statistical DAG model). In particular, the absence of arrows isn't telling you about absent causal relationships (that is, whether A would change if I intervene on C), but about absent statistical relationships (that is, whether A is independent of B). The statistical interpretation of the above graph is that it corresponds to a set of densities:

{ p(A,B,C) | A is independent of B }

The same graph can also correspond to a causal model, where we are explicitly talking about interventions, that is:

{ p(A,B,C,C(a,b),B(a)) | C(a,b) is independent of B(a) is independent of A, p(B(a)) = p(B) }

where C(a,b) is just stats notation for do(.), that is p(C(a,b)) = p(C | do(a,b)).

This is a different object from before, and the interpretation of arrows is different. That is, the absence of an arrow from A to B means that intervening on A does not affect B, etc. This causal model also induces an independence model on the same graph, where the interpretation of arrows changes back to the statistical interpretation. However, we could imagine a very different causal model on three variables that will also induce the same independence model where A is marginally independent of B. For example, maybe the set of all densities where the real direction of causality is A -> C -> B, but somehow the probabilities involved happened to line up in such a way that A is marginally independent of B. In other words, the mapping from causal to statistical models is many to one.

Given this view, it seems pretty clear that going from independences to causal models (even via a very complicated procedure) involves making some sort of assumption that makes the mapping one to one. Maybe the prior in Solomonoff induction gives this to you, but my intuitions about what non-computable procedures will do are fairly poor.

It sort of seems like Solomonoff induction operates at a (very low) level of abstraction where interventionist causality isn't really necessary (because we just figure out what the observable environment as a whole -- including action-capable agents, etc. -- will do), and thus isn't explicitly represented. This is similar to how Blockhead (http://en.wikipedia.org/wiki/Blockhead_(computer_system%29) does not need an explicit internal model of the other participant in the conversation.

I think Solomonoff induction is sort of a boring subject, if one is interested in induction, in the same sense that Blockhead is boring if one is interested in passing the Turing test, and particle physics is boring if one is interested in biology.

Agreed. And search is not the same problem as prediction; you can have a big search problem even when evaluating/predicting any single point is straightforward.

They are not the same problem but they are highly related:

If you have a very good heuristic, then search is trivial, and learning good heuristics from data is a prediction problem.

On the other hand, prediction problems such as structured prediction (the stuff LeCun does) entail search, and moreover most machine learning algorithms also require some kind of search in the training phase.

It is when what you are predicting is the results of a search.

Prediction covers searching.

What counts as a causal problem?

A sufficiently good predictor might be able to answer questions of the form "if I do X, what will happen thereafter?" and "if I do Y, what will happen thereafter?" even though what-will-happen-thereafter may be partly caused by doing X or Y.

Is your point that (to take a famous example with which I'm sure you're already very familiar) in a world where the correlation between smoking and lung cancer goes via a genetic feature that makes both happen, if you ask the machine that question it may in effect say "he chose to smoke, therefore he has that genetic quirk, therefore he will get lung cancer"? Surely any prediction device that would be called "intelligent" by anyone less gung-ho than, say, Ray Kurzweil would enable you to ask it questions like "suppose I -- with my current genome -- chose to smoke; then what?" and "suppose I -- with my current genome -- chose not to smoke; then what?".

I do agree that there are important questions a pure predictor can't help much with. For instance, the machine may be as good as you please at predicting the outcome of particle physics experiments, but it may not have (or we may not be able to extract from it in comprehensible form) any theory of what's going on to produce those outcomes.

*5 points [-]

We give patients a drug, and some of them die. In fact, those that get the drug die more often than those that do not. Is the drug killing them or helping them? This is a very real problem we are facing right now, and getting it wrong results in people dying.

I certainly hope that anything actually intelligent will be able to answer counterfactual questions of the kind you posed here. However, the standard language of prediction employed in ML is not able to even pose such questions, let alone answer them.

I don't get it. You gave some people the drug and some people you didn't. It seems pretty straightforward to estimate how likely someone is to die if you give them medicine.

Certainly it's straightforward. Here's how one can apply your logic. You gave some people [the ones whose disease has progressed the most] the drug and some people you didn't [because their disease isn't so bad you're willing to risk it]; the % of people dying in the first drugged group is much higher than the % of deaths in the second non-drugged group; therefore, this drug is poison and you're a mass murderer.

See the problem?
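The problem can be made concrete with deterministic, made-up counts: the drug lowers the death rate within every severity level, yet a naive pooled comparison says the treated die more often, simply because doctors treat the sickest patients:

```python
# Made-up counts: doctors give the drug mostly to patients whose
# disease has progressed the most ("severe").
#                      treated  deaths | untreated  deaths
# severe                 800      400  |    200       140
# mild                   200       20  |    800       160

def rate(deaths, n):
    return deaths / n

# Naive comparison pools everyone, ignoring who got treated and why.
naive_treated = rate(400 + 20, 800 + 200)     # 0.42
naive_untreated = rate(140 + 160, 200 + 800)  # 0.30

# Within each severity level the drug lowers the death rate.
severe_treated, severe_untreated = rate(400, 800), rate(140, 200)  # 0.50 vs 0.70
mild_treated, mild_untreated = rate(20, 200), rate(160, 800)       # 0.10 vs 0.20
```

Pooled, the drug looks like poison (42% vs 30% deaths); stratified by severity, it helps both groups. This is Simpson's paradox, driven here by confounded treatment assignment.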

*1 point [-]

Of course people say "but this is silly, obviously we need to condition on health status."

The point is: what if we can't? Or what if there are other causally relevant factors here? In fact, what is "causally relevant" anyway... We need a system! ML people don't think about these questions very hard, generally, because culturally they are more interested in "algorithmic approaches" to prediction problems.

(This is a clarification of gwern's response to the grandparent, not a reply to gwern.)

The problem is the data is biased. The ML algorithm doesn't know whether the bias is a natural part of the data or artificially induced. Garbage In - Garbage Out.

However it can still be done if the algorithm has more information. Maybe some healthy patients ended up getting the medicine anyways and were far more likely to live, or some unhealthy ones didn't and were even more likely to die. Now it's straightforward prediction again: How likely is a patient to live based on their current health and whether or not they take the drug?

You're making up excuses. The data is not 'biased', it just is, nor is it garbage - it's not made up, no one is lying or falsifying data or anything like that. If your theory cannot handle clean data from a real-world problem, that's a big problem (especially if there are more sophisticated alternatives which can handle it).

Biased data is a real thing and this is a great example. No method can solve the problem you've given without additional information.

*4 points [-]

This is not biased data. No one tampered with it. No one preferentially left out some data. There is no Cartesian daemon tampering with you. It's a perfectly ordinary causal problem for which one has all the available data. You can't throw your hands up and disdainfully refuse to solve the problem, proclaiming, 'oh, that's biased'. It may be hard, and the best available solution weak or require strong assumptions, but if that is the case, the correct method should say as much and specify what additional data or interventions would allow stronger conclusions.

What do you call "solving the problem"?

Any method will output some estimates. Some methods will output better estimates, some worse. As people have pointed out, this was an example of a real problem, and yes, real-life data is usually pretty messy. We need methods which can handle messy data and not work just on spherical cows in a vacuum.

Prediction by itself cannot solve causal decision problems (that's why AIXI is not the same as just a Solomonoff predictor), but your example is incorrect. What you're describing is a modelling problem, not a decision problem.

*2 points [-]

Sorry, I am not following you. Decision problems have the form of "What do you do in situation X to maximize a defined utility function?"

It is very easy to transform any causal modeling example into a decision problem. In this case: "here is an observational study where doctors give drugs to some cohort of patients. This is your data. Here's the correct causal graph for this data. Here is a set of new patients from the same cohort. Your utility function rewards you for minimizing patient deaths. Your actions are 'give the drug to everyone in the set' or 'do not give the drug to everyone in the set.' What do you do?"

Predictor algorithms, as understood by the machine learning community, cannot solve this class of problems correctly. These are not abstract problems! They happen all the time, and we need to solve them now, so you can't just say "let's defer solving this until we have a crazy detailed method of simulating every little detail of the way the HIV virus does its thing in these poor people, and the way this drug disrupts this, and the way side effects of the drug happen, etc. etc. etc."
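To illustrate the gap, here is a sketch with made-up probabilities, assuming the causal graph severity -> treatment, severity -> death, treatment -> death with severity observed. A pure predictor answers with p(death | treatment); the backdoor adjustment answers with p(death | do(treatment)) = sum_z p(death | treatment, z) p(z), and the two recommend opposite actions:

```python
# Hypothetical observational quantities (all numbers made up):
p_severe = 0.5
p_death = {                     # p(death | treatment, severity)
    ('drug', 'severe'): 0.50, ('no_drug', 'severe'): 0.70,
    ('drug', 'mild'): 0.10,   ('no_drug', 'mild'): 0.20,
}
p_treated_given = {'severe': 0.8, 'mild': 0.2}   # doctors treat the sick

def observational_risk(action):
    """p(death | treatment=action): conditioning, what a pure predictor gives.
    Weighted by who actually received the treatment in the data."""
    weights = {}
    for z, pz in (('severe', p_severe), ('mild', 1 - p_severe)):
        p_act = p_treated_given[z] if action == 'drug' else 1 - p_treated_given[z]
        weights[z] = pz * p_act
    total = sum(weights.values())
    return sum(p_death[(action, z)] * w / total for z, w in weights.items())

def interventional_risk(action):
    """p(death | do(treatment=action)) via backdoor adjustment over severity."""
    return sum(p_death[(action, z)] * pz
               for z, pz in (('severe', p_severe), ('mild', 1 - p_severe)))
```

With these numbers, observational_risk says the drug group dies more (0.42 vs 0.30), so a predictor minimizing observed death rates would withhold the drug; interventional_risk says giving everyone the drug lowers deaths (0.30 vs 0.45). The utility calculation is only correct if done over the interventional quantities.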

Bayesian network learning and Bayesian network inference can, in principle, solve that problem.

Of course, if your model is wrong, and/or your dataset is degenerate, any approach will give you bad results: garbage in, garbage out.

Bayesian networks are statistical, not causal, models.

I don't know what you mean by "causal model", but Bayesian networks can deal with the type of problems you describe.

A causal model to me is a set of joint distributions defined over potential outcome random variables.

And no, regardless of how often you repeat it, Bayesian networks cannot solve causal problems.

I have no idea what you're talking about.

gjm asked you what a causal problem was, you didn't provide a definition and instead gave an example of a problem which seems clearly solvable by Bayesian methods such as hidden Markov models (for prediction) or partially observable Markov decision processes (for decision).

Huh?

Can you expand on this, with special attention to the difference between the model and the result of a model, and to the differences from plain-vanilla Bayesian models which will also produce joint distributions over outcomes.

*0 points [-]

Yes, but what you are describing is a modelling problem. "Is the drug killing them or helping them?" is not a decision problem, although "Which drug should we give them to save their lives?" is. These are two very different problems, possibly with different answers!

Yes, but in the process it becomes a new problem. Although, you are right that modelling is in some respects an 'easier' problem than making decisions. That's also the reason I wrote my top-level comment, saying that it is true that something you can identify in an AI is the ability to model the world.

I guess my point was that there is a trivial reduction (in the complexity theory sense of the word) here, namely that decision theory is "modeling-complete." In other words, if we had an algorithm for solving a certain class of decision problems correctly, we automatically have an algorithm for correctly handling the corresponding model (otherwise how could we get the decision problem right?)

Prediction cannot solve causal decision problems, but the reason it cannot is that it cannot solve the underlying modeling problem correctly. (If it could, there is nothing more to do, just integrate over the utility).

It seems to me that a sufficiently smart prediction machine could answer questions of this kind. E.g., suppose what it really is is a very fast universe simulator. Simulate a lot of patients, diddle with their environments, either give each one the drug or not, repeat with different sets of parameters. I'm not actually recommending this (it probably isn't possible, it produces interesting ethical issues if the simulation is really accurate, etc.) but the point is that merely being a predictor as such doesn't imply inability to answer causal questions.

Was Yann LeCun saying (1) "AI is all about prediction in the ordinary informal sense of the word" or (2) "AI is all about prediction in the sense in which it's discussed formally in the machine learning community"? I thought it was #1.

*5 points [-]

Simulations (and computer programs in general -- think about how debuggers for computer programs work) are causal models, not purely predictive models. Your answer does no work, because being able to simulate at that level of fidelity means we are already Done(tm) with the science of what we are simulating. In particular our simulator will contain in it a very detailed causal model that would contain answers to everything we might want to know. The question is what do we do when our information isn't very good, not when we can just say "let's ask God."

This is a quote from an ML researcher today, who is talking about what is done today. And what is done today for purely predictive modeling are those crazy deep learning networks or support vector machines they have in ML. Those are algorithms specifically tailored to answering p(Y | X) kinds of questions (e.g. prediction questions), not causal questions.

edit: to add to this a little more. I think there is a general mathematical principle at play here, which is similar in spirit to Occam's razor. This principle is : "try to use the weakest assumptions needed to get the right answer." It is this principle that makes "Omega-style simulations" an unsatisfactory answer. It's a kind of overfitting of the entire scientific process.

A good enough prediction engine can substitute, to a degree, for a causal model. Obviously, not always and once you get outside of its competency domain it will break, but still -- if you can forecast very well what effects will an intervention produce, your need for a causal model is diminished.

*0 points [-]

I see. So then if I were to give you a causal decision problem, can you tell me what the right answer is using only a prediction engine? I have a list of them right here!

The general form of these problems is : "We have a causal model where an outcome is death. We only have observational data obtained from this causal model. We are interested in whether a given intervention will reduce the death rate. Should we do the intervention?"

Observational data is enough for the predictor, right? (But the predictor doesn't get to see what the causal model is, after all, it just works on observational data and is agnostic of how it came about).

A good enough prediction engine, yes.

Huh? You don't obtain observational data from a model, you obtain it from reality.

That depends. I think I understand prediction models more broadly than you do. A prediction model can use any kind of input it likes if it finds it useful.

*0 points [-]

Right, the data comes from the territory, but we assume the map is correct.

The point is, if your 'prediction model' has a rich enough language to incorporate the causal model, it's no longer purely a prediction model as everyone in the ML field understands it, because it can then also answer counterfactual questions. In particular, if your prediction model only uses the language of probability theory, it cannot incorporate any causal information, because it cannot talk about counterfactuals.

So are you willing to take me up on my offer of solving causal problems with a prediction algorithm?

You don't need any assumptions about the model to get observational data. Well, you need some to recognize what you are looking at, but certainly you don't need to assume the correctness of a causal model.

We may be having some terminology problems. Normally I call a "prediction model" anything that outputs testable forecasts about the future. Causal models are a subset of prediction models. Within the context of this thread I understand "prediction model" as a model which outputs forecasts and which does not depend on simulating the mechanics of the underlying process. It seems you're thinking of "pure prediction models" as something akin to "technical" models in finance which look at price history, only at price history, and nothing but the price history. So a "pure prediction model" would be to you something like a neural network into which you dump a lot of more or less raw data but you do not tweak the NN structure to reflect your understanding of how the underlying process works.

Yes, I would agree that a prediction model cannot talk about counterfactuals. However I would not agree that a prediction model can't successfully forecast on the basis of inputs it never saw before.

Good prediction algorithms are domain-specific. I am not defending an assertion that you can get some kind of a Universal Problem Solver out of ML techniques.

But it would be better if you could ask: "suppose I chose to smoke, but my genome and any other similar factors I don't know about were to stay as they are, then what?" where the other similar factors are things that cause smoking.

I don't think he said an AI is not a world-optimizer. He's saying "What you can identify in intelligence...", and this is absolutely true. An intelligent optimizer needs a world-model (a predictor) in order to work.

"What you can identify in intelligence is it can predict what is going to happen in the world" made me realize that there's a big conceptual split in the culture between intelligence and action. Intelligence and action aren't the same thing, but the culture almost has them in opposition.

As an outsider I kind of get the impression that there is a bit of looking-under-the-streetlamp syndrome going on here where world-modelling is assumed to be the most/only important feature because that's what we can currently do well. I got the same impression seeing Jeff Hawkins speaking at a conference recently.

I'm pretty sure that we suck at prediction - compared to evaluation and tree-pruning. Prediction is where our machines need to improve the most.

*0 points [-]

If you can predict well enough, you can pass the Turing test - with a little training data.

*0 points [-]

Could you elaborate on the connections between image recognition / interpretation and prediction? For this reply, it's fine to be only roughly accurate. (In case an inability to be sufficiently rigorous is what prevented you from sketching the connection.)

...naively, I think of intelligence as, say, an ability to identify and solve problems. Is LeCun saying perhaps that this is equal to prediction, or not as important as prediction, or that he's more interested in working on the latter?

Here is one of my efforts to explain the links: Machine Forecasting.

*0 points [-]

I concur. To predict is everything there is to intelligence, really.

If a program could predict what I am going to type in here, it would be as intelligent as I am. At least in this domain. It could post instead of me.

But the same goes for every other domain. To predict every action of an intelligent agent is to be as intelligent as he is.

I don't see a case, where this symmetry breaks down.

EDIT: But this is an old idea. Decades old, nothing very new.

You're talking about predicting the actions of an intelligent agent.

LeCun is talking about predicting the environment. These are two different concepts.

*1 point [-]

No, they are not. Every intelligent agent is just a piece of the environment.

Intelligence can exist even in isolation from any other intelligent agents. Indeed, the first super-intelligent agent is likely to be without peer.

Look! The point is about predicting and intelligence. Doesn't matter what a predictor has around itself. It's just predicting. That's what it does.

And what does a (super)intelligence do? It predicts. Very well, probably.

A dichotomy is needless.

Some examples:

I predict you can't give me a counterexample, where an obviously intelligent solution can't be regarded as a prediction.

This went under the name of SP theory, long ago: that prediction, compression and intelligence are the same thing, actually.

http://www.researchgate.net/publication/235892114_Computing_as_compression_the_SP_theory_of_intelligence

Almost tautological, but inescapable.

In order to do this you need training data on what the optimal move is. This may not exist, or limits you to only doing as good as the player you are predicting.

Additionally, predicting is inherently less optimal than search, unless your predictions are 100% perfect. You are choosing moves because you predict they are optimal, rather than because it's the best move you've found. If, for example, you try to play by predicting what a chessmaster would do, your play will necessarily be worse than if you just play normally.

They are closely related but not the same thing.

A counterexample is chess.

What does an ideal chess player do? It predicts which move is optimal. It may be a tricky feat, but he is good and predicts it well.

I looked over this thread in the past minutes and I clearly saw this "ideological division". Few people think as I do. Others say you can't solve causal problems with mere prediction, but they don't give a clear example.

Don't you agree, that an ideal "best next chess move predictor" is the strongest possible chess player?

*0 points [-]

Maybe it would be useful to define terms, to make things more clear.

If you have a time-process X, and t observations from this process, a predictor comes up with a prediction as to what X_t+1 will be.

On the other hand, given a utility function f() on a series of possible outcomes Y from t+1 to infinity, a decision maker finds the best Y_t+1 to choose to maximize the utility function.

Note that the definition of these two things is not the same: a predictor is concerned about the past and immediate present, whereas a decision maker is concerned with the future.
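The split between the two definitions can be sketched as interfaces; everything here (the names, the toy dynamics, the utility) is made up for illustration, and the decision maker is deliberately built on top of a predictor to show how the two relate:

```python
from typing import Callable, Sequence

# A predictor maps the observed past to a guess about the next observation.
Predictor = Callable[[Sequence[float]], float]

def greedy_decision(actions, simulate_next, utility):
    """One-step greedy decision maker built on a predictor.

    `simulate_next(action)` stands in for a predictor queried under each
    candidate action; `utility` scores the predicted outcome. We pick
    the action whose predicted next state scores highest.
    """
    return max(actions, key=lambda a: utility(simulate_next(a)))

# Toy usage: a "world" that triples the action, a utility preferring
# states close to zero.
best = greedy_decision(
    actions=[-1.0, 0.5, 2.0],
    simulate_next=lambda a: a * 3.0,
    utility=lambda s: -abs(s),
)
```

Note the subtlety the thread keeps circling: the decision maker needs predictions *under each action* (predictions of interventions), which is exactly what a predictor trained only on passive observations may not supply.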

This "t+1" might be "t+X". Results for a large X may be very bad. So as results for "t+1" may be bad. Still he do his best predictions.

He predicts the best decision, which can be taken.

In part of the interview LeCun is talking about predicting the actions of Facebook users, e.g. "Being able to predict what a user is going to do next is a key feature"

But not predicting everything they do and exactly what they'll type.