
LINK: AI Researcher Yann LeCun on AI function

0 Post author: shminux 11 December 2013 12:29AM

Yann LeCun, now of Facebook, was interviewed by The Register. It is interesting that his view of AI is apparently that of a prediction tool:

"In some ways you could say intelligence is all about prediction," he explained. "What you can identify in intelligence is it can predict what is going to happen in the world with more accuracy and more time horizon than others."

rather than of a world optimizer. This is not very surprising, given his background in handwriting and image recognition. This "AI as intelligence augmentation" view appears to be prevalent among AI researchers in general.

 

Comments (80)

Comment author: IlyaShpitser 11 December 2013 12:49:37PM *  11 points [-]

Prediction cannot solve causal problems.

"ML person thinks AI is about what ML people care about. News at 11."

Comment author: jsteinhardt 14 December 2013 09:56:43PM 6 points [-]

Ilya, I don't think it is very fair for you to bludgeon people with terminology / appeals to authority (as you do later in a couple of the sub-threads to this comment) especially given that causality is a somewhat niche subfield of machine learning. I.e. I think many people in machine learning would disagree with the implicit assumptions in the claim "probabilistic models cannot capture causal information". I realize that this is true by definition under the definitions preferred by causality researchers, but the assumption here seems to be that it's more natural to make causality an ontologically fundamental aspect of the model, whereas it's far from clear to me that this is the most natural thing to do (i.e. you can imagine learning about causality as a feature of the environment). In essence, you are asserting that "do" is an ontologically fundamental notion, but I personally think of it as a notion that just happens to be important enough to many of the prediction tasks we care about that we hard-code it as a feature of the model, and supply the causal information by hand. I suspect the people you argue with below have similar intuitions but lack the terminology to express them to your satisfaction.

I'll freely admit that I'm not an expert on causality in particular, so perhaps some of what I say above is off-base. But if I'm also below the bar for respectful discourse then your target audience is small indeed.

Comment author: IlyaShpitser 14 December 2013 10:14:15PM *  3 points [-]

[ Upvoted. ]

If anyone felt I was uncivil to them in any subthread, I hereby apologize here.


I am not sure causality is a subfield of ML in the sense that I don't think many ML people care about causality. I think causal inference is a subfield of stats (lots of talks with the word "causal" at this year's JSM). I think it's weird that stats and ML are different fields, but that's a separate discussion.


I think it is possible to formalize causality without talking about interventions as Pearl et al. think of them; for example, people in reinforcement learning do this. But if you start to worry about e.g. time-varying confounders, and you are not using interventions, you will either get stuff wrong, or have to reinvent interventions again. Which would be silly -- so just learn about the Neyman/Rubin model and graphs. It's the formalism that handles all the "gotchas" correctly. (In fact, until interventionists came along, people didn't even have the math to realize that time-varying confounders are a "gotcha" that needs special handling!)

By the way, the only reason I am harping on time-varying confounders is because it is a historically important case that I can explain with a 4 node example. There are lots of other, more complicated "gotchas," of course.
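Here is a minimal simulation sketch of one version of this kind of four-node story (an illustrative parametrization of mine, not necessarily the historical example): naive conditioning and naive adjustment for the time-varying covariate both miss the interventional effect, while the g-formula recovers it. All variable names and parameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500_000

# Illustrative structural model: U is an unobserved common cause,
# A0 is an early treatment, L is a later covariate affected by both,
# A1 is a later treatment chosen based on L, Y is the outcome.
U = rng.normal(size=n)
A0 = rng.integers(0, 2, size=n)
L = (0.5 * A0 + U + rng.normal(size=n) > 0).astype(int)
pA1 = 1 / (1 + np.exp(-(2 * L - 1)))          # A1 depends only on L
A1 = (rng.random(n) < pA1).astype(int)
Y = A0 + A1 + 2 * U + rng.normal(size=n)

# Ground truth: E[Y | do(A0=1, A1=1)] - E[Y | do(A0=0, A1=0)] = 2.
def cond_mean(a0, a1, l=None):
    m = (A0 == a0) & (A1 == a1)
    if l is not None:
        m &= (L == l)
    return Y[m].mean()

# 1. Naive conditioning: biased, because A1 depends on L, which depends on U.
naive = cond_mean(1, 1) - cond_mean(0, 0)

# 2. Adjusting for L as if it were an ordinary pre-treatment confounder:
#    also biased, because conditioning on L (a child of A0 and U) links A0 to U.
adj = sum((cond_mean(1, 1, l) - cond_mean(0, 0, l)) * (L == l).mean()
          for l in (0, 1))

# 3. g-formula: average E[Y | a0, a1, l] over p(l | a0), not over p(l).
def g_formula(a0, a1):
    return sum(cond_mean(a0, a1, l) * (L[A0 == a0] == l).mean() for l in (0, 1))

gform = g_formula(1, 1) - g_formula(0, 0)

print(f"naive: {naive:.3f}  adjust-for-L: {adj:.3f}  g-formula: {gform:.3f}  truth: 2.000")
```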


Interventions seem to pop up/get reinvented in seemingly weird places, like the pi constant:

http://infostructuralist.wordpress.com/2010/09/23/directed-stochastic-kernels-and-causal-interventions/

In channels with feedback (thus causality arises!)

http://www.adaptiveagents.org/bayesian_control_rule

http://en.wikipedia.org/wiki/Thompson_sampling

In multi-armed bandit problems (which are related to longitudinal studies in causal inference).
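For reference, a minimal Thompson-sampling loop for Bernoulli bandits with Beta posteriors, in the spirit of the linked article; the arm success rates and horizon are made-up illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)
true_rates = [0.3, 0.5, 0.7]           # unknown to the agent (illustrative values)
alpha = np.ones(3)                      # Beta posterior: 1 + successes per arm
beta = np.ones(3)                       # Beta posterior: 1 + failures per arm

total_reward = 0
for t in range(10_000):
    theta = rng.beta(alpha, beta)       # sample one plausible success rate per arm
    arm = int(np.argmax(theta))         # play the arm that looks best in this sample
    reward = rng.random() < true_rates[arm]
    alpha[arm] += reward
    beta[arm] += 1 - reward
    total_reward += reward

print("pulls per arm:", (alpha + beta - 2).astype(int))
print("posterior means:", np.round(alpha / (alpha + beta), 3))
print("total reward:", total_reward)
```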

http://en.wikipedia.org/wiki/Kaplan%E2%80%93Meier_estimator

http://missingdata.lshtm.ac.uk/index.php?option=com_content&view=article&id=76:missing-at-random-mar&catid=40:missingness-mechanisms&Itemid=96

In handling missing data (can view "missingness" as a causal property). Note the phrasing in the second link: "given the observed data, the missingness mechanism does not depend on the unobserved data." This is precisely the "no unobserved confounders" assumption in causal inference. Not surprisingly the correction is the same as in causal inference.
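A small sketch of that missing-data point (with an invented parametrization): when missingness depends only on an observed covariate, the naive mean of the observed outcomes is biased, and the correction -- here inverse-probability weighting -- has the same shape as the correction for confounding in causal inference.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

X = rng.integers(0, 2, size=n)             # fully observed covariate
Y = X + rng.normal(size=n)                  # outcome; true mean E[Y] = 0.5

# Missing At Random: whether Y is observed depends only on the observed X.
p_obs = np.where(X == 1, 0.9, 0.3)
R = rng.random(n) < p_obs                   # R = True means Y is observed

naive = Y[R].mean()                         # biased: over-represents the X=1 stratum

# Correction: weight each observed Y by 1 / P(observed | X),
# the same move as inverse-probability-of-treatment weighting in causal inference.
ipw = (Y[R] / p_obs[R]).sum() / (1 / p_obs[R]).sum()

print(f"true E[Y] = {Y.mean():.3f}, naive = {naive:.3f}, IPW-corrected = {ipw:.3f}")
```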

Also in figuring out what the dimension of a statistical hidden variable DAG model is. For example if A,B,C,D are binary, and U, W are unrestricted, then the dimension of the model

{ p(a,b,c,d) = \sum_{u,w} p(a,b,c,d,u,w) | p(a,b,c,d,u,w) factorizes wrt A -> B -> C -> D, A <- U -> C, B <- W -> D } is 13, not 15, which is weird, but there is an intervention-inspired explanation for why.


you can imagine learning about causality as a feature of the environment

I don't think you can get something for nothing. You will need causal assumptions somewhere.

Comment author: jsteinhardt 15 December 2013 08:18:12PM *  2 points [-]

Thanks Ilya, that was a lot of useful context and I wasn't aware that causality was more in stats than ML. For the record, I think that causality is super-interesting and cool, I hope that I didn't sound too negative by calling it "niche" (I would have described e.g. Bayesian nonparametrics, which I used to do research in, the same way, although perhaps it's unfair to lump in causality with nonparametric Bayes, since the former has a much more distinguished history).

I agree with pretty much everything you say above, although I'm still confused about "you will need causal assumptions somewhere". If I could somehow actually do inference under the Solomonoff prior, do you think that some notion of causality would not pop out? I'd understand if you didn't want to take the time to explain it to me; I've had this conversation with 2 other causality people already and am still not quite sure I understand what is meant by "you need causal assumptions to get causal inferences". (Note I already agree that this is true in the context of graphical models, i.e. you can't distinguish between X->Y and X<-Y without do(X) or some similar information.)
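As a toy check of that parenthetical, in the linear-Gaussian case the two directions really are observationally indistinguishable; the two structural models below (coefficients chosen by hand for illustration) induce the same joint distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Model 1: X causes Y.
X1 = rng.normal(size=n)
Y1 = 0.6 * X1 + np.sqrt(1 - 0.6**2) * rng.normal(size=n)

# Model 2: Y causes X, with coefficients chosen to match the same joint.
Y2 = rng.normal(size=n)
X2 = 0.6 * Y2 + np.sqrt(1 - 0.6**2) * rng.normal(size=n)

# Both give a standard bivariate normal with correlation 0.6, so purely
# observational (X, Y) data cannot tell the two causal stories apart.
print(np.cov(X1, Y1))
print(np.cov(X2, Y2))
```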

Comment author: IlyaShpitser 17 December 2013 03:44:23PM *  2 points [-]

Graphical models are only a "thing" because our brain dedicates lots of processing to vision, so, for instance, we immediately understand complicated conditional independence statements if expressed in the visual form of d-separation. In some sense, graphs in the context of graphical models do not really add any extra information mathematically that wasn't already encoded even without graphs.

Given this, I am not sure there really is a context for graphical models separate from the context of "variables and their relationships". What you are saying above is that we seem to need "something extra" to be able to tell the direction of causality in a two variable system. (For example, in an additive noise model you can do this:

http://machinelearning.wustl.edu/mlpapers/paper_files/ShimizuHHK06.pdf)
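A crude sketch of the additive-noise asymmetry that papers like the one linked exploit (a hand-rolled independence check for illustration, not the actual LiNGAM algorithm): with non-Gaussian noise, the regression residual is independent of the regressor only in the causal direction.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# True model: x -> y with non-Gaussian (uniform) additive noise.
x = rng.uniform(-1, 1, size=n)
y = 0.8 * x + rng.uniform(-1, 1, size=n)

def dependence_after_regression(a, b):
    """Regress b on a by OLS and return a crude dependence score between the
    residual and a (correlation of absolute values; near zero when the
    residual is roughly independent of a)."""
    slope = np.cov(a, b)[0, 1] / np.var(a)
    resid = b - slope * a
    return abs(np.corrcoef(np.abs(a), np.abs(resid))[0, 1])

print("x -> y direction, residual dependence:", dependence_after_regression(x, y))
print("y -> x direction, residual dependence:", dependence_after_regression(y, x))
# The first number is near zero (the residual is just the independent noise term);
# the second is clearly nonzero, which is the asymmetry such methods exploit.
```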


I think the "no causes in -- no causes out" principle is more general than that though. For example if we had a three variable case, with variables A, B, C where:

A is marginally independent of B, but no other independences hold, then the only faithful graphical explanation for this model is:

A -> C <- B

It seems that, unlike the previous case, here there is no causal ambiguity -- A points to C, and B points to C. However, since the only information you inserted into the procedure which gave you this graph is the information about conditional independences, all you are getting out is a graphical description of a conditional independence model (that is, a Bayesian network, or a statistical DAG model). In particular, the absence of arrows isn't telling you about absent causal relationships (that is, whether A would change if I intervene on C), but absent statistical relationships (that is, whether A is independent of B). The statistical interpretation of the above graph is that it corresponds to a set of densities:

{ p(A,B,C) | A is independent of B }

The same graph can also correspond to a causal model, where we are explicitly talking about interventions, that is:

{ p(A,B,C,C(a,b),B(a)) | C(a,b) is independent of B(a) is independent of A, p(B(a)) = p(B) }

where C(a,b) is just stats notation for do(.), that is p(C(a,b)) = p(C | do(a,b)).

This is a different object from before, and the interpretation of arrows is different. That is, the absence of an arrow from A to B means that intervening on A does not affect B, etc. This causal model also induces an independence model on the same graph, where the interpretation of arrows changes back to the statistical interpretation. However, we could imagine a very different causal model on three variables, that will also induce the same independence model where A is marginally independent of B. For example, maybe the set of all densities where the real direction of causality is A -> C -> B, but somehow the probabilities involved happened to line up in such a way that A is marginally independent of B. In other words, the mapping from causal to statistical models is many to one.
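A quick numerical check of the statistical reading of the collider graph above, with an invented parametrization: A and B are marginally independent, but become dependent once we condition on their common effect C.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

A = rng.integers(0, 2, size=n)
B = rng.integers(0, 2, size=n)                                  # A, B generated independently
C = ((A + B + rng.normal(0, 0.5, size=n)) > 1).astype(int)      # common effect of A and B

def corr(u, v):
    return np.corrcoef(u, v)[0, 1]

print("corr(A, B) marginally: ", round(corr(A, B), 3))                         # ~ 0
print("corr(A, B) given C = 1:", round(corr(A[C == 1], B[C == 1]), 3))         # clearly nonzero
# Marginal independence of A and B is all that A -> C <- B asserts statistically;
# by itself it says nothing about what happens to A under an intervention on C.
```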

Given this view, it seems pretty clear that going from independences to causal models (even via a very complicated procedure) involves making some sort of assumption that makes the mapping one to one. Maybe the prior in Solomonoff induction gives this to you, but my intuitions about what non-computable procedures will do are fairly poor.

It sort of seems like Solomonoff induction operates at a (very low) level of abstraction where interventionist causality isn't really necessary (because we just figure out what the observable environment as a whole -- including action-capable agents, etc. -- will do), and thus isn't explicitly represented. This is similar to how Blockhead (http://en.wikipedia.org/wiki/Blockhead_(computer_system%29) does not need an explicit internal model of the other participant in the conversation.


I think Solomonoff induction is sort of a boring subject, if one is interested in induction, in the same sense that Blockhead is boring if one is interested in passing the Turing test, and particle physics is boring if one is interested in biology.

Comment author: Eliezer_Yudkowsky 11 December 2013 09:09:48PM 2 points [-]

Agreed. And search is not the same problem as prediction, you can have a big search problem even when evaluating/predicting any single point is straightforward.

Comment author: V_V 12 December 2013 12:28:28AM 5 points [-]

They are not the same problem but they are highly related:

If you have a very good heuristic, then search is trivial, and learning good heuristics from data is a prediction problem.
On the other hand, prediction problems such as Structured prediction (the stuff LeCun does) entail search, and moreover most machine learning algorithms also require some kind of search in the training phase.

Comment author: timtyler 14 December 2013 03:01:56AM 0 points [-]

search is not the same problem as prediction

It is when what you are predicting is the results of a search. Prediction covers searching.

Comment author: gjm 11 December 2013 01:47:04PM 1 point [-]

What counts as a causal problem?

A sufficiently good predictor might be able to answer questions of the form "if I do X, what will happen thereafter?" and "if I do Y, what will happen thereafter?" even though what-will-happen-thereafter may be partly caused by doing X or Y.

Is your point that (to take a famous example with which I'm sure you're already very familiar) in a world where the correlation between smoking and lung cancer goes via a genetic feature that makes both happen, if you ask the machine that question it may in effect say "he chose to smoke, therefore he has that genetic quirk, therefore he will get lung cancer"? Surely any prediction device that would be called "intelligent" by anyone less gung-ho than, say, Ray Kurzweil would enable you to ask it questions like "suppose I -- with my current genome -- chose to smoke; then what?" and "suppose I -- with my current genome -- chose not to smoke; then what?".

I do agree that there are important questions a pure predictor can't help much with. For instance, the machine may be as good as you please at predicting the outcome of particle physics experiments, but it may not have (or we may not be able to extract from it in comprehensible form) any theory of what's going on to produce those outcomes.

Comment author: IlyaShpitser 11 December 2013 01:59:49PM *  5 points [-]

What counts as a causal problem?

We give patients a drug, and some of them die. In fact, those that get the drug die more often than those that do not. Is the drug killing them or helping them? This is a very real problem we are facing right now, and getting it wrong results in people dying.

Surely any prediction device that would be called "intelligent" by anyone less gung-ho than, say, Ray Kurzweil would enable you to ask it questions like "suppose I -- with my current genome -- chose to smoke; then what?" and "suppose I -- with my current genome -- chose not to smoke; then what?".

I certainly hope that anything actually intelligent will be able to answer counterfactual questions of the kind you posed here. However, the standard language of prediction employed in ML is not able to even pose such questions, let alone answer them.

Comment author: Houshalter 24 March 2014 02:22:39AM 0 points [-]

I don't get it. You gave some people the drug and some people you didn't. It seems pretty straightforward to estimate how likely someone is to die if you give them medicine.

Comment author: gwern 24 March 2014 02:38:14AM 1 point [-]

It seems pretty straightforward to estimate how likely someone is to die if you give them medicine.

Certainly it's straightforward. Here's how one can apply your logic. You gave some people [the ones whose disease has progressed the most] the drug and some people you didn't [because their disease isn't so bad you're willing to risk it]; the % of people dying in the first drugged group is much higher than the % of deaths in the second non-drugged group; therefore, this drug is poison and you're a mass murderer.

See the problem?
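A worked toy version of this scenario, with invented counts: within each severity stratum the drug lowers the death rate, yet the crude comparison makes the drug look harmful, because the sicker patients are the ones who get it.

```python
# (stratum, treated?) -> (number of patients, number of deaths); invented counts
data = {
    ("severe", True):  (1000, 400),   # 40% die
    ("severe", False): (100,  50),    # 50% die
    ("mild",   True):  (100,  10),    # 10% die
    ("mild",   False): (1000, 200),   # 20% die
}

def rate(treated, stratum=None):
    cells = [v for (s, t), v in data.items()
             if t == treated and (stratum is None or s == stratum)]
    patients = sum(c[0] for c in cells)
    deaths = sum(c[1] for c in cells)
    return deaths / patients

print("crude death rate, drug vs no drug: %.3f vs %.3f" % (rate(True), rate(False)))
print("severe stratum,   drug vs no drug: %.3f vs %.3f" % (rate(True, "severe"), rate(False, "severe")))
print("mild stratum,     drug vs no drug: %.3f vs %.3f" % (rate(True, "mild"), rate(False, "mild")))
# Crude: 0.373 vs 0.227 (the drug looks worse); within each stratum it is better.
```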

Comment author: IlyaShpitser 24 March 2014 12:22:37PM *  1 point [-]

Of course people say "but this is silly, obviously we need to condition on health status."

The point is: what if we can't? Or what if there are other causally relevant factors here? In fact, what is "causally relevant" anyway... We need a system! ML people don't think about these questions very hard, generally, because culturally they are more interested in "algorithmic approaches" to prediction problems.

(This is a clarification of gwern's response to the grandparent, not a reply to gwern.)

Comment author: Houshalter 31 March 2014 02:53:05PM 0 points [-]

The problem is the data is biased. The ML algorithm doesn't know whether the bias is a natural part of the data or artificially induced. Garbage In - Garbage Out.

However it can still be done if the algorithm has more information. Maybe some healthy patients ended up getting the medicine anyways and were far more likely to live, or some unhealthy ones didn't and were even more likely to die. Now it's straightforward prediction again: How likely is a patient to live based on their current health and whether or not they take the drug?

Comment author: gwern 31 March 2014 03:32:06PM 3 points [-]

The problem is the data is biased. The ML algorithm doesn't know whether the bias is a natural part of the data or artificially induced. Garbage In - Garbage Out.

You're making up excuses. The data is not 'biased', it just is, nor is it garbage - it's not made up, no one is lying or falsifying data or anything like that. If your theory cannot handle clean data from a real-world problem, that's a big problem (especially if there are more sophisticated alternatives which can handle it).

Comment author: Houshalter 31 March 2014 04:47:44PM 1 point [-]

Biased data is a real thing and this is a great example. No method can solve the problem you've given without additional information.

Comment author: gwern 31 March 2014 05:11:04PM *  4 points [-]

This is not biased data. No one tampered with it. No one preferentially left out some data. There is no Cartesian daemon tampering with you. It's a perfectly ordinary causal problem for which one has all the available data. You can't throw your hands up and disdainfully refuse to solve the problem, proclaiming, 'oh, that's biased'. It may be hard, and the best available solution weak or require strong assumptions, but if that is the case, the correct method should say as much and specify what additional data or interventions would allow stronger conclusions.

Comment author: Lumifer 31 March 2014 05:02:25PM 1 point [-]

No method can solve the problem you've given without additional information.

What do you call "solving the problem"?

Any method will output some estimates. Some methods will output better estimates, some worse. As people have pointed out, this was an example of a real problem and yes, real-life data is usually pretty messy. We need methods which can handle messy data and not work just on spherical cows in vacuum.

Comment author: passive_fist 11 December 2013 09:12:58PM 0 points [-]

Prediction by itself cannot solve causal decision problems (that's why AIXI is not the same as just a Solomonoff predictor) but your example is incorrect. What you're describing is a modelling problem, not a decision problem.

Comment author: IlyaShpitser 11 December 2013 09:41:41PM *  2 points [-]

Sorry, I am not following you. Decision problems have the form of "What do you do in situation X to maximize a defined utility function?"

It is very easy to transform any causal modeling example into a decision problem. In this case: "here is an observational study where doctors give drugs to some cohort of patients. This is your data. Here's the correct causal graph for this data. Here is a set of new patients from the same cohort. Your utility function rewards you for minimizing patient deaths. Your actions are 'give the drug to everyone in the set' or 'do not give the drug to everyone in the set.' What do you do?"

Predictor algorithms, as understood by the machine learning community, cannot solve this class of problems correctly. These are not abstract problems! They happen all the time, and we need to solve them now, so you can't just say "let's defer solving this until we have a crazy detailed method of simulating every little detail of the way the HIV virus does its thing in these poor people, and the way this drug disrupts this, and the way side effects of the drug happen, etc. etc. etc."
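For a decision problem of this shape, here is a minimal sketch of what using the causal model buys you, under the strong assumption that the graph is simply severity -> drug, severity -> death, drug -> death with severity observed: estimate the death rate under do(drug) by back-door adjustment over severity rather than by raw conditioning, then pick the action with fewer expected deaths. Names and numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500_000

# Illustrative observational data from the assumed graph:
# severity -> drug, severity -> death, drug -> death (the drug is protective).
severity = rng.random(n) < 0.3
p_drug = np.where(severity, 0.9, 0.1)            # sicker patients get the drug more often
drug = rng.random(n) < p_drug
p_death = 0.05 + 0.40 * severity - 0.03 * drug   # in truth the drug lowers death risk
death = rng.random(n) < p_death

def p_death_given(d):                             # naive conditional estimate
    return death[drug == d].mean()

def p_death_do(d):                                # back-door adjustment over severity
    return sum(death[(drug == d) & (severity == s)].mean() * (severity == s).mean()
               for s in (False, True))

print("naive    P(death | drug), P(death | no drug):",
      round(p_death_given(True), 3), round(p_death_given(False), 3))
print("adjusted P(death | do(drug)), P(death | do(no drug)):",
      round(p_death_do(True), 3), round(p_death_do(False), 3))
print("decision: give the drug" if p_death_do(True) < p_death_do(False)
      else "decision: withhold the drug")
```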

Comment author: V_V 12 December 2013 12:36:47AM 1 point [-]

Bayesian network learning and Bayesian network inference can, in principle, solve that problem.

Of course, if your model is wrong, and/or your dataset is degenerate, any approach will give you bad results: Garbage in, garbage out.

Comment author: IlyaShpitser 12 December 2013 12:38:48AM 1 point [-]

Bayesian networks are statistical, not causal models.

Comment author: V_V 12 December 2013 12:53:11PM 0 points [-]

I don't know what you mean by "causal model", but Bayesian networks can deal with the type of problems you describe.

Comment author: IlyaShpitser 12 December 2013 01:42:54PM 2 points [-]

A causal model to me is a set of joint distributions defined over potential outcome random variables.

And no, regardless of how often you repeat it, Bayesian networks cannot solve causal problems.

Comment author: V_V 12 December 2013 04:01:44PM 2 points [-]

I have no idea what you're talking about.

gjm asked you what a causal problem was, you didn't provide a definition and instead gave an example of a problem which seems clearly solvable by Bayesian methods such as hidden Markov models (for prediction) or partially observable Markov decision processes (for decision).

Comment author: Lumifer 12 December 2013 08:40:03PM 1 point [-]

A causal model to me is a set of joint distributions defined over potential outcome random variables.

Huh?

Can you expand on this, with special attention to the difference between the model and the result of a model, and to the differences from plain-vanilla Bayesian models which will also produce joint distributions over outcomes.

Comment author: passive_fist 11 December 2013 09:49:02PM *  0 points [-]

Decision problems have the form of "What do you do in situation X to maximize a defined utility function?"

Yes, but what you are describing is a modelling problem. "Is the drug killing them or helping them?" is not a decision problem, although "Which drug should we give them to save their lives?" is. These are two very different problems, possibly with different answers!

It is very easy to transform any causal modeling example into a decision problem.

Yes, but in the process it becomes a new problem. Although, you are right that modelling is in some respects an 'easier' problem than making decisions. That's also the reason I wrote my top-level comment, saying that it is true that something you can identify in an AI is the ability to model the world.

Comment author: IlyaShpitser 12 December 2013 10:53:41AM 1 point [-]

I guess my point was that there is a trivial reduction (in the complexity theory sense of the word) here, namely that decision theory is "modeling-complete." In other words, if we had an algorithm for solving a certain class of decision problems correctly, we would automatically have an algorithm for correctly handling the corresponding model (otherwise how could we get the decision problem right?)

Prediction cannot solve causal decision problems, but the reason it cannot is that it cannot solve the underlying modeling problem correctly. (If it could, there is nothing more to do, just integrate over the utility).

Comment author: gjm 11 December 2013 03:57:30PM 0 points [-]

We give patients a drug [...] Is the drug killing them or helping them?

It seems to me that a sufficiently smart prediction machine could answer questions of this kind. E.g., suppose what it really is is a very fast universe simulator. Simulate a lot of patients, diddle with their environments, either give each one the drug or not, repeat with different sets of parameters. I'm not actually recommending this (it probably isn't possible, it produces interesting ethical issues if the simulation is really accurate, etc.) but the point is that merely being a predictor as such doesn't imply inability to answer causal questions.

the standard language of prediction employed in ML

Was Yann LeCun saying (1) "AI is all about prediction in the ordinary informal sense of the word" or (2) "AI is all about prediction in the sense in which it's discussed formally in the machine learning community"? I thought it was #1.

Comment author: IlyaShpitser 11 December 2013 04:28:41PM *  5 points [-]

Simulate a lot of patients

Simulations (and computer programs in general -- think about how debuggers for computer programs work) are causal models, not purely predictive models. Your answer does no work, because being able to simulate at that level of fidelity means we are already Done<tm> with the science of what we are simulating. In particular our simulator will contain in it a very detailed causal model that would contain answers to everything we might want to know. The question is what do we do when our information isn't very good, not when we can just say "let's ask God."

This is a quote from an ML researcher today, who is talking about what is done today. And what is done today for purely predictive modeling are those crazy deep learning networks or support vector machines they have in ML. Those are algorithms specifically tailored to answering p(Y | X) kinds of questions (e.g. prediction questions), not causal questions.


edit: to add to this a little more. I think there is a general mathematical principle at play here, which is similar in spirit to Occam's razor. This principle is : "try to use the weakest assumptions needed to get the right answer." It is this principle that makes "Omega-style simulations" an unsatisfactory answer. It's a kind of overfitting of the entire scientific process.

Comment author: Lumifer 11 December 2013 04:53:47PM 1 point [-]

A good enough prediction engine can substitute, to a degree, for a causal model. Obviously not always, and once you get outside of its competency domain it will break, but still -- if you can forecast very well what effects an intervention will produce, your need for a causal model is diminished.

Comment author: IlyaShpitser 11 December 2013 05:08:21PM *  0 points [-]

I see. So then if I were to give you a causal decision problem, can you tell me what the right answer is using only a prediction engine? I have a list of them right here!

The general form of these problems is : "We have a causal model where an outcome is death. We only have observational data obtained from this causal model. We are interested in whether a given intervention will reduce the death rate. Should we do the intervention?"

Observational data is enough for the predictor, right? (But the predictor doesn't get to see what the causal model is, after all, it just works on observational data and is agnostic of how it came about).

Comment author: Lumifer 11 December 2013 05:25:29PM 0 points [-]

So then if I were to give you a causal decision problem, can you tell me what the right answer is using only a prediction engine?

A good enough prediction engine, yes.

We only have observational data obtained from this causal model.

Huh? You don't obtain observational data from a model, you obtain it from reality.

Observational data is enough for the predictor, right?

That depends. I think I understand prediction models wider than you do. A prediction model can use any kind of input it likes if it finds it useful.

Comment author: IlyaShpitser 11 December 2013 05:56:11PM *  0 points [-]

Huh? You don't obtain observational data from a model, you obtain it from reality.

Right, the data comes from the territory, but we assume the map is correct.

That depends. I think I understand prediction models wider than you do.

The point is, if your 'prediction model' has a rich enough language to incorporate the causal model, it's no longer purely a prediction model as everyone in the ML field understands it, because it can then also answer counterfactual questions. In particular, if your prediction model only uses the language of probability theory, it cannot incorporate any causal information because it cannot talk about counterfactuals.

So are you willing to take me up on my offer of solving causal problems with a prediction algorithm?

Comment author: Lumifer 11 December 2013 06:08:31PM 0 points [-]

the data comes from the territory, but we assume the map is correct.

You don't need any assumptions about the model to get observational data. Well, you need some to recognize what you are looking at, but certainly you don't need to assume the correctness of a causal model.

no longer purely a prediction model as everyone in the ML field understands it

We may be having some terminology problems. Normally I call a "prediction model" anything that outputs testable forecasts about the future. Causal models are a subset of prediction models. Within the context of this thread I understand "prediction model" as a model which outputs forecasts and which does not depend on simulating the mechanics of the underlying process. It seems you're thinking of "pure prediction models" as something akin to "technical" models in finance which look at price history, only at price history, and nothing but the price history. So a "pure prediction model" would be to you something like a neural network into which you dump a lot of more or less raw data but you do not tweak the NN structure to reflect your understanding of how the underlying process works.

Yes, I would agree that a prediction model cannot talk about counterfactuals. However I would not agree that a prediction model can't successfully forecast on the basis of inputs it never saw before.

So are you willing to take me up on my offer of solving causal problems with a prediction algorithm?

Good prediction algorithms are domain-specific. I am not defending an assertion that you can get some kind of a Universal Problem Solver out of ML techniques.

Comment author: Caspian 15 December 2013 12:37:27AM 0 points [-]

Surely any prediction device that would be called "intelligent" by anyone less gung-ho than, say, Ray Kurzweil would enable you to ask it questions like "suppose I -- with my current genome -- chose to smoke; then what?" and "suppose I -- with my current genome -- chose not to smoke; then what?".

But it would be better if you could ask: "suppose I chose to smoke, but my genome and any other similar factors I don't know about were to stay as they are, then what?" where the other similar factors are things that cause smoking.

Comment author: passive_fist 11 December 2013 05:12:11AM 6 points [-]

I don't think he said an AI is not a world-optimizer. He's saying "What you can identify in intelligence...", and this is absolutely true. An intelligent optimizer needs a world-model (a predictor) in order to work.

Comment author: NancyLebovitz 11 December 2013 02:16:02PM 2 points [-]

"What you can identify in intelligence is it can predict what is going to happen in the world" made me realize that there's a big conceptual split in the culture between intelligence and action. Intelligence and action aren't the same thing, but the culture almost has them in opposition.

Comment author: ShardPhoenix 11 December 2013 08:08:35AM 1 point [-]

As an outsider I kind of get the impression that there is a bit of looking-under-the-streetlamp syndrome going on here where world-modelling is assumed to be the most/only important feature because that's what we can currently do well. I got the same impression seeing Jeff Hawkins speaking at a conference recently.

Comment author: timtyler 14 December 2013 04:16:19PM 0 points [-]

I'm pretty sure that we suck at prediction - compared to evaluation and tree-pruning. Prediction is where our machines need to improve the most.

Comment author: timtyler 14 December 2013 02:50:45AM *  0 points [-]

It is interesting that his view of AI is apparently that of a prediction tool [...] rather than of a world optimizer.

If you can predict well enough, you can pass the Turing test - with a little training data.

Comment author: byrnema 11 December 2013 03:40:13PM *  0 points [-]

This is not very surprising, given his background in handwriting and image recognition.

Could you elaborate on the connections between image recognition / interpretation and prediction? For this reply, it's fine to be only roughly accurate. (In case an inability to be sufficiently rigorous is what prevented you from sketching the connection.)

...naively, I think of intelligence as, say, an ability to identify and solve problems. Is LeCun saying perhaps that this is equal to prediction, or not as important as prediction, or that he's more interested in working on the latter?

Comment author: timtyler 14 December 2013 04:18:36PM 1 point [-]

Here is one of my efforts to explain the links: Machine Forecasting.

Comment author: Thomas 11 December 2013 10:52:06AM *  0 points [-]

I concur. To predict is everything there is to intelligence, really.

If a program could predict what I am going to type in here, it would be as intelligent as I am. At least in this domain. It could post instead of me.

But the same goes for every other domain. To predict every action of an intelligent agent is to be as intelligent as he is.

I don't see a case, where this symmetry breaks down.

EDIT: But this is an old idea. Decades old, nothing very new.

Comment author: passive_fist 11 December 2013 09:24:10PM 0 points [-]

You're talking about predicting the actions of an intelligent agent.

LeCun is talking about predicting the environment. These are two different concepts.

Comment author: Thomas 12 December 2013 10:48:32AM *  1 point [-]

No, they are not. Every intelligent agent is just a piece of environment.

Comment author: passive_fist 13 December 2013 04:11:42AM 1 point [-]

Intelligence can exist even in isolation from any other intelligent agents. Indeed, the first super-intelligent agent is likely to be without peer.

Comment author: Thomas 13 December 2013 07:25:26AM 0 points [-]

Look! The point is about predicting and intelligence. Doesn't matter what a predictor has around itself. It's just predicting. That's what it does.

And what does a (super)intelligence do? It predicts. Very well, probably.

A dichotomy is needless.

Some examples:

  • predicting the solution of a partial differential equation
  • predicting the best method to solve the given equation
  • predicting how a process might behave
  • predicting the best action you may take to achieve a goal
  • predicting the best possible move in a given chess position
  • predicting what a cyphered message is about ...

I predict you can't give me a counterexample, where an obviously intelligent solution can't be regarded as a prediction.

This went under the name of SP theory long ago: that prediction, compression and intelligence are actually the same thing.

http://www.researchgate.net/publication/235892114_Computing_as_compression_the_SP_theory_of_intelligence

Almost tautological, but inescapable.

Comment author: Houshalter 24 March 2014 02:43:09AM 0 points [-]

predicting the best possible move in a given chess position

In order to do this you need training data on what the optimal move is. This may not exist, or it may limit you to doing only as well as the player you are predicting.

Additionally, predicting is inherently less optimal than search, unless your predictions are 100% perfect. You are choosing moves because you predict they are optimal, rather than because it's the best move you've found. If, for example, you try to play by predicting what a chessmaster would do, your play will necessarily be worse than if you just play normally.

Comment author: passive_fist 13 December 2013 07:38:46AM 0 points [-]

They are closely related but not the same thing.

A counterexample is chess.

Comment author: Thomas 13 December 2013 08:08:28AM 0 points [-]

What does an ideal chess player do? It predicts which move is optimal. That may be a tricky feat, but he is good and predicts it well.

I looked through this thread in the past few minutes and I clearly saw this "ideological division". Few people think as I do. Others say you can't solve causal problems with mere prediction, but don't give a clear example.

Don't you agree that an ideal "best next chess move predictor" is the strongest possible chess player?

Comment author: passive_fist 13 December 2013 08:16:35AM *  0 points [-]

It predicts which move is optimal.

Maybe it would be useful to define terms, to make things more clear.

If you have a time-process X, and t observations from this process, a predictor comes up with a prediction as to what X_t+1 will be.

On the other hand, given a utility function f() on a series of possible outcomes Y from t+1 to infinity, a decision maker finds the best Y_t+1 to choose to maximize the utility function.

Note that the definition of these two things is not the same: a predictor is concerned about the past and immediate present, whereas a decision maker is concerned with the future.
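To pin the two interfaces down, a minimal type-level sketch (all names invented): a predictor maps an observed history to a guess about the next observation, while a decision maker combines a predictive model with a utility function to choose an action.

```python
from typing import Callable, Sequence

# A predictor: given observations x_1..x_t, output a guess for x_{t+1}.
Predictor = Callable[[Sequence[float]], float]

# A decision maker: given a history, a predictive model, and a utility over
# (predicted outcome, action), output the action with the best predicted utility.
def decide(history: Sequence[float],
           predict: Predictor,
           utility: Callable[[float, str], float],
           actions: Sequence[str]) -> str:
    forecast = predict(history)
    return max(actions, key=lambda a: utility(forecast, a))

# Tiny usage example with made-up pieces:
last_value: Predictor = lambda xs: xs[-1]                        # naive persistence forecast
u = lambda forecast, action: -abs(forecast - {"low": 0.0, "high": 1.0}[action])
print(decide([0.2, 0.4, 0.9], last_value, u, ["low", "high"]))   # -> "high"
```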

Comment author: Thomas 13 December 2013 08:23:01AM 0 points [-]

a predictor comes up with a prediction as to what X_t+1 will be

This "t+1" might be "t+X". Results for a large X may be very bad. So as results for "t+1" may be bad. Still he do his best predictions.

whereas a decision maker is concerned with the future

He predicts the best decision that can be taken.

Comment author: Caspian 15 December 2013 12:01:30AM 0 points [-]

In part of the interview LeCun is talking about predicting the actions of Facebook users, e.g. "Being able to predict what a user is going to do next is a key feature"

But not predicting everything they do and exactly what they'll type.