V_V comments on LINK: AI Researcher Yann LeCun on AI function - Less Wrong

0 Post author: shminux 11 December 2013 12:29AM




Comment author: V_V 12 December 2013 12:36:47AM 1 point [-]

Bayesian network learning and Bayesian network inference can, in principle, solve that problem.

Of course, if your model is wrong and/or your dataset is degenerate, any approach will give you bad results: garbage in, garbage out.

Comment author: IlyaShpitser 12 December 2013 12:38:48AM 1 point [-]

Bayesian networks are statistical, not causal models.

Comment author: V_V 12 December 2013 12:53:11PM 0 points [-]

I don't know what you mean by "causal model", but Bayesian networks can deal with the type of problems you describe.

Comment author: IlyaShpitser 12 December 2013 01:42:54PM 2 points [-]

A causal model to me is a set of joint distributions defined over potential outcome random variables.

And no, regardless of how often you repeat it, Bayesian networks cannot solve causal problems.

Comment author: V_V 12 December 2013 04:01:44PM 2 points [-]

I have no idea what you're talking about.

gjm asked you what a causal problem was; you didn't provide a definition and instead gave an example of a problem which seems clearly solvable by Bayesian methods such as hidden Markov models (for prediction) or partially observable Markov decision processes (for decision).

Comment author: IlyaShpitser 12 December 2013 04:57:46PM *  0 points [-]

(a) Hidden Markov models and POMDPs are probabilistic models, not necessarily Bayesian.

(b) I am using the standard definition of a causal model, first due to Neyman, popularized by Rubin. Everyone except some folks in the UK uses this definition now. I am sorry if you are unfamiliar with it.

(c) Statistical models cannot solve causal problems. The number of times you repeat the opposite, while adding the word "clearly" will not affect this fact.

Comment author: V_V 12 December 2013 06:40:13PM 0 points [-]

(a) Hidden Markov models and POMDPs are probabilistic models, not necessarily Bayesian.

According to Wikipedia:

A hidden Markov model (HMM) is a statistical Markov model in which the system being modeled is assumed to be a Markov process with unobserved (hidden) states. A HMM can be considered the simplest dynamic Bayesian network.
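The "dynamic Bayesian network" view of an HMM can be made concrete with the standard forward recursion. A minimal sketch, with made-up parameters (not from the thread), that computes the likelihood of an observation sequence under a two-state HMM:

```python
# Minimal HMM forward algorithm; the parameter values here are illustrative.
import numpy as np

pi = np.array([0.6, 0.4])              # initial state distribution
T = np.array([[0.7, 0.3],              # transition matrix: T[i, j] = p(state j | state i)
              [0.4, 0.6]])
E = np.array([[0.9, 0.1],              # emission matrix: E[i, o] = p(obs o | state i)
              [0.2, 0.8]])

def likelihood(obs):
    """p(obs) via the forward recursion alpha_t = (alpha_{t-1} @ T) * E[:, o_t]."""
    alpha = pi * E[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ T) * E[:, o]
    return alpha.sum()

print(likelihood([0, 1, 0]))  # ≈ 0.1089
```

Note that nothing here is Bayesian in the statistical sense: the recursion just shuffles probabilities around with the sum and product rules, which is consistent with the point that these are probabilistic models usable with either frequentist or Bayesian methods.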


(b) I am using the standard definition of a causal model, first due to Neyman, popularized by Rubin. Everyone except some folks in the UK uses this definition now. I am sorry if you are unfamiliar with it.

I suppose you mean this.

It seems to be a framework for the estimation of probability distributions from experimental data, under some independence assumptions.

(c) Statistical models cannot solve causal problems. The number of times you repeat the opposite, while adding the word "clearly" will not affect this fact.

You still didn't define "causal problem" and what you mean by "solve" in this context.

Comment author: IlyaShpitser 12 December 2013 08:07:55PM *  1 point [-]

A "Bayesian network" is not necessarily a Bayesian model. Bayesian networks can be used with frequentist methods, and frequently are (see: the PC algorithm). I believe Pearl called the networks "Bayesian" to honor Bayes, and because of the way Bayes' theorem is used when you shuffle probabilities around. The model does not necessitate Bayesian methods at all.

I don't mean to be rude, but are we operating at the level of string pattern matching, and google searches here?

You still didn't define "causal problem" and what you mean by "solve" in this context.

Sociological definition: "a causal problem" is a problem that people who do causal inference study. Estimating causal effects. Learning cause-effect relationships from data. Mediation analysis. Interference analysis. Decision theory problems. To "solve" means to get the right answer and thereby avoid going to jail for malpractice.


This is a bizarre conversation. Causal problems aren't something esoteric. Imagine if you kept insisting I define what an algebra problem is. There are all sorts of things you could read on this standard topic.

Comment author: Lumifer 12 December 2013 08:37:09PM 2 points [-]

This is a bizarre conversation.

Looks like a perfectly normal conversation where people insist on using different terminology sets :-/

Comment author: IlyaShpitser 12 December 2013 08:52:57PM *  0 points [-]

One of these people has a good reason for preferring his terminology (e.g. it's standard; it's what everyone in the field actually uses). It's like demanding, "Scott, can you define what a qubit is?", etc.

Comment author: V_V 12 December 2013 10:36:06PM 0 points [-]

A "Bayesian network" is not necessarily a Bayesian model. Bayesian networks can be used with frequentist methods, and frequently are (see: the PC algorithm).

You can use frequentist methods to learn Bayesian networks from data, as with any other Bayesian model.

And you can also use Bayesian networks without priors to do things like maximum likelihood estimation, which isn't Bayesian sensu stricto, but I don't think this is relevant to this conversation, is it?

I don't mean to be rude, but are we operating at the level of string pattern matching, and google searches here?

No, we are operating at the level of trying to make sense of your claims.

Sociological definition: "a causal problem" is a problem that people who do causal inference study. Estimating causal effects. Learning cause-effect relationships from data. Mediation analysis. Interference analysis. Decision theory problems. To "solve" means to get the right answer and thereby avoid going to jail for malpractice.

Please try to reformulate without using the word "cause/causal". The term has multiple meanings. You may be using one of them assuming that everybody shares it, but that's not obvious.

Comment author: IlyaShpitser 12 December 2013 11:00:18PM *  1 point [-]

I operate within the interventionist school of causality, whereby a causal effect has something to do with how interventions affect outcome variables. This is of course not the only formalization of causality, there are many many others. However, this particular one has been very influential, almost universally adopted among the empirical sciences, corresponds very closely to people's causal intuitions in many important respects (and has the mathematical machinery to move far beyond when intuitions fail), and has a number of other nice advantages I don't have the space to get into here (for example it helped to completely crack open the "what's the dimension of a hidden variable DAG" problem).


One consequence of the conceptual success of the interventionist school is that there is now a long list of properties we think a formalization of causality has to satisfy (that were first figured out within the interventionist framework). So we can now rule out bad formalizations of causality fairly easily.


I think getting into the interventionist school is too long for even a top level post, let alone a response post buried many levels deep in a thread. If you are interested, you can read a book about it (Pearl's book for example), or some papers.


Prediction algorithms, as used in ML today, completely fail on interventionist causal problems, which correspond, loosely speaking, to trying to figure out the effect of a randomized trial from observational data. I am not trying to give them a hard time about it, because that's not what the emphasis in ML is, which is perfectly fine!

You can think of this problem as just another type of "prediction problem," but this word usage simply does not conform to what people in ML mean by "prediction." There is an entirely different theory, etc.

Comment author: Lumifer 12 December 2013 08:40:03PM 1 point [-]

A causal model to me is a set of joint distributions defined over potential outcome random variables.

Huh?

Can you expand on this, with special attention to the difference between the model and the result of a model, and to the differences from plain-vanilla Bayesian models, which will also produce joint distributions over outcomes?

Comment author: IlyaShpitser 12 December 2013 08:57:35PM *  1 point [-]

Sure. Here's the world's simplest causal graph: A -> B.

Rubin et al, who do not like graphs, will instead talk about a joint distribution:

p(A, B(a=1), B(a=0))

where B(a=1) means 'random variable B under intervention do(a=1)'. Assume binary A for simplicity here.

A causal model over A,B is a set of densities { p(A, B(a=1), B(a=0)) | [ some property ] }. The causal model for this graph would be:

{ p(A, B(a=1), B(a=0)) | B(a=1) is independent of A, and B(a=0) is independent of A }

These assumptions are called 'ignorability assumptions' in the literature, and they correspond to the absence of confounding between A and B. Note that it took counterfactuals to define what 'absence of confounding' means.

A regular Bayesian network model for this graph is just the set of densities over A and B (since this graph has no d-separation statements). That is, it is the set { p(A,B) | [no assumptions] }. This is a 'statistical model,' because it is a set of regular old joint densities, with no mention of counterfactuals or interventions anywhere.

The same graph can correspond to very different things, you have to specify.
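The role of the ignorability assumption above can be illustrated with a quick simulation of potential outcomes (hypothetical numbers, not from the thread): when A is randomized, B(a=1) and B(a=0) are independent of A and conditioning on A recovers E[B(a=1)]; with an unobserved confounder U driving both A and the potential outcomes, it does not.

```python
# Simulating the graph A -> B with and without an unobserved confounder U.
# All parameter values are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Unconfounded world: A is randomized, so ignorability holds.
A = rng.integers(0, 2, n)
B1 = rng.random(n) < 0.8          # potential outcome B(a=1)
B0 = rng.random(n) < 0.3          # potential outcome B(a=0)
B = np.where(A == 1, B1, B0)      # consistency: we observe B(A)
print(B[A == 1].mean(), B1.mean())  # both ≈ 0.8: conditioning recovers E[B(a=1)]

# Confounded world: U drives both A and B(a=1), so ignorability fails.
U = rng.integers(0, 2, n)
A = (rng.random(n) < np.where(U == 1, 0.9, 0.1)).astype(int)
B1 = rng.random(n) < np.where(U == 1, 0.9, 0.5)
B0 = rng.random(n) < 0.3
B = np.where(A == 1, B1, B0)
print(B[A == 1].mean())  # ≈ 0.86: the conditional estimate, biased upward
print(B1.mean())         # ≈ 0.70: the true interventional quantity E[B(a=1)]
```

The gap between the last two numbers is exactly what the "absence of confounding" condition rules out; a model of p(A, B) alone cannot see it, because B(a=1) and B(a=0) never appear in the observed joint.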


You could also have assumptions corresponding to "missing graph edges." For example, in the instrumental variable graph:

Z -> A -> B, with A <- U -> B, where we do not see U, we would have an assumption that states that B(a,z) = B(a,z') for all a,z,z'.
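The instrumental variable setup can likewise be sketched with a toy linear simulation (made-up coefficients, not from the thread): with Z -> A -> B and an unobserved confounder U, the naive regression slope of B on A is biased, while the Wald estimator cov(Z,B)/cov(Z,A) recovers the true effect.

```python
# Toy instrumental variable simulation; the true causal effect of A on B is 2.
import numpy as np

rng = np.random.default_rng(0)
n = 500_000
U = rng.normal(size=n)                   # unobserved confounder
Z = rng.integers(0, 2, n).astype(float)  # instrument: affects B only through A
A = 1.0 * Z + U + rng.normal(size=n)
B = 2.0 * A + U + rng.normal(size=n)

naive = np.cov(A, B)[0, 1] / np.var(A)          # OLS slope, confounded by U
wald = np.cov(Z, B)[0, 1] / np.cov(Z, A)[0, 1]  # IV (Wald) estimate
print(naive)  # ≈ 2.44: biased
print(wald)   # ≈ 2.00: recovers the causal effect
```

The exclusion restriction B(a,z) = B(a,z') is what licenses dividing by cov(Z,A): Z moves B only through A, so any Z-induced variation in B is attributable to the causal effect.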


Please don't say "Bayesian model" when you mean "Bayesian network." People really should say "belief networks" or "statistical DAG models" to avoid confusion.

Comment author: Lumifer 12 December 2013 09:28:00PM *  1 point [-]

Please don't say "Bayesian model" when you mean "Bayesian network."

I do not mean "Bayesian networks". I mean Bayesian models of the kind e.g. described in Gelman's Bayesian Data Analysis.

p(A, B(a=1), B(a=0)) where B(a=1) means 'random variable B under intervention do(a=1)'. Assume binary A for simplicity here.

You still can express this as plain-vanilla conditional densities, can't you? "under intervention do(a=1)" is just a different way of saying "conditional on A=1", no?

A causal model over A,B is a set of densities { p(A, B(a=1), B(a=0)) | [ some property ] }

and

with no mention of counterfactuals or interventions anywhere.

I don't see counterfactuals in your set of densities, and how are "interventions" different from conditioning?

Comment author: IlyaShpitser 12 December 2013 09:43:08PM *  2 points [-]

You still can express this as plain-vanilla conditional densities, can't you?

No. If conditioning was the same as interventions I could make it rain by watering my lawn and become a world class athlete by putting on a gold medal.

Comment author: Lumifer 12 December 2013 09:52:40PM 0 points [-]

If conditioning was the same as interventions I could make it rain by watering my lawn

I don't understand -- can you unroll?

Comment author: IlyaShpitser 12 December 2013 10:45:40PM *  1 point [-]

Well, since p(rain | grass wet) is high, it seems making the grass wet via a garden hose will make rain more likely. Of course you might say that "making the grass wet" and "seeing the grass wet" is not the same thing, in which case I agree!

The fact that these are not the same thing is why people say conditioning and interventions are not the same thing.

You can of course say that you can still use the language of conditional probability to talk about "doing events" vs "seeing events." But then you are just reinventing interventions (as will become apparent if you try to figure out axioms for your notation).
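The lawn example can be made concrete with a tiny two-cause network (made-up numbers, not from the thread): Rain -> WetGrass <- Hose. Conditioning on seeing wet grass raises the probability of rain, while intervening on the grass, do(wet=1), cuts its incoming edges and leaves the probability of rain untouched.

```python
# Rain -> WetGrass <- Hose, with illustrative probabilities.
p_rain = 0.2
p_hose = 0.1

def p_wet(rain, hose):
    """P(wet | rain, hose), noisy-OR style: rain or the hose can wet the grass."""
    return 1.0 - (1 - 0.9 * rain) * (1 - 0.95 * hose)

# Conditioning: P(rain | wet) by summing over the joint distribution.
num = den = 0.0
for rain in (0, 1):
    for hose in (0, 1):
        pr = (p_rain if rain else 1 - p_rain) * (p_hose if hose else 1 - p_hose)
        w = p_wet(rain, hose)
        den += pr * w
        num += pr * w * rain
print(num / den)  # ≈ 0.71: seeing wet grass makes rain much more likely

# Intervention: do(wet=1) severs WetGrass's incoming edges, so Rain's
# marginal is unchanged: P(rain | do(wet=1)) = P(rain).
print(p_rain)     # 0.2: hosing the lawn does not make rain more likely
```

The jump from 0.2 to roughly 0.71 under conditioning, versus no change under intervention, is the quantitative version of "seeing the grass wet" versus "making the grass wet".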