gwern comments on Open thread, 23-29 June 2014 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Thanks for reading.
I tried to read that, but I don't think I understood much of it or its connection to this topic. I'll save that whole festschrift for later; there were some interesting titles in the table of contents.
I agree I did sort of conflate causal networks and Bayesian networks in general... I didn't realize there was no clean way of having both at the same time.
It might help if I describe a concrete way to test my claim using just causal networks: generate a randomly connected causal network with x nodes and y arrows, where each arrow has some random noise in it; count how many pairs of nodes are in a causal relationship; now, 1000 times, initialize the root nodes to random values and generate a possible state of the network, storing the values for each node; count how many pairwise correlations there are between all the nodes using the 1000 samples (using an appropriate significance test & alpha if one wants); divide the # of causal relationships by the # of correlations, and store it; return to the beginning and resume with x+1 nodes and y+1 arrows... As one graphs each x against its respective estimated fraction, does the fraction head toward 0 as x increases? My thesis is that it does.
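The procedure above could be sketched roughly as follows; the edge-weight range, noise scale, and the crude |r| significance cutoff are my own illustrative assumptions, not anything fixed by the description:

```python
# Monte Carlo check: in a random linear causal network, what fraction of
# significantly correlated node pairs are actually causally related?
import math
import random


def pearson(xs, ys):
    """Sample Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = math.sqrt(sum((a - mx) ** 2 for a in xs))
    vy = math.sqrt(sum((b - my) ** 2 for b in ys))
    return cov / (vx * vy) if vx and vy else 0.0


def experiment(n, m, samples=1000, seed=0):
    rng = random.Random(seed)
    # Random DAG: node index doubles as a topological order.
    possible = [(i, j) for i in range(n) for j in range(i + 1, n)]
    edges = rng.sample(possible, min(m, len(possible)))
    # Arbitrary weight range (assumption), sign chosen at random.
    weights = {e: rng.uniform(0.5, 1.5) * rng.choice([-1, 1]) for e in edges}

    # Count ordered pairs (i, j) with a directed path i -> ... -> j.
    adj = {i: [] for i in range(n)}
    for i, j in edges:
        adj[i].append(j)
    n_causal = 0
    for s in range(n):
        seen, stack = set(), [s]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        n_causal += len(seen)

    # Draw samples: roots are pure noise; every other node is a weighted
    # sum of its parents plus unit Gaussian noise.
    parents = {j: [] for j in range(n)}
    for (i, j), w in weights.items():
        parents[j].append((i, w))
    data = []
    for _ in range(samples):
        x = [0.0] * n
        for j in range(n):
            x[j] = sum(w * x[i] for i, w in parents[j]) + rng.gauss(0, 1)
        data.append(x)

    # Count unordered pairs whose sample correlation clears a rough
    # 5% cutoff, |r| > 1.96 / sqrt(N).
    cutoff = 1.96 / math.sqrt(samples)
    cols = list(zip(*data))
    n_corr = sum(
        1
        for i in range(n)
        for j in range(i + 1, n)
        if abs(pearson(cols[i], cols[j])) > cutoff
    )
    frac = n_causal / n_corr if n_corr else float("nan")
    return n_causal, n_corr, frac


if __name__ == "__main__":
    for n in (5, 10, 20, 40):
        print(n, experiment(n, m=2 * n))
```

Graphing n against the returned fraction over many graph sizes (and many seeds per size, to cut variance) would be the actual test; how the fraction behaves also depends on the assumed edge density m(n), which the description leaves open.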
Interesting, and it reminds me of what happens in physics classes: people learn how to memorize teachers' passwords, but go on thinking in folk-Aristotelian physics fashion, as revealed by simple multiple-choice tests designed to home in on the appealing folk-physics misconceptions vs 'unnatural' Newtonian mechanics. That's a plausible explanation, but I wonder if anyone has established more directly that people really do reason causally even when they know they're not supposed to? Offhand, it doesn't really sound like any bias I can think of. It shouldn't be too hard to develop such a test for teachers of causality material: just take common student misconceptions or dead ends and refine them into a multiple-choice test. I'd bet stats 101 courses have as many problems as intro physics courses.
That seems to make sense to me.
I'm not sure about marginal dependence.
I'm afraid I don't understand you here. If we draw an arrow from A to B, either as a causal or Bayesian net, because we've observed correlation or causation (maybe we actually randomized A for once), how can there not be a relationship in any underlying reality and there actually be an 'independence' and the graph be 'unfaithful'?
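For what it's worth, the textbook way a graph can be 'unfaithful' is exact cancellation of path coefficients: a real causal arrow can coexist with zero marginal correlation. A toy linear sketch (the weights are deliberately chosen to cancel; that choice is the whole point, and real data would rarely cancel exactly):

```python
# Unfaithfulness by path cancellation: X causes Y both directly (+1)
# and through Z (+1 then -1), so the total effect of X on Y -- and
# hence the X-Y correlation -- is zero despite the causal arrows.
import math
import random


def cancelled_correlation(samples=50000, seed=0):
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(samples):
        x = rng.gauss(0, 1)
        z = x + rng.gauss(0, 0.1)      # X -> Z, weight +1
        y = x - z + rng.gauss(0, 0.1)  # X -> Y, weight +1; Z -> Y, weight -1
        xs.append(x)
        ys.append(y)
    mx, my = sum(xs) / samples, sum(ys) / samples
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = math.sqrt(sum((a - mx) ** 2 for a in xs))
    vy = math.sqrt(sum((b - my) ** 2 for b in ys))
    return cov / (vx * vy)


print(cancelled_correlation())  # hovers near 0, though X is a cause of Y
```

So an arrow drawn because of an observed correlation is safe in that sense, but the converse fails: observing independence does not rule out causation, which is why faithfulness has to be assumed rather than derived.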
Anyway, it seems that either way, there might be something to this idea. I'll keep it in mind for the future.