Raiden comments on The Extraordinary Link Between Deep Neural Networks and the Nature of the Universe - Less Wrong

Post author: morganism 10 September 2016 07:13PM


Comment author: Manfred 10 September 2016 11:48:43PM 4 points

I'd blame the MIT press release organ for being clickbait, but the paper isn't much better. It's almost entirely flash with very little substance. This is not to say there's no math - the math just doesn't much apply to the real world. For example, the idea that deep neural networks work well because they recreate the hierarchical generative process for the data is a common misconception.

And then from this starting point you want to start speculating?

Comment author: Raiden 14 September 2016 03:48:04PM 1 point

Can you explain why that's a misconception? Or at least point me to a source that explains it?

I've started working with neural networks lately and I don't know too much yet, but the idea that they recreate the generative process behind a system, at least implicitly, seems almost obvious. If I train a neural network on a simple linear function, the weights of the network will probably change to reflect the coefficients of that function. Does this not generalize?
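
Raiden's linear example can be made concrete with a minimal sketch (the coefficients 3 and 2 are illustrative, not from the thread): a single linear unit trained by gradient descent on data generated by y = 3x + 2 does end up with a weight and bias that mirror the true coefficients.

```python
# Minimal sketch of the linear case: fit y_hat = w*x + b to data from y = 3x + 2
# and observe that (w, b) converge toward the generating coefficients.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=200)
y = 3.0 * x + 2.0  # the "generative process" behind this toy data

w, b = 0.0, 0.0
lr = 0.1
for _ in range(2000):
    err = (w * x + b) - y
    # gradient of mean squared error with respect to w and b
    w -= lr * np.mean(err * x)
    b -= lr * np.mean(err)

print(w, b)  # approaches (3.0, 2.0), mirroring the true coefficients
```

The question in the thread is whether this picture survives once the generative process is far richer than the model, as in the dog/cat example below.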

Comment author: Manfred 14 September 2016 06:15:19PM 2 points

Well, consider a neural net for distinguishing dogs from cats. This neural network might develop features that look like "dog-like eyes" and "cat-like eyes," which are pattern-matched across the image. Images with more activation on the first feature are claimed to be dogs and images with more activation on the second feature are claimed to be cats, along with input from many other features. This is fairly typical-sounding.

Now imagine how bonkers a neural net would have to be in order to reproduce the generative process behind the images! Leaving aside simulations of the early universe, our neural network would still need a solid understanding of the biology of dogs and cats, the different grooming and adornment practices, the macroscopic physics and physiology that lead to poses, and the preferences of people taking and storing photographs.
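
To make Manfred's picture concrete, here is a minimal sketch of the kind of discriminative classifier he describes (the architecture and layer sizes are assumed for illustration, not taken from the thread). The learned convolutional filters act like pattern-matched feature detectors, and the final layer just weighs their activations into dog/cat scores; nothing in the model represents biology, grooming, poses, or photography.

```python
# Hypothetical dog-vs-cat classifier: learned filters are local feature
# detectors ("dog-like eyes", "cat-like eyes") pattern-matched across the
# image; a linear layer combines their pooled activations into class scores.
import torch
import torch.nn as nn

class DogCatClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # average each feature map's activation
        )
        self.classifier = nn.Linear(32, 2)  # scores for [dog, cat]

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.classifier(h)

model = DogCatClassifier()
scores = model(torch.randn(1, 3, 64, 64))  # logits for a random 64x64 image
print(scores.shape)  # torch.Size([1, 2])
```

The point of the sketch is that the whole model is a feature-matching discriminator: it can separate the two classes without containing anything that could regenerate the images.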

Comment author: Tyrin 25 September 2016 11:50:16PM 0 points

Isn't the idea more that the neural network just learns rough subgraphs of the underlying DAG that captures the causal structure up to quantum detail? Whole-part relationships are such subgraphs: a person being present causes a face to be present, which causes eyes to be present, etc.
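
A toy sketch of the whole-part fragment Tyrin describes (the node names and structure are purely illustrative): edges run from cause to effect, and a network's feature hierarchy would, on this view, mirror a rough subgraph of such a DAG, with recognition running opposite to causation (eye detectors feeding face detectors feeding a person detector).

```python
# Illustrative whole-part causal DAG: each key is a cause, each listed node an effect.
causal_dag = {
    "person_present": ["face_present"],
    "face_present": ["left_eye_present", "right_eye_present"],
    "left_eye_present": [],
    "right_eye_present": [],
}

# Print the cause -> effect edges; a feature hierarchy would traverse them in reverse.
for cause, effects in causal_dag.items():
    for effect in effects:
        print(f"{cause} -> {effect}")
```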