Punoxysm comments on [Link] An exact mapping between the Variational Renormalization Group and Deep Learning - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (9)
Universality of neural networks is a known result (in the sense that a basic fully-connected net with an input layer, one hidden layer, and an output layer can approximate any function given sufficiently many hidden nodes).
Nitpick: Any continuous function on a compact set. Still, I think this should include most real-life problems.
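The nitpick can be seen empirically. A hedged sketch of my own (not from the paper): a one-hidden-layer net with random tanh features and a least-squares readout approximating a continuous function (here sin) on the compact set [-pi, pi]. The feature count, weight scale, and fitting method are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_hidden = 200

x = np.linspace(-np.pi, np.pi, 500)[:, None]   # inputs on a compact set
y = np.sin(x).ravel()                          # target continuous function

W = rng.normal(scale=2.0, size=(1, n_hidden))  # random input-to-hidden weights
b = rng.normal(scale=2.0, size=n_hidden)       # random hidden biases
H = np.tanh(x @ W + b)                         # hidden-layer activations

# Fit only the output layer by least squares; with enough hidden nodes the
# approximation error on the compact set becomes small.
w_out, *_ = np.linalg.lstsq(H, y, rcond=None)
max_err = np.max(np.abs(H @ w_out - y))
print(max_err)
```

Shrinking `n_hidden` makes the error grow, which is the "given sufficient hidden nodes" caveat in action.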
Universality of functions: Yes (inefficiently so). But the claim made in the paper goes deeper.
Can you explain? I don't know much about renormalization groups.
Physics has lots of structure that is local. 'Averaging' over local structures can reveal higher-level structure. On rereading I realized that the critical choice remains in the way the RG transformation is constructed, so the approach isn't as general as I initially imagined it to be.
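The 'averaging over local structures' idea can be sketched as one block-spin (Kadanoff-style) coarse-graining step on a spin lattice. The block size and decimation rule here are my illustrative choices; as noted above, the critical choice in an RG scheme is exactly how this step is constructed.

```python
import numpy as np

rng = np.random.default_rng(1)
spins = rng.choice([-1, 1], size=(8, 8))   # fine-grained spin configuration

# Average each non-overlapping 2x2 block of the lattice, then project the
# block average back to a single +/-1 spin (majority rule, ties -> +1).
blocks = spins.reshape(4, 2, 4, 2).mean(axis=(1, 3))
coarse = np.where(blocks >= 0, 1, -1)

print(coarse.shape)  # the 8x8 lattice is reduced to a 4x4 lattice
```

Iterating this step flows the system toward larger length scales, which is where the higher-level structure shows up.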