
Punoxysm comments on [Link] An exact mapping between the Variational Renormalization Group and Deep Learning - Less Wrong Discussion

Post author: Gunnar_Zarncke 08 December 2014 02:33PM




Comment author: Punoxysm 08 December 2014 05:44:40PM 1 point

> could be made or is already conceptually general enough to learn everything there is to learn

Universality of neural networks is a known result (in the sense that a basic fully-connected net with an input layer, a hidden layer, and an output layer can approximate any function given sufficiently many hidden nodes).

Comment author: skeptical_lurker 10 December 2014 12:13:24AM 1 point

Nitpick: Any continuous function on a compact set. Still, I think this should include most real-life problems.
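(A minimal sketch of this universality claim, not from the thread: a single tanh hidden layer with random input weights and a least-squares readout, approximating a continuous function, sin, on the compact interval [-pi, pi]. The fit improves as n_hidden grows, which is the qualitative content of the theorem; all names and sizes here are illustrative assumptions.)

```python
# Minimal universal-approximation sketch: one tanh hidden layer with random
# weights, linear readout fit by least squares (no gradient descent needed).
import numpy as np

rng = np.random.default_rng(0)

# Target: a continuous function on the compact set [-pi, pi].
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

n_hidden = 50                    # illustrative; error shrinks as this grows
W = rng.normal(size=(1, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(x @ W + b)           # hidden activations, shape (200, n_hidden)

# Solve for the output weights by least squares.
beta, *_ = np.linalg.lstsq(H, y, rcond=None)
y_hat = H @ beta

print("max abs error:", np.max(np.abs(y_hat - y)))
```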

Comment author: Gunnar_Zarncke 08 December 2014 05:46:52PM 0 points

Universality of functions: Yes (inefficiently so). But the claim made in the paper goes deeper.

Comment author: Punoxysm 08 December 2014 08:55:55PM 0 points

Can you explain? I don't know much about renormalization groups.

Comment author: Gunnar_Zarncke 08 December 2014 09:31:59PM 0 points

> The idea behind RG is to find a new coarse-grained description of the spin system where one has "integrated out" short distance fluctuations.

Physics has lots of structure that is local. 'Averaging' over local structures can reveal higher-level structure. On rereading I realized that the critical choice remains in the way the RG transformation is constructed, so the approach isn't as general as I initially imagined it to be.
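(To make 'averaging over local structures' concrete, here is a minimal sketch of one standard coarse-graining scheme: majority-rule block spins on a 2D Ising configuration, which 'integrates out' fluctuations below the block scale. This is not the paper's variational RG, and the blocking rule itself is exactly the kind of choice noted above; block_spin and the 2x2 block size are illustrative assumptions.)

```python
# Block-spin (Kadanoff-style) coarse-graining sketch: replace each b x b
# block of +/-1 spins with the sign of its sum (majority rule).
import numpy as np

rng = np.random.default_rng(0)

def block_spin(spins: np.ndarray, b: int = 2) -> np.ndarray:
    """Coarse-grain an (N, N) array of +/-1 spins by b x b majority rule."""
    n = spins.shape[0] // b
    blocks = spins[:n * b, :n * b].reshape(n, b, n, b)
    sums = blocks.sum(axis=(1, 3))       # total spin of each block
    coarse = np.sign(sums)
    # Break ties (block sum == 0) randomly so output stays in {-1, +1}.
    ties = coarse == 0
    coarse[ties] = rng.choice([-1, 1], size=ties.sum())
    return coarse.astype(int)

spins = rng.choice([-1, 1], size=(64, 64))   # a random spin configuration
coarse = block_spin(spins)                    # 32 x 32 coarse-grained lattice
print(spins.shape, "->", coarse.shape)
```

Iterating block_spin yields ever coarser descriptions; which long-distance structure survives depends on the chosen blocking rule, which is why the construction is less scheme-independent than it first appears.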