
Daniel_Burfoot comments on The Extraordinary Link Between Deep Neural Networks and the Nature of the Universe - Less Wrong Discussion

1 Post author: morganism 10 September 2016 07:13PM




Comment author: Daniel_Burfoot 11 September 2016 03:54:56PM 1 point

How can neural networks approximate functions well in practice, when the set of possible functions is exponentially larger than the set of practically possible networks?

This question answers itself. If neural networks could really approximate every possible function, they could never generalize. That is the whole point of statistical learning theory: you get a Probably Approximately Correct (PAC) generalization bound when 1) your learning machine achieves good empirical accuracy and 2) the set of functions expressible by the machine is, in a suitable sense (e.g. its cardinality or VC dimension), small relative to the amount of training data.
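The trade-off described here can be made concrete with the standard finite-hypothesis-class bound (Hoeffding's inequality plus a union bound): with probability at least 1 - delta, the true error of every hypothesis is within sqrt((ln|H| + ln(2/delta)) / (2m)) of its empirical error, where |H| is the number of expressible functions and m is the number of training samples. A minimal sketch (the function name is illustrative, not from any library):

```python
import math

def pac_margin(num_hypotheses, num_samples, delta=0.05):
    """Hoeffding + union bound for a finite hypothesis class:
    with probability >= 1 - delta, |true error - empirical error|
    is at most this margin, simultaneously for every hypothesis."""
    return math.sqrt(
        (math.log(num_hypotheses) + math.log(2 / delta)) / (2 * num_samples)
    )

# A restricted class gives a tight, meaningful guarantee:
restricted = pac_margin(num_hypotheses=10**6, num_samples=100_000)

# The class of ALL boolean functions on 20 binary inputs has
# 2**(2**20) members; the bound becomes vacuous (margin > 1),
# illustrating why "can express every function" rules out generalization:
unrestricted = pac_margin(num_hypotheses=2**(2**20), num_samples=100_000)
```

The second call makes the comment's point quantitative: once ln|H| dwarfs the sample count, the bound says nothing, so a machine that could express every function would carry no generalization guarantee at all.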