Houshalter comments on The Extraordinary Link Between Deep Neural Networks and the Nature of the Universe - Less Wrong

1 Post author: morganism 10 September 2016 07:13PM




Comment author: Houshalter 11 September 2016 01:41:51PM 2 points

I have another theory on how Deep Learning works: http://lesswrong.com/lw/m9p/approximating_solomonoff_induction/

The idea is that neural networks are a (somewhat crude) approximation of Solomonoff induction.

Comment author: The_Jaded_One 12 September 2016 09:40:05AM 0 points

Basically every learning algorithm can be seen as a crude approximation of Solomonoff induction. What makes one approximation better than the others?

Comment author: Houshalter 12 September 2016 11:41:32AM 1 point

Well, I try to demonstrate that you can derive neural networks from first principles, starting with SI. I don't think you can derive decision trees or other ML algorithms in a similar way.

Further, NNs are completely general. In theory, recurrent neural nets can learn to simulate any computer program, or at least logic circuits. With certain modifications they can even be given a memory "tape" like a Turing machine and become Turing-complete. Most machine learning methods have no such property, or anything like it: they can only learn "shallow" functions and can't handle recurrence.
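To make the claim concrete, here is a minimal sketch (my own illustration, not from the linked post) of the two ingredients it rests on: a single threshold neuron can compute NAND, which is universal for logic circuits, and a recurrent connection lets a unit hold state across time steps, here a 1-bit latch. The weights and biases are hand-chosen for illustration, not learned.

```python
import numpy as np

def step(x):
    """Heaviside threshold activation: 1 if x > 0, else 0."""
    return float(x > 0)

def nand(a, b):
    # A single threshold unit with weights [-1, -1] and bias 1.5
    # computes NAND, which is universal: any logic circuit can be
    # built from a feedforward net of such units.
    return step(np.dot([-1.0, -1.0], [a, b]) + 1.5)

def latch_step(h, set_bit, reset_bit):
    # A recurrent unit acting as a 1-bit latch: the self-connection
    # (weight 2 on the previous hidden state h) holds the stored bit,
    # set_bit turns it on, reset_bit (stronger weight) turns it off.
    return step(2.0 * h + 3.0 * set_bit - 4.0 * reset_bit - 1.0)
```

The feedforward part gives circuit-universality; the recurrent self-connection gives persistent state, which is exactly what "shallow" learners like decision trees lack.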