Daniel_Burfoot comments on Open thread, Nov. 16 - Nov. 22, 2015 - Less Wrong Discussion

7 Post author: MrMind 16 November 2015 08:03AM

Comment author: Daniel_Burfoot 16 November 2015 02:38:13PM 2 points [-]

I agree that this is a huge problem, but RNNs and CNNs aren't the whole of ML (random forests, for instance, are a different category of algorithm entirely). You should study the ML that has the prettiest math. Try VC theory, Pearl's work on graphical models, algorithmic information theory, and MaxEnt as developed by Jaynes and applied by della Pietra to statistical machine translation. Hinton's early work on topics like Boltzmann machines and the Wake-Sleep algorithm is also quite "deep".
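To make the MaxEnt recommendation concrete, here's a toy sketch (my own illustration, not from the comment) of Jaynes's principle: among all distributions over a die's faces with a given mean, the maximum-entropy one has the Gibbs form p_i ∝ exp(λ·i), and since the mean is monotone in λ, we can solve for λ by bisection.

```python
import math

def maxent_die(target_mean, faces=6, tol=1e-10):
    """Maximum-entropy distribution over faces 1..faces with a
    fixed mean. MaxEnt implies the Gibbs form p_i ∝ exp(lam * i);
    the mean increases monotonically in lam, so bisect to find it."""
    def mean_for(lam):
        w = [math.exp(lam * i) for i in range(1, faces + 1)]
        z = sum(w)
        return sum(i * wi for i, wi in zip(range(1, faces + 1), w)) / z

    lo, hi = -50.0, 50.0  # bracket for the Lagrange multiplier
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(lam * i) for i in range(1, faces + 1)]
    z = sum(w)
    return [wi / z for wi in w]

# A "loaded" die with mean 4.5 tilts probability toward high faces;
# a mean of 3.5 recovers the uniform distribution (lam = 0).
p = maxent_die(4.5)
```

With `target_mean=3.5` the solver returns the uniform distribution, which is the classic sanity check: with no constraint beyond the trivial mean, MaxEnt picks the least-informative distribution.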

Comment author: cousin_it 16 November 2015 04:25:47PM 1 point [-]

Yeah, I suppose our instincts agree, because I've already studied all these things except the last two :-)

Comment author: V_V 04 December 2015 11:37:54PM 0 points [-]

Have fun with generative models such as variational Bayesian neural networks, generative adversarial networks, applications of Fokker–Planck/Langevin/Hamiltonian dynamics to ML and NNs in particular, and so on. There are certainly lots of open problems for the mathematically inclined which are much more interesting than "Look ma, my neural networks made psychedelic artwork and C-looking code with more or less matched parentheses".

For instance, this paper provides pointers to some of these methods and describes a class of failure modes that are still difficult to address.