Open thread, Nov. 16 - Nov. 22, 2015 - Less Wrong Discussion
I've been hearing about all this amazing stuff done with recurrent neural networks, convolutional neural networks, random forests, etc. The problem is that it feels like voodoo to me. "I've trained my program to generate convincing-looking C code! It gets the indentation right, but the variable use is a bit off. Isn't that cool?" I'm not sure; it sounds like you don't understand what your program is doing. That's pretty much why I'm not studying machine learning right now. What do you think?
What this is really saying is: "Hey, convincing-looking C code can be modeled by an RNN, i.e. a state-transition version ("recurrent") of a complex non-linear model which is ultimately a generalization of logistic regression ("neural network")! And the model can be practically 'learned', i.e. fitted empirically, albeit with no optimality or accuracy guarantees of any kind. The variable use is a bit off, though. Isn't this cool/Does this tell us anything important?"
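To make the "state-transition generalization of logistic regression" description concrete, here is a minimal sketch of one character-level RNN step and a sampling loop. Everything here is assumed for illustration: the tiny C-flavored vocabulary, the hidden size, and the randomly initialized (untrained) weights are all made up, so the sampled "code" will be gibberish; the point is only the structure, where the output layer is exactly multinomial logistic regression applied to a recurrently updated hidden state.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy character vocabulary; the original comment gives no setup.
vocab = list("{}();=int ")
V, H = len(vocab), 16  # vocab size, hidden-state size (arbitrary choices)

# Randomly initialized parameters -- untrained, illustration only.
Wxh = rng.normal(0.0, 0.1, (H, V))  # input -> hidden
Whh = rng.normal(0.0, 0.1, (H, H))  # hidden -> hidden: the "recurrent" state transition
Why = rng.normal(0.0, 0.1, (V, H))  # hidden -> output logits
bh = np.zeros(H)
by = np.zeros(V)

def step(h, x_idx):
    """One RNN step: update the hidden state, then produce a distribution
    over the next character. The logits->softmax output layer is exactly
    multinomial logistic regression on the hidden state h."""
    x = np.zeros(V)
    x[x_idx] = 1.0                            # one-hot encode the input character
    h = np.tanh(Wxh @ x + Whh @ h + bh)       # non-linear state transition
    logits = Why @ h + by
    p = np.exp(logits - logits.max())         # numerically stable softmax
    p /= p.sum()
    return h, p

def sample(n, seed_idx=0):
    """Sample n characters by feeding each output back in as the next input."""
    h, idx, out = np.zeros(H), seed_idx, []
    for _ in range(n):
        h, p = step(h, idx)
        idx = rng.choice(V, p=p)
        out.append(vocab[idx])
    return "".join(out)

print(sample(20))
```

"Learning" the model, in the comment's sense, would mean fitting Wxh, Whh, and Why by gradient descent on next-character prediction error over a corpus of real C code; nothing in that procedure guarantees the fitted model respects scoping rules, which is why the indentation can come out right while the variable use stays "a bit off".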