
bogus comments on Open thread, Nov. 16 - Nov. 22, 2015 - Less Wrong Discussion

Post author: MrMind 16 November 2015 08:03AM




Comment author: cousin_it 16 November 2015 02:23:04PM, 9 points

I've been hearing about all this amazing stuff done with recurrent neural networks, convolutional neural networks, random forests, etc. The problem is that it feels like voodoo to me. "I've trained my program to generate convincing-looking C code! It gets the indentation right, but the variable use is a bit off. Isn't that cool?" I'm not sure; it sounds like you don't understand what your program is doing. That's pretty much why I'm not studying machine learning right now. What do you think?

Comment author: bogus 17 November 2015 01:28:10PM, 1 point

"I've trained my program to generate convincing looking C code! It gets the indentation right, but the variable use is a bit off. Isn't that cool?"

What this is really saying is: "Hey, convincing-looking C code can be modeled by an RNN, i.e. a state-transition ('recurrent') version of a complex non-linear model that is ultimately a generalization of logistic regression (a 'neural network')! And the model can be practically 'learned', i.e. fitted empirically, albeit with no optimality or accuracy guarantees of any kind. The variable use is a bit off, though. Isn't this cool? Does this tell us anything important?"
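To make the "generalization of logistic regression" point concrete, here is a minimal NumPy sketch (the function names, dimensions, and the choice of tanh are illustrative assumptions, not anything specified in the comment above): logistic regression applies one linear map followed by a squashing nonlinearity, while an RNN applies the same linear-map-plus-nonlinearity template at every time step, feeding its own hidden state back in as part of the input.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_regression(x, w, b):
    # Plain logistic regression: one linear map squashed through a sigmoid.
    return sigmoid(w @ x + b)

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    # One recurrent step: the same linear-map-plus-nonlinearity template,
    # except the "input" now also includes the previous hidden state h_prev.
    # That feedback is what makes the model a state-transition system.
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

def run_rnn(xs, W_xh, W_hh, b_h, h0):
    # Unroll the recurrence over a whole input sequence.
    h = h0
    for x_t in xs:
        h = rnn_step(x_t, h, W_xh, W_hh, b_h)
    return h

# Toy usage with random weights (untrained, so the output is meaningless;
# in practice the weights would be fitted empirically, e.g. by gradient
# descent, with no optimality or accuracy guarantees).
rng = np.random.default_rng(0)
n_in, n_hid = 4, 8
W_xh = rng.normal(scale=0.1, size=(n_hid, n_in))
W_hh = rng.normal(scale=0.1, size=(n_hid, n_hid))
b_h = np.zeros(n_hid)
xs = [rng.normal(size=n_in) for _ in range(10)]
h_final = run_rnn(xs, W_xh, W_hh, b_h, np.zeros(n_hid))
```

Nothing in this sketch requires the fitted model to "understand" C; it only has to reproduce the statistics of the training text well enough to get, say, indentation right while still misusing variables.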