V_V comments on Open thread, Nov. 16 - Nov. 22, 2015 - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I've been hearing about all this amazing stuff done with recurrent neural networks, convolutional neural networks, random forests, etc. The problem is that it feels like voodoo to me. "I've trained my program to generate convincing-looking C code! It gets the indentation right, but the variable use is a bit off. Isn't that cool?" I'm not sure; it sounds like you don't understand what your program is doing. That's pretty much why I'm not studying machine learning right now. What do you think?
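For context, the "generate C-looking code" demos are typically character-level recurrent networks: the model reads text one character at a time, carries a hidden state, and outputs a probability distribution over the next character. Below is a minimal sketch of a single forward step, with toy weights and dimensions chosen arbitrarily for illustration (no training loop, and none of the names come from any specific experiment):

```python
import numpy as np

# Toy sizes, chosen arbitrarily: a 64-character vocabulary and a
# 32-unit hidden state. Real demos use far larger models.
rng = np.random.default_rng(0)
vocab_size, hidden_size = 64, 32

Wxh = rng.normal(0, 0.01, (hidden_size, vocab_size))   # input -> hidden
Whh = rng.normal(0, 0.01, (hidden_size, hidden_size))  # hidden -> hidden
Why = rng.normal(0, 0.01, (vocab_size, hidden_size))   # hidden -> output
bh = np.zeros(hidden_size)
by = np.zeros(vocab_size)

def step(h, char_index):
    """One RNN step: update the hidden state given the current
    character, and return a distribution over the next character."""
    x = np.zeros(vocab_size)
    x[char_index] = 1.0                    # one-hot encode current char
    h = np.tanh(Wxh @ x + Whh @ h + bh)    # new hidden state
    logits = Why @ h + by
    p = np.exp(logits - logits.max())      # numerically stable softmax
    return h, p / p.sum()

h = np.zeros(hidden_size)                  # initial hidden state
h, p = step(h, char_index=3)               # p sums to 1 over the vocabulary
```

To generate text, you would sample a character from `p`, feed it back in as the next input, and repeat. The model has no grammar for C built in; any indentation it "gets right" is a statistical regularity absorbed from the training text, which is exactly why its outputs can look plausible locally while the variable use is off.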
The trippy pictures and the vaguely C-looking code are just cool stunts, not serious experiments. People may be tempted to fall for the hype; sometimes a reality check is helpful.
That said, neural networks really do perform well on difficult tasks such as visual object recognition and machine translation, for reasons that are indeed not fully understood.
Sounds like a good reason to study the field in order to understand why they can do what they do, and why they can't do what they can't do, doesn't it?