V_V comments on Open thread, Nov. 16 - Nov. 22, 2015 - Less Wrong Discussion

7 Post author: MrMind 16 November 2015 08:03AM

Comment author: cousin_it 16 November 2015 02:23:04PM *  9 points [-]

I've been hearing about all this amazing stuff done with recurrent neural networks, convolutional neural networks, random forests, etc. The problem is that it feels like voodoo to me. "I've trained my program to generate convincing-looking C code! It gets the indentation right, but the variable use is a bit off. Isn't that cool?" I'm not sure; it sounds like you don't understand what your program is doing. That's pretty much why I'm not studying machine learning right now. What do you think?
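The experiment being described sounds like a character-level language model (likely Karpathy's char-RNN demo, which generated C-looking code from the Linux kernel source). As a much simpler stand-in, a character-level Markov chain makes the same point: a model that only learns which characters tend to follow which contexts can reproduce surface structure like keywords and indentation while having no notion of what a variable is. The corpus and function names below are illustrative, not from any real experiment.

```python
import random
from collections import defaultdict

def train_char_model(text, order=3):
    """Map each `order`-character context to the characters observed after it."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def sample(model, seed_text, length, order=3, rng=None):
    """Generate text one character at a time by sampling from learned contexts."""
    rng = rng or random.Random(0)  # fixed seed so runs are repeatable
    out = seed_text
    for _ in range(length):
        followers = model.get(out[-order:])
        if not followers:  # unseen context: nothing to sample from
            break
        out += rng.choice(followers)
    return out

# Tiny "training set" of C source (illustrative only)
corpus = "int main(void) {\n    int x = 0;\n    int y = 1;\n    return x + y;\n}\n"
model = train_char_model(corpus)
print(sample(model, "int ", 40))
```

The output will mix fragments of plausible C (declarations, braces, four-space indentation) purely because those character patterns were frequent in the corpus, which is roughly the sense in which the generated code "gets the indentation right" without the model understanding variables.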

Comment author: V_V 05 December 2015 01:21:27AM *  0 points [-]

The trippy pictures and the vaguely C-looking code are just cool stunts, not serious experiments. People may be tempted to fall for the hype; sometimes a reality check is helpful.

That said, neural networks really do perform well on difficult tasks such as visual object recognition and machine translation, and indeed for reasons that are not fully understood.

Sounds like a good reason to study the field in order to understand why they can do what they do, and why they can't do what they can't do, doesn't it?