
Douglas_Knight comments on Open thread, Nov. 16 - Nov. 22, 2015 - Less Wrong Discussion

7 Post author: MrMind 16 November 2015 08:03AM




Comment author: Douglas_Knight 16 November 2015 07:46:56PM 4 points

"it sounds like you don't understand what your program is doing"

That is ambiguous. Do you mean the final output program or the ML program?

Most ML programs seem pretty straightforward to me (search, as Ilya said); the black magic is the choice of hyperparameters. How do people know how many layers they need? Also, I think training time is a bit opaque, but probably easy to measure. In particular, by mentioning both CNN and RNN, you imply that the C and R are mysterious, while they seem to me the most comprehensible of the choices.
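To illustrate the "black magic" point: in practice, the number of layers is usually picked by trial and error against a validation set, not derived from theory. A minimal sketch of that process, where `validation_score` is a hypothetical stand-in for an expensive training run (the peak depth and scores here are made up for illustration):

```python
import random

random.seed(0)

def validation_score(n_layers):
    # Stand-in for "train a network with n_layers and measure
    # validation accuracy". In reality each call is a full,
    # expensive training run; here we fake a curve that peaks
    # at an unknown depth of 4, with a little noise.
    best_depth = 4
    return 0.9 - 0.05 * abs(n_layers - best_depth) + random.uniform(-0.01, 0.01)

# The only honest answer to "how many layers?" is a search over depths:
results = {n: validation_score(n) for n in range(1, 9)}
chosen_depth = max(results, key=results.get)
```

Nothing in the loop explains *why* the chosen depth works; the practitioner just keeps whatever scored best, which is exactly what makes hyperparameter choice feel like black magic.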

But your further comments suggest that you mean the program generated by the ML algorithm. This isn't new. Genetic algorithms and neural nets have been producing incomprehensible results for decades. What has changed is that new learning algorithms have pushed neural nets further and judicious choice of hyperparameters has allowed them to exploit more data and more computing power, while genetic algorithms seem to have run out of steam. The bigger the network or algorithm that is the output, the more room there is for it to be incomprehensible.