aberglas comments on "Neural Turing Machines" - Less Wrong

Post author: Prankster 31 October 2014 08:54AM




Comment author: aberglas 04 November 2014 04:14:54AM (1 point)

I hate the term "Neural Network", as do many serious people working in the field.

There are Perceptrons which were inspired by neurons but are quite different. There are other related techniques that optimize in various ways. There are real neurons which are very complex and rather arbitrary. And then there is the greatly simplified Integrate and Fire (IF) abstraction of a neuron, often with Hebbian learning added.
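The integrate-and-fire abstraction mentioned above can be sketched in a few lines. This is a toy leaky variant with hypothetical parameter values, not anything specified in the comment: the unit accumulates input, fires when a threshold is crossed, and resets.

```python
# A minimal leaky integrate-and-fire (IF) neuron sketch.
# Threshold and leak values are illustrative, not from the comment.
def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Accumulate weighted input each step; emit a spike and reset at threshold."""
    v = 0.0
    spikes = []
    for x in inputs:
        v = leak * v + x          # leaky integration of input current
        if v >= threshold:        # fire when membrane potential crosses threshold
            spikes.append(1)
            v = 0.0               # reset after the spike
        else:
            spikes.append(0)
    return spikes
```

Even this stripped-down model hints at why IF simulations are computationally expensive: the state must be updated at every time step, for every neuron, whether or not a spike occurs.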

Perceptrons solve practical problems, but are not the answer to everything, as some would have you believe. There are new and powerful kernel methods that can automatically condition data, which extend perceptrons. There are many other algorithms, such as learning hidden Markov models. IF neurons are used to try to understand brain functionality, but are not useful for solving real problems (far too computationally expensive for what they do).
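For contrast with the IF model, here is the classic Rosenblatt perceptron learning rule — the kind of "perceptron" the comment distinguishes from biological neurons. The setup (binary labels, a fixed learning rate, a small number of epochs) is an illustrative sketch, not a claim about any particular system discussed here.

```python
# A minimal Rosenblatt perceptron learning rule (sketch).
def train_perceptron(samples, epochs=10, lr=1.0):
    """samples: list of (inputs, label) pairs with label in {0, 1}."""
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                       # 0 if correct, +/-1 otherwise
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b
```

On linearly separable data (e.g. an AND gate) this converges to a separating hyperplane; on non-separable data it never settles, which is one reason kernel methods and multi-layer extensions were developed.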

Which one of these quite different technologies is being referred to as "Neural Network"?

The idea of wiring perceptrons back onto themselves with state is old. Multi-layer perceptrons have been shown to be able to approximate just about any function, so yes, they would be Turing complete. Being able to learn meaningful weights for such "recurrent" networks is relatively recent (1990s?).
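The "wiring back onto itself with state" idea can be sketched as a single recurrent unit whose output feeds back in as state at the next step. The weights here are arbitrary illustrative values, chosen only to show the state persisting and decaying over time:

```python
import math

# A minimal recurrent step: one perceptron-style unit wired back onto
# itself, carrying state across time steps (illustrative weights).
def rnn_run(inputs, w_in=0.5, w_rec=0.8, bias=0.0):
    """Feed a sequence through one tanh unit whose output feeds back as state."""
    h = 0.0
    outputs = []
    for x in inputs:
        h = math.tanh(w_in * x + w_rec * h + bias)  # state persists across steps
        outputs.append(h)
    return outputs
```

Feeding in a single 1 followed by zeros shows the unit "remembering" the input with exponentially decaying strength — exactly the kind of state that makes learning recurrent weights harder than learning feed-forward ones.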

Comment author: Gunnar_Zarncke 09 November 2014 12:40:04AM (0 points)

I'd think that deep neural networks, as here, trained with e.g. backpropagation through time (BPTT) are meant.
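BPTT, named in the reply above, amounts to unrolling the recurrence over time and applying the chain rule backward through the stored states. A minimal sketch, using a deliberately simple linear recurrence h_t = w * h_{t-1} + x_t with loss L = h_T (a hypothetical setup chosen for brevity, not the model from the post):

```python
# A minimal backpropagation-through-time (BPTT) sketch for the linear
# recurrence h_t = w * h_{t-1} + x_t with loss L = h_T.
def bptt_grad(xs, w):
    """Return (h_T, dL/dw) by unrolling the recurrence and backpropagating."""
    hs = [0.0]
    for x in xs:                      # forward pass: record every state
        hs.append(w * hs[-1] + x)
    grad = 0.0
    dh = 1.0                          # dL/dh_T = 1, since L = h_T
    for t in range(len(xs), 0, -1):   # backward pass through time
        grad += dh * hs[t - 1]        # weight's contribution at step t
        dh *= w                       # chain rule through h_{t-1}
    return hs[-1], grad
```

The backward loop also shows why BPTT gradients vanish or explode: the factor dh is multiplied by w once per time step, so it shrinks or grows geometrically with sequence length.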