IlyaShpitser comments on [Link]"Neural Turing Machines" - Less Wrong

16 points · Post author: Prankster 31 October 2014 08:54AM




Comment author: gwern 31 October 2014 06:57:58PM · 4 points

If you want to put it that way, nothing conceptually interesting is going on in any neural network paper - we already know they're universal.

My problem is the details: I visualize neural networks as pachinko machines or Galton's quincunx - you drop a bunch of bits (many balls) into the top (bottom) layer of neurons (pins), they cascade through based on the activation functions (the spacing of the pins and how many balls hit a pin simultaneously), and at the bottom (top) a final, smaller output is emitted (1 ball somewhere). I don't get the details of what it means to add a memory to this many-to-one function.
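One way to make the contrast concrete is a minimal sketch (hypothetical names and shapes, not the paper's actual architecture): a plain feedforward pass is a stateless many-to-one function, while an NTM-style controller also reads from and writes to an external memory matrix through soft attention weights, so information persists between calls and the whole thing stays differentiable.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Plain feedforward layer: many inputs cascade to a smaller output,
# like balls falling through pins -- no state survives between calls.
W = rng.normal(size=(4, 16))

def feedforward(x):
    return sigmoid(W @ x)

# NTM-style addition: an external memory matrix M (slots x width),
# accessed with soft attention weights w that sum to 1, so reads and
# writes are weighted blends rather than hard lookups.
M = np.zeros((8, 4))  # 8 memory slots, each of width 4

def read(M, w):
    # weighted average over slots: r = sum_i w_i * M_i
    return w @ M

def write(M, w, erase, add):
    # fade old content toward zero, then mix in `add`,
    # both in proportion to the attention weight on each slot
    M = M * (1 - np.outer(w, erase))
    return M + np.outer(w, add)

x = rng.normal(size=16)
w = np.full(8, 1 / 8)  # uniform attention, just for illustration
M = write(M, w, erase=np.ones(4), add=feedforward(x))
r = read(M, w)         # a later step can retrieve what was stored
```

The point of the soft read/write is that the memory addressing is itself a differentiable function of the controller's outputs, so the whole pipeline can still be trained by gradient descent.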

Comment author: IlyaShpitser 01 November 2014 03:33:38PM · 2 points

I think another way to look at neural networks is that they are nested non-linear regression models.
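The nested-regression view can be written down directly: each layer is a logistic-regression step whose inputs are the fitted values of the layer below it. A minimal sketch with made-up shapes:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(8, 16)), np.zeros(8)
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)

def layer(W, b, x):
    # one logistic-regression step: sigmoid(W x + b)
    return sigmoid(W @ x + b)

def network(x):
    # a two-layer network is one regression nested inside another:
    # y = f2(f1(x))
    return layer(W2, b2, layer(W1, b1, x))

y = network(rng.normal(size=16))
```

On this view, "deep" just means more levels of nesting, with all the regressions fit jointly rather than one at a time.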


I am probably in the minority here, but I don't think the stuff in the OP is that interesting.