V_V comments on [Link] "Neural Turing Machines" - Less Wrong
If you want to put it that way, nothing conceptually interesting is going on in any neural network paper - we already know neural networks are universal approximators.
My problem is the details: I visualize neural networks as something like pachinko machines or Galton's quincunx - you drop a bunch of bits (many balls) into the top (bottom) layer of neurons (pins), they cascade down to the bottom based on the activation functions (spacing of pins & how many balls hit a pin simultaneously), and at the bottom (top) a final, smaller output is emitted (1 ball somewhere). I don't get the details of what it means to add memory to this many-to-one function.
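A minimal sketch of that picture (my own illustration, with arbitrary layer sizes and weights - not anything from the paper): a feedforward network is a stateless many-to-one function, so the same input always cascades to the same output.

```python
# Sketch: a feedforward net as a pure many-to-one function (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((8, 4)), np.zeros(8)   # layer 1 weights/biases (arbitrary sizes)
W2, b2 = rng.standard_normal((1, 8)), np.zeros(1)   # layer 2 weights/biases

def feedforward(x):
    """Map a 4-dim input to one output; no state is carried between calls."""
    h = np.tanh(W1 @ x + b1)        # activation function decides how signals spread
    return (W2 @ h + b2).item()     # final, smaller output: one number

x = rng.standard_normal(4)
print(feedforward(x))               # same x -> same output, every time
```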
The same way you add memory to many-to-one boolean functions: feedback loops.
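As a rough sketch of that feedback-loop idea (again my own illustration, with arbitrary sizes and weights): feed part of the network's output back in as an input on the next step, and the many-to-one function acquires state, just as a latch gives memory to a Boolean circuit.

```python
# Sketch: a feedback loop turns a stateless function into one with memory (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
Wx = rng.standard_normal((8, 4)) * 0.1   # input -> hidden weights (arbitrary sizes)
Wh = rng.standard_normal((8, 8)) * 0.1   # hidden -> hidden feedback weights
Wo = rng.standard_normal((1, 8)) * 0.1   # hidden -> output weights

def step(x, h):
    """One recurrent step: the previous hidden state h is the memory."""
    h_new = np.tanh(Wx @ x + Wh @ h)     # feedback loop: h re-enters the computation
    y = (Wo @ h_new).item()
    return y, h_new

h = np.zeros(8)                          # initial (empty) memory
for t in range(3):
    x = rng.standard_normal(4)
    y, h = step(x, h)                    # output now depends on the whole input history
    print(t, y)
```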