jacob_cannell comments on Analogical Reasoning and Creativity - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I actually said:
The whole symbolic key-value-store memory is a key point of my OP and my earlier brain article. "Memory and the Computational Brain", from what I can tell, provides a good overview of the recent neuroscience material I covered in my ULM post. I'm not disparaging the book, just saying that it isn't something I have time to read at the moment, and most of the material looks familiar.
LSTM is already quite powerful, and new variants - such as the recent grid LSTM - continue to expand the range of what can feasibly be learned. In some respects their learning abilities already exceed the brain's (see the parity discussion in the other thread).
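For readers less familiar with what an LSTM actually computes, here is a minimal from-scratch sketch of one cell step in NumPy. The dimensions, initialization scale, and random input sequence are illustrative assumptions, not taken from any particular paper or library.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One step of a standard LSTM cell.
    W: (4H, X) input weights, U: (4H, H) recurrent weights, b: (4H,) bias.
    Gate order in the stacked pre-activation z: input, forget, output, candidate."""
    z = W @ x + U @ h_prev + b
    H = h_prev.shape[0]
    i = sigmoid(z[0*H:1*H])      # input gate
    f = sigmoid(z[1*H:2*H])      # forget gate
    o = sigmoid(z[2*H:3*H])      # output gate
    g = np.tanh(z[3*H:4*H])      # candidate cell update
    c = f * c_prev + i * g       # new cell state (the "memory")
    h = o * np.tanh(c)           # new hidden state
    return h, c

rng = np.random.default_rng(0)
X, H = 3, 4                      # arbitrary small input / hidden sizes
W = rng.standard_normal((4*H, X)) * 0.1
U = rng.standard_normal((4*H, H)) * 0.1
b = np.zeros(4*H)
h, c = np.zeros(H), np.zeros(H)
for t in range(5):               # run over a short random input sequence
    h, c = lstm_step(rng.standard_normal(X), h, c, W, U, b)
print(h.shape)  # (4,)
```

The gated cell state `c` is what lets an LSTM carry information across long time lags, which is where plain RNNs fail.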
That being said, LSTM isn't everything, and an AGI will also need a memory-based symbolic system, which excels especially at rapid learning from few examples - as discussed. Neural Turing machines, memory networks, and related architectures are now expanding into that frontier. You seem to be making the point that standard RNNs can't do effective symbolic learning - and I agree. That's what the new memory-based systems are for.
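The core operation these memory-based systems share is a content-addressed soft read: compare a query against stored keys, and return an attention-weighted blend of the stored values. A minimal sketch, with slot counts, dimensions, and the one-hot keys chosen purely for illustration:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())      # subtract max for numerical stability
    return e / e.sum()

def memory_read(query, keys, values):
    """Content-based soft read over a key-value memory."""
    scores = keys @ query        # similarity of the query to each stored key
    weights = softmax(scores)    # normalized addressing weights
    return weights @ values      # attention-weighted blend of stored values

rng = np.random.default_rng(1)
keys = np.eye(8, 5)              # 8 memory slots with one-hot keys (toy choice)
values = rng.standard_normal((8, 3))
out = memory_read(10.0 * keys[2], keys, values)  # sharp query matching slot 2
print(np.allclose(out, values[2], atol=0.01))    # True
```

Because the read is a differentiable weighted sum rather than a hard lookup, the whole memory can be trained end-to-end by gradient descent - and a single write can be retrieved immediately, which is what enables rapid learning from few examples.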
OK, I read enough of that paper to get the gist. I don't think it's that significant. Assuming their general conclusion is correct and they didn't make any serious experimental mistakes, all they have shown is that the neuron itself can learn a simple timing response. The function they learned only requires that the neuron model a single parameter - a time value t. We have known for a while that many neurons feature membrane plasticity and other such mechanisms that effectively function as learnable per-neuron parameters that affect the transfer function. This has been known, has even been incorporated into some ANNs, and was found to be somewhat useful. It isn't world-changing. The cell isn't learning a complex spatiotemporal pattern - such as an entire song. It's just learning a single variable, or at most a handful.
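To make the "single learned timing variable" point concrete: fitting one per-neuron parameter, such as a membrane time constant, is a trivially easy one-dimensional problem. The sketch below fits a time constant tau so that a simple exponential impulse response matches a target trace generated with tau = 20. The target value, search range, and direct-search fitting method are all illustrative assumptions, not a model of the biological learning mechanism in the paper.

```python
import numpy as np

t = np.arange(1.0, 50.0)         # discrete time steps (arbitrary units)
tau_true = 20.0                  # hidden "true" timing parameter
target = np.exp(-t / tau_true)   # target impulse response

def loss(tau):
    """Squared error between the neuron's response and the target trace."""
    return np.sum((np.exp(-t / tau) - target) ** 2)

# The loss is unimodal in tau, so a simple ternary search suffices
# to recover the single parameter.
lo, hi = 1.0, 100.0
for _ in range(100):
    m1 = lo + (hi - lo) / 3
    m2 = hi - (hi - lo) / 3
    if loss(m1) < loss(m2):
        hi = m2
    else:
        lo = m1
tau = (lo + hi) / 2
print(round(tau, 2))  # 20.0
```

One scalar fully determines the learned response - contrast that with learning an entire spatiotemporal pattern, which would require the cell to store a high-dimensional trajectory rather than a single number.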