
passive_fist comments on Analogical Reasoning and Creativity - Less Wrong Discussion

25 Post author: jacob_cannell 01 July 2015 08:38PM




Comment author: passive_fist 11 July 2015 10:12:10PM * 1 point

> To get your pointer-based memory, you just have to construct a pointer as a specific compression or encoding of the memory in the associative network.

Again, that's exactly my point. How do you get from a memory to a pointer? We do not yet know how the brain does this. We have models that can do it, but very little experimental data. We know, of course, that it's possible; we just don't know what form the mechanism takes in the brain.
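For concreteness, the idea of a "pointer as a compressed encoding of the content" can be illustrated with a toy sketch. This is not a brain model and all names in it are hypothetical; it only shows the abstract scheme: derive a short key from the memory's content, then use that key for lookup.

```python
# Toy sketch (not a brain model): the "pointer" is a short, lossy
# encoding derived from the memory content itself, used as a key
# into an associative store. All names here are hypothetical.
import hashlib

class AssociativeStore:
    def __init__(self):
        self._table = {}

    def store(self, memory: str) -> str:
        # The pointer is a compression of the content: an 8-hex-digit
        # hash rather than a physical address.
        pointer = hashlib.sha256(memory.encode()).hexdigest()[:8]
        self._table[pointer] = memory
        return pointer

    def recall(self, pointer: str) -> str:
        return self._table[pointer]

store = AssociativeStore()
ptr = store.store("the smell of coffee on a rainy morning")
assert store.recall(ptr) == "the smell of coffee on a rainy morning"
```

The sketch sidesteps the hard question in the paragraph above: in a computer the mapping from content to key is a fixed, explicit function, whereas for the brain we lack experimental evidence about what the corresponding mechanism is.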

> You're assuming that a Von Neumann Architecture is a more general-purpose memory than an associative memory system, when in fact, it's the other way around.

I'm assuming nothing of the sort. I'm not talking about which kind of memory is more general-purpose (and, really, you have to consider memory plus processing to talk about generality in that sense). I'm talking about what the brain actually does. The usual 'associative memory' view says that all we have is an associative, content-addressable memory system. That's fine, but it's like saying the brain is made up of neurons: it lacks descriptive power. I want to know the specifics of how memory formation and recall happen, not hand-waving. Theoretical descriptions can help, but without experimental evidence they are of limited utility in understanding the brain.

That's why the Hesslow experiment is so intriguing: It is actual experimental evidence that clearly illustrates what a single neuron is capable of learning and shows that even when it comes to such a drastically reduced and simplified system, our understanding is still very limited.

> According to Hava Siegelmann, a recurrent neural network with real precision weights would be, theoretically speaking, a Super Turing Machine.

This is irrelevant, as infinite-precision real-valued weights are physically impossible.
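The objection can be made concrete. Siegelmann-style constructions store an unbounded bit string inside a single real weight, e.g. as a base-4 ("Cantor") fraction q = Σ (2·bᵢ + 1) / 4ⁱ. The sketch below (my illustration, not Siegelmann's code) runs that encoding in ordinary 64-bit floating point and shows that only a short prefix of the string survives: a float64 significand holds roughly 26 base-4 digits, so the super-Turing capacity vanishes the moment the weights have finite precision.

```python
# Sketch: encode a bit string into one "weight" as a base-4 fraction,
# then decode it back. With ideal reals this is lossless; with float64
# only a finite prefix of the string is recoverable.
import random

def encode(bits):
    # Ideal encoding q = sum (2*b_i + 1) / 4**i, computed in float64.
    q = 0.0
    for i, b in enumerate(bits, start=1):
        q += (2 * b + 1) / 4**i
    return q

def decode(q, n):
    # Peel off the leading base-4 digit (1 or 3) at each step.
    bits = []
    for _ in range(n):
        digit = int(q * 4)
        bits.append((digit - 1) // 2)
        q = q * 4 - digit
    return bits

random.seed(0)
bits = [random.randint(0, 1) for _ in range(100)]
decoded = decode(encode(bits), 100)

correct_prefix = 0
for a, b in zip(bits, decoded):
    if a != b:
        break
    correct_prefix += 1
# The recoverable prefix is bounded by float64 precision
# (~26 base-4 digits), so most of the 100-bit string is lost.
print(correct_prefix)
```

Short strings round-trip exactly, which is why finite automata and ordinary digital memory are unaffected; it is only the *unbounded*-precision requirement that fails physically.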