Khaled comments on Connectionism: Modeling the mind with neural networks - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Connectionism may be the best we've got. But it is not very good.
Take the recent example of improving performance on a task by reading a manual. If we were to try to implement something similar in a connectionist/reinforcement model, we would have problems. We need positive and negative reinforcement to change the neural connection strengths, but we wouldn't get those whilst reading a book, so how do we assimilate the non-inductive information stored in there? It is possible with feedback loops, which can be used to store information quickly in a connectionist system; however, I haven't seen any systems use them, or learn them, on the sort of scale that would be needed for the civilization problem.
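To make the feedback-loop point concrete, here is a minimal sketch (my own illustration, not from the post) of a unit with a self-excitatory connection: a transient input sets its activation, and the loop then sustains that state indefinitely with no weight change, i.e. no reinforcement signal is involved. All the weights and values are illustrative assumptions.

```python
import numpy as np

def step(activation, external_input, w_self=1.0):
    # Leaky unit with a self-connection; clipping keeps activation bounded.
    return np.clip(w_self * activation + external_input, 0.0, 1.0)

a = 0.0
a = step(a, 1.0)      # transient "read the manual" input sets the state...
for _ in range(10):
    a = step(a, 0.0)  # ...which the feedback loop then sustains on its own
print(a)              # the stored value persists without any weight update
```

The point of the sketch is that information lives in the *activations*, which can change in one step, rather than in the *weights*, which reinforcement learning changes slowly.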
There are also more complex processes which seem out of its reach, such as learning a language using a language e.g. "En francais, le mot pour 'cat' est 'chat'".
The idea of "virtual machines" mentioned in [Your Brain is (almost) Perfect](http://www.amazon.com/Your-Brain-Almost-Perfect-Decisions/dp/0452288843) is tempting me to think in the direction of "reading a manual will trigger the neurons involved in running the task, and the reinforcements will be implemented on those 'virtual' runs".
How reading a manual triggers this virtual run can be answered the same way as how hearing "get me a glass of water" triggers the neurons to do so: if I then get a "thank you", the behavior is reinforced. In the same way, reading "to turn on the TV, click the red button on the remote" might trigger the neurons for turning on a TV and reinforce the behavior in accordance with the manual.
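A toy version of this "virtual run" idea can be sketched with tabular reinforcement: a sentence from the manual is mapped to a (state, action) pair, the action is rehearsed internally, and each rehearsal is reinforced as if it had succeeded, just like the real "thank you" case. The parsing and the imagined reward below are hypothetical stand-ins of my own, not a claim about how brains actually do it.

```python
q = {}  # Q-values over (state, action) pairs

def reinforce(state, action, reward, lr=0.5):
    # Move the stored value a step toward the received reward.
    key = (state, action)
    q[key] = q.get(key, 0.0) + lr * (reward - q.get(key, 0.0))

# Real experience: "get me a glass of water" -> doing it -> "thank you".
reinforce("asked_for_water", "fetch_water", reward=1.0)

# Virtual experience: reading the manual triggers the same kind of update,
# with an imagined success standing in for external feedback.
manual_line = ("tv_off", "press_red_button")  # hypothetically parsed sentence
for _ in range(5):                            # several internal rehearsals
    reinforce(*manual_line, reward=1.0)

print(q[("tv_off", "press_red_button")])  # value learned without ever acting
```

The mechanism is the same update in both cases; the only difference is whether the reward came from the world or from the simulated run, which is roughly what the virtual-machine framing suggests.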
I know this is quite a wild guess, but perhaps someone can elaborate on it in a more accurate manner.