Yvain comments on Connectionism: Modeling the mind with neural networks - Less Wrong

39 Post author: Yvain 19 July 2011 01:16AM




Comment author: Yvain 20 July 2011 10:04:29AM 0 points

Couldn't "not" be implemented as a negative weight on a hidden layer of nodes between the input and the output?
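A minimal sketch of that idea, assuming a simple threshold unit (the weight and bias values here are illustrative, not from the original discussion):

```python
def step(x):
    """Threshold activation: fire (1) if the weighted input is non-negative."""
    return 1 if x >= 0 else 0

def not_unit(x, w=-1.0, bias=0.5):
    # The negative weight inverts the input; the bias shifts the threshold
    # so that 0 maps to 1 and 1 maps to 0.
    return step(w * x + bias)

print(not_unit(0))  # 1
print(not_unit(1))  # 0
```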

I'd like to hear what an expert like Phil has to say on this topic.

Comment author: whpearson 20 July 2011 02:00:53PM 1 point

Ordinary artificial neural networks are Turing complete given a certain number of hidden layers (I think 4, but it has been a long time and I don't know the reference offhand; this says 1 for universal approximation (paywalled)). A bit of googling says that recurrent neural networks are Turing complete.

Feedforward neural networks can represent any computable function between their inputs and outputs. They are not Turing complete with respect to past inputs and outputs, as AIXI is.
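To make the representability claim concrete, here is a hand-built one-hidden-layer feedforward net computing XOR, a function no single-layer perceptron can represent. The weights are chosen by hand for illustration, not learned from training data:

```python
def step(x):
    """Threshold activation for a binary unit."""
    return 1 if x >= 0 else 0

def xor_net(a, b):
    # Hidden layer: two threshold units computing OR and NAND of the inputs.
    h_or = step(a + b - 0.5)        # fires unless both inputs are 0
    h_nand = step(-a - b + 1.5)     # fires unless both inputs are 1
    # Output unit: AND of the two hidden units gives XOR.
    return step(h_or + h_nand - 1.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_net(a, b))
```

The hidden layer is what makes this possible: XOR is not linearly separable, so no weighting of the raw inputs alone can compute it.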

Note that this says nothing about the training data needed to get the network to represent the function, or about how big the network would need to be. It is only a claim about possibility.