Silas comments on How An Algorithm Feels From Inside - Less Wrong

Post author: Eliezer_Yudkowsky 11 February 2008 02:35AM


Comment author: Silas 11 February 2008 02:36:50PM 10 points

At the risk of sounding ignorant: it's not clear to me how Network 1, or the networks in the prerequisite blog post, actually work. I know I'm supposed to already have a superficial understanding of neural networks, and I do, but it wasn't immediately obvious to me what happens in Network 1, or what its algorithm is. Before you roll your eyes: yes, I looked at the Artificial Neural Network Wikipedia page, but it still doesn't help me determine what yours means.

Comment author: gmaxwell 08 September 2010 04:55:52AM 1 point

Network 1 would work just fine (ignoring how you'd go about training such a thing). Each of the N^2 edges has a weight expressing the relationship of the two vertices it connects: e.g., if nodes A and B are strongly anti-correlated, the weight between them might be -1. You then fix the nodes whose values you know, and either solve the system analytically or iterate numerically until it settles down (hopefully!), at which point you have expectations for all the unknown nodes.

Typical networks for this sort of thing don't have cycles, so stability isn't a question, but that doesn't mean networks with cycles can't work and reach stable solutions. Some error-correcting codes have graph representations that aren't much better than this. :)
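The "fix the known nodes and iterate until it settles" process gmaxwell describes can be sketched as follows. This is only an illustrative assumption of what such a network might look like, not code from the post: the node count, random weights, and tanh update rule are all made up for the sketch.

```python
import numpy as np

# Hypothetical 5-node, fully connected network in the spirit of Network 1.
# W[i, j] is the weight on the edge between nodes i and j: positive for
# correlated properties, negative (e.g. -1) for anti-correlated ones.
rng = np.random.default_rng(0)
n = 5
W = rng.uniform(-1, 1, size=(n, n))
W = (W + W.T) / 2            # symmetric: one weight per undirected edge
np.fill_diagonal(W, 0.0)     # no self-edges

x = np.zeros(n)              # activations in [-1, 1]; 0 means "unknown"
observed = {0: 1.0, 1: -1.0} # clamp the nodes whose values we know

for _ in range(100):
    # Each node moves toward the weighted consensus of its neighbours.
    x_new = np.tanh(W @ x)
    for i, v in observed.items():
        x_new[i] = v         # known nodes stay fixed
    if np.max(np.abs(x_new - x)) < 1e-6:
        break                # the network has settled
    x = x_new

# x now holds the network's expectations for the unobserved nodes.
```

As gmaxwell notes, convergence is hopeful rather than guaranteed once the graph has cycles, which is why the loop above caps the number of iterations.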

Comment author: thomblake 24 October 2011 04:43:01PM 0 points

Silas, I'm sure you've seen the answer by now, but for anyone who comes along later: if you think of the diagrams above as Bayes networks, you're on the right track.