
gedymin comments on LINK: Nematode brain uploaded with success - Less Wrong Discussion

3 Post author: polymathwannabe 23 December 2014 11:27PM




Comment author: gedymin 24 December 2014 04:49:20PM *  2 points

I don't know about the power needed to simulate the neurons, but my guess is that most of the resources are spent not on the calculations, but on interprocess communication. Running 302 processes on a Raspberry Pi and keeping hundreds of UDP sockets open probably takes a lot of its resources.

The technical solution is neither innovative nor fast. The benefits are in its distributed nature (every neuron could be simulated on a different computer) and in its simplicity of implementation, at least while 100% faithfulness to the underlying mathematical model is not required. I have no idea how the author plans to avoid unintended data loss in the not-unusual case where some UDP packets are dropped. Retransmission (TCP) is not really an option either, as the system has to run in real time.
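For concreteness, the neuron-per-process design can be sketched as below. This is a minimal illustration assuming a one-datagram-per-spike protocol; the port handling and the b"spike" payload are my inventions, not the author's actual protocol. Note how a dropped datagram simply vanishes, which is the data-loss problem above.

```python
import socket

def make_neuron_socket(port=0):
    """Bind a UDP socket for one simulated neuron (port 0 = OS picks a free one)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", port))
    sock.settimeout(1.0)  # UDP makes no delivery guarantee; don't block forever
    return sock

def fire(sock, downstream_ports):
    """Emit one 'spike' datagram to every downstream neuron's port."""
    for port in downstream_ports:
        sock.sendto(b"spike", ("127.0.0.1", port))

# Two "neurons" in one process for illustration; in the described system each
# would be a separate process, each possibly on a separate machine.
a = make_neuron_socket()
b = make_neuron_socket()
fire(a, [b.getsockname()[1]])
try:
    data, _ = b.recvfrom(64)
except socket.timeout:
    data = None  # a dropped datagram is silently lost -- the problem noted above
print(data)  # b'spike' on loopback; over a real network it could be lost
```

Multiply this by 302 processes, each with its own open sockets and per-datagram syscalls, and IPC overhead dominating the Pi's resources looks plausible.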

Comment author: V_V 25 December 2014 08:42:03PM 1 point

If each simulated "neuron" is just a linear threshold unit, as described in the paper, using a whole process to run it and exchanging messages over UDP looks like a terribly wasteful architecture.
Maybe the author eventually wants to implement a computationally expensive, biologically accurate neuron model, but even then I don't see the point of this architecture: even if the individual neurons were biologically accurate, the overall simulation wouldn't be, due to the non-deterministic delays and packet losses introduced by UDP messaging.
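A linear threshold unit of the kind the paper describes is just a weighted sum and a comparison, a few machine instructions' worth of work; the weights and threshold below are illustrative, not taken from the paper:

```python
def ltu_step(weights, inputs, threshold):
    """One update of a linear threshold unit: output 1 iff the weighted
    sum of the inputs reaches the threshold, else 0."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

print(ltu_step([1.0, 1.0, -1.0], [1, 1, 0], 1.5))  # → 1 (sum 2.0 >= 1.5)
print(ltu_step([1.0, 1.0, -1.0], [1, 0, 1], 1.5))  # → 0 (sum 0.0 < 1.5)
```

Wrapping each such unit in its own process with UDP messaging costs orders of magnitude more than the computation itself.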

I'm unimpressed.