I think some important caveats need to be kept in mind. From the New Yorker article:
I.B.M.’s Compass has more neurons than any system previously built, but it still doesn’t do anything with all those neurons. The short report published on the new system is full of vital statistics—how many neurons, how fast they run—but there’s not a single experiment to test the system’s cognitive capacities. It’s sort of like having the biggest set of Lego blocks in town without a clue of what to make out of them. The real art is not in buying the Legos but in knowing how to put them together. Until we have a deeper understanding of the brain, giant arrays of idealized neurons will tell us less than we might have hoped.
Thanks for this. The latest research report, "10^14", already appears to be a significant update on that paper.
IBM now report roughly eight times as many simulated neurons and synapses, while the slowdown has only grown from ~400x real time to ~1500x real time. That works out to better than a factor of 2 improvement in effective throughput within a matter of months. They are using a custom hardware architecture, and presumably there are still a lot of optimisations to be made. It can't be very long before this can run in real time.
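The back-of-envelope arithmetic behind that "factor > 2", using the approximate figures quoted above:

```python
# Back-of-envelope check of the claimed hardware improvement,
# using the approximate figures from the two reports.
neurons_scale = 8.0    # ~8x as many simulated neurons and synapses
old_slowdown = 400.0   # earlier run: ~400x slower than real time
new_slowdown = 1500.0  # latest run: ~1500x slower than real time

# Simulating 8x the network at only 3.75x the slowdown means the
# effective work done per unit of wall-clock time has more than doubled.
slowdown_ratio = new_slowdown / old_slowdown            # 3.75
effective_improvement = neurons_scale / slowdown_ratio  # ~2.13

print(f"slowdown ratio: {slowdown_ratio:.2f}x")
print(f"effective throughput improvement: {effective_improvement:.2f}x")
```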
As said in other comments, nobody knows how to program this yet...
I had actually never heard of non-von Neumann architectures. Does anybody have a tip for a good source on this? Especially on how this relates to biological brain architectures? Thank you!
They simulate model neurons. Those model neurons are far less complex than the real neurons we have in our heads. The way in which real neurons change the number of ion channels on their membrane for long-term plasticity is neither fully understood nor easy to simulate.
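To give a sense of how simplified such model neurons are, here is a minimal leaky integrate-and-fire neuron, a common abstraction in large-scale simulations. This is only an illustrative sketch with made-up parameter values, not IBM's actual model; note how everything the comment mentions (ion-channel dynamics, long-term plasticity, cell geometry) is absent:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: a common simplification
# used in large-scale simulations. Parameter values are illustrative.
def simulate_lif(input_current, dt=1e-3, tau=0.02, v_rest=-0.065,
                 v_thresh=-0.050, v_reset=-0.065, r_m=1e7):
    """Integrate a point neuron over time; return the spike times (s)."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest and is driven by input.
        dv = (-(v - v_rest) + r_m * i_in) / tau
        v += dv * dt
        if v >= v_thresh:
            # Threshold crossing: record a spike and reset instantly.
            # No ion-channel kinetics, no plasticity, no dendritic tree.
            spikes.append(step * dt)
            v = v_reset
    return spikes

# A constant 2 nA input drives the neuron above threshold repeatedly.
spike_times = simulate_lif([2e-9] * 1000)
```

Even this toy captures only the spiking behaviour; the biophysics that real neurons use to learn is simply not in the equations.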
Parallelism changes absolutely nothing other than speed of execution.
Strong AI is refuted because syntax is insufficient for semantics. Allowing the syntax to execute in parallel will not alter this because the refutation of strong AI attacks the logical basis for the strong AI hypothesis itself. If you are trying to build a television with tinker-toys it does not improve your chances to substitute higher quality tinker-toys for the older wooden ones. You will still never get a functional TV.
They do not actually have a physical non-von Neumann architecture. They are simulating a brain on simulated neurosynaptic cores on a simulated non-von Neumann architecture on a Blue Gene/Q supercomputer, which consists of 64-bit PowerPC A2 processors connected in a toroidal network. No wonder it's slow.
They are trying to reach "True North" and believe they are headed in the right direction, but they do not know whether the Compass they have built actually measures what they believe it measures, nor whether True North, once they get there, will do what they want it to do. They do not even know how the thing they are trying to replicate does what it does, but they believe faster computers will overcome their lack of knowledge of how actual minds arise out of actual brains. They do not know how those brains are constructed, nor how the actual neurons of which they are constructed actually function in real life.
But they're published. So... you know... there's that.
If you cannot simulate roundworms, do not know how neurons actually work, and do not even know how memories are stored in natural brains, you are in no danger of building Colossus.
People are highly susceptible to magical thinking. When the telegraph was invented, people thought the mind was like the telegraph because... magic is why. Building more and faster wires and better telegraph stations and connecting them in advanced topologies will not change the fact that you are living in a fantasy world.
Strong AI is refuted because syntax is insufficient for semantics.
A wild Aristotelian Teleologist appears!
Phrasing claims in the passive voice to lend an air of authority is grating to the educated ear.
Aside from stylistic concerns, though, I believe you're claiming that electronic circuits don't really mean anything. However, I'm not sure whether you're making the testable claim that no arrangement of electronic circuits will ever perform complicated cross-domain optimization better than a human, or the untestable claim that no electronic circuit will ever really be able to think.
Recent article in The New Yorker:
http://www.newyorker.com/online/blogs/newsdesk/2012/11/ibm-brain-simulation-compass.html
Here is the research report from IBM, with the simple title "10^14":
http://www.modha.org/blog/SC12/RJ10502.pdf
It's nothing like a real brain simulation, of course, but illustrates that hardware to do this is getting very close.
There is likely to be quite a long overhang between the hardware and the software...