I think important caveats need to be kept in mind. From the New Yorker article:
I.B.M.’s Compass has more neurons than any system previously built, but it still doesn’t do anything with all those neurons. The short report published on the new system is full of vital statistics—how many neurons, how fast they run—but there’s not a single experiment to test the system’s cognitive capacities. It’s sort of like having the biggest set of Lego blocks in town without a clue of what to make out of them. The real art is not in buying the Legos but in knowing how to put them together. Until we have a deeper understanding of the brain, giant arrays of idealized neurons will tell us less than we might have hoped.
Thanks for this. The latest research report, "10^14", already appears to be a significant update on that paper.
IBM now reports roughly eight times as many simulated neurons and synapses, while the slow-down has gone from ~400x real time to ~1500x real time. That works out to an effective hardware improvement factor of more than 2 within a matter of months. They are using a custom hardware architecture, and presumably there are still a lot of optimisations to be made. It can't be very long before this can run in real time.
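The back-of-envelope arithmetic here can be checked directly (all figures taken from the comment above):

```python
# Figures from the comment: neuron/synapse count up ~8x,
# while the slow-down versus real time grew from ~400x to ~1500x.
scale_up = 8.0          # factor increase in simulated neurons/synapses
old_slowdown = 400.0    # previous run: ~400x slower than real time
new_slowdown = 1500.0   # latest run: ~1500x slower than real time

# Effective improvement: extra work done, discounted by the extra slow-down.
improvement = scale_up / (new_slowdown / old_slowdown)
print(f"effective improvement factor: {improvement:.2f}")  # just over 2x
```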
As said in other comments, nobody knows how to program this yet...
I've actually never heard of non-von Neumann architectures. Does anybody have a tip on a good source on this? Especially on how this relates to biological brain architectures? Thank you!
They simulate model neurons, which are less complex than the real neurons in our heads. The way real neurons change the number of ion channels on their membranes for long-term plasticity is neither fully understood nor easy to simulate.
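For a sense of how simplified these model neurons are, here is a minimal leaky integrate-and-fire neuron (a standard textbook model, not IBM's actual implementation, and all parameter values below are illustrative): the entire cell is a single state variable, with no ion channels or plasticity at all.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: one state variable (the
# membrane potential), a leak toward rest, and a hard threshold/reset.
# Real neurons regulate whole populations of ion channels; none of that
# biophysics appears here.

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0, resistance=10.0):
    """Return spike times (ms) for a list of input current samples."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Leak toward rest plus driven input (forward-Euler integration).
        dv = (-(v - v_rest) + resistance * i_in) / tau
        v += dv * dt
        if v >= v_thresh:       # threshold crossed: emit a spike and reset
            spikes.append(step * dt)
            v = v_reset
    return spikes

spikes = simulate_lif([2.0] * 200)   # 200 ms of constant input current
print(len(spikes), "spikes")
```

Even this toy model produces regular spiking under constant input, which is roughly the level of abstraction large-scale simulations work at.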
"I still don't know what makes you so sure consciousness is impossible on an emulator."
For the same reason that I know simulated fire will not burn anything. In order for us to create an artificial mind, which certainly must be possible, we must duplicate the causal relations that exist in real consciousnesses.
Let us imagine that you go to your doctor and he says, "Your heart is shot. We need to replace it. Lucky for you, we have a miniature supercomputer we can stick into your chest that can simulate the pumping action of a real heart down to the atomic level. Every atom, every material, every gasket of a real pump is precisely emulated to an arbitrary degree of accuracy."
"Sign here."
Do you sign the consent form?
Simulation is not duplication. In order to duplicate the causal effects of real-world processes, it is not enough to represent them in symbolic notation, which is all a program is. To duplicate the action of a lever on a mass, it is not enough to represent that action to yourself on paper or in a computer; you have to actually build a physical lever in the physical world.
In order to duplicate conscious minds, which certainly must be due to the activity of real brains, you must duplicate those causal relations that allow real brains to give rise to the real-world physical phenomenon we call consciousness. A representation of a brain is no more a real brain than a representation of a pump is a real pump: it will never pump a single drop of fluid.
None of this means we might not someday build an artificial brain that gives rise to an artificial conscious mind. But it won't be done on a von Neumann machine. It will be done by creating real-world objects that have the same causal functions that real neurons, or other structures in real brains, do.
How could it be any other way?
While you could say programs are pure syntax, they are executed on real machines and have real effects. If those capabilities don't count as semantic content, I don't know what does.
That is correct, you don't know what semantic content is.
Care to explain?
Meaning.
The words on this page mean things. They are intended to refer to other things.
Oh, and how do you know that?
Meaning is assigned, it is not intrinsic to symbolic logic.
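One concrete way to see that point: the same bit pattern has no intrinsic meaning, and only the interpretation we assign gives it one. A small Python illustration (the particular value is arbitrary, chosen just for demonstration):

```python
import struct

# The same four bytes, read under two different assigned interpretations.
raw = struct.pack("<I", 1078530011)          # a fixed 32-bit pattern

as_int = struct.unpack("<I", raw)[0]         # interpreted as an unsigned int
as_float = struct.unpack("<f", raw)[0]       # interpreted as an IEEE-754 float

print(as_int)    # 1078530011
print(as_float)  # roughly 3.14159 under the float interpretation
```

Nothing in the bytes themselves says "integer" or "approximation of pi"; the meaning comes entirely from the reading convention we impose.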
Assigned by us, I suppose? Then what makes us so special?
Anyway, that's not the most important point:
...None of this means we
Recent article in The New Yorker:
http://www.newyorker.com/online/blogs/newsdesk/2012/11/ibm-brain-simulation-compass.html
Here is the research report from IBM, with the simple title "10^14":
http://www.modha.org/blog/SC12/RJ10502.pdf
It's nothing like a real brain simulation, of course, but illustrates that hardware to do this is getting very close.
There is likely to be quite a long overhang between the hardware and the software...