Recent article in The New Yorker:
http://www.newyorker.com/online/blogs/newsdesk/2012/11/ibm-brain-simulation-compass.html
Here is the research report from IBM, with the simple title "10^14":
http://www.modha.org/blog/SC12/RJ10502.pdf
It's nothing like a real brain simulation, of course, but illustrates that hardware to do this is getting very close.
There is likely to be quite a long overhang between the hardware and the software...
"What do you mean by "strong AI is refuted""
The strong AI hypothesis is that consciousness is software running on the hardware of the brain. On this view, one does not need to know or understand how brains actually work in order to construct a living, conscious mind: any system that implements the right computer program, with the right inputs and outputs, has cognition in exactly the same literal sense that human beings have understanding, thought and memory. Strong AI proponents such as Marvin Minsky at MIT believed they were literally creating minds when writing their programs. They felt no need to stoop so low as to poke around in actual brains and get their hands dirty.
Computers are syntactical machines. The programs they execute are pure syntax and have no semantic content. Meaning is assigned to the symbols from outside; it is not intrinsic to symbolic logic. That is its strength. Since (1) programs are pure syntax and have no semantic content, (2) minds do have semantic content, and (3) syntax is neither sufficient for nor constitutive of semantics, it follows that programs are not by themselves constitutive of, nor sufficient for, minds. The strong AI hypothesis is false.
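The "pure syntax" point can be made concrete with a toy sketch (my own illustration, not from the original argument, in the spirit of Searle's Chinese Room): the program below pairs input strings with output strings by pattern matching alone. The rulebook entries are made up; the point is that nothing in the code depends on what the symbols mean.

```python
# Toy "Chinese Room": a rulebook pairing input symbol strings with
# output symbol strings. The entries here are invented examples.
RULES = {
    "你好吗": "我很好",
    "天气如何": "天气晴朗",
}

def room(symbols: str) -> str:
    """Return the output the rulebook pairs with the input.

    The function never interprets the symbols; it only compares
    character sequences for equality. Any "meaning" in the exchange
    is assigned by observers outside the system.
    """
    return RULES.get(symbols, "不明白")

print(room("你好吗"))   # produces the paired symbols
print(room("xyz"))      # unmatched input falls through to a default
```

Swapping every string for an arbitrary token (e.g. "A1" → "B7") leaves the program's behavior formally identical, which is the sense in which the execution is syntax all the way down.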
Which means that IBM is wasting time, energy and money. But... perhaps their efforts will result in spin-off technology, so not all is lost.
How would one determine whether a given device/system has this "semantic content"? What kind of evidence should one look at? Inner structure? Only inputs and outputs? Something else?