Neurons aren't simple little machines: axons talk to each other.
Spruston and his colleagues first discovered that individual nerve cells can fire off signals even in the absence of electrical stimulation of the cell body or dendrites. It's not always stimulus in, immediate action potential out. (Action potentials are the fundamental electrical signaling elements used by neurons; they are very brief changes in the membrane voltage of the neuron.)
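For anyone who wants the conventional picture made concrete, here's a minimal sketch of the standard "stimulus in, action potential out" model -- a leaky integrate-and-fire neuron. All the constants are invented for illustration; real neurons are, as the article shows, considerably messier:

```python
# Toy leaky integrate-and-fire neuron: the textbook picture that
# persistent firing complicates. All constants are illustrative,
# not fitted to any real cell.

def simulate_lif(input_current, dt=0.1, tau=10.0,
                 v_rest=-70.0, v_thresh=-55.0, v_reset=-75.0):
    """Return spike times (ms) for a list of input values."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Voltage leaks toward rest and is pushed up by input.
        v += (dt / tau) * (v_rest - v) + i_in * dt
        if v >= v_thresh:            # threshold crossed: spike
            spikes.append(step * dt)
            v = v_reset              # brief reset after the spike
    return spikes

# No stimulus in -> no action potentials out; drive it and it fires.
print(len(simulate_lif([0.0] * 1000)))  # 0 spikes
print(len(simulate_lif([3.0] * 1000)))  # regular spiking
```

In this model the neuron is a pure input-output device: stop the stimulus and it falls silent immediately. The point of the persistent-firing result is that real neurons don't.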
"This cellular memory is a novelty," Spruston said. "The neuron is responding to the history of what happened to it in the minute or so before." Spruston and Sheffield found that the cellular memory is stored in the axon and the action potential is generated farther down the axon than they would have expected. Instead of being near the cell body it occurs toward the end of the axon.
Their studies of individual neurons (from the hippocampus and neocortex of mice) led to experiments with multiple neurons, which resulted in perhaps the biggest surprise of all. The researchers found that one axon can talk to another. They stimulated one neuron and detected persistent firing in a second, unstimulated neuron.
No dendrites or cell bodies were involved in this communication. "The axons are talking to each other, but it's a complete mystery as to how it works," Spruston said. "The next big question is: how widespread is this behavior? Is this an oddity, or does it happen in lots of neurons? We don't think it's rare, so it's important for us to understand under what conditions it occurs and how this happens."
The original article (paywall).
Assuming this is all true, how does it affect the feasibility of uploading? Anyone want to bet on whether things are even more complicated than the current discoveries suggest?
ETA: It seems unlikely to me that you have to simulate every atom to upload a person, and even more unlikely that it's enough to view neurons as binary switches. Is there any good way to think about how much abstraction you can get away with in uploading?
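One way to make the question concrete: compare a binary-switch abstraction with a model that keeps even a crude trace of its own history. The dynamics below are invented for illustration (loosely inspired by the persistent-firing result, not a model of it), but they show that the two abstractions disagree about behavior, not just precision -- an upload built on the first would throw away exactly the kind of cellular memory Spruston describes:

```python
# Two toy neuron abstractions. Dynamics are invented for
# illustration -- not a model of actual persistent firing.

def binary_neuron(inputs, threshold=1.0):
    """Memoryless: output depends only on the current input."""
    return [1 if x >= threshold else 0 for x in inputs]

def history_neuron(inputs, threshold=1.0, decay=0.8):
    """Keeps a decaying trace of past input, so recent strong
    stimulation can make it fire even when the current input
    alone would not (a crude 'cellular memory')."""
    trace, out = 0.0, []
    for x in inputs:
        trace = decay * trace + x
        out.append(1 if trace >= threshold else 0)
    return out

stimulus = [1.5, 0.0, 0.0, 0.0, 0.0]    # one strong pulse, then silence
print(binary_neuron(stimulus))   # [1, 0, 0, 0, 0] -- forgets instantly
print(history_neuron(stimulus))  # [1, 1, 0, 0, 0] -- fires on its history
```

Whether that extra state matters for uploading depends on whether behavior at that timescale is part of what you're trying to preserve -- which is, I think, another way of asking the same question.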
Yes, I know it's a vague standard. I'm not sure how good an upload needs to be. How good would be good enough for you?
Well, I would say "that we care about"; it's not clear to me what it would mean to say of an aspect of experience that I don't care about it but should. But I think that's a digression.
Leaving that aside, I agree: the only way to be sure that X is sufficient to build a brain is to do X and get a brain out of it. Until we do that, we don't know.
But if supporting a project to build a brain via X prior to having that certainty is a sign of accepting an "article of faith," then, well, my answer to your original question is that what underlies that article of faith is support for doing the experiment.
By way of analogy: the only way to be sure that X is sufficient to build a heavier-than-air flying machine is to do X and see if it flies. And if doing X requires a lot of time and energy and enthusiasm, then it's unsurprising if the first community to do X takes as "an article of faith" that X is sufficient.
I have no problem with that, in and of itself; my question is what happens when someone does X and the machine doesn't fly. If that breaks the community's faith and they go on to endorse something else, then I endorse the mechanism underlying their faith.
As for why the idea of mind-as-computation seems plausible... well, of all the tools we have, computers currently seem like the best bet for building an artificial mind, so that's where we're concentrating our time and energy and enthusiasm. That seems reasonable to me.
That said, if in N years neurobiology advances to the point where we can engineer artificial structures out of neurons as readily as we can out of silicon, I expect you'll see a lot of enthusiastic endorsement of AI projects built out of neurons. (I also expect that you'll see a lot of criticism that a mere neural structure lacks something-or-other that brains have, and therefore cannot conceivably be intelligent.)
Heh! Funny, I was just reading the bit in Ainslie's Breakdown of Will where he discusses the function of beliefs as mobilizing one's motivations, and now feel obliged to be more tolerant of beliefs that I suspect are confused or mistaken but which motivate activities I endorse -- in this case, "running the experiment", as you say. So I guess I don't disagree. :) Thanks for the thoughtful response!