Neurons aren't simple little machines; axons talk to each other.

He and his colleagues first discovered that individual nerve cells can fire off signals even in the absence of electrical stimulation of the cell body or dendrites. It's not always stimulus in, immediate action potential out. (Action potentials are the fundamental electrical signaling elements used by neurons; they are very brief changes in the membrane voltage of the neuron.)
"This cellular memory is a novelty," Spruston said. "The neuron is responding to the history of what happened to it in the minute or so before." Spruston and Sheffield found that the cellular memory is stored in the axon and the action potential is generated farther down the axon than they would have expected. Instead of being near the cell body it occurs toward the end of the axon.
Their studies of individual neurons (from the hippocampus and neocortex of mice) led to experiments with multiple neurons, which resulted in perhaps the biggest surprise of all. The researchers found that one axon can talk to another. They stimulated one neuron and detected persistent firing in another, unstimulated neuron.
No dendrites or cell bodies were involved in this communication. "The axons are talking to each other, but it's a complete mystery as to how it works," Spruston said. "The next big question is: how widespread is this behavior? Is this an oddity or does it happen in lots of neurons? We don't think it's rare, so it's important for us to understand under what conditions it occurs and how this happens."

The original article (paywall).
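For readers unfamiliar with the baseline the article is contrasting against: the textbook "stimulus in, action potential out" picture is often sketched as a leaky integrate-and-fire model. Here is a minimal toy version (all parameter values are illustrative, not fitted to any real cell); note that it fires only while driven, which is exactly the assumption the persistent-firing finding violates:

```python
# Toy leaky integrate-and-fire (LIF) neuron: the textbook
# "stimulus in, action potential out" picture. All parameter
# values are illustrative, not fitted to any real cell.

def simulate_lif(input_current, dt=1.0, tau=20.0,
                 v_rest=-70.0, v_thresh=-54.0, v_reset=-70.0):
    """Return spike times (ms), one input sample per dt milliseconds."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Voltage leaks toward rest and integrates the input drive.
        v += dt * ((v_rest - v) / tau + i_in)
        if v >= v_thresh:       # threshold crossing: an action potential
            spikes.append(step * dt)
            v = v_reset         # reset, then start integrating again
    return spikes

# A silent input produces no spikes, and firing stops the moment the
# drive stops -- there is no minute-long "cellular memory" in this model.
print(len(simulate_lif([0.0] * 200)), len(simulate_lif([1.2] * 200)))
```

The Spruston and Sheffield result is interesting precisely because nothing in a model of this shape can reproduce persistent firing that depends on the past minute of stimulation history.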

Assuming this is all true, how does it affect the feasibility of uploading? Anyone want to bet on whether things are even more complicated than the current discoveries?

ETA: It seems unlikely to me that you have to simulate every atom to upload a person, and more unlikely that it's enough to view neurons as binary switches. Is there any good way to think about how much abstraction you can get away with in uploading?

Yes, I know it's a vague standard. I'm not sure how good an upload needs to be. How good would be good enough for you?

25 comments

This does not seem to me to change predictions much. There is a huge range of hypotheses about how much detail is needed, and this is only relevant to a small subset of them. One hypothesis is that the activity of individual neurons is unimportant noise which produces simpler behavior at the level of cortical columns, which can be modeled as black boxes. This does not provide evidence against such coarse theories. It does provide evidence against neuron-level emulation, pushing the posterior from neuron-level to sub-neuron hypotheses. But the prior on emulating neurons was already pretty low: no one believes that neurons are "simple little machines."

If you think that this behavior is important, if you think that, say, Michael Hines's NEURON software is a serious attempt to emulate neurons at the coarsest plausible sub-neuron level, and if the software fails to generate this behavior, then the conjunction is evidence that the software works on too coarse a level and that a finer level is necessary to emulate neurons. But I am skeptical about all three clauses!

Fair enough-- I don't think I've seen the sophisticated work that's been done.

Does neurogenesis add significant complexity?

[-] ata 13y 70

Depaywalled original article: http://atlas.ai/swag/nn.2728.pdf

I figure when we have built an artificial kidney that works as well as a kidney, and an artificial heart that works as well as a heart, and an artificial pancreas that works as well as a pancreas - then it will be reasonable to know whether an artificial brain is a reasonable goal.

If we have figured out how to compute the weather accurately some weeks into the future - then we might know whether we can compute a much more complex system. If we had the foggiest idea of how the brain actually works - then we might know what level of approximation is good enough.

Don't hold your breath for a personal upload.

I figure when we have built an artificial kidney that works as well as a kidney, and an artificial heart that works as well as a heart, and an artificial pancreas that works as well as a pancreas - then it will be reasonable to know whether an artificial brain is a reasonable goal.

Building an artificial kidney requires both knowledge about how a kidney works, and the physical engineering skill to build an artificial kidney with the same structure. Unlike a kidney, an artificial brain can be implemented in software, so it's enough to only know how it works.

The comparison would be valid if by an "artificial brain" we meant a brain built out of biological neurons, but we don't.

"an artificial brain can be implemented in software"

I have never understood what underlies this article of faith, and I know more about all the relevant technical disciplines than your average overeducated schmuck. Could you indulge my incomprehension by explaining it to me as though I just fell off the turnip truck?

I generally understand statements like that to be shorthand for (1) The computations performed by a natural brain can be implemented in a software artificial brain, and (2) those computations are the only aspect of a natural brain I care about.

If you question (1), I doubt I can satisfy your doubts, but I am curious as to what sorts of computations a brain can perform but software can't.

If you question (2), I'm certain I can't satisfy your doubts, but I'm curious as to what other aspects of a natural brain you care about and why.

If you understand something else by the phrase in the first place, it might help to unpack your reading more explicitly.

I generally understand statements like that to be shorthand for (1) The computations performed by a natural brain can be implemented in a software artificial brain, and (2) those computations are the only aspect of a natural brain I care about.

This is about right.

(1) The computations performed by a natural brain can be implemented in a software artificial brain, and (2) those computations are the only aspect of a natural brain I care about.

This is precisely what I was fishing for, thank you.

If brains are physical systems, and physics as we know it involves noncomputable processes (cf. the comment below about the Church–Turing–Deutsch principle), then it follows that brains are doing noncomputable stuff. The question is then whether that noncomputable stuff is necessary to aspects of experience that we should care about, presuming that this conversation is ultimately about uploads. And the simple answer is that we don't know.

I've got enough theoretical and practical experience with physics, computers and nervous systems to have noticed the muddles that conspicuously creep in when I try to make these three topics interface, and am a little mystified whenever it seems like other smart people don't notice them. I suspect a big part of it is just getting a little too happy with metaphors like "brain = computer" without paying close attention to the ways in which these things are dis-analogous.

The question is then whether that noncomputable stuff is necessary to aspects of experience that we should care about

Well, I would say "that we care about"; it's not clear to me what it means to say about an aspect of experience that I don't care about it but I should. But I think that's a digression.

Leaving that aside, I agree: the only way to be sure that X is sufficient to build a brain is to do X and get a brain out of it. Until we do that, we don't know.

But if supporting a project to build a brain via X prior to having that certainty is a sign of accepting an "article of faith," then, well, my answer to your original question is that what underlies that article of faith is support for doing the experiment.

By way of analogy: the only way to be sure that X is sufficient to build a heavier-than-air flying machine is to do X and see if it flies. And if doing X requires a lot of time and energy and enthusiasm, then it's unsurprising if the first community to do X takes as "an article of faith" that X is sufficient.

I have no problem with that, in and of itself; my question is what happens when someone does X and the machine doesn't fly. If that breaks the community's faith and they go on to endorse something else, then I endorse the mechanism underlying their faith.

As for why the idea of mind-as-computation seems plausible... well, of all the tools we have, computers currently seem like the best bet for building an artificial mind out of, so that's where we're concentrating our time and energy and enthusiasm. That seems reasonable to me.

That said, if in N years neurobiology advances to the point where we can engineer artificial structures out of neurons as readily as we can out of silicon, I expect you'll see a lot of enthusiastic endorsement of AI projects built out of neurons. (I also expect that you'll see a lot of criticism that a mere neural structure lacks something-or-other that brains have, and therefore cannot conceivably be intelligent.)

Heh! Funny, I was just reading the bit in Anslie's Breakdown of Will where he discusses the function of beliefs as mobilizing one's motivations, and now feel obliged to be more tolerant of beliefs that I suspect are confused or mistaken but which motivate activities I endorse -- in this case, "running the experiment", as you say. So I guess I don't disagree. :) Thanks for the thoughtful response!

I have never understood what underlies this article of faith

I discourage the use of 'article of faith' as a rhetorical device.

I use it because I think it an accurate descriptor; what would you propose instead?

I'm guessing the presumption here is that "article of faith" is a pejorative. It's not: it refers to anything taken as true despite not being demonstrable. We lean on these all the time and that's okay, but it's useful to acknowledge when this is the case.

[-] gjm 13y 00

In principle an artificial brain can be implemented in software, but we currently don't have the technology to do so with the available resources. Overcoming that limitation would require a lot of technological progress: much faster computers or much cleverer software. So there are technological limits to implementing an artificial brain, just as there are to implementing an artificial pancreas. They just happen to be different technological limits.
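As a rough illustration of those technological limits, here is a standard back-of-envelope estimate for neuron/synapse-level simulation. Every figure (neuron count, synapses per neuron, timestep, operations per synapse update) is a commonly quoted order-of-magnitude assumption, not a measurement:

```python
# Back-of-envelope cost of a neuron/synapse-level brain simulation.
# Every figure below is an order-of-magnitude assumption.

NEURONS = 8.6e10              # ~86 billion neurons
SYNAPSES_PER_NEURON = 1e4     # ~10,000 synapses each
TIMESTEPS_PER_SECOND = 1e3    # 1 ms simulation timestep
OPS_PER_SYNAPSE_STEP = 10     # assumed work per synapse per step

synapses = NEURONS * SYNAPSES_PER_NEURON
ops_per_second = synapses * TIMESTEPS_PER_SECOND * OPS_PER_SYNAPSE_STEP
print(f"~{ops_per_second:.0e} ops/s for real-time simulation")
```

On these assumptions, real-time simulation needs on the order of 10^19 operations per second; changing any single assumption by a factor of ten moves the answer by the same factor, which is why published estimates span several orders of magnitude.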

[-] JanetK 13y -20

Do you honestly believe that an artificial brain can be built purely in software in the near future? And if it could, how would it be accurate enough to be some particular person's brain rather than a generic one? And if it was someone's brain, could the world afford to do this for more than one or two persons at a time? I am not at all convinced of 'uploads'.

[-] [anonymous] 13y 20

Do you honestly believe that an artificial brain can be built purely in software in the near future?

Kaj never mentioned "near future" or any timeline for uploads, for that matter. The only thing he did was point out a possible flaw in your argument, yet you took it as a personal attack on your beliefs.

The Whole Brain Emulation Roadmap implies that it may very well be possible. I don't have the expertise to question their judgement in this matter.

As this review shows, WBE on the neuronal/synaptic level requires relatively modest increases in microscopy resolution, a less trivial development of automation for scanning and image processing, a research push at the problem of inferring functional properties of neurons and synapses, and relatively business-as-usual development of computational neuroscience models and computer hardware. This assumes that this is the appropriate level of description of the brain, and that we find ways of accurately simulating the subsystems that occur on this level. Conversely, pursuing this research agenda will also help detect whether there are low-level effects that have significant influence on higher level systems, requiring an increase in simulation and scanning resolution.

There do not appear to exist any obstacles to attempting to emulate an invertebrate organism today. We are still largely ignorant of the networks that make up the brains of even modestly complex organisms. Obtaining detailed anatomical information of a small brain appears entirely feasible and useful to neuroscience, and would be a critical first step towards WBE. Such a project would serve as both a proof of concept and a test bed for further development. If WBE is pursued successfully, at present it looks like the need for raw computing power for real-time simulation and funding for building large-scale automated scanning/processing facilities are the factors most likely to hold back large-scale simulations.

Tordmor has commented on my attitude - sorry, I did not mean to sound so put out. The reason for the 'near future' was that the discussion was about 'uploads', and so I assumed we were talking about our lifetimes, which in context seemed the near future (about the next 50 years). Making an approximate emulation of some simple invertebrate brain is certainly on the cards. But an accurate emulation of a particular person's brain is a different ballpark entirely.

I never know exactly what people mean when they say emulation or simulation or model. How much is the idea to mimic how the brain does something? To 'upload' someone, the receiving computer would need some sort of mapping to the physical brain of that person. This is a very tall order.

Thanks for the link to the Roadmap, which I will be reading.

More brain fun: endocannabinoids, which can send signals between neurons, usually "backwards", from the postsynaptic cell to the presynaptic cell. They're hydrophobic, but can pass easily among the membranes of nearby cells, so they're used for short-distance signaling via diffusion. They have a bunch of roles, including something to do with memory, though the details are still vague.

A general rule of thumb for biological systems is that you can safely bet on them being more complicated than they look at any given moment. Don't even get me started on how we evolve immunity to new pathogens whenever we come down with a cold (literally evolve -- there are somatic DNA changes involved), or the struggle in our genome between parasitic DNA sequences and the systems suppressing them. A modern molecular biology textbook would make H. P. Lovecraft blush.

I've absolutely no clue about computation, but I have been told quite a few times that it doesn't matter how the brain works if you accept the Church–Turing thesis. It doesn't matter insofar as "everything computable is computable by a Turing machine." But as far as I can tell, that guarantee is purely theoretical: the simulation works in principle, but it could be unusably inefficient. If our brains make use of some novel physical processes, if indeed general intelligence demands (per the Church–Turing–Deutsch principle) to be run on quantum computers, then we might still be able to 'upload' ourselves into some crude mechanical device, but it won't be efficient. Substrate-neutrality is likely factual but might be inefficient.
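The gap between "computable" and "efficiently computable" can be made concrete with the standard example: a classical machine can simulate an n-qubit quantum system, but a dense state vector needs 2^n complex amplitudes. A minimal sketch (the 16-bytes-per-amplitude figure assumes complex128 storage):

```python
# Classically simulating an n-qubit quantum state needs 2**n complex
# amplitudes: computable in principle, exponentially costly in practice.

def state_vector_bytes(n_qubits, bytes_per_amplitude=16):
    """Memory for a dense state vector (16 bytes = one complex128)."""
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (10, 30, 50):
    print(f"{n} qubits -> {state_vector_bytes(n):.3e} bytes")
```

At 50 qubits the dense state vector is already around 18 petabytes. Nothing here says brains actually need quantum computation; it only shows how the commenter's "works in theory, unusably inefficient in practice" scenario could look.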

Well, lots of people already figured actual uploading was only possible by simulating the brain as a physical object, and thus not likely to be all that good - brains are complicated, and Kurzweil way oversimplifies. Have you seen the news that our brains use long-range electric fields in computation? The choices I see if we have to be simulated as physical objects are "use lots of rules of thumb to get a fast-running probably-human-like AI" and "try for fidelity and get a slow-running human."

Have you seen the news that our brains use long-range electric fields in computation?

No-- do you have a link?

http://media.caltech.edu/press_releases/13401

So the effect exists, and appears to be used. If you remember the magic disconnected gates in the famous evolved circuit, I think it's very safe to say that it's used.

Really nifty. I summarized both here, although the full articles are short enough, and are worth reading.

I am a bit surprised if this is surprising: is it not obvious that electric fields will affect neuron activity? Whether a neuron fires depends on the voltage across its membrane (at a point in a particular region at the base of the axon and, it seems, down the axon). The electric field around the neuron will affect this voltage difference, as in good old-fashioned electrical theory. This is important for synchrony in firing (as in brain waves), and synchrony in turn is important for marking, for chemical changes, the synapses between neurons that have fired simultaneously. Fields are not to be thought of as a little side effect.

What is more interesting is what the fields do to glial cells and their communication, which is (I believe) carried out with calcium ions but very affected by electrical fields. The synapses live in an environment created by the surrounding glia. The brain cannot be reduced to a bunch of on-off switches.