From: http://freethoughtblogs.com/pharyngula/2012/07/14/and-everyone-gets-a-robot-pony/

I’ve worked with tiny little zebrafish brains, things a few hundred microns long on one axis, and I’ve done lots of EM work on them. You can’t fix them into a state resembling life very accurately: even with chemical perfusion of small tissue specimens with strong aldehydes, which takes hundreds of milliseconds, you get degenerative changes. There’s a technique where you slam the specimen into a block cooled to liquid helium temperatures — even there you get variation in preservation, it still takes 0.1ms to cryofix the tissue, and what they’re interested in preserving is cell states in a single cell layer, not whole multi-layered tissues. With the most elaborate and careful procedures, they report excellent fixation within 5 microns of the surface, and disruption of the tissue by ice crystal formation within 20 microns. So even with the best techniques available now, we could possibly preserve the thinnest, outermost, single cell layer of your brain…but all the fine axons and dendrites that penetrate deeper? Forget those.

[...]

And that’s another thing: what the heck is going to be recorded? You need to measure the epigenetic state of every nucleus, the distribution of highly specific, low copy number molecules in every dendritic spine, the state of molecules in flux along transport pathways, and the precise concentration of all ions in every single compartment. Does anyone have a fixation method that preserves the chemical state of the tissue? All the ones I know of involve chemically modifying the cells and proteins and fluid environment. Does anyone have a scanning technique that records a complete chemical breakdown of every complex component present?

I think they’re grossly underestimating the magnitude of the problem. We can’t even record the complete state of a single cell; we can’t model a nematode with a grand total of 959 cells. We can’t even start on this problem, and here are philosophers and computer scientists blithely turning an immense and physically intractable problem into an assumption.

[...]

You’re just going to increase the speed of the computations — how are you going to do that without disrupting the interactions between all of the subunits? You’ve assumed you’ve got this gigantic database of every cell and synapse in the brain, and you’re going to just tweak the clock speed…how? You’ve got varying length constants in different axons, different kinds of processing, different kinds of synaptic outputs and receptor responses, and you’re just going to wave your hand and say, “Make them go faster!” 

[...]

I’m not anti-AI; I think we are going to make great advances in the future, and we’re going to learn all kinds of interesting things. But reverse-engineering something that is the product of almost 4 billion years of evolution, that has been tweaked and finessed in complex and incomprehensible ways, and that is dependent on activity at a sub-cellular level, by hacking it apart and taking pictures of it? Total bollocks.


Computer folk often use the terms emulation and simulation to mean two different things, which Myers appears to be conflating. In the sense I'm thinking of, simulation means modeling the components of a system at a relatively low level — such as all the transistors and connections in a CPU — whereas emulation means replicating the functional behavior of a system.

(Of course, these terms are used in a lot of other ways, too. SimCity is neither a simulation nor an emulation in the sense I'm using.)

For instance, a circuit simulator modeling a piece of RAM might keep track of the amount of charge in a particular capacitor that represents a particular bit in memory; but an emulator would just keep track of what numerical value was stored in which addressable location. An emulator doesn't attempt to replicate how the original system works, but rather what it does.
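To make that concrete, here is a minimal sketch of the distinction; the class names and numbers are invented for illustration and don't correspond to any real simulator's API. The "simulated" cell tracks the physical quantity that happens to encode a bit, while the "emulated" memory tracks only the value at an address.

```python
# A toy contrast between "simulation" and "emulation" of the same memory bit.
# All names and numbers here are illustrative, not any real simulator's API.

class SimulatedDramCell:
    """Low-level view: track the physical quantity (stored charge) that
    happens to represent one bit, including its gradual leakage."""

    def __init__(self, charge_fc: float = 0.0):
        self.charge_fc = charge_fc          # charge in femtocoulombs (made-up scale)

    def write(self, bit: int) -> None:
        self.charge_fc = 30.0 if bit else 0.0

    def leak(self, seconds: float) -> None:
        self.charge_fc *= 0.5 ** (seconds / 0.064)   # pretend 64 ms half-life

    def read(self) -> int:
        return 1 if self.charge_fc > 15.0 else 0     # sense-amplifier threshold


class EmulatedMemory:
    """High-level view: only the functional behaviour matters --
    which value is stored at which address."""

    def __init__(self):
        self.cells = {}

    def write(self, address: int, value: int) -> None:
        self.cells[address] = value

    def read(self, address: int) -> int:
        return self.cells.get(address, 0)


if __name__ == "__main__":
    sim = SimulatedDramCell()
    sim.write(1)
    sim.leak(0.2)                      # without refresh, the simulated bit decays
    print("simulated bit after 200 ms:", sim.read())

    emu = EmulatedMemory()
    emu.write(0x2A, 1)                 # the emulated bit just *is* its value
    print("emulated bit:", emu.read(0x2A))
```

The emulated version knows nothing about charge or leakage, yet it reproduces everything a program running against the memory could observe.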

(A non-computational analogy: An artificial heart doesn't duplicate the muscle cells of a natural heart; it duplicates the function of a heart, namely moving blood around. It's not necessary to copy the behavior of each individual muscle cell — to say nothing of each molecule in each muscle cell! — in order to duplicate the function of a heart well enough to keep a person alive for years.)

From what I've read, folks who expect WBE don't expect modeling at the molecular level (a simulation of a brain), but rather expect that some sort of functional components — maybe individual neurons, maybe specific brain regions — can be emulated without simulating them (an emulation at some higher functional level, hence the term).

In the sense I'm thinking of, simulation means modeling the components of a system at a relatively low level — such as all the transistors and connections in a CPU — whereas emulation means replicating the functional behavior of a system.

There seems to be conflicting usage about this.

http://www.fhi.ox.ac.uk/__data/assets/pdf_file/0019/3853/brain-emulation-roadmap-report.pdf

The term emulation originates in computer science, where it denotes mimicking the function of a program or computer hardware by having its low‐level functions simulated by another program. While a simulation mimics the outward results, an emulation mimics the internal causal dynamics (at some suitable level of description). The emulation is regarded as successful if the emulated system produces the same outward behaviour and results as the original (possibly with a speed difference). This is somewhat softer than a strict mathematical definition. [...]

By analogy with a software emulator, we can say that a brain emulator is software (and possibly dedicated non‐brain hardware) that models the states and functional dynamics of a brain at a relatively fine‐grained level of detail.

In particular, a mind emulation is a brain emulator that is detailed and correct enough to produce the phenomenological effects of a mind.

https://secure.wikimedia.org/wikipedia/en/wiki/Emulation

The word emulation refers to: [...]

The low-level simulation of equipment or phenomena by artificial means, such as by software modeling. Note that simulation may also allow an abstract high-level model.

On the other hand, the top-voted answer at http://stackoverflow.com/questions/1584617/simulator-or-emulator-what-is-the-difference says that

Emulation is the process of mimicking the outwardly observable behavior to match an existing target. The internal state of the emulation mechanism does not have to accurately reflect the internal state of the target which it is emulating.

Simulation, on the other hand, involves modeling the underlying state of the target. The end result of a good simulation is that the simulation model will emulate the target which it is simulating.

[anonymous]

Well, when I argued on here last week ( http://lesswrong.com/lw/d80/malthusian_copying_mass_death_of_unhappy/6y2r?context=1#6y2r ) that emulation would be more difficult than people imagine, based on my experience of working on software that does that, people downvoted it and argued "no, people aren't talking about emulation, but about modelling at the molecular level"

Hmm ... from my reading of that conversation, one person said that.

[anonymous]

Fair enough, although multiple people downvoted that comment (it seems to have had some upvotes since to compensate). Even if they downvoted for different reasons though, that's still at least one counterexample of someone who fits into the category "folks who expect WBE".

Emulation without simulation would require not only vastly more understanding of the brain and of cell biology than we have now (most of the problems Myers points out would still be there, though not all), but on top of that all the problems you hit when trying to emulate one system on another, plus a whole lot of problems no-one has ever even conceived of, because no-one has ever ported an algorithm (for which we have neither source code nor documentation) from a piece of meat to silicon.

I did like the test problem in the comments:

Take a preserved cell phone, slice it into very thin slices, scan the slices, and build a computer simulation of the entire phone.

Question: what is the name, number, and avatar of the third entry in the address book?

Now, how would you approach that one? Assume a known model of phone.

Looks like flash memory stores information using varying levels of charge; that would be quite painful to read out with a destructive scan. Happily that's unlikely to be the case with the brain's long-term storage, since AIUI it doesn't contain any sufficiently good insulators.
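As a toy illustration of why charge-based storage is awkward to recover from a destructive scan (the thresholds and readings below are invented): multi-level cells pack two bits per cell, so the analog charge has to survive the scan precisely enough to land in the right band.

```python
# Toy illustration: with multi-level flash cells, the *analog* charge must be
# recovered precisely enough to pick the right 2-bit symbol.
# Thresholds and charge values below are invented for illustration.

THRESHOLDS = [0.25, 0.50, 0.75]          # normalised charge boundaries
SYMBOLS = ["11", "10", "01", "00"]       # one common MLC mapping

def decode(charge: float) -> str:
    """Map a normalised charge level (0..1) to a 2-bit symbol."""
    for i, t in enumerate(THRESHOLDS):
        if charge < t:
            return SYMBOLS[i]
    return SYMBOLS[-1]

# If the scan perturbs each cell's charge even modestly, cells that sit near a
# threshold decode to the wrong bits.
measured = [0.10, 0.26, 0.49, 0.74]      # hypothetical post-scan readings
print([decode(c) for c in measured])     # ['11', '10', '10', '01']
```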

Now, how would you approach that one?

Step 1 is to construct a superintelligent machine...

Freeze the volatile memory: this preserves its state (you can retrieve passwords from shut-down laptops this way; an upside-down can of computer cleaner will work). Slice it up and scan it (this assumes it wasn't significantly damaged while slicing; some damage is acceptable because what was there can be inferred, which is a standard method in data recovery. Also, you wouldn't actually slice it up, tbh; probably the same with a brain.). With the scan you should be able to build a 3D representation of the memory out of pixels (carrying more information than just RGBA). Now you use some kind of pattern recognition to map patterns of pixels to physical representations (e.g., take a Quake map and look for pixel patterns that match a jump pad).

Now, if you understood how the memory and the cellphone software work, you could just get the state into a binary form acceptable to a cell phone emulator. But because we don't understand how it works, we'll need to simulate reality to a sufficient level. That is, we need an empty emulated universe with physical laws that correspond to our own, so that we can interpret pixels into their physical correspondents. So, when we pattern-match a bunch of pixels into a memory cell with a certain state, we can then drop that interpretation into the emulated world.

For the emulated world to be sufficient for emulating the cell phone, I don't think you would need atoms or electrons (or anything below that level). You could probably emulate the components at the level of electricity, silicon, wire, gold, etc., because we can explain and predict the phenomena a phone produces at that level without going further. E.g., we just need to know what an electric current does, not what its electrons are doing, to turn on an emulated light bulb.

(This was my internal monologue as I went through this problem. It's not researched, and is intended to be taken more as bar talk than anything very serious.)
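In the same bar-talk spirit, here is a minimal sketch of the scan-then-pattern-match step described above, with entirely hypothetical voxel data and cell geometry; the point is only to show the shape of the pipeline, not a workable scanning method.

```python
import numpy as np

# Purely hypothetical: a 3-D scan volume in which each memory cell occupies a
# known 4x4x4 voxel block, and the mean voxel intensity distinguishes a
# charged cell (bit 1) from a discharged one (bit 0).

CELL = 4                                   # voxels per cell edge (assumed)
rng = np.random.default_rng(0)

def fake_scan(bits, noise=0.05):
    """Build a fake scan volume from a known bit string (for testing only)."""
    vol = np.zeros((CELL, CELL, CELL * len(bits)))
    for i, b in enumerate(bits):
        block = b + noise * rng.standard_normal((CELL, CELL, CELL))
        vol[:, :, i * CELL:(i + 1) * CELL] = block
    return vol

def recover_bits(volume):
    """Pattern-match each cell-sized block back to a bit."""
    n_cells = volume.shape[2] // CELL
    bits = []
    for i in range(n_cells):
        block = volume[:, :, i * CELL:(i + 1) * CELL]
        bits.append(1 if block.mean() > 0.5 else 0)
    return bits

original = [1, 0, 1, 1, 0, 0, 1, 0]
print(recover_bits(fake_scan(original)))   # -> [1, 0, 1, 1, 0, 0, 1, 0]
```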

[anonymous]

That seems feasible if you knew both the model and the operating system, and had a scan showing very precise relative temperatures. You could then match the state of the simulated phone to a long but finite list of the possible states of the phone given the operating system. But I'm not a doctor.

It's possible to directly read the state of transistors in the phone's memory via scanning capacitance microscopy (http://www.multiprobe.com/technology/technologyassets/S05_1_direct_measurements_of_charge_in_floating_gate.pdf), so you can reconstruct the actual contents of the memory. Probably the greater challenge would be figuring out how to cut the phone into slices without damaging the memory.

Assume there are 20 apps on the phone, and each app can be in 5 states. Then this list is already 5^20 (or about 10^14) entries long. This doesn't include stored memory, as the address book would entail (number of possible names for the first entry of the address book is already something like 26^20 as a conservative estimate).
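For what it's worth, the arithmetic checks out; a quick sanity check:

```python
# Quick check of the combinatorics above.
print(f"{5 ** 20:.1e}")    # 9.5e+13 -- 20 apps with 5 states each, ~10^14
print(f"{26 ** 20:.1e}")   # 2.0e+28 -- possible 20-letter names for one entry
```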

PZ's comment regarding the implausibility of speeding up an emulated brain was a real head-scratcher to me, and Andrew G calls him on it in the comments. Apparently (judging from his further comments) what he really meant was that you also have to simulate or emulate a good environment, physiology, and endocrine system, or the brain would go insane.

Of course, we already knew that...

Right on.

I'm the blogger PZ was responding to in his post, and I specifically recommended PZ read Sandberg and Bostrom's Whole Brain Emulation: A Roadmap.

That's what PZ is claiming to have read when he writes "I read the paper he recommended," but PZ doesn't seem to have read it very carefully, in particular missing out on the sections "simulation scales" (pp. 13-14), "Body Simulation," and "Environment Simulation" (pp. 74-78). I've written a post explaining PZ's apparent confusions in greater detail at my blog.

a post [...] at my blog.

Copy-and-paste error with the link; I think you meant to give this one.

Thanks, fix'd.

Seems similar enough to "Every part of your brain assumes that all the other surrounding parts work a certain way. The present brain is the Environment of Evolutionary Adaptedness for every individual piece of the present brain.

Start modifying the pieces in ways that seem like "good ideas"—making the frontal cortex larger, for example—and you start operating outside the ancestral box of parameter ranges. And then everything goes to hell.

So you'll forgive me if I am somewhat annoyed with people who run around saying, "I'd like to be a hundred times as smart!" as if it were as simple as scaling up a hundred times instead of requiring a whole new cognitive architecture."

Eliezer Yudkowsky, Growing Up is Hard

Well, OTOH, he also complains that messing around by trial and error is likely to cause unpredictable side effects, like nasty insanity, some of which may be too subtle to notice at first, or just tolerated.

Can Myers engage with stuff he might be wrong about on the Pharyngula blog? He seems to mostly focus on spotting creationists and similar obviously wrong crackpots, hitting them with the biggest hammer in easy reach and never backing down. Taking the same approach to stuff nobody understands very well yet might not be productive.

stuff nobody understands very well yet

He works with brain preservation every day. When he says "this is impossible", he's not being uber-sceptic - he's speaking with annoyance at something he'd love to be able to do and that would make his work a lot easier, but that he has excellent reason to consider practically impossible.

No complaints about that part, but then he went off on the weird argument about how increasing the emulation speed is an incoherent idea, and seems to be sticking to his guns in the comments despite several people pointing out that you don't need to do a quantum-level simulation of an entire universe to provide a sped-up virtual sensory reality for the sped-up emulated brain in a box.

That's the stuff some people do understand, but which PZ either doesn't or can't back down on, since he's writing a blog where he must not lose face by admitting mistakes or the creationists win.

The stuff nobody understands is why we can't even build a robot worm by emulating the roughly 300-neuron nematode brain, which would be nice to know before we start getting into detailed arguments about the practical requirements of human uploads. Proper understanding of this part might also reveal shortcuts which we could use to loosen the scanning and emulation requirements and still end up with functional uploads.

[anonymous]

I used to read Panda's Thumb regularly, many years ago, and have read occasional pieces by him more recently. PZ Myers might be competent at whatever field he specializes in, but as a general thinker he is best ignored.

[This comment is no longer endorsed by its author]

It also seems like a pretty serious argument against cryonics, no?

The dogs bark. The caravan moves on.

Putting smileys after jokes such as "Step 1 is to construct a superintelligent machine..." would be a good start. Seems like people are taking such statements seriously -- not surprising, really.

From the comments, PZ elaborates: "Andrew G: No, you don’t understand. Part of this magical “scan” has to include vast amounts of data on the physics of the entity…pieces which will interact in complex ways with each other and the environment. Unless you’re also planning to build a vastly sped up model of the whole universe, you’re going to have a simulation of brain running very fast in a sensory deprivation tank.

Or do you really think you can understand how the brain works in complete isolation from physiology, endocrinology, and sensation?"

Seems like PZ is dismissing the feasibility of computation by assuming that computation has to be perfectly literal. To make a chemistry analogy here, one does not have to model the quantum mechanics and the dynamics of every single molecule in a beaker of water in order to simulate the kinetics of a reaction in water. One does not need to replicate the chemical entirety of the neuron in silico; one merely needs to replicate the neuron's stimulus-response patterns.
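As a concrete example of modelling stimulus-response rather than chemistry, here is a textbook leaky integrate-and-fire neuron: a handful of parameters and no molecular detail, yet it reproduces the basic input-current-to-spike-train behaviour. The parameter values below are generic textbook choices, not fitted to any real cell.

```python
def lif_spike_times(i_nA, dt_ms=0.1, t_max_ms=200.0, tau_ms=20.0,
                    r_mohm=10.0, v_rest=-70.0, v_thresh=-54.0, v_reset=-80.0):
    """Leaky integrate-and-fire neuron: dV/dt = (-(V - V_rest) + R*I) / tau,
    with membrane resistance R in megaohms, input current I in nA, V in mV.
    Returns the times (ms) at which the membrane potential crosses threshold."""
    v = v_rest
    spikes = []
    for step in range(int(t_max_ms / dt_ms)):
        v += dt_ms * (-(v - v_rest) + r_mohm * i_nA) / tau_ms  # forward Euler
        if v >= v_thresh:
            spikes.append(step * dt_ms)
            v = v_reset                                         # fire and reset
    return spikes

# Below ~1.6 nA the cell stays silent; above it, firing rate grows with current.
for i_nA in (1.0, 2.0, 4.0):
    print(f"{i_nA} nA -> {len(lif_spike_times(i_nA))} spikes in 200 ms")
```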

Oops, didn't see a further comment below. In response to the comment "I still don’t understand why biologists insist that you have to do a perfect simulation, down to the smallest molecule, and then state the obvious fact that it’s not going to happen," PZ says this:

"Errm, because that’s what the singularitarians we’re critiquing are proposing? This whole slice-and-scan proposal is all about recreating the physical components of the brain in a virtual space, without bothering to understand how those components work. We’re telling you that approach requires an awfully fine-grained simulation.

An alternative would be to, for instance, break down the brain into components, figure out what the inputs and outputs to, say, the nucleus accumbens are, and then model how that tissue processes it all (that approach is being taken with models of portions of the hippocampus). That approach doesn’t require a detailed knowledge of what every molecule in the tissue is doing.

But the method described here is a brute force dismantling and reconstruction of every cell in the brain. That requires details of every molecule."

Still seems like a straw man.

Still seems like a straw man.

Erm, please clarify how.

Well, there are many different possible levels of brain emulation (just like in emulating video game consoles), all of which have different demands and feasibilities. The Whole Brain Emulation roadmap discusses several.

No one denies that modelling the details of every molecule would be a very brute-force and difficult emulation, and as far as that goes, he's not strawmanning; but to treat this as the only kind of emulation, and to dismiss emulation in general on the basis of that specific kind, is a straw man.

In the first quote, he sets up the straw man as gwern describes it. In the second quote, he defends his first straw man by saying "but that's what singularitarians believe", essentially putting up a second straw man to defend the first.

The quote jumps from models of large brain regions to molecule-by-molecule analysis, leaving out the intermediate level of creating models of neurons. Thus all the talk in the roadmap about predictive models.

A recent talk by S. Seung at Oxford (http://www.youtube.com/watch?v=ZBpy29IPO8c) shows how large the material problem of building a human connectome is. The AIs are not enough to trace the paths of individual neurons; people have to correct the errors, becoming gamers.

It is unclear why this apposite technical reference got downvotes.

But reverse-engineering something that is the product of almost 4 billion years of evolution, that has been tweaked and finessed in complex and incomprehensible ways, and that is dependent on activity at a sub-cellular level, by hacking it apart and taking pictures of it? Total bollocks.

I agree with the last sentence.

While it is possible that key aspects of intelligence can't be modeled without an extremely low level of detail of brain function, it's also possible that many of those details are not needed. I think the latter is likely. My guess is that if neurons were so chaotically fiddly on a functional level, we wouldn't work at all in the first place.

My hypothesis is that there are a finite number of classes of neurons, glial cells, and synaptic junctions, and that these cluster closely into certain behavioral groupings. In which case, you need only prod enough neurons in petri dishes to develop good statistical models of each type of neuron, glia, and synapse you're modelling. I suspect, but can't prove right now, that only the broad probabilistic behavior of each functional element would be meaningful on the scales we care about.

The reason I believe that is exactly what you said -- it's too noisy. Human brains are way too robust to be extremely sensitive to sub-cellular changes. If you want sub-cellular changes to make a difference (say, in the case of drugs) you have to affect billions of neurons.

EDIT: Actually, you can pretty cleanly rebut his argument about how hard it is to preserve the fish's neural tissue in what he considers to be 'sufficient detail.' If brains really were that sensitive to sub-cellular shifts in neuronal state, there's no way it would be possible for someone to recover from being clinically dead for a few seconds, much less the hours or days that have been observed in cold conditions.
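A toy rendering of the "finite classes plus statistical models" hypothesis above, with entirely made-up classes and numbers: each cell class is summarised by distributions over a few functional parameters (as if fitted from petri-dish measurements), and individual cells are drawn from those distributions rather than modelled molecule by molecule.

```python
import random

# Hypothetical functional classes: each is just (mean, sd) distributions over a
# few behavioural parameters, which would in principle be fitted from dish
# measurements of many cells of that class.
CELL_CLASSES = {
    "fast_spiking_interneuron": {"threshold_mV": (-55.0, 2.0), "tau_ms": (10.0, 1.5)},
    "pyramidal":                {"threshold_mV": (-50.0, 3.0), "tau_ms": (20.0, 4.0)},
}

def sample_cell(class_name, rng=random):
    """Draw one concrete cell: its class plus parameters sampled from the
    class's distributions (broad probabilistic behaviour, not molecules)."""
    params = {name: rng.gauss(mean, sd)
              for name, (mean, sd) in CELL_CLASSES[class_name].items()}
    return {"class": class_name, **params}

print(sample_cell("pyramidal"))
print(sample_cell("fast_spiking_interneuron"))
```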

PZ Meyers

Please spell PZ Myers' name correctly.

To our best knowledge, what are the hard limits on 'compressing' physical systems? That is, given some bit of physics, what are the limits on building a simulator using less space/time/energy/bits/... than the original, and still having a similarly sized phase space? I expect physics is in general incompressible, but perhaps we can use some physical phenomena that don't ordinarily play a part in the everyday systems we want to simulate?

I've seen people discuss what level of emulation is necessary for WBE. Supposing outright simulation is needed, how much bigger/more complex/more expensive might a robust simulator have to be compared to a regular brain?

I expect physics is in general incompressible

Why would "physics" be incompressible? Most of the universe is empty space, no?

I don't know, I'm not a physicist. Don't they have vacuum energy and virtual particles and other stuff that makes even empty space full of information? ETA: what's empty space? A near-zero value of all relevant fields? But if fields can be measured to the same precision regardless of magnitude (?) then don't you get the same amount of information unless the fields are actually a constant zero? I don't understand physics, this may well be completely wrong.

Anyway, I expect the lack of phenomena important to brains in empty space (no ordinary matter and energy, atoms, chemistry) allows the compression of that. But can you simulate a typical physical system using significantly less matter or energy? (Or time?) Can you simulate the human brain or body?

I don't know, I'm not a physicist. Don't they have vacuum energy and virtual particles and other stuff that makes even empty space full of information?

Not so much as near black holes. Just look at their respective entropies.

FWIW, I expect that the human brain will prove to be highly compressible with advanced molecular nanotechnology.

Do you mean the compressibility of a single human brain in isolation, or the compressibility of an individual human brain given that at least one other human brain has already been stored (or is expected to be available during restoration), or both? I expect the data storage requirements of the latter to be orders of magnitude smaller than the former.

I was talking about the compressibility of a single human brain in isolation.

The link goes to lesswrong.com!

I always thought it more likely to happen via an alternative approach: figure out how the plug-and-play nature of the brain works (since one part of the brain can substitute for a damaged part, it must be built out of small plug-and-play-ish units, cortical columns perhaps), then perhaps connect the brain to hardware running that simulated network, have brain function 'expand' into it, get smarter, and figure out how to scan or recreate the rest. Still an enormous problem, of course, but there's a better way to copy data from one computer to another than shaving the plastic off the flash memory chips and then reading off the data with a scanning electron microscope.