Problem: I've read arguments for WBE, but I can't find any against.

Most people agree that WBE is the first step to FAI (EDIT: I mean to say that if we were going to try to build AGI in the safest way possible, WBE would be the first step. I did not mean to imply that I thought WBE would come before AGI). I've read a significant portion of Bostrom's WBE roadmap. My question is: are there any good arguments against the feasibility of WBE? A quick Google search did not turn up anything other than this video.

Given that many people consider the scenario in which WBE comes before AGI to be safer than the converse, shouldn't we be talking about this more? What probability do you guys assign to the likelihood that WBE comes before AGI?

Bostrom's WBE roadmap details the technological advances needed to get to WBE:

Different required technologies have different support and drivers for development. Computers are developed independently of any emulation goal, driven by mass market forces and the need for special high performance hardware. Moore’s law and related exponential trends appear likely to continue some distance into the future, and the feedback loops powering them are unlikely to rapidly disappear (see further discussion in Appendix B: Computer Performance Development). There is independent (and often sizeable) investment into computer games, virtual reality, physics simulation and medical simulations. Like computers, these fields produce their own revenue streams and do not require WBE‐specific or scientific encouragement.

A large number of the other technologies, such as microscopy, image processing, and computational neuroscience are driven by research and niche applications. This means less funding, more variability of the funding, and dependence on smaller groups developing them. Scanning technologies are tied to how much money there is in research (including brain emulation research) unless medical or other applications can be found. Validation techniques are not widely used in neuroscience yet, but could (and should) become standard as systems biology becomes more common and widely applied.  


Finally there are a few areas relatively specific to WBE: large‐scale neuroscience, physical handling of large amounts of tissue blocks, achieving high scanning volumes, measuring functional information from the images, automated identification of cell types, synapses, connectivity and parameters. These areas are the ones that need most support in order to enable WBE.  The latter group is also the hardest to forecast, since it has weak drivers and a small number of researchers. The first group is easier to extrapolate by using current trends, with the assumption that they remain unbroken sufficiently far into the future. 


Implications for those trying to accelerate the future:

Because many of the technological requirements are going to be driven by business-as-usual funding and standard applications, anybody who wants to help bring about WBE faster (and hence FAI) should focus on either donating towards the niche applications that won't receive a lot of funding otherwise, or trying to become a researcher in those areas (but what good would becoming a researcher be if there's no funding?). Also, how probable is it that once the business-as-usual technologies become more advanced, more government/corporate funding will go towards the niche applications?

32 comments:

I'm confused by the premise that "Most people agree that WBE is the first step to FAI." "Friendly AI" is, if I'm not mistaken, a term invented by EY, and neither he nor the others here have been particularly in favor of WBE in the past. Did you perhaps mean AGI (artificial general intelligence)?

I had to look up what WBE stands for. You should probably define it the first time you use it.

EDIT: It's whole brain emulation, in case anyone gets to the comments without getting it.

gjm:

Which is to say, in the title.

Ah, the RSS feed did not display that. I too was unsure at first what this was until I read the first paragraph or two.

WBE is basically porting spaghetti code to a really different architecture... it may seem easy at first, but...

What comes to mind are some C64 emulators that included logic for a simulated electron beam scanning the display, because there were games that changed the video memory while it was being read in order to... be able to use more colors? I'm not sure. But the C64 was still created by humans, while evolution had millions of years to mess our brains up with complicated molecular-level stuff.

As I see it, WBE wouldn't be a one-step achievement but rather a succession of smaller steps: building various kinds of implants, interfaces, etc., "rewriting" parts of the brain to have the same functionality, until we end up with a brain made out of new code, making the old, biology-based part irrelevant.

That said... I don't think WBE would solve FAI so easily. The current concept is along the lines of "if we can build a working brain without either us or it knowing how it works, that's safe". That is indeed true, but only if we can treat it as a black box all along. Unfortunately, we can't avoid learning stuff about how minds work in the process, so by the time we get the first functional WBE instance, maybe every grad student would be able to hack a working synthetic AGI together just by reading a few papers...

Emulation... I know I had a good link on that in Simulation inferences... Ah, here we go! This was a pretty neat Ars Technica article: "Accuracy takes power: one man's 3GHz quest to build a perfect SNES emulator"

Whether you regard the examples and trade-offs as optimistic or pessimistic lessons for WBE reveals your own take on the matter.

Following up on this, I wondered what it'd take to emulate a relatively simple processor with as many normal transistors as your brain has neurons, and when we should get to that, assuming Moore's Law holds and that the number of transistors needed to emulate something is a simple linear function of the number of transistors in the thing being emulated. This seems like it should give a relatively conservative lower bound, but it's obviously still just a napkin calculation. The result is about 48 years, and the math is:

Where all numbers are taken from Wikipedia, and the random 2 in the second equation is the Moore's law years per doubling constant.
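For anyone who wants to redo this, here's a minimal sketch of that style of estimate in Python. The input figures below are placeholder assumptions (neuron count, emulation overhead, current transistor count), not necessarily the Wikipedia numbers used above, so the output won't reproduce the 48-year figure:

```python
import math

# Napkin estimate: years until a single chip has enough transistors to emulate
# a hypothetical processor with one transistor per brain neuron.
# All input figures are assumed placeholders for illustration.

neurons = 8.6e10            # rough human brain neuron count
overhead = 100              # assumed transistors needed per emulated transistor
transistors_now = 2.6e9     # assumed transistor count of a current high-end CPU
years_per_doubling = 2      # the Moore's law constant mentioned above

transistors_needed = neurons * overhead          # linear emulation assumption
doublings = math.log2(transistors_needed / transistors_now)
years = years_per_doubling * doublings
print(f"~{years:.0f} years")                     # very sensitive to 'overhead'
```

Because the estimate only depends logarithmically on the inputs, being off by a factor of 1000 on the emulation overhead shifts the answer by about 20 years.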

I'm not sure what to make of this number, but it is an interesting anchor for other estimates. That said, this whole style of problem is probably much easier in an FPGA or similar, which gives completely different estimates.

I wonder if people would sign up to be simulated with 95% accuracy. That would raise some questions about consciousness and identity. I guess you can't really emulate anything with 100% accuracy. The question of how accurate the simulations would have to be before people think they are safe enough to be uploaded sounds like an interesting topic for debate.

Upvoted for the nice example at the beginning (although it would be even better if I didn't have to look up C64; for anyone else reading this, C64 stands for Commodore 64, which was an old 8-bit home computer).

I don't know where you get the "most people". Many here think it is a possible step, others think it is irrelevant to GAI, and some even consider it harder than GAI. I think you may be confusing learning from neurological science and models with the more extensive Whole Brain Emulation; many do think we can learn from the former to help with GAI.

Most people agree that WBE is the first step to FAI.

I am one of the authors of the talk in the second link, and both of us think this claim is probably false, and that WBE won't come first.

Agree.

are there any good arguments against the feasibility of WBE

Against the feasibility? Heck yes. It's a ridiculously hard technical problem. Building a mind from scratch sounds much easier. Airplanes were invented in 1903 and we still don't have a scanner that can copy birds.

Storage requirements, for example. There are about 10^11 neurons and 10^14 synapses. If we don't model them all, there is absolutely no guarantee that you'll get a human, let alone a specific human. And there's some news that glial cells play an important role in cognition too, so that's another couple hundred billion cells and more connections.

Even if saying everything important about a cell or synapse only takes 10 bytes, that's still a few internets' worth of data to keep running in parallel. In contrast, there are plenty of optimization tricks you can play with something made out of code rather than opaque physical data. Even if the first seed AI is as complicated as a human brain, it could be smushed down to smaller processing requirements because it's in an easily manipulable language. So if computing power requirements end up determining which comes first, it seems quite likely that seed AI comes first.
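As a quick sanity check on the raw storage side of this, using only the neuron/synapse counts and the 10-bytes-per-element figure from this thread (how much state 10 bytes actually buys you per element is a separate question):

```python
# Back-of-envelope storage estimate from the figures above.
neurons = 1e11            # ~10^11 neurons
synapses = 1e14           # ~10^14 synapses
bytes_per_element = 10    # "everything important in 10 bytes" assumption

total_bytes = (neurons + synapses) * bytes_per_element
print(f"{total_bytes:.1e} bytes (~{total_bytes / 1e15:.2f} PB)")
# ~1e15 bytes before counting glia, connectivity structure, or any richer
# per-element state; and it all has to be kept running in parallel, not just stored.
```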

Airplanes were invented in 1903 and we still don't have a scanner that can copy birds.

To keep the metaphor precise, we'd only need to copy the bird's technology of flight, which is its physical shape and its flying technique, not the whole internal structure. The reason we don't do it is that we already have better means of air transportation, not the infeasibility of bird emulation.

And we do, in fact, have toys that achieve flight by being bird-shaped and flapping their wings.

Building a mind from scratch sounds much easier.

I disagree -- I would argue that, in principle, simulating/emulating a mind would be much easier than building a mind from scratch. My main justification is that simulating a brain is much more straightforward than building one from scratch. They are both undoubtedly extremely difficult tasks, but we are much closer to being able to accomplish the simulation. As a rough measure of this, you can try to look at where current companies and researchers are placing their bets on the problem. For example, brain simulation is a field which is already maturing rapidly (IBM's project being a keynote example), whereas the state of the art of "mind design from scratch", as it were, is still essentially speculative. Some groups like Goertzel's team and others are looking at it, but no big company is taking on the task.

If you count IBM's simulation of networks (made out of point nodes, not simulated neurons, unless you're thinking of a different project) as "betting on emulating whole humans," then why not also count all their work on AI as "betting on building minds from scratch?" And, of course, Google. And, of course, birds.

I made a discussion post a while back trying to get a handle on how hard this problem is by looking at progress on nematode brain emulation.

Most people agree that WBE is the first step to FAI

Agree that it is the first step, or a first step? We don't know enough about developing an FAI to be able to say the former with any certainty, do we?

I'm looking for the conversation on this.

[anonymous]:

Hmm, I don't know much about the actual feasibility of AGI, but I happen to know a bit about neurology, and even if there are really powerful scanning techniques on the horizon - for example, according to a researcher I recently spoke to, MRI with 10^−6 m resolution is currently available - WBE seems really hard to do. Aside from the already-mentioned number of neurons and synapses (and glia), there are (according to my lectures) about 1000 different kinds of neurons and lots of different synapses; ion-channel densities vary by location; you have local, global and semi-global neurotransmitters; passive conductive properties of dendrites/axons differ and change over time; and DNA is modified in response to some stimuli. I think you get the point. It's not impossible, I guess, and some things could probably be left out/compressed, but still . . .

What seems (at least to me) more plausible is just that a lot of the "macroscopic tricks" (for example, receptive fields) could be applied when engineering AI and maybe AGI.

There are many difficulties with WBE:

  1. As other people said, the storage/computing power required.

  2. The scanning itself.

  3. All the interactions between the brain and the rest of the body: our brain is constantly fed with input from nerves from the eyes, ears, nose, the skin, even the inside of the body ... Unsettle that too much, and you're likely to drive the brain mad.

  4. The chemical part: the brain is stimulated by chemicals (hormones, drugs, ...), and we don't know exactly whether we can just model neurons and synapses (like we do in neural networks) or whether we need a much lower-level (atom-by-atom) simulation to get a working brain, since the brain is constantly affected by various chemicals.

So... WBE is of course theoretically possible, but it seems a harder challenge to me than building a GAI "from scratch" (which is already a hard task).

[anonymous]:

A large number of the other technologies, such as microscopy, image processing, and computational neuroscience are driven by research and niche applications.

I was talking with someone about this a couple months ago, and as far as nondestructive brain imaging goes, it may be theoretically impossible. I forget the details of the argument, but the kind of resolution you would need to observe neurons is so high that the effective penetration of whatever signal you use to observe them is too small to see the whole brain. It was just an off-the-cuff Fermi estimate, though, so I wouldn't bet any probability mass on it.

If this is the case, it seems to be a reasonable argument against practical feasibility. Getting someone to trust destructive imaging techniques would be a much steeper uphill battle.

EDIT: For comparison, the roadmap expects a resolution of 5 x 5 x 50 nm. Modern CT scans get 0.2 mm resolution, according to Wikipedia. Obviously there's room for improvement, but the question is, that much?
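Taking the two resolution figures at face value, the volumetric gap is easy to put a number on (the isotropic 0.2 mm CT voxel below is my assumption):

```python
# Compare the roadmap's target voxel with a modern CT voxel, using the figures above.
roadmap_voxel_nm3 = 5 * 5 * 50       # 5 x 5 x 50 nm target voxel
ct_edge_nm = 0.2e6                   # 0.2 mm expressed in nanometres
ct_voxel_nm3 = ct_edge_nm ** 3       # assumed isotropic CT voxel

ratio = ct_voxel_nm3 / roadmap_voxel_nm3
print(f"~{ratio:.1e} roadmap voxels per CT voxel")   # ~6.4e12
```

That's roughly thirteen orders of magnitude of volumetric improvement, which is what "but the question is, that much?" is pointing at.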

Well, if direct scanning from outside is not possible, it's always possible to send nanobots to scan the brain from inside. It's also possible to freeze the person, cut the brain into small slices while it's frozen, and scan the slices. Those are just two different ways of "scanning" a brain, and there are probably others we haven't even thought of yet.

[anonymous]:

It's also possible to freeze the person, cut the brain in small slices while it's frozen, and scan the slices.

This is what I referred to as "destructive imaging" above. Unless the brain is vitrified (which essentially kills any chemical data, which we may or may not need), the ice damage is going to play havoc with the scanning results. Every time you thaw and refreeze the brain to try another scan, more of it gets damaged by the ice. It's a lot riskier.

Again, I'm not saying it's impossible, but there's a difference between a technology possible in 2025 and a technology possible in 2060. After all, I may not live to see the latter.

gjm:

It was just an off-the-cuff Fermi estimate, though, so I wouldn't bet any probability mass on it.

I'm not sure what you mean by "bet any probability mass on it" -- one of the things about probability mass is that it's conserved... -- but there are many cases in which I'd be quite happy to adjust my probabilities substantially and/or make large bets on the basis of off-the-cuff Fermi estimates. The best reason for not basing one's actions on such an estimate isn't that their results are no use, but that one can very often improve them somewhat with little effort.

In this instance, the main reason I'd be reluctant to base anything important on such an estimate is that it seems like improvements in measuring equipment, and/or just measuring for a long time, might be able to overcome the problem. But if the estimate were (1) a matter of what's fundamentally possible, (2) separated from what present technology can do by several orders of magnitude, and (3) apparently quite robust (i.e., the approximations it uses don't look too bad) then it might be entirely reasonable to conclude from it that Pr(WBE in the foreseeable future) is extremely small.

Different required technologies have different support and drivers for developmenComputers are developed

There appears to be a cut-and-paste typo in the quote from the WBE paper. I guess it's because the error occurs over a page boundary.

Given that many people consider the scenario in which WBE comes before AGI to be safer than the converse, shouldn't we be talking about this more?

Building it on the moon would be safer in the same sense. Should we be talking about that more?

What probability do you guys assign to the likelihood that WBE comes before AGI?

Pretty low - under 1%.

I know you meant that sarcastically, but it actually strikes me as a good idea if the technology for colonizing the moon were sufficiently advanced. Though internet access would have to be restricted either way.

FWIW, I wasn't being sarcastic. Today's most sophisticated electronic minds are almost constantly plugged into the internet. If you think tomorrow's machine intelligences are going to be very much different, reality check time, methinks.