Not sure if this has been covered on LW, but it seems highly relevant to whole brain emulation (WBE) development. Link here:

http://www.reddit.com/r/IAmA/comments/147gqm/we_are_the_computational_neuroscientists_behind/

A few questioners mention the Singularity and make Skynet jokes.

The abstract from their paper in Science:

A central challenge for cognitive and systems neuroscience is to relate the incredibly complex behavior of animals to the equally complex activity of their brains. Recently described, large-scale neural models have not bridged this gap between neural activity and biological function. In this work, we present a 2.5-million-neuron model of the brain (called “Spaun”) that bridges this gap by exhibiting many different behaviors. The model is presented only with visual image sequences, and it draws all of its responses with a physically modeled arm. Although simplified, the model captures many aspects of neuroanatomy, neurophysiology, and psychological behavior, which we demonstrate via eight diverse tasks.

I'm curious to see LWers' perspectives on the project.


Hi, I'm Terry Stewart, one of the researchers on the project, and I'm also a regular reader of Less Wrong. I think LW is the most interesting new development in applied cognitive science, and I'm loving seeing what comes out of it.

I'd definitely be up for answering questions, or going into more detail about some of the stuff in the reddit discussion. I'll go through any questions that show up here as a start...

There's a Less Wrong meetup group in Waterloo if you're interested.

I actually know one of the guys working on it - I could ask him to come over here if you like.

[anonymous]:

This seems like a great idea - if we put together a concrete list of questions to ask, it could be worth his time to come over.

If anyone wants to ask any questions, leave a comment and maybe we can get some direct answers. (But make sure your question isn't already answered in the AMA first!)

Q: What do you all make of Bostrom and Sandberg's Whole Brain Emulation Roadmap?

Hi, I'm Terry Stewart, one of the researchers on the project.

I like the roadmap, and it seems to be the right way to go if the goal is to emulate a particular person's brain. However, our whole goal is to understand the human brain, so we want to reach for whole-system understanding, which is exactly what the WBE approach doesn't need.

I believe that the approach we are taking is a novel method for understanding the human brain that has a reasonable chance of producing results faster than the pure WBE approach (or, at the very least, the advances in understanding provided by our approach may make WBE significantly simpler). Of course, to make that claim, I need to justify why our approach is significantly different from that of the hundreds of other researchers who are also trying to understand the human brain.

The key difference is that we have a neural compiler: a system for taking a mathematical description of the function to be computed and the properties of the neurons involved, and producing a set of connection weights that will cause those neurons to approximate that function. This is a radically different approach to building neural networks, and we're still working out the consequences of this compiler. There's a technical overview of this system here [http://ctnsrv.uwaterloo.ca/cnrglab/node/297], and the system itself is open source and available at [http://nengo.ca]. This is what let us build Spaun -- we took a bunch of descriptions of the function of different brain areas, converted them into math, and compiled them into neurons.
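
To make the "neural compiler" idea concrete, here is a minimal sketch (in plain NumPy, not Nengo itself) of the core computation in the Neural Engineering Framework that Nengo implements: fix randomly chosen tuning curves for a population, then solve a regularized least-squares problem for the linear decoders that make the population's activity approximate a target function. All parameter values and the target function below are illustrative choices, not Spaun's actual settings.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, tau_rc, tau_ref = 100, 0.02, 0.002

# Randomly chosen tuning-curve parameters: preferred directions
# (encoders), gains, and bias currents.
encoders = rng.choice([-1.0, 1.0], size=n_neurons)
gains = rng.uniform(0.5, 2.0, size=n_neurons)
biases = rng.uniform(-1.0, 1.0, size=n_neurons)

def lif_rates(x):
    """Steady-state LIF firing rates for a scalar input x (threshold = 1)."""
    j = gains * encoders * x + biases          # input current per neuron
    rates = np.zeros_like(j)
    active = j > 1.0                           # only supra-threshold neurons fire
    rates[active] = 1.0 / (tau_ref - tau_rc * np.log(1.0 - 1.0 / j[active]))
    return rates

# Sample each neuron's activity across the represented range of x.
xs = np.linspace(-1, 1, 200)
A = np.array([lif_rates(x) for x in xs])       # (samples, neurons)

# The function to "compile" into the population, e.g. f(x) = x**2.
target = xs ** 2

# Regularized least squares for decoders d such that A @ d ~= f(x).
reg = 0.1 * A.max()
d = np.linalg.solve(A.T @ A + reg**2 * len(xs) * np.eye(n_neurons),
                    A.T @ target)
print("max decoding error:", np.abs(A @ d - target).max())
```

Connection weights to a downstream population are then (roughly) the outer product of these decoders with the downstream population's encoders, which is what lets function descriptions like this be chained together into larger circuits.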

Right now, we use a very simple neuron model (LIF, or leaky integrate-and-fire -- basically the simplest spiking neuron model), but the technique is applicable to any type of neuron we feel like using (and have the computational power to handle). An interesting part of the research is determining what increased functional capacities you get from using more complex neural models.
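
For readers unfamiliar with it, the LIF model really is only a few lines of code: a membrane voltage that leaks toward its input, spikes at a threshold, and resets. A minimal sketch, using generic textbook-style constants rather than Spaun's actual parameters:

```python
def simulate_lif(current, dt=0.001, tau_rc=0.02, tau_ref=0.002, t_end=1.0):
    """Spike times of a leaky integrate-and-fire neuron driven by a
    constant input current, with the firing threshold normalized to 1."""
    v, refractory, spikes = 0.0, 0.0, []
    for step in range(int(t_end / dt)):
        if refractory > 0:                  # hold at reset while refractory
            refractory -= dt
            continue
        v += dt * (current - v) / tau_rc    # voltage leaks toward the input
        if v >= 1.0:                        # threshold crossing -> spike
            spikes.append(step * dt)
            v, refractory = 0.0, tau_ref    # reset, then refractory period
    return spikes

print(len(simulate_lif(1.5)), "spikes in one second at J = 1.5")
```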

Indeed, the main thing that makes me think that this is a novel and useful way of understanding the brain is that we get constraints on the types of computations that can be performed. For example, it turns out to be really easy to compute the circular convolution of two 500-dimensional vectors (an operation we need for our approach to symbol-like reasoning), but very hard to get neurons to find which of five numbers is the largest (the max function). These sorts of constraints have caused us to examine very different types of algorithms for reasoning, and we found that certain inductive reasoning problems are surprisingly easy with these sorts of algorithms [http://ctnsrv.uwaterloo.ca/cnrglab/node/16].
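
The circular-convolution point is easy to verify outside of neurons: by the convolution theorem it is just an elementwise product in the Fourier domain, which is why it scales comfortably to 500 dimensions. A small sketch (the unbinding step uses the standard approximate inverse from vector-symbolic architectures):

```python
import numpy as np

def circular_convolution(a, b):
    """Bind two vectors: elementwise product in the Fourier domain."""
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=len(a))

rng = np.random.default_rng(0)
dim = 500
a = rng.normal(0, 1 / np.sqrt(dim), dim)   # random roughly-unit-length vectors,
b = rng.normal(0, 1 / np.sqrt(dim), dim)   # as used in symbol-like reasoning
bound = circular_convolution(a, b)

# Unbinding with b's approximate inverse (its involution) recovers a noisy
# version of a, recognizable by its high similarity to the original.
b_inv = np.concatenate(([b[0]], b[:0:-1]))
recovered = circular_convolution(bound, b_inv)
print("similarity to a:", np.dot(recovered, a) /
      (np.linalg.norm(recovered) * np.linalg.norm(a)))
```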

[anonymous]:

It looks like we only have one question - still I think a lot of people (me included) would like to see it answered. Would you mind contacting your friend?

No problem. Sent him a message, hopefully he has time!

I think we need to separate the concept of whole brain emulation from that of biology-inspired human-like AI. This actually looks pretty bad for Robin Hanson's singularity hypothesis, in which the first emulations to perfectly emulate existing humans suddenly make the cost of labor drop dramatically. If this research pans out, then we could have a "soft takeoff", where AI slowly catches up to us and slowly overtakes us.

CNRG_UWaterloo, regarding mind uploads:

Being able to simulate a particular person's brain is incredibly far away. There aren't any particularly good ideas as to how we might be able to reasonably read out that sort of information from a person's brain. That said, there are also lots of uses that a repressive state would have for any intelligent system (think of automatically scanning all surveillance camera footage). But you don't want a realistic model of the brain to do that -- it'd get bored exactly as fast as people do.

So we should expect machine labor to gradually replace human labor, exactly as it has since the beginning of the industrial revolution, as more and more capabilities are added, with "whole brain emulation" being one of the last features needed to make machines with all the capabilities of humans (if this step is even necessary). It's possible, of course, that we could wind up in a situation where the "last piece of the puzzle" turns out to be hugely important, but I don't see any particular reason to think that will happen.

Robin's economic model for a growth explosion with AI uses a continuum of tasks for automation. The idea is that as you automate more tasks, those tasks are done very efficiently, but the remaining ones become bottlenecks, making up more of GDP and limiting growth. Think of Baumol's cost disease: as our manufacturing productivity has increased, economic growth winds up limited by productivity improvements in the sectors that have been resistant to automation, like health care, education, computer programming, and science.

As you eliminate the last bottlenecks where human labor is important, you can speed the whole process of growth up to computer-scales, rather than being held back by the human weakest link. One can make an analogy to Amdahl's law: if you can parallelize 50% of a problem you can double your speed, at 90% you can get a 10x speedup, at 99% a 100x speedup, and as you approach 100% you can rush up to other limits.
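
For concreteness, those figures are just Amdahl's law with the automated fraction p taken to run in negligible time, so the overall speedup is 1/(1-p):

```python
# Speedup when a fraction p of the work becomes (effectively) free and
# the remaining (1 - p) is the bottleneck.
for p in (0.50, 0.90, 0.99):
    print(f"p = {p:.2f} -> speedup = {1 / (1 - p):.0f}x")
# p = 0.50 -> speedup = 2x
# p = 0.90 -> speedup = 10x
# p = 0.99 -> speedup = 100x
```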

Similarly, smooth human progress in automating elements of AI development (which then proceed very quickly, bottlenecked by the human-limited elements) could produce stark speedup as automation approaches 100%.

However, such developments do make a world dominated by whole brain emulations rather than neuromorphic AI less likely, even if they still allow an intelligence explosion.

[anonymous]:

I think we need to separate the concept of whole brain emulation from that of biology-inspired human-like AI.

This seems completely true. Part of the problem is that the media hype surrounding this stuff drops lines like this:

Spaun can recognize numbers, remember lists and write them down. It even passes some basic aspects of an IQ test, the team reports in the journal Science.... the simplified model of the brain, which took a year to build, captures many aspects of neuroanatomy, neurophysiology and psychological behaviour... They say Spaun can shift from task to task, "just like the human brain," recognizing an object one moment and memorizing a list of numbers the next. And like humans, Spaun is better at remembering numbers at the beginning and end of the list than the ones in the middle. Spaun's cognition and behaviour is very basic, but it can learn patterns it has never seen before and use that knowledge to figure out the best answer to a question. "So it does learn," says Eliasmith.

Basically: to explain this stuff to normal readers, writers anthropomorphize the hell out of the project and you end up with words like 'intuition' and 'understanding' and 'learn' and 'remember' - which make the articles both sexier and way more misleading. The same thing happened with IBM's project and, to my understanding, the Blue Brain Project as well.

Actually I'm not sure if any of that is a problem. Spaun is quite literally "anthropomorphic" - modeled after a human brain. So it's not much of a stretch to say that it learns and understands the way a human does. I was just pointing out that the more progress we make on human-like AIs, without progress on brain scanning, the less likely a Hansonian singularity (dominated by ems of former humans) becomes. If Spaun as it is now really does work "just like a human", then building a human-level AI is just a matter of speeding it up. So by the time we have computers capable of supporting a human mind upload, we'll already have computer programs at least as smart as humans, which learn their knowledge on their own, with no need for a knowledge transplant from a human.

If Spaun as it is now really does work "just like a human", then building a human-level AI is just a matter of speeding it up.

As I explained in this comment, Spaun can only perform tasks that are specifically and manually programmed into it. It is very, very far from working just like a human. It's definitely incapable of learning new skills or concepts, for example. What the original article said was:

They say Spaun can shift from task to task, "just like the human brain," recognizing an object one moment and memorizing a list of numbers the next.

Well gosh, my desktop computer can also shift from task to task, just like the human brain, mining bitcoins one moment and decoding MPEGs the next. This is either PR or (perhaps unintentional) hype by the reporter, saying something that is literally true but gives the impression of much greater accomplishment.

(Which isn't to say that Spaun might not continue with or inspire further more interesting developments, but a lot of people seem to be overly impressed with it in its current state.)

Hi, it's Terry again (one of the researchers on the project).

The interesting thing (for me) isn't that it can shift from task to task, but that it can shift from task to task just like the human brain. In other words, we're showing how a realistic neural system can shift between tasks. That's something that's not found in other neural models, where you tend to either have it do one task or you have external (non-neural) systems modify the model for different tasks. We're showing a way of doing that selection, routing, and control in an entirely neural way that maps nicely onto the cortex-basal ganglia-thalamus loop.

Oh, and, since we constrain the model with a bunch of physical parameters influencing the timing of the system (reabsorption of neurotransmitter, mostly), we can also look at how long it takes the system to switch tasks, and compare that to human brains. It's these sorts of comparisons that let us use this sort of model to test hypotheses about what different parts of the brain are doing.
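
As a deliberately non-neural caricature of that loop: cortex proposes a utility for each possible action, the basal ganglia select the highest-utility action, and the thalamus gates the corresponding routing of information. In Spaun this entire loop is implemented in spiking neurons with physiologically constrained timing; the task names and utility values below are made up purely for illustration.

```python
def select_action(utilities):
    """Winner-take-all selection (the basal ganglia's functional role)."""
    return max(utilities, key=utilities.get)

def control_step(visual_input, state):
    # "Cortex": assign a utility to each candidate action.
    utilities = {
        "store_task_code": 1.0 if visual_input.startswith("A") else 0.0,
        "run_current_task": 0.8 if state.get("task") else 0.1,
    }
    # "Basal ganglia" select; "thalamus" gates the routing.
    action = select_action(utilities)
    if action == "store_task_code":
        state["task"] = visual_input       # route the input to task memory
    return action

state = {}
print(control_step("A3", state))   # -> store_task_code
print(control_step("5", state))    # -> run_current_task
```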

According to my understanding, Spaun is only able to shift between a fixed set of tasks, and according to a fixed algorithm (if the first two inputs are "A1", route the information one way so that it ends up doing one task; if the first two inputs are "A2", route the information another way; etc.) that was manually designed. You haven't explained yet (or emulated) how human brains are able to switch fluidly between an ever-changing set of possible tasks, and without having to be prompted by specific codes such as "A1" and "A2".

If my understanding is correct, I think a clearer and fairer description of your accomplishment might be that you've demonstrated task shifting "on a (simulated) neural substrate that is structurally similar to the human brain", rather than task shifting "just like the human brain".

Yup, I'd say that's a fair way of expressing it, although I think we take "neural substrate that is structurally similar to the human brain" much more seriously than other people that use phrases like that. It's a similar enough substrate that it fixes a lot of our parameter values for us, leaving us less open to "fiddle with parameters until it works".

We've also tried to make sure to highlight that it can't learn new tasks, so it's not able to work in the fluid domains people do. It also doesn't have any intrinsic motivation to do that switching.

Interestingly, there are starting to be good non-neural theories of human task switching (e.g., [http://act-r.psy.cmu.edu/publications/pubinfo.php?id=831]). These are exactly the sorts of theories we want to take a close look at and see how they could be realistically implemented in spiking neurons.

Interesting quotes from the article.

Their main goal is behavior reproduction, not just making lots of neurons:

Although impressive scaling has been achieved [in other projects], no previous large-scale spiking neuron models have demonstrated how such simulations connect to a variety of specific observable behaviors... In contrast, we present here a spiking neuron model of 2.5 million neurons that is centrally directed to bridging the brain-behavior gap. Our model embodies neuroanatomical and neurophysiological constraints, making it directly comparable to neural data at many levels of analysis. Critically, the model can perform a wide variety of behaviorally relevant functions. We show results on eight different tasks that are performed by the same model, without modification.

The task:

All inputs to the model are 28 by 28 images of handwritten or typed characters. All outputs are the movements of a physically modeled arm that has mass, length, inertia, etc... Many of the tasks we have chosen are the subject of extensive modeling in their own right (e.g., image recognition, serial working memory, and reinforcement learning), and others demonstrate abilities that are rare for neural network research and have not yet been demonstrated in spiking networks (e.g., counting, question answering, rapid variable creation, and fluid reasoning)...

The eight tasks (termed "A0" to "A7") that Spaun performs are:

(A0) Copy drawing. Given a randomly chosen handwritten digit, Spaun should produce the same digit written in the same style as the handwriting.

(A1) Image recognition. Given a randomly chosen handwritten digit, Spaun should produce the same digit written in its default writing.

(A2) Reinforcement learning. Spaun should perform a three-armed bandit task, in which it must determine which of three possible choices generates the greatest stochastically generated reward. Reward contingencies can change from trial to trial.

(A3) Serial working memory. Given a list of any length, Spaun should reproduce it.

(A4) Counting. Given a starting value and a count value, Spaun should write the final value (that is, the sum of the two values).

(A5) Question answering. Given a list of numbers, Spaun should answer either one of two possible questions: (i) what is in a given position in the list? or (ii) given a kind of number, at what position is this number in the list?

(A6) Rapid variable creation. Given example syntactic input/output patterns (e.g., 0 0 7 4 → 7 4; 0 0 2 4 → 2 4; etc.), Spaun should complete a novel pattern given only the input (e.g., 0 0 1 4 → ?).

(A7) Fluid reasoning. Spaun should perform a syntactic or semantic reasoning task that is isomorphic to the induction problems from the Raven's Progressive Matrices (RPM) test for fluid intelligence.

How the model works:

The network implementing the Spaun model consists of three compression hierarchies, an action-selection mechanism, and five subsystems. Components of the model communicate using spiking neurons that implement neural representations that we call “semantic pointers,” using various firing patterns. Semantic pointers can be understood as being elements of a compressed neural vector space... Compression is a natural way to understand much of neural processing. For instance, the number of cells in the visual hierarchy gradually decreases from the primary visual cortex to the inferior temporal cortex, meaning that the information has been compressed from a higher-dimensional (image-based) space into a lower-dimensional (feature) space. This same kind of operation maps well to the motor hierarchy, where lower-dimensional firing patterns are successively decompressed (for example, when a lower-dimensional motor representation in Euclidean space moves down the motor hierarchy to higher-dimensional muscle space).

The five subsystems...: (i) map the visual hierarchy firing pattern to a conceptual firing pattern as needed (information encoding), (ii) extract relations between input elements (transformation calculation), (iii) evaluate the reward associated with the input (reward evaluation), (iv) decompress firing patterns from memory to conceptual firing pattern (information decoding), and (v) map conceptual firing patterns to motor firing patterns and control motor timing (motor processing)... It is critical to note that the elements of Spaun are not task-specific.
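
The compression story in these excerpts can be illustrated with a toy linear example: project high-dimensional "images" that lie near a low-dimensional manifold down to a small feature vector (the analogue of a semantic pointer), then decompress on the way back out. This sketch uses PCA on synthetic data purely for illustration; Spaun's actual visual hierarchy is a learned spiking network, not a PCA.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 28x28 "images": Gaussian bumps at random positions, so the
# 784-dimensional data genuinely lies near a low-dimensional manifold
# (a crude stand-in for handwritten digits).
xs, ys = np.meshgrid(np.arange(28), np.arange(28))
def bump(cx, cy):
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / 8.0).ravel()

data = np.array([bump(*rng.uniform(6, 22, size=2)) for _ in range(500)])

# Linear compression: 784 dimensions -> 20-dimensional feature vector.
mean = data.mean(axis=0)
_, _, Vt = np.linalg.svd(data - mean, full_matrices=False)
compress = Vt[:20]                              # (20, 784) projection

pointer = compress @ (data[0] - mean)           # compressed representation
reconstruction = compress.T @ pointer + mean    # decompression (motor-side analogue)
err = np.linalg.norm(reconstruction - data[0]) / np.linalg.norm(data[0])
print(f"relative reconstruction error: {err:.3f}")
```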

The performance of the model:

[On Raven's Progressive Matrices] Human participants average 89% correct (chance is 13%) on the matrices that include only an induction rule (5 of 36 matrices). Spaun performs similarly, achieving a match-adjusted success rate of 88%...

[On serial working memory] As with human data, Spaun produces distinct recency (items at the end are recalled with greater accuracy) and primacy (items at the beginning are recalled with greater accuracy) effects. A good match to human data from a rapid serial-memory task using digits and short presentation times is also evident, with 17 of 22 human mean values within the 95% confidence interval of 40 instances of the model.

[On image recognition] ...the model achieves 94% accuracy on untrained data from the MNIST handwriting database (human accuracy is ~98%).

[On reinforcement learning] ...the model is able to learn reward-dependent actions in a variable environment using known neural mechanisms.

[On counting] ...the model reproduces human reaction times and scaling of variability.

[On question answering] ...the model generates a novel behavioral prediction.

[On rapid variable creation] ...the model instantiates the first neural architecture able to solve this challenging task.

Conclusions:

However, the central purpose of this work is not to explain any one of these tasks, but to propose a unified set of neural mechanisms able to perform them all... Although Spaun's main contribution lies in its breadth, it also embodies new hypotheses regarding how specific tasks are solved... However, Spaun has little to say about how that complex, dynamical system develops from birth. Furthermore, Spaun has many other limitations that distinguish it from developed brains. For one, Spaun is not as adaptive as a real brain, as the model is unable to learn completely new tasks.