Comment author: Wei_Dai 04 December 2012 11:26:52PM *  1 point [-]

According to my understanding, Spaun is only able to shift between a fixed set of tasks, and only according to a fixed, manually designed algorithm (if the first two inputs are "A1", route the information one way so that it ends up doing one task; if they are "A2", route it another way; and so on). You haven't yet explained (or emulated) how human brains are able to switch fluidly between an ever-changing set of possible tasks, without having to be prompted by specific codes such as "A1" and "A2".

If my understanding is correct, I think a clearer and fairer description of your accomplishment might be that you've demonstrated task shifting "on a (simulated) neural substrate that is structurally similar to the human brain", rather than task shifting "just like the human brain".

Comment author: tcstewar 05 December 2012 12:03:45AM 1 point [-]

Yup, I'd say that's a fair way of expressing it, although I think we take "neural substrate that is structurally similar to the human brain" much more seriously than other people who use phrases like that. It's a similar enough substrate that it fixes a lot of our parameter values for us, leaving us less room to "fiddle with parameters until it works".

We've also tried to make sure to highlight that it can't learn new tasks, so it's not able to operate in the fluid domains that people do. It also doesn't have any intrinsic motivation to do that switching.

Interestingly, there are starting to be good non-neural theories of human task switching (e.g. [http://act-r.psy.cmu.edu/publications/pubinfo.php?id=831] ). These are exactly the sorts of theories we want to take a close look at and see how they could be realistically implemented in spiking neurons.

Comment author: Wei_Dai 04 December 2012 12:44:48PM *  2 points [-]

If Spaun as it is now really does work "just like a human", then building a human-level AI is just a matter of speeding it up.

As I explained in this comment, Spaun can only perform tasks that are specifically and manually programmed into it. It is very, very far from working just like a human. It's definitely incapable of learning new skills or concepts, for example. What the original article said was:

They say Spaun can shift from task to task, "just like the human brain," recognizing an object one moment and memorizing a list of numbers the next.

Well gosh, my desktop computer can also shift from task to task, just like the human brain, mining bitcoins one moment and decoding MPEGs the next. This is either PR or (perhaps unintentional) hype by the reporter, saying something that is literally true but gives the impression of much greater accomplishment.

(Which isn't to say that Spaun might not continue with or inspire further more interesting developments, but a lot of people seem to be overly impressed with it in its current state.)

Comment author: tcstewar 04 December 2012 10:45:15PM 1 point [-]

Hi, it's Terry again (one of the researchers on the project)

The interesting thing (for me) isn't that it can shift from task to task, but that it can shift from task to task just like the human brain. In other words, we're showing how a realistic neural system can shift between tasks. That's something that's not found in other neural models, where you tend either to have a model that does one task or to have external (non-neural) systems modify the model for different tasks. We're showing a way of doing that selection, routing, and control in an entirely neural way that maps nicely onto the cortex-basal ganglia-thalamus loop.
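[Editor's sketch, not from the researchers: the loop described above can be caricatured in plain code, leaving out all the spiking dynamics. Each candidate action has a utility; the basal ganglia disinhibit only the winner, and the thalamus gates the corresponding channel through to cortex. The function and variable names here are illustrative, not part of Spaun.]

```python
import numpy as np

def select_and_route(utilities, channels):
    """Abstract caricature of the cortex-basal ganglia-thalamus loop:
    the basal ganglia disinhibit only the highest-utility action, and
    the thalamus gates the corresponding channel through to cortex."""
    gate = np.zeros(len(utilities))
    gate[np.argmax(utilities)] = 1.0   # winner-take-all disinhibition
    # Each channel is multiplied by its gate; only one passes through.
    return sum(g * ch for g, ch in zip(gate, channels))

visual = np.array([1.0, 2.0, 3.0])
memory = np.array([9.0, 8.0, 7.0])

# If the second action has the higher utility, the memory channel is routed.
out = select_and_route(utilities=[0.2, 0.9], channels=[visual, memory])
```

The point of the neural version is that both the utility comparison and the gating are themselves done by populations of spiking neurons, which is what this sketch deliberately omits.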

Oh, and, since we constrain the model with a bunch of physical parameters influencing the timing of the system (reabsorption of neurotransmitter, mostly), we can also look at how long it takes the system to switch tasks, and compare that to human brains. It's these sorts of comparisons that let us use this sort of model to test hypotheses about what different parts of the brain are doing.

Comment author: ciphergoth 03 December 2012 10:58:51PM 4 points [-]

Q: What do you all make of Bostrom and Sandberg's Whole Brain Emulation Roadmap?

Comment author: tcstewar 04 December 2012 10:36:19PM 9 points [-]

Hi, I'm Terry Stewart, one of the researchers on the project.

I like the roadmap, and it seems to be the right way to go if the goal is to emulate a particular person's brain. However, our whole goal is to understand the human brain, so we want to reach for whole-system understanding, which is exactly what the WBE approach doesn't need.

I believe that the approach we are taking is a novel method for understanding the human brain that has a reasonable chance of producing results faster than the pure WBE approach (or, at the very least, the advances in understanding provided by our approach may make WBE significantly simpler). Of course, to make that claim, I need to justify why our approach is significantly different from that of the hundreds of other researchers who are also trying to understand the human brain.

The key difference is that we have a neural compiler: a system for taking a mathematical description of the function to be computed and the properties of the neurons involved, and producing a set of connection weights that will cause those neurons to approximate that function. This is a radically different approach to building neural networks, and we're still working out the consequences of this compiler. There's a technical overview of this system here [http://ctnsrv.uwaterloo.ca/cnrglab/node/297] and the system itself is open source and available at [http://nengo.ca]. This is what let us build Spaun -- we took a bunch of descriptions of the function of different brain areas, converted them into math, and compiled them into neurons.
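[Editor's sketch: the core idea of "compiling" a function into connection weights can be illustrated with a toy version, assuming the standard decoder-solving step (regularized least squares over sampled tuning-curve activities). This uses rectified-linear tuning curves as a stand-in for LIF responses and is not the Nengo API.]

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_samples = 100, 200

# Each neuron gets a random encoder, gain, and bias, giving it a
# distinct tuning curve over the represented value x.
encoders = rng.choice([-1.0, 1.0], n_neurons)
gains = rng.uniform(0.5, 2.0, n_neurons)
biases = rng.uniform(-1.0, 1.0, n_neurons)

def rates(x):
    """Rectified-linear stand-in for LIF tuning curves."""
    return np.maximum(0.0, gains * (encoders * x[:, None]) + biases)

# Sample the represented value and evaluate the target function.
x = np.linspace(-1, 1, n_samples)
A = rates(x)                 # (n_samples, n_neurons) activity matrix
target = x ** 2              # function we want the population to compute

# "Compile": solve regularized least squares for decoders D with A @ D ≈ f(x).
reg = 0.1 * np.max(A)
D = np.linalg.solve(A.T @ A + reg**2 * np.eye(n_neurons), A.T @ target)

estimate = A @ D
rmse = np.sqrt(np.mean((estimate - target) ** 2))
```

Connection weights between two populations then fall out as the product of one population's decoders and the next population's encoders, which is what lets a functional description be turned directly into a network.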

Right now, we use a very simple neuron model (LIF -- basically the simplest spiking neuron model), but the technique is applicable to any type of neuron we feel like using (and have the computational power to handle). An interesting part of the research is determining what increased functional capacities you get from using more complex neural models.
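[Editor's sketch of how simple the LIF model is: a membrane voltage decays toward the input current, fires when it crosses threshold, resets, and sits out a refractory period. Parameter values here are typical illustrative choices, not necessarily the ones used in Spaun.]

```python
def lif_spikes(current, dt=0.001, t_rc=0.02, t_ref=0.002, duration=1.0):
    """Count spikes from one LIF neuron driven by a constant input current.

    t_rc is the membrane RC time constant; t_ref is the refractory period."""
    v, refractory, spikes = 0.0, 0.0, 0
    for _ in range(int(duration / dt)):
        if refractory > 0:
            refractory -= dt      # neuron is silent after a spike
            continue
        v += dt * (current - v) / t_rc   # voltage decays toward the input
        if v >= 1.0:              # threshold crossing
            spikes += 1
            v = 0.0               # reset
            refractory = t_ref
    return spikes
```

Sub-threshold input (here, current below 1.0) produces no spikes at all, and the firing rate saturates near 1/t_ref for large inputs -- that nonlinearity is the entire model, which is why richer neuron models are an interesting direction.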

Indeed, the main thing that makes me think that this is a novel and useful way of understanding the brain is that we get constraints on the types of computations that can be performed. For example, it turns out to be really easy to compute the circular convolution of two 500-dimensional vectors (an operation we need for our approach to symbol-like reasoning), but very hard to get neurons to find which of five numbers is the largest (the max function). These sorts of constraints have caused us to examine very different types of algorithms for reasoning, and we found that certain inductive reasoning problems are surprisingly easy with these sorts of algorithms [http://ctnsrv.uwaterloo.ca/cnrglab/node/16].
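[Editor's sketch, outside the neural model: circular convolution is cheap because it is an elementwise product in the Fourier domain, and binding two random vectors this way is approximately reversible, which is what makes it useful for symbol-like reasoning. The variable names are illustrative.]

```python
import numpy as np

def circular_convolution(a, b):
    """Bind two vectors: elementwise product in the Fourier domain,
    so the cost is O(d log d) even for d = 500."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

rng = np.random.default_rng(0)
d = 500
a = rng.normal(0, 1 / np.sqrt(d), d)   # random ~unit-length vectors
b = rng.normal(0, 1 / np.sqrt(d), d)

bound = circular_convolution(a, b)

# Unbinding with the approximate inverse of b (its index-reversed copy)
# recovers something close to a.
b_inv = np.concatenate(([b[0]], b[1:][::-1]))
recovered = circular_convolution(bound, b_inv)
similarity = np.dot(recovered, a)   # well above chance level
```

By contrast, a max over even five inputs has no such linear-algebraic shortcut, which is the kind of asymmetry that ends up constraining which reasoning algorithms are neurally plausible.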

Comment author: tcstewar 04 December 2012 09:45:50PM 11 points [-]

Hi, I'm Terry Stewart, one of the researchers on the project, and I'm also a regular reader of Less Wrong. I think LW is the most interesting new development in applied cognitive science, and I'm loving seeing what comes out of it.

I'd definitely be up for answering questions, or going into more detail about some of the stuff in the reddit discussion. I'll go through any questions that show up here as a start...