I think the reason I find Brooks' ideas interesting is that they seem to mirror the way natural intelligences came about.
Biological evolution seems to amount to nothing more than local systems adapting to survive in an environment, and then aggregating into more complex systems. We know that this strategy has produced intelligence at least once in the history of the universe, and thus it seems to me a productive example to follow in attempting to create artificial intelligence as well.
Now, I don't know what the state of the art is for th...
I hope I live to see a world where synchronous computing is considered a quaint artifact of the dawn of computers. Cognitive bias has prevented us from seeing the full extent of what can be done with this computing thing. A limit on feasible computability (limited by our own brain capacity) that has existed for millions of years, shaping the way we assume we can solve problems in our world, is suddenly gone. We've made remarkable progress in a short time; I can't wait to see what happens next.
Was the DARPA Grand Challenge winner written using CES or a successor? I see no mention of it in the DARPA paper.
If not, why not? Perhaps neither of these approaches is good in the real world.
I am also guilty of wanting to toss people back to the Turing Tarpit to get to AI, but I don't advocate staying there for long. I just think we have the wrong foundation for resource management and have to redo security and resource allocation at the architectural level, then rebuild a more adaptive system from there. I have a few ideas and they do have a fair amo...
Economists have to face this in spades. So many people say standard econ has failed and the solution is to do the opposite - non-equilibrium instead of equilibrium, non-selfish instead of selfish, non-individual instead of individual, etc. And of course, as you point out, the problem is that this just doesn't say very much.
Tilden is another roboticist who's gotten rich and famous off of unintelligent robots: BEAM robotics
Robin, if you say that's true in economics too, then this is probably a full-blown Standard Iconoclast Failure Mode.
I wonder if the situation in computer programming is just an especially good illustration of it, because the programmer actually does have to reimpose order somehow afterward - you get to see the structure lost, the tarpit, and the effort. Brooks wrote real programs in his expanded design space and even made a buck off it, so we should much more strongly criticize someone who merely advocates "non-equilibrium economics" without saying which kind of disequilibrium.
My understanding is that, while there are still people in the world who speak with reverence of Brooks's subsumption architecture, it's not used much in commercial systems on account of being nearly impossible to program.
I once asked one of the robotics guys at IDSIA about subsumption architecture (he ran the German team that won the robo-soccer world cup a few years back) and his reply was that people like it because it works really well and is the simplest way to program many things. At the time, all of the top teams used it as far as he knew.
(p.s. don't expect follow-up replies on this topic from me as I'm currently in the middle of nowhere using semi-functional dial-up...)
blinks at Shane
Okay, I'll place my data in a state of expected-instability-pending-further-evidence. This doesn't match what I've heard/found in my own meager investigations. Or maybe it works for a Roomba but not automated vehicles.
I don't get this post. There is no big mystery to asynchronous communication - a process looks for messages whenever it is convenient for it to do so, very much like we check our mail-boxes when it is convenient for us. Although it is not clear to me how asynchronous communication helps in building an AI, I don't see any underspecification here. And if people (including Brooks) have actually used the architecture for building robots, that at least must be clear proof that there is a real architecture here.
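To make the mailbox analogy concrete, here is a minimal Python sketch of the idea, using the standard queue and threading modules; the leg_module name and the message are invented purely for illustration, not taken from any real robot codebase:

```python
import queue
import threading
import time

def leg_module(mailbox: queue.Queue) -> None:
    for _ in range(5):
        time.sleep(0.1)                      # stand-in for the module's own reflex work
        try:
            msg = mailbox.get_nowait()       # look for mail only when it is convenient
            print("leg module saw:", msg)
        except queue.Empty:
            pass                             # no mail; carry on

mailbox = queue.Queue()
worker = threading.Thread(target=leg_module, args=(mailbox,))
worker.start()
mailbox.put("obstacle ahead")                # the sender never waits for the receiver
worker.join()
```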
Btw, from my understanding, Thrun's team made heavy use of supervised learning - the same paradigm that Eliezer knocked down as being unFriendly in his AI risks paper.
I DO get this post - I understand, and agree with the general concept, but I think Venu has a point that asynchronous programming is a bad example... although it LITERALLY means only "non-synchronous", in practice it refers to a pretty specific type of alternative programming methodology... much more particular than just the set of all programming methodologies that aren't synchronous.
Demanding nonapples is the standard response of voters to the failure of the governing Apple party.
I think subsumption is still popular amongst students and hobbyists.
It does raise an interesting "mini Friendliness" issue... I'm not really comfortable with the idea of subsumption software systems driving cars around on the road. Robots taking over the world may seem silly to most of the public but there are definite decisions to be made soon about what criteria we should use to trust complex software systems that make potentially deadly decisions. So far I think there's a sense that the rules robots should use are completely clear, the situations enumerable -- because of the extreme narrowness of the task domains. As tasks become more open ended, that may not be true for much longer.
@ Venu: Modern AI efforts are so far from human-level competence, that Friendly vs. Unfriendly doesn't really matter yet. Eliezer is concerned about having a Friendly foundation for the coming Singularity, which starts with human-level AIs. A fairly stupid program (compared to humans) that merely drives a car, just doesn't have the power to be a risk in the sense that Eliezer worries about.
@Don: Eliezer says in his AI risks paper, criticising Bill Hibbard, that one cannot use supervised learning to specify the goal system for an AI. And although he doesn't say this in the AI risks paper (contra what I said in my previous comment), I remember him saying somewhere (was it in a mailing list?) that supervised learning as such is not a reliable component to include in a Friendly AI. (I may be wrong in attributing this to him however.) I feel this criticism is misguided as any viable proposal for (Friendly or not) AI will have to be built out of ...
"A fairly stupid program (compared to humans) that merely drives a car, just doesn't have the power to be a risk in the sense that Eliezer worries about."
Well, not a significant one, anyway. Perhaps a very, very tiny one. Programs powering robot maids such as are being developed in Japan are a higher risk, as the required forms of intelligence are closer to human forms, and thus probably closer to general intelligence.
The market has no central processing unit - do your arguments against asynchronous parallel decentralized programs work equally well regarding the market? True, the market doesn't always lift its leg when I'd like it to or where I'd like it to, but it does seem to get along in a decephalic fashion.
I think the strength of Brooks's method of robotics is that it allows for another option. Some robotic functions might be much better off being reflexive: encounter object, lift leg. Some functions might be better if they followed the old model: encounter objec...
Trevor Blake misses the point. Of course there are some good design spots in asynchronous space, it's a huge space, there are bound to be. If you can find them, more power to you.
Venu: And if people (including Brooks) have actually used the architecture for building robots, that at least must be clear proof that there is a real architecture here.
The problem is, almost any AI idea you can think of can be made to work on a few examples, so working on a few examples isn't evidence that there's anything there. The deadly words are "now we just have to scale it up".
derekz: I think subsumption is still popular amongst students and hobbyists.
That is, popular amongst people repeating for themselves the toy examples that other people have used it for already.
I'm surprised nobody put this problem in terms of optimization and "steering the future" (including Eliezer, though I suppose he might have tried to make a different point in his post).
As I see it, robots are a special case of machines intended to steer things in their immediate vicinity towards some preferred future. (The special case is that their acting parts and steering parts are housed in the same object, which is not terribly important, except that the subsumption architecture implies it.)
"Smart" robots have a component analogue ...
The idea is similar to swarm logic or artificial life, where the concept is to program a single agent that behaves naturally, then put a bunch of them together and get natural behavior. The idea of emergent behavior from smaller parts is of great interest in the defense industry.
Stanley used a Markov model.
[ad hominems / noncontent deleted]
vendor a: selling apples on wood carts isn't making as much money as I hoped.
vendor b: maybe we should sell nonapples on nonwood carts.
a: that's just silly. Which convinces me that we should continue selling non-nonapples on non-nonwood.
... i.e., the opposite of stupidity is rarely intelligence, but the opposite of the opposite of stupidity never is.
Human intelligence arose out of a neurological Turing tarpit. It is reasonable to imagine that designing intelligence in less time will take tricks - ones which Mother Nature didn't use - to get out of the tarpit...
Eliezer, you'd have done better to ignore ReadABook's trash. Hir ignorance of your arguments and expertise was obvious.
I don't know anything about the specific AI architectures in this post, but I'll defend non-apples. If one area of design-space is very high in search ordering but very low in preference ordering (i.e. a very attractive-looking but in fact useless idea), then telling people to avoid it is helpful beyond the seemingly low level of optimization power it gives.
A metaphor: religious beliefs constitute a very small and specific area of beliefspace, but that area originally looks very attractive. You could spend your whole life searching within that area and never getting anywhere. Saying "be atheist!" provides a trivial amount of optimization power. But that doesn't mean it's of trivial importance in the search for correct beliefs. Another metaphor: if you're stuck in a ditch, the majority of the effort it takes to journey a mile will be the ten vertical meters it takes to climb to the top.
Saying "not X" doesn't make people go for all non-X equally. It makes them apply their intelligence to the problem again, ignoring the trap at X that they would otherwise fall into. If the problem is pretty easy once you stop trying to sell apples, then "sell non-apples" might provide most of the effective optimization power you need.
Yvain: It might be a good idea to start looking for marketable non-apples, but you can't sell generalized non-apples right away. The situation in question is the opposite of what you highlighted: non-apples can act as semantic stopsigns, a Wise and mysterious answer rather than a direction for specific research. Traps of the former domain aren't balanced out by bewilderment of its negation; it might be easier, or it might be much worse.
Yvain, that's a fair point. And to the extent that you've just got specific bad beliefs infesting your head, "Be atheist!" is operationalizable in a way that "sell nonapples!" is not.
So are you claiming that Brooks' whole plan was, on a whim, to just do the opposite of what the neats were doing up till then? I thought his inspiration for the subsumption architecture was nature, the embodied intelligence of evolved biological organisms, the only existence proof of higher intelligence we have so far. To me it seems like the neats are the ones searching a larger design space, not the other way around. The scruffies have identified some kind of solution to creating intelligent machines in nature and are targeting a constrained design space inspired by this--the neats on the other hand are trying to create intelligence seemingly out of the platonic world of forms.
"When you say to build a wagon using "wood", you're giving much more concrete advice then when you say "not wood". There are different kinds of wood, of course - but even so, when you say "wood", you've narrowed down the range of possible building materials a whole lot more then when you say "not wood"."
"When you say to build a wagon using "wood", you're giving much more concrete advice THAN when you say "not wood". There are different kinds of wood, of course - but even so, when you say "wood", you've narrowed down the range of possible building materials a whole lot more THAN when you say "not wood"."
Returning to the post, I suspect that there is a lack of relevant mathematical theorems in this area. What is needed for example is a theorem which says something like:
"In a sufficiently general environment E the AI planning system must incorporate probabilistic reasoning to satisfy its goals."
Likewise a theorem characterising which environments require "asynchronous" planning architectures is probably a holy grail in this field.
I realize I am way late for this party, but I would like to make a specific theoretical point about synchronous vs. asynchronous communication. It turns out that, given some components or modules or threads or what-have-you communicating via sending messages to one another, synchronous communication is actually more general than asynchronous in the following technical sense. One can always use synchronous communication to implement asynchronous communication by throwing another module in the middle to act as a mailbox. On the other hand, given only asy...
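A minimal sketch of the first half of that claim, with Python threads standing in for modules. SyncChannel models a blocking rendezvous and Mailbox is the module thrown in the middle; all the names here are illustrative, not the commenter's own construction:

```python
import threading
import queue
from collections import deque

class SyncChannel:
    """Rendezvous channel: send() blocks until a matching recv() takes the value."""
    def __init__(self):
        self._item = queue.Queue(maxsize=1)
        self._taken = queue.Queue(maxsize=1)

    def send(self, value):
        self._item.put(value)
        self._taken.get()              # wait here until the receiver has the value

    def recv(self):
        value = self._item.get()
        self._taken.put(None)          # release the waiting sender
        return value

class Mailbox:
    """Middleman module: it talks to each side only over synchronous channels,
    yet the sender never waits for the real receiver, because the mailbox
    always rendezvouses promptly and buffers the message internally."""
    def __init__(self, from_sender, to_receiver):
        self._buffer = deque()
        self._ready = threading.Semaphore(0)
        threading.Thread(target=self._pump_in, args=(from_sender,), daemon=True).start()
        threading.Thread(target=self._pump_out, args=(to_receiver,), daemon=True).start()

    def _pump_in(self, channel):
        while True:
            self._buffer.append(channel.recv())
            self._ready.release()

    def _pump_out(self, channel):
        while True:
            self._ready.acquire()
            channel.send(self._buffer.popleft())

sender_side, receiver_side = SyncChannel(), SyncChannel()
Mailbox(sender_side, receiver_side)

for i in range(3):
    sender_side.send(f"message {i}")   # returns as soon as the mailbox takes it
print("sender done; receiver hasn't read anything yet")
for _ in range(3):
    print("receiver got:", receiver_side.recv())
```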
I was with you until that last part about economics. Behavioral economists ARE working on actual detailed models of human economic decisions that use assumptions other than economic rationality. They use things like hyperbolic discounting (as opposed to just "nonexponential" discounting) and distance-mediated altruism (rather than just "nonselfishness"). So they're not making nonwagons out of nonwood; they're trying to make cars out of steel. They haven't finished yet; but it's really not fair to expect a new paradigm to achieve in 10 y...
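For readers who haven't met the terms, here is a tiny illustration of the difference, using the standard textbook form of each model; the parameter values are arbitrary, chosen only for illustration:

```python
def exponential_discount(value, delay, delta=0.9):
    """Standard model: each extra period of delay multiplies value by the same constant."""
    return value * delta ** delay

def hyperbolic_discount(value, delay, k=0.5):
    """Behavioral model: value falls off as 1/(1 + k*delay), steep for near delays,
    shallow for far ones."""
    return value / (1 + k * delay)

for delay in (0, 1, 2, 10, 20):
    print(delay,
          round(exponential_discount(100, delay), 1),
          round(hyperbolic_discount(100, delay), 1))
# A hyperbolic discounter devalues tomorrow sharply relative to today, but treats
# day 19 vs. day 20 as nearly the same - which yields the preference reversals
# that the exponential model cannot produce.
```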
Apologies for commenting almost a decade after most of the comments here, but this is the exact same reason why "using nonlinear models is harder but more realistic".
The way we were taught math led us to believe that linear models form this space of tractable math, and nonlinear models form this somewhat larger space of mostly intractable math. This is mostly right, but the space of nonlinear models is almost infinitely larger than that of linear models. And that is the reason linear models are mathematically tractable: they form such a small sp...
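For what it's worth, a small toy illustration of that asymmetry, assuming numpy and scipy are available; the particular nonlinear model below is an arbitrary choice, invented for the example:

```python
import numpy as np
from scipy.optimize import minimize

# Toy data. The linear fit is a single closed-form linear-algebra call; an arbitrary
# member of the vastly larger nonlinear space needs a generic iterative search instead.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = X @ np.array([2.0, -1.0]) + rng.normal(scale=0.1, size=100)

# Linear model: closed-form least squares.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# One arbitrary nonlinear model (there are infinitely many others):
def nonlinear(params, X):
    a, b = params
    return np.tanh(a * X[:, 0]) + b * X[:, 1] ** 2

# No closed form, so fall back on a general-purpose optimizer.
result = minimize(lambda p: np.mean((nonlinear(p, X) - y) ** 2), x0=np.zeros(2))
print("linear coefficients:", beta)
print("nonlinear parameters:", result.x)
```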
Previously in series: Worse Than Random
A tale of two architectures...
Once upon a time there was a man named Rodney Brooks, who could justly be called the King of Scruffy Robotics. (Sample paper titles: "Fast, Cheap, and Out of Control", "Intelligence Without Reason"). Brooks invented the "subsumption architecture" - robotics based on many small modules, communicating asynchronously and without a central world-model or central planning, acting by reflex, responding to interrupts. The archetypal example is the insect-inspired robot that lifts its leg higher when the leg encounters an obstacle - it doesn't model the obstacle, or plan how to go around it; it just lifts its leg higher.
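To make the reflex example concrete, here is a minimal sketch in Python rather than in Brooks's behavior language; the sensor and the numbers are simulated, purely for illustration:

```python
import random

def bump_sensor() -> bool:
    """Stand-in for a contact sensor on the leg; real hardware would go here."""
    return random.random() < 0.3

def leg_reflex(current_height: float, bumped: bool) -> float:
    """Pure reflex: no model of the obstacle, no plan to go around it.
    If the leg hit something, lift it higher; otherwise relax back down."""
    if bumped:
        return current_height + 0.5
    return max(1.0, current_height - 0.1)

height = 1.0
for tick in range(10):
    height = leg_reflex(height, bump_sensor())
    print(f"tick {tick}: step height {height:.1f}")
```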
In Brooks's paradigm - which he labeled nouvelle AI - intelligence emerges from "situatedness". One speaks not of an intelligent system, but rather the intelligence that emerges from the interaction of the system and the environment.
And Brooks wrote a programming language, the behavior language, to help roboticists build systems in his paradigmatic subsumption architecture - a language that includes facilities for asynchronous communication in networks of reflexive components, and programming finite state machines.
My understanding is that, while there are still people in the world who speak with reverence of Brooks's subsumption architecture, it's not used much in commercial systems on account of being nearly impossible to program.
Once you start stacking all these modules together, it becomes more and more difficult for the programmer to decide that, yes, an asynchronous local module which raises the robotic leg higher when it detects a block, and meanwhile sends asynchronous signal X to module Y, will indeed produce effective behavior as the outcome of the whole intertwined system whereby intelligence emerges from interaction with the environment...
Asynchronous parallel decentralized programs are harder to write. And it's not that they're a better, higher form of sorcery that only a few exceptional magi can use. It's more like the difference between the two business plans, "sell apples" and "sell nonapples".
One noteworthy critic of Brooks's paradigm in general, and subsumption architecture in particular, is a fellow by the name of Sebastian Thrun.
You may recall the 2005 DARPA Grand Challenge for driverless cars. How many ways was this a fair challenge according to the tenets of Scruffydom? Let us count the ways:
And the winning team was Stanley, the Stanford robot, built by a team led by Sebastian Thrun.
How did he do it? If I recall correctly, Thrun said that the key was being able to integrate probabilistic information from many different sensors, using a common representation of uncertainty. This is likely code for "we used Bayesian methods", at least if "Bayesian methods" is taken to include algorithms like particle filtering.
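This is not Stanley's actual code, but a toy example of what a common representation of uncertainty buys you: two noisy estimates of the same distance, each represented as a Gaussian, combine with two lines of algebra. The sensor readings below are invented for illustration.

```python
def fuse(mean_a, var_a, mean_b, var_b):
    """Bayesian fusion of two independent Gaussian estimates of one quantity:
    the result is precision-weighted toward the more certain sensor."""
    var = 1.0 / (1.0 / var_a + 1.0 / var_b)
    mean = var * (mean_a / var_a + mean_b / var_b)
    return mean, var

# Lidar: obstacle at 10.0 m, variance 0.2. Camera: 10.6 m, variance 0.8.
mean, var = fuse(10.0, 0.2, 10.6, 0.8)
print(f"fused estimate: {mean:.2f} m, variance {var:.2f}")
# The fused estimate lands nearer the lidar reading, because it is more certain.
```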
And to heavily paraphrase and summarize some of Thrun's criticisms of Brooks's subsumption architecture:
Robotics becomes pointlessly difficult if, for some odd reason, you insist that there be no central model and no central planning.
Integrating data from multiple uncertain sensors is a lot easier if you have a common probabilistic representation. Likewise, there are many potential tasks in robotics - in situations as simple as navigating a hallway - when you can end up in two possible situations that look highly similar and have to be distinguished by reasoning about the history of the trajectory.
To be fair, it's not as if the subsumption architecture has never made money. Rodney Brooks is the founder of iRobot, and I understand that the Roomba uses the subsumption architecture. The Roomba has no doubt made more money than was won in the DARPA Grand Challenge... though the Roomba might not seem quite as impressive...
But that's not quite today's point.
Earlier in his career, Sebastian Thrun also wrote a programming language for roboticists. Thrun's language was named CES, which stands for C++ for Embedded Systems.
CES is a language extension for C++. Its types include probability distributions, which makes it easy for programmers to manipulate and combine multiple sources of uncertain information. And for differentiable variables - including probabilities - the language enables automatic optimization using techniques like gradient descent. Programmers can declare 'gaps' in the code to be filled in by training cases: "Write me this function."
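CES's real syntax is not shown here, so the following is only a Python sketch of the "gap" idea just described, with a linear map standing in for the function to be learned and made-up training data:

```python
import random

class Gap:
    """A declared hole in the program: a function whose parameters are filled in
    from training cases rather than written by the programmer. Here the function
    is just a linear map, tuned by plain gradient descent."""
    def __init__(self, n_inputs):
        self.w = [0.0] * n_inputs

    def __call__(self, x):
        return sum(wi * xi for wi, xi in zip(self.w, x))

    def train(self, cases, lr=0.05, epochs=200):
        """'Write me this function': fit the weights to (input, target) pairs."""
        for _ in range(epochs):
            for x, target in cases:
                err = self(x) - target
                self.w = [wi - lr * err * xi for wi, xi in zip(self.w, x)]

# Hypothetical task: turn two sonar readings into a steering correction.
steering = Gap(n_inputs=2)
cases = [((left, right), 0.5 * (right - left))
         for left, right in [(random.random(), random.random()) for _ in range(50)]]
steering.train(cases)
print("learned weights:", [round(w, 2) for w in steering.w])   # roughly [-0.5, 0.5]
```

The real CES goes further, with probability distributions as built-in types and automatic optimization over any differentiable variable, but the sketch shows the shape of the idea: structure the language supplies so the programmer doesn't have to.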
As a result, Thrun was able to write a small, corridor-navigating mail-delivery robot using 137 lines of code, and this robot required less than 2 hours of training. As Thrun notes, "Comparable systems usually require at least two orders of magnitude more code and are considerably more difficult to implement." Similarly, a 5,000-line robot localization algorithm was reimplemented in 52 lines.
Why can't you get that kind of productivity with the subsumption architecture? Scruffies, ideologically speaking, are supposed to believe in learning - it's only those evil logical Neats who try to program everything into their AIs in advance. Then why does the subsumption architecture require so much sweat and tears from its programmers?
Suppose that you're trying to build a wagon out of wood, and unfortunately, the wagon has a problem, which is that it keeps catching on fire. Suddenly, one of the wagon-workers drops his wooden beam. His face lights up. "I have it!" he says. "We need to build this wagon from nonwood materials!"
You stare at him for a bit, trying to get over the shock of the new idea; finally you ask, "What kind of nonwood materials?"
The wagoneer hardly hears you. "Of course!" he shouts. "It's all so obvious in retrospect! Wood is simply the wrong material for building wagons! This is the dawn of a new era - the nonwood era - of wheels, axles, carts all made from nonwood! Not only that, instead of taking apples to market, we'll take nonapples! There's a huge market for nonapples - people buy far more nonapples than apples - we should have no trouble selling them! It will be the era of the nouvelle wagon!"
The set "apples" is much narrower than the set "not apples". Apples form a compact cluster in thingspace, but nonapples vary much more widely in price, and size, and use. When you say to build a wagon using "wood", you're giving much more concrete advice than when you say "not wood". There are different kinds of wood, of course - but even so, when you say "wood", you've narrowed down the range of possible building materials a whole lot more than when you say "not wood".
In the same fashion, "asynchronous" - literally "not synchronous" - is a much larger design space than "synchronous". If one considers the space of all communicating processes, then synchrony is a very strong constraint on those processes. If you toss out synchrony, then you have to pick some other method for preventing communicating processes from stepping on each other - synchrony is one way of doing that, a specific answer to the question.
Likewise "parallel processing" is a much huger design space than "serial processing", because serial processing is just a special case of parallel processing where the number of processors happens to be equal to 1. "Parallel processing" reopens all sorts of design choices that are premade in serial processing. When you say "parallel", it's like stepping out of a small cottage, into a vast and echoing country. You have to stand someplace specific, in that country - you can't stand in the whole place, in the noncottage.
So when you stand up and shout: "Aha! I've got it! We've got to solve this problem using asynchronous processes!", it's like shouting, "Aha! I've got it! We need to build this wagon out of nonwood! Let's go down to the market and buy a ton of nonwood from the nonwood shop!" You've got to choose some specific alternative to synchrony.
Now it may well be that there are other building materials in the universe than wood. It may well be that wood is not the best building material. But you still have to come up with some specific thing to use in its place, like iron. "Nonwood" is not a building material, "sell nonapples" is not a business strategy, and "asynchronous" is not a programming architecture.
And this is strongly reminiscent of - arguably a special case of - the dilemma of inductive bias. There's a tradeoff between the strength of the assumptions you make, and how fast you learn. If you make stronger assumptions, you can learn faster when the environment matches those assumptions well, but you'll learn correspondingly more slowly if the environment matches those assumptions poorly. If you make an assumption that lets you learn faster in one environment, it must always perform more poorly in some other environment. Such laws are known as the "no-free-lunch" theorems, and the reason they don't prohibit intelligence entirely is that the real universe is a low-entropy special case.
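A toy rendering of that tradeoff, with predictors and sequences invented for illustration: one predictor makes a strong assumption about the environment, the other a weak one, and each wins exactly where its assumption holds.

```python
def biased_predict(history):
    """Strong assumption: the world is sticky, so the next symbol repeats the last."""
    return history[-1] if history else 0

def cautious_predict(history):
    """Weaker assumption: just predict the majority symbol seen so far."""
    return int(sum(history) > len(history) / 2)

def score(predict, sequence):
    hits, history = 0, []
    for symbol in sequence:
        hits += predict(history) == symbol
        history.append(symbol)
    return hits

sticky = [0] * 10 + [1] * 10                 # an environment that matches the strong assumption
alternating = [i % 2 for i in range(20)]     # an environment that violates it badly

for name, env in (("sticky", sticky), ("alternating", alternating)):
    print(name, "biased:", score(biased_predict, env),
          "cautious:", score(cautious_predict, env))
# The biased predictor scores 19/20 on the sticky sequence and 1/20 on the
# alternating one; stronger assumptions buy speed only where they hold.
```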
Programmers have a phrase called the "Turing Tarpit"; it describes a situation where everything is possible, but nothing is easy. A Universal Turing Machine can simulate any possible computer, but only at an immense expense in time and memory. If you program in a high-level language like Python, then - while most programming tasks become much simpler - you may occasionally find yourself banging up against the walls imposed by the programming language; sometimes Python won't let you do certain things. If you program directly in machine language, raw 1s and 0s, there are no constraints; you can do anything that can possibly be done by the computer chip; and it will probably take you around a thousand times as much time to get anything done. You have to do, all by yourself, everything that a compiler would normally do on your behalf.
Usually, when you adopt a program architecture, that choice takes work off your hands. If I use a standard container library - lists and arrays and hashtables - then I don't need to decide how to implement a hashtable, because that choice has already been made for me.
Adopting the subsumption paradigm means losing order, instead of gaining it. The subsumption architecture is not-synchronous, not-serial, and not-centralized. It's also not-knowledge-modelling and not-planning.
This absence of solution implies an immense design space, and it requires a correspondingly immense amount of work by the programmers to reimpose order. Under the subsumption architecture, it's the programmer who decides to add an asynchronous local module which detects whether a robotic leg is blocked, and raises it higher. It's the programmer who has to make sure that this behavior plus other module behaviors all add up to an (ideologically correct) emergent intelligence. The lost structure is not replaced. You just get tossed into the Turing Tarpit, the space of all other possible programs.
On the other hand, CES creates order; it adds the structure of probability distributions and gradient optimization. This narrowing of the design space takes so much work off your hands that you can write a learning robot in 137 lines (at least if you happen to be Sebastian Thrun).
The moral:
Quite a few AI architectures aren't.
If you want to generalize, quite a lot of policies aren't.
They aren't choices. They're just protests.
Added: Robin Hanson says, "Economists have to face this in spades. So many people say standard econ has failed and the solution is to do the opposite - non-equilibrium instead of equilibrium, non-selfish instead of selfish, non-individual instead of individual, etc." It seems that selling nonapples is a full-blown Standard Iconoclast Failure Mode.