Previously in series: Worse Than Random

A tale of two architectures...

Once upon a time there was a man named Rodney Brooks, who could justly be called the King of Scruffy Robotics.  (Sample paper titles:  "Fast, Cheap, and Out of Control", "Intelligence Without Reason").  Brooks invented the "subsumption architecture" - robotics based on many small modules, communicating asynchronously and without a central world-model or central planning, acting by reflex, responding to interrupts.  The archetypal example is the insect-inspired robot that lifts its leg higher when the leg encounters an obstacle - it doesn't model the obstacle, or plan how to go around it; it just lifts its leg higher.
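
To make that flavor concrete, here is a minimal Python sketch of a subsumption-style reflex module.  The names (Leg, bus, on_leg_blocked) are my own illustration, not Brooks's actual behavior language:

```python
import queue

# Sketch of a subsumption-style reflex module (illustrative names, not
# Brooks's behavior language): no world-model, no planner, just a
# stimulus-response rule plus a fire-and-forget message to other modules.
bus = queue.Queue()  # asynchronous message bus shared by all modules

class Leg:
    def __init__(self, leg_id):
        self.id = leg_id
        self.lift_height = 1

def on_leg_blocked(leg):
    """Reflex fired by the leg's force sensor: don't model the obstacle,
    just lift the leg higher and tell the other modules what happened."""
    leg.lift_height += 1
    bus.put(("leg_blocked", leg.id))  # no reply expected, no central state

leg = Leg(0)
on_leg_blocked(leg)     # simulate the sensor interrupt
print(leg.lift_height)  # -> 2
```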

In Brooks's paradigm - which he labeled nouvelle AI - intelligence emerges from "situatedness".  One speaks not of an intelligent system, but rather of the intelligence that emerges from the interaction of the system and the environment.

And Brooks wrote a programming language, the behavior language, to help roboticists build systems in his paradigmatic subsumption architecture - a language that includes facilities for asynchronous communication in networks of reflexive components, and for programming finite state machines.

My understanding is that, while there are still people in the world who speak with reverence of Brooks's subsumption architecture, it's not used much in commercial systems on account of being nearly impossible to program.

Once you start stacking all these modules together, it becomes more and more difficult for the programmer to decide that, yes, an asynchronous local module which raises the robotic leg higher when it detects a block, and meanwhile sends asynchronous signal X to module Y, will indeed produce effective behavior as the outcome of the whole intertwined system whereby intelligence emerges from interaction with the environment...

Asynchronous parallel decentralized programs are harder to write.  And it's not that they're a better, higher form of sorcery that only a few exceptional magi can use.  It's more like the difference between the two business plans, "sell apples" and "sell nonapples".

One noteworthy critic of Brooks's paradigm in general, and subsumption architecture in particular, is a fellow by the name of Sebastian Thrun.

You may recall the 2005 DARPA Grand Challenge for driverless cars.  How many ways was this a fair challenge according to the tenets of Scruffydom?  Let us count the ways:

  • The challenge took place in the real world, where sensors are imperfect, random factors intervene, and macroscopic physics is only approximately lawful.
  • The challenge took place outside the laboratory - not even on paved roads, but 212 km of desert.
  • The challenge took place in real time - continuous perception, continuous action, using only computing power that would fit on a car.
  • The teams weren't told the specific race course until 2 hours before the race.
  • You could write the code any way you pleased, so long as it worked.
  • The challenge was competitive:  The prize went to the fastest team that completed the race.  Any team which, for ideological reasons, preferred elegance to speed - any team which refused to milk every bit of performance out of their systems - would surely lose to a less principled competitor.

And the winning team was Stanley, the Stanford robot, built by a team led by Sebastian Thrun.

How did he do it?  If I recall correctly, Thrun said that the key was being able to integrate probabilistic information from many different sensors, using a common representation of uncertainty.  This is likely code for "we used Bayesian methods", at least if "Bayesian methods" is taken to include algorithms like particle filtering.
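
As a toy illustration of what a common representation of uncertainty buys you - my own sketch assuming independent Gaussian sensor estimates, not Stanley's actual code - fusing two sensors reduces to a precision-weighted average:

```python
# Toy Bayesian sensor fusion (illustrative; assumes independent Gaussian
# estimates of the same quantity; Stanley's real pipeline was far richer).
def fuse(mu1, var1, mu2, var2):
    """Posterior precision is the sum of the precisions;
    the posterior mean is a precision-weighted average."""
    var = 1.0 / (1.0 / var1 + 1.0 / var2)
    mu = var * (mu1 / var1 + mu2 / var2)
    return mu, var

# Laser: obstacle at 10.0 m (variance 0.5); radar: 10.6 m (variance 1.0).
# Because both sensors speak the same probabilistic language, combining
# them is one function call.
print(fuse(10.0, 0.5, 10.6, 1.0))  # -> approximately (10.2, 0.33)
```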

And to heavily paraphrase and summarize some of Thrun's criticisms of Brooks's subsumption architecture:

Robotics becomes pointlessly difficult if, for some odd reason, you insist that there be no central model and no central planning.

Integrating data from multiple uncertain sensors is a lot easier if you have a common probabilistic representation.  Likewise, there are many potential tasks in robotics - in situations as simple as navigating a hallway - where you can end up in two possible situations that look highly similar, and that have to be distinguished by reasoning about the history of the trajectory.
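
A minimal sketch of that kind of history-based disambiguation - a toy discrete Bayes filter of my own construction, not Thrun's code - in a hallway where two cells look identical:

```python
# Toy discrete Bayes filter: two cells of the hallway look identical
# ('door'), and only the history of motion and observation tells them apart.
world = ['door', 'wall', 'door', 'wall', 'wall']
belief = [1 / 5] * 5  # start out completely uncertain about position

def sense(belief, seen):
    """Bayes update: upweight cells consistent with the observation."""
    b = [p * (0.9 if world[i] == seen else 0.1) for i, p in enumerate(belief)]
    total = sum(b)
    return [p / total for p in b]

def move_right(belief):
    """Shift the belief one cell right (deterministic motion, for brevity)."""
    return [belief[(i - 1) % 5] for i in range(5)]

belief = sense(belief, 'door')              # ambiguous: cell 0 or cell 2?
belief = sense(move_right(belief), 'wall')
belief = sense(move_right(belief), 'door')  # only the path 0 -> 1 -> 2 fits
print(max(range(5), key=lambda i: belief[i]))  # -> 2
```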

To be fair, it's not as if the subsumption architecture has never made money.  Rodney Brooks is the founder of iRobot, and I understand that the Roomba uses the subsumption architecture.  The Roomba has no doubt made more money than was won in the DARPA Grand Challenge... though the Roomba might not seem quite as impressive...

But that's not quite today's point.

Earlier in his career, Sebastian Thrun also wrote a programming language for roboticists.  Thrun's language was named CES, which stands for C++ for Embedded Systems.

CES is a language extension for C++.  Its types include probability distributions, which makes it easy for programmers to manipulate and combine multiple sources of uncertain information.  And for differentiable variables - including probabilities - the language enables automatic optimization using techniques like gradient descent.  Programmers can declare 'gaps' in the code to be filled in by training cases:  "Write me this function."
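
I won't reproduce actual CES syntax here; the following Python sketch is only an analogue of the idea - a differentiable "gap" whose parameters are filled in from training cases by gradient descent:

```python
# Python analogue (not actual CES syntax) of a differentiable 'gap'
# filled in from training cases by gradient descent on squared error.
def train_gap(examples, lr=0.1, steps=2000):
    """Learn w, b so that gap(x) = w*x + b fits the training cases."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        for x, y in examples:
            err = (w * x + b) - y
            w -= lr * err * x  # gradient of squared error w.r.t. w
            b -= lr * err      # gradient of squared error w.r.t. b
    return lambda x: w * x + b

# "Write me this function": map a sensor reading to a control output.
gap = train_gap([(0.0, 0.0), (1.0, 2.0), (2.0, 4.0)])
print(round(gap(3.0), 2))  # -> 6.0 (it has learned gap(x) = 2x)
```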

As a result, Thrun was able to write a small, corridor-navigating mail-delivery robot using 137 lines of code, and this robot required less than 2 hours of training.  As Thrun notes, "Comparable systems usually require at least two orders of magnitude more code and are considerably more difficult to implement."  Similarly, a 5,000-line robot localization algorithm was reimplemented in 52 lines.

Why can't you get that kind of productivity with the subsumption architecture?  Scruffies, ideologically speaking, are supposed to believe in learning - it's only those evil logical Neats who try to program everything into their AIs in advance.  Then why does the subsumption architecture require so much sweat and tears from its programmers?

Suppose that you're trying to build a wagon out of wood, and unfortunately, the wagon has a problem, which is that it keeps catching on fire.  Suddenly, one of the wagon-workers drops his wooden beam.  His face lights up.  "I have it!" he says.  "We need to build this wagon from nonwood materials!"

You stare at him for a bit, trying to get over the shock of the new idea; finally you ask, "What kind of nonwood materials?"

The wagoneer hardly hears you.  "Of course!" he shouts.  "It's all so obvious in retrospect!  Wood is simply the wrong material for building wagons!  This is the dawn of a new era - the nonwood era - of wheels, axles, carts all made from nonwood!  Not only that, instead of taking apples to market, we'll take nonapples!  There's a huge market for nonapples - people buy far more nonapples than apples - we should have no trouble selling them!  It will be the era of the nouvelle wagon!"

The set "apples" is much narrower than the set "not apples".  Apples form a compact cluster in thingspace, but nonapples vary much more widely in price, and size, and use.  When you say to build a wagon using "wood", you're giving much more concrete advice than when you say "not wood".  There are different kinds of wood, of course - but even so, when you say "wood", you've narrowed down the range of possible building materials a whole lot more than when you say "not wood".

In the same fashion, "asynchronous" - literally "not synchronous" - is a much larger design space than "synchronous".  If one considers the space of all communicating processes, then synchrony is a very strong constraint on those processes.  If you toss out synchrony, then you have to pick some other method for preventing communicating processes from stepping on each other - synchrony is one way of doing that, a specific answer to the question.
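
For instance - my own toy example - a lock is one concrete answer to the stepping-on-each-other question; a message queue, an actor model, or transactional memory would be others.  "Asynchronous" by itself names none of them:

```python
import threading

# Dropping synchrony still forces a specific coordination choice.
# Here the choice is a lock; the point is that SOME choice must be made.
counter = 0
lock = threading.Lock()

def bump(n):
    global counter
    for _ in range(n):
        with lock:        # without a chosen mechanism, this
            counter += 1  # read-modify-write can interleave and lose updates

threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # -> 400000, but only because we committed to a mechanism
```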

Likewise "parallel processing" is a much huger design space than "serial processing", because serial processing is just a special case of parallel processing where the number of processors happens to be equal to 1.  "Parallel processing" reopens all sorts of design choices that are premade in serial processing.  When you say "parallel", it's like stepping out of a small cottage, into a vast and echoing country.  You have to stand someplace specific, in that country - you can't stand in the whole place, in the noncottage.

So when you stand up and shout:  "Aha!  I've got it!  We've got to solve this problem using asynchronous processes!", it's like shouting, "Aha!  I've got it!  We need to build this wagon out of nonwood!  Let's go down to the market and buy a ton of nonwood from the nonwood shop!"  You've got to choose some specific alternative to synchrony.

Now it may well be that there are other building materials in the universe than wood.  It may well be that wood is not the best building material.  But you still have to come up with some specific thing to use in its place, like iron.  "Nonwood" is not a building material, "sell nonapples" is not a business strategy, and "asynchronous" is not a programming architecture.

And this is strongly reminiscent of - arguably a special case of - the dilemma of inductive bias.  There's a tradeoff between the strength of the assumptions you make, and how fast you learn.  If you make stronger assumptions, you can learn faster when the environment matches those assumptions well, but you'll learn correspondingly more slowly if the environment matches those assumptions poorly.  If you make an assumption that lets you learn faster in one environment, it must always perform more poorly in some other environment.  Such laws are known as the "no-free-lunch" theorems, and the reason they don't prohibit intelligence entirely is that the real universe is a low-entropy special case.
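
A toy illustration of that tradeoff - my own Beta-Bernoulli example, not from the no-free-lunch literature:

```python
# A strong prior learns faster when the environment matches it,
# and pays for it when the environment doesn't.
def posterior_mean(a, b, heads, tails):
    """Mean of the Beta(a, b) posterior after observing the given flips."""
    return (a + heads) / (a + b + heads + tails)

# Five flips from a coin with P(heads) = 0.8: the strong prior wins.
print(posterior_mean(8, 2, 4, 1))  # strong prior Beta(8,2): 0.80 (spot on)
print(posterior_mean(1, 1, 4, 1))  # weak prior Beta(1,1):   ~0.71

# Five flips from a coin with P(heads) = 0.2: the same assumption backfires.
print(posterior_mean(8, 2, 1, 4))  # strong prior: 0.60 (badly off)
print(posterior_mean(1, 1, 1, 4))  # weak prior:   ~0.29 (much closer)
```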

Programmers have a phrase called the "Turing Tarpit"; it describes a situation where everything is possible, but nothing is easy.  A Universal Turing Machine can simulate any possible computer, but only at an immense expense in time and memory.  If you program in a high-level language like Python, then - while most programming tasks become much simpler - you may occasionally find yourself banging up against the walls imposed by the programming language; sometimes Python won't let you do certain things.  If you program directly in machine language, raw 1s and 0s, there are no constraints; you can do anything that can possibly be done by the computer chip; and it will probably take you around a thousand times as much time to get anything done.  You have to do, all by yourself, everything that a compiler would normally do on your behalf.

Usually, when you adopt a program architecture, that choice takes work off your hands.  If I use a standard container library - lists and arrays and hashtables - then I don't need to decide how to implement a hashtable, because that choice has already been made for me.

Adopting the subsumption paradigm means losing order, instead of gaining it.  The subsumption architecture is not-synchronous, not-serial, and not-centralized.  It's also not-knowledge-modelling and not-planning.

This absence of solution implies an immense design space, and it requires a correspondingly immense amount of work by the programmers to reimpose order.  Under the subsumption architecture, it's the programmer who decides to add an asynchronous local module which detects whether a robotic leg is blocked, and raises it higher.  It's the programmer who has to make sure that this behavior plus other module behaviors all add up to an (ideologically correct) emergent intelligence.  The lost structure is not replaced.  You just get tossed into the Turing Tarpit, the space of all other possible programs.

On the other hand, CES creates order; it adds the structure of probability distributions and gradient optimization.  This narrowing of the design space takes so much work off your hands that you can write a learning robot in 137 lines (at least if you happen to be Sebastian Thrun).

The moral:

Quite a few AI architectures aren't.

If you want to generalize, quite a lot of policies aren't.

They aren't choices.  They're just protests.

Added:  Robin Hanson says, "Economists have to face this in spades. So many people say standard econ has failed and the solution is to do the opposite - non-equilibrium instead of equilibrium, non-selfish instead of selfish, non-individual instead of individual, etc."  It seems that selling nonapples is a full-blown Standard Iconoclast Failure Mode.

Comments (78)

I think the reason I find Brooks' ideas interesting is that they seem to mirror the way that natural intelligences came about.

Biological evolution seems to amount to nothing more than local systems adapting to survive in an environment, and then aggregating into more complex systems. We know that this strategy has produced intelligence at least once in the history of the universe, and thus it seems to me a productive example to follow in attempting to create artificial intelligence as well.

Now, I don't know what the state of the art is for th...

Do not copy the Blind Idiot God, for it lives much longer than you, and is a blind idiot.

I hope I live to see a world where synchronous computing is considered a quaint artifact of the dawn of computers. Cognitive bias has prevented us from seeing the full extent of what can be done with this computing thing. A limit on feasible computability (limited by our own brain capacity) that has existed for millions of years, shaping the way we assume we can solve problems in our world, is suddenly gone. We've made remarkable progress in a short time; I can't wait to see what happens next.

Was the DARPA Grand Challenge winner written using CES or a successor? I see no mention of it in the DARPA paper.

If not, why not? Perhaps neither of these approaches are good in the real world.

I am also guilty of wanting to toss people back to the Turing Tarpit to get to AI, but I don't advocate staying there for long. I just think we have the wrong foundation for resource management and have to redo security and resource allocation at the architectural level. Then rebuild in a more adaptive system from there. I have a few ideas and they do have a fair amo...

Economists have to face this in spades. So many people say standard econ has failed and the solution is to do the opposite - non-equilibrium instead of equilibrium, non-selfish instead of selfish, non-individual instead of individual, etc. And of course as you point out the problem is that just doesn't say very much.

Tilden is another roboticist who's gotten rich and famous off of unintelligent robots: BEAM robotics

Robin, if you say that's true in economics too, then this is probably a full-blown Standard Iconoclast Failure Mode.

I wonder if the situation in computer programming is just an especially good illustration of it, because the programmer actually does have to reimpose order somehow afterward - you get to see the structure lost, the tarpit, and the effort. Brooks wrote real programs in his expanded design space and even made a buck off it, so we should much more strongly criticize someone who merely advocates "non-equilibrium economics" without saying which kind of disequilibrium.

My understanding is that, while there are still people in the world who speak with reverence of Brooks's subsumption architecture, it's not used much in commercial systems on account of being nearly impossible to program.

I once asked one of the robotics guys at IDSIA about subsumption architecture (he ran the German team that won the robo-soccer world cup a few years back) and his reply was that people like it because it works really well and is the simplest way to program many things. At the time, all of the top teams used it as far as he knew.

(p.s. don't expect follow up replies on this topic from me as I'm currently in the middle of nowhere using semi-functional dial-up...)

blinks at Shane

Okay, I'll place my data in a state of expected-instability-pending-further-evidence. This doesn't match what I've heard/found in my own meager investigations. Or maybe it works for a Roomba but not automated vehicles.

Venu:

I don't get this post. There is no big mystery to asynchronous communication - a process looks for messages whenever it is convenient for it to do so, very much like we check our mail-boxes when it is convenient for us. Although it is not clear to me how asynchronous communication helps in building an AI, I don't see any underspecification here. And if people (including Brooks) have actually used the architecture for building robots, that at least must be clear proof that there is a real architecture here.

Btw, from my understanding, Thrun's team made heavy use of supervised learning - the same paradigm that Eliezer knocked down as being unFriendly in his AI risks paper.

I DO get this post - I understand, and agree with the general concept, but I think Venu has a point that asynchronous programming is a bad example... although it LITERALLY means only "non-synchronous", in practice it refers to a pretty specific type of alternative programming methodology... much more particular than just the set of all programming methodologies that aren't synchronous.

Mark:

Demanding nonapples is the standard response of voters to the failure of the governing Apple party.

IBM_PC:
Hi, I'm a PC; would you like to buy a non-Apple Dell, Lenovo, or HP?

I think subsumption is still popular amongst students and hobbyists.

It does raise an interesting "mini Friendliness" issue... I'm not really comfortable with the idea of subsumption software systems driving cars around on the road. Robots taking over the world may seem silly to most of the public but there are definite decisions to be made soon about what criteria we should use to trust complex software systems that make potentially deadly decisions. So far I think there's a sense that the rules robots should use are completely clear, the situations enumerable -- because of the extreme narrowness of the task domains. As tasks become more open ended, that may not be true for much longer.

@ Venu: Modern AI efforts are so far from human-level competence, that Friendly vs. Unfriendly doesn't really matter yet. Eliezer is concerned about having a Friendly foundation for the coming Singularity, which starts with human-level AIs. A fairly stupid program (compared to humans) that merely drives a car, just doesn't have the power to be a risk in the sense that Eliezer worries about.

pnrjulius:
It could still kill people if it's not programmed correctly. This seems like a good reason to understand the program well.
Venu:

@Don: Eliezer says in his AI risks paper, criticising Bill Hibbard, that one cannot use supervised learning to specify the goal system for an AI. And although he doesn't say this in the AI risks paper (contra what I said in my previous comment), I remember him saying somewhere (was it in a mailing list?) that supervised learning as such is not a reliable component to include in a Friendly AI. (I may be wrong in attributing this to him however.) I feel this criticism is misguided as any viable proposal for (Friendly or not) AI will have to be built out of ...

pnrjulius:
Isn't supervised learning the current method of achieving friendly natural intelligence? (Most insane psychopaths had bad parents, didn't they?)
orthonormal:
Yes, because we get to start with a prefabricated Friendliness-compatible architecture. Probably yes, but that doesn't distinguish "bad parenting" from "psychopath genes".
dxu:
Non-insane psychopaths also exist (e.g. a significant fraction of wealthy businessmen), and while I don't have data on their childhood experiences, it seems pretty likely that they did not grow up in abusive or neglectful environments. (I agree with your point about Friendliness-compatible architecture, though.)
Lumifer:
Is there actually evidence for this?
dxu:
Did you follow the link I gave right there in the part you quoted?
Lumifer:
Yes, the link is to a blurb for a sensationalist book on Amazon. That's not evidence. You don't happen to believe everything you read, do you?
dxu:
No, but I did happen to read the citations in the back of the book. (Unfortunately, I borrowed the book from the library, so if you want me to post said citations, you'll have to wait until this Thursday.) It's not that great of a book, on the whole (from what I remember of it, the author spends some time talking about Scientology), but the information about psychopathy, at least, mostly appears accurate.
soreff:
Here's another link, which points to quite a body of research: http://bud-meyers.blogspot.com/2012/03/study-10-on-wall-street-are-psychopaths.html
Lumifer:
Sorry, don't see a "body of research". I see a lot of handwaving, name-calling, and righteous indignation. Specific numbers? Diagnostic criteria by which "a significant fraction of wealthy businessmen" was declared to be sociopaths by their psychotherapists or psychiatrists?
Epictetus:
If you're looking for evidence in the sense of a team of experts conducting the appropriate battery of tests on a random sample of wealthy businessmen to estimate the rate of psychopathy, then no, such evidence does not exist. However, there appear to have been studies performed that secretly embedded psychological profile questions into questionnaires given to businessmen and tried to estimate the rate of psychopathy that way. I found one claim by Paul Babiak that up to 1 in 25 business leaders may be psychopaths (as opposed to ~1% of the general population), but I haven't been able to find the actual study. Off my own bookshelf, Robert Hare has a chapter on white-collar psychopathy in Without Conscience.
Lumifer:
Given that he says that "In fact, you could be living with or married to one for 20 years or more and not know that person is a psychopath" and "This makes it almost impossible to distinguish between a genuinely talented team leader and a psychopath" I have to ask what kind of a definition for a "psychopath" is he using.
TheAncientGeek:
I suppose that would be one that relies on complex tests administered by a professional.
Nornagest:
The Guardian article commits the mortal sin of not naming the study or its year or coauthors, so I can't be sure about this, but when I search Google Scholar for Paul Babiak, I find this 2013 paper by Babiak et al. (Search its title for the full text; I can't get the link to behave.) It seems primarily to be about methodology, and gives means and correlations on its own scale but doesn't venture a conversion to more conventional measures; but when you get right down to what it's doing, it's based on anonymous assessments of respondents' bosses collected through Amazon's Mechanical Turk, each consisting of 20 questions on a 5-point scale. If this is the method that the study behind the Guardian article is using, I'm very skeptical of its diagnostic validity. Among other things.
TheOtherDave:
Given the evidence that Babiak is relying on, how high would you estimate the odds of his estimate being within a factor of 4 of the actual frequency?
Lumifer:
First you imply that the "actual frequency" is a feature of the territory -- I'm not sure about that at all (the underlying "feature" is likely to be a continuum with a semi-arbitrary threshold imposed by a map). But the real question we're discussing is not what the absolute frequency is. The real question is whether sociopaths are more frequent in "business leaders" than in the general population. I don't have any hard data, so I have to rely on priors. My prior would be that the frequency of sociopaths would be lower in "business leaders". The reasoning for that goes as follows. Not all sociopaths are high-functioning -- some (a lot?) are just mean, grumpy, bitchy common people with not high IQ. Low-functioning sociopaths won't make it into business leaders -- they are much more likely to go e.g. into military forces and/or law enforcement. High-functioning sociopaths are pretty rare, I think. For them the natural habitat would be politics, as that's where a power-hungry charming liar can get the biggest payoffs. Some will be business leaders -- Steve Jobs is the prime example -- but I don't think there will be many of those. Don't forget that using psychiatric terms as derogatory labels is standard operating procedure :-) "Idiot" used to be a clinical diagnosis, so was "cretin", "imbecile", etc. etc.
TheOtherDave:
All right. Thanks for sharing that. I'm still curious about Epictetus' estimate of Babiak's claim's accuracy.
TheAncientGeek:
But, historically, most of the data on sociopaths comes from the ones who end up in jail or therapy. Apart from the recent research, which you are rejecting on the grounds that it contradicts the older data. And your hunches, on which topic... or "handwaving" as it's unsympathetically known.
Epictetus:
The issue with psychopaths in the workplace is that they're very good at finding high-ranking patrons to protect them (Robert Hare, Without Conscience, Ch. 8). By the time someone complains about the psychopath, upper management has already gotten an earful about how awful the psychopath's coworkers are and how badly they stifle his ability to shine. Psychopaths are good manipulators. Hare mentions how even he, one of the foremost experts on psychopathy, still gets taken in by them. This is correct. The Psychopathy Checklist does have an arbitrary cutoff point.
Nornagest:
A fourth of (1 in 25) is ~1%, or about the prevalence cited for psychopathy in the general population, so if we assume the same definition of psychopathy those odds are pretty good, I'd say. They'd only not fall into that range if business leaders are less likely to be psychopaths, which isn't an absurd proposition but also isn't one I've seen any evidence for.
Epictetus:
Found an actual study. Data was collected from 203 participants at the management/executive level in seven companies. Paul Babiak does consulting work and was able to get the cooperation of the organizations involved. He was able to carry out actual interviews and had access to personnel files and various performance appraisals. In addition he was able to observe the individuals at work and interview their coworkers. Based on this information he completed the Psychopathy Checklist-Revised on each participant (consulting with Robert Hare where necessary). It was discovered that nine participants had a score of 25 or more, which is associated with psychopathy. This comes out to about 1 in 25. Note: the average scores on the PCL-R were lower than those found in the general public, but the number of psychopaths was higher. The paper points out the difficulties involved in getting the necessary cooperation to carry out a large-scale study. A sample size of 203 is rather small to get an accurate result.

"A fairly stupid program (compared to humans) that merely drives a car, just doesn't have the power to be a risk in the sense that Eliezer worries about."

Well, not a significant one, anyway. Perhaps a very, very tiny one. Programs powering robot maids such as are being developed in Japan are a higher risk, as the required forms of intelligence are closer to human forms, and thus probably closer to general intelligence.

pnrjulius:
And again, they could cause (small-scale) human death, if they fail to feed a baby or hug someone too hard, etc.
[anonymous]:

The market has no central processing unit - do your arguments against asynchronous parallel decentralized programs work equally well regarding the market? True, the market doesn't always lift its leg when I'd like it to or where I'd like it to, but it does seem to get along in a decephalic fashion.

I think the strength of Brooks' method of robotics is that it allows for another option. Some functions of robotics might be much better to be reflexive: encounter object, lift leg. Some functions might be better if they followed the old model: encounter objec...

pnrjulius:
But there are central planners in markets, like the Federal Reserve and the World Bank. Admittedly they don't have anywhere near as much executive power as the human prefrontal cortex has over the human body; but they do act in the same basic executive function. Moreover, where this fails (e.g. Somalia) it doesn't fare too well for the functioning of markets.

Trevor Blake misses the point. Of course there are some good design spots in asynchronous space, it's a huge space, there are bound to be. If you can find them, more power to you.

Venu: And if people (including Brooks) have actually used the architecture for building robots, that at least must be clear proof that there is a real architecture here.

The problem is, almost any AI idea you can think of can be made to work on a few examples, so working on a few examples isn't evidence that there's anything there. The deadly words are "now we just have to scale it up".

derekz: I think subsumption is still popular amongst students and hobbyists.

That is, popular amongst people repeating for themselves the toy examples that other people have used it for already.

I'm surprised nobody put this problem in terms of optimization and "steering the future" (including Eliezer, though I suppose he might have tried to make a different point in his post).

As I see it, robots are a special case of machines intended to steer things in their immediate vicinity towards some preferred future. (The special case is that their acting parts and steering parts are housed in the same object, which is not terribly important, except that the subsumption architecture implies it.)

"Smart" robots have a component analogue ... (read more)

The idea is similar to swarm logic or artificial life, where the concept is to program a single agent who behaves naturally, then put a bunch of them together and you get natural behavior. The idea of emergent behavior from smaller parts is of great interest in the defense industry.

Stanley used a Markov model.

[ad hominems / noncontent deleted]

vendor a: selling apples on wood carts isn't making as much money as I hoped.

vendor b: maybe we should sell nonapples on nonwood carts.

a: that's just silly. Which convinces me that we should continue selling non-nonapples on non-nonwood.

... ie, the opposite of stupidity is rarely intelligence, but the opposite of the opposite of stupidity never is.

Human intelligence arose out of a neurological Turing tarpit. It is reasonable to imagine that designing intelligence in less time will take tricks - ones which Mother Nature didn't use - to get out of the tarpit...

pnrjulius:
But there isn't a nonwood store. There is an iron store, and a corn store, and a water store. All these are nonwood, but only one of them is going to make much of a wagon.
homunq:
I was talking about going to the nonwood-wagon store (ie, looking at nontraditional AI projects), not about going to the nonwood store (ie, writing your AI from scratch in machine code).

Classic... delete the context and throw wild accusations. Good to know you're so mature.

Cyan:

Eliezer, you'd have done better to ignore ReadABook's trash. Hir ignorance of your arguments and expertise was obvious.

Yvain:

I don't know anything about the specific AI architectures in this post, but I'll defend non-apples. If one area of design-space is very high in search ordering but very low in preference ordering (ie a very attractive looking but in fact useless idea), then telling people to avoid it is helpful beyond the seemingly low level of optimization power it gives.

A metaphor: religious beliefs constitute a very small and specific area of beliefspace, but that area originally looks very attractive. You could spend your whole life searching within that area and never get anywhere. Saying "be atheist!" provides a trivial amount of optimization power. But that doesn't mean it's of trivial importance in the search for correct beliefs. Another metaphor: if you're stuck in a ditch, the majority of the effort it takes to journey a mile will be the ten vertical meters it takes to climb to the top.

Saying "not X" doesn't make people go for all non-X equally. It makes them apply their intelligence to the problem again, ignoring the trap at X that they would otherwise fall into. If the problem is pretty easy once you stop trying to sell apples, then "sell non-apples" might provide most of the effective optimization power you need.

Yvain: It might be a good idea to start looking for marketable non-apples, but you can't sell generalized non-apples right away. The situation in question is the opposite of what you highlighted: non-apples can act as semantic stopsigns, a Wise and mysterious answer rather than a direction for specific research. Traps of the former domain aren't balanced out by bewilderment of its negation; it might be easier, or it might be much worse.

Yvain, that's a fair point. And to the extent that you've just got specific bad beliefs infesting your head, "Be atheist!" is operationalizable in a way that "sell nonapples!" is not.

So are you claiming that Brooks' whole plan was, on a whim, to just do the opposite of what the neats were doing up till then? I thought his inspiration for the subsumption architecture was nature, the embodied intelligence of evolved biological organisms, the only existence proof of higher intelligence we have so far. To me it seems like the neats are the ones searching a larger design space, not the other way around. The scruffies have identified some kind of solution to creating intelligent machines in nature and are targeting a constrained design space inspired by this--the neats on the other hand are trying to create intelligence seemingly out of the platonic world of forms.

"When you say to build a wagon using "wood", you're giving much more concrete advice then when you say "not wood". There are different kinds of wood, of course - but even so, when you say "wood", you've narrowed down the range of possible building materials a whole lot more then when you say "not wood"."

"When you say to build a wagon using "wood", you're giving much more concrete advice THAN when you say "not wood". There are different kinds of wood, of course - but even so, when you say "wood", you've narrowed down the range of possible building materials a whole lot more THAN when you say "not wood"."

Returning to the post, I suspect that there is a lack of relevant mathematical theorems in this area. What is needed for example is a theorem which says something like:

"In a sufficiently general environment E the AI planning system must incorporate probabilistic reasoning to satisfy its goals."

Likewise a theorem characterising which environments require "asynchronous" planning architectures is probably a holy grail in this field.

I realize I am way late for this party, but I would like to make a specific theoretical point about synchronous vs. asynchronous communication. It turns out that, given some components or modules or threads or what-have-you communicating via sending messages to one another, synchronous communication is actually more general than asynchronous in the following technical sense. One can always use synchronous communication to implement asynchronous communication by throwing another module in the middle to act as a mailbox. On the other hand, given only asy...

wnoise:
Yes, of course asynchronous systems can be encoded in synchronous. But the opposite is true as well. Asynchronous is not "no synchronization", but "explicit, as needed, synchronization", and it can encode fully synchronous architectures at the loss of some efficiency (i.e. gating everything to a central clock unit, that lets everything step forward one unit at a time). Disclaimer: I've worked on designs for asynchronous hardware.
mkehrt:
Point taken, but I think it is a matter of definition. Let me rephrase. It is impossible to implement synchronous communication without some synchronization mechanism, and thus entirely asynchronous systems cannot implement synchronous communication. This statement may be entirely obvious, but I think that is only because thought has been put into properly stating the issue. More to the point, what I am actually trying to say is that the original post seems to imply that settling on synchronous communication is somehow more limiting than settling on asynchronous communication, and that this is false.
pnrjulius:
You can FAKE asynchrony with a synchronous system (using threading and such), but you can't really be asynchronous. If the hardware only processes one step at a time, you can't make it process two steps at once!
dlthomas:
The parent seems to have been discussing (a)synchronous messaging, whereas you seem to be discussing (a)synchronous computation.

I just came across this image and thought it was hilarious in this context: nonaspirin

I was with you until that last part about economics. Behavioral economists ARE working on actual detailed models of human economic decisions that use assumptions other than economic rationality. They use things like hyperbolic discounting (as opposed to just "nonexponential" discounting) and distance-mediated altruism (rather than just "nonselfishness"). So they're not making nonwagons out of nonwood; they're trying to make cars out of steel. They haven't finished yet; but it's really not fair to expect a new paradigm to achieve in 10 y...

Matt_Simpson:
There's a middle ground between pop-econ and the behavioral economists. E.g. Austrian economists and some non-economist social scientists often complain about assumptions of rationality and equilibrium without providing much in the way of an alternative. ETA: Some Austrians even explicitly criticize behavioral economists.
Kenny:
You're right, but there's also a lot of calls for 'non-apple' economics out there too. I try to avoid the sources of that material, but it's far more common than what you're referring to. I don't think Robin (and thus Eliezer, in what he appended to the original post) was referring to behavioral economics as an example of the topic of this post.

Apologies for commenting almost a decade after most of the comments here, but this is the exact same reason why "using nonlinear models is harder but more realistic".

The way we were taught math led us to believe that linear models form this space of tractable math, and nonlinear models form this somewhat larger space of mostly intractable math. This is mostly right, but the space of nonlinear models is almost infinitely larger than that of linear models. And that is the reason linear models are mathematically tractable: they form such a small sp...

Lumifer:
Not necessarily. Closed-form solutions are not guaranteed to exist for your particular subset of models and, in fact, often do not, forcing you to use numeric methods with all the associated problems.
milindsmart:
Sorry, hadn't seen this (note to self: mail alerts). Is this really true, even if we pick a similarly restricted set of models? I mean, consider a set of equations which can only contain products of a number of variables: like (x_1)^a (x_2)^b = const1, (x_1)^d (x_2)^e = const2. Is this nonlinear? Yes. Can it be solved easily? Of course. In fact it is easily transformable to a set of linear equations through logarithms. That's what I'm kinda getting at: I think there is usually some transform that can convert your problem into a linear, or, in general, easy problem. Am I more correct now?
Lumifer:
I don't think this is true. The model must reflect the underlying reality and the underlying reality just isn't reliably linear, even after transforms. Now, historically people used to greatly prefer linear models. Why? Because they were tractable. And for something that you couldn't convert into linear, well, you just weren't able to build a good model. However in our computer age this no longer holds. For an example consider what nowadays is called "machine learning". They are still building models, but these tend to be highly non-linear models with no viable linear transformations.
gjm:
Actually infinitely larger :-). Not necessarily. Some useful classes of models will not have the specific nice properties that linear models have.
milindsmart:
Thanks :) Can you elaborate a bit? Are you saying that I overreached, and that largely there should be some transformed domain where the model turns out to be simple, but is not guaranteed to exist for every model?
gjm:
I'm not sure "overreached" is quite my meaning. Rather, I think I disagree with more or less everything you said, apart from the obvious bits :-). I don't think it has anything much to do with the size of the space. Linear things are tractable because vector spaces are nice. The only connection with the niceness of linear models and the fact that they form such a small fraction of all possible models is this: any "niceness" property they have is a constraint on the models that have them, and therefore for something to be very "nice" requires it to satisfy lots of constraints, so "nice" things have to be rare. But "nice, therefore rare" is not at all the same as "rare, therefore smart". (We could pick out some other set of models, just as sparse as the linear ones, without the nice properties linear models have. They would form just as small a space of possible models, but they would not be as nice to work with as the linear ones.) If you mean that being nonlinear doesn't guarantee anything useful, of course that's right (and this is the same point about "nonapples" being made by the original article here). Particular classes of nonlinear models might have general formulae, a possibility we'll come to in a moment. I'm not sure what that's putting "in other words"; but yes, being linear is a severe restriction. No. Not unless we cheat by e.g. defining some symbol to mean "a function satisfying this funky nonlinear condition we happen to be working with right now". (Which mathematicians sometimes do, if the same funky nonlinear condition comes up often enough. But (1) this is a special case and (2) it still doesn't get you anything as nice and easy to deal with as linearity does.) In general, having a narrowly specified set of models suitable to a specific physical phenomenon is no guarantee at all of exact explicit symbolic formulae. No. Those different special properties may be much less useful than linearity. Linearity is a big deal because it is so very usef