Followup to: The Weak Inside View

Saith Robin:

"It is easy, way too easy, to generate new mechanisms, accounts, theories, and abstractions.  To see if such things are useful, we need to vet them, and that is easiest "nearby", where we know a lot.  When we want to deal with or understand things "far", where we know little, we have little choice other than to rely on mechanisms, theories, and concepts that have worked well near.  Far is just the wrong place to try new things."

Well... I understand why one would have that reaction.  But I'm not sure we can really get away with that.

When possible, I try to talk in concepts that can be verified with respect to existing history.  When I talk about natural selection not running into a law of diminishing returns on genetic complexity or brain size, I'm talking about something that we can try to verify by looking at the capabilities of other organisms with brains big and small.  When I talk about the boundaries to sharing cognitive content between AI programs, you can look at the field of AI the way it works today and see that, lo and behold, there isn't a lot of cognitive content shared.

But in my book this is just one trick in a library of methodologies for dealing with the Future, which is, in general, a hard thing to predict.

Let's say that instead of using my complicated-sounding disjunction (many different reasons why the growth trajectory might contain an upward cliff, which don't all have to be true), I instead staked my whole story on the critical threshold of human intelligence.  Saying, "Look how sharp the slope is here!" - well, it would sound like a simpler story.  It would be closer to fitting on a T-Shirt.  And by talking about just that one abstraction and no others, I could make it sound like I was dealing in verified historical facts - humanity's evolutionary history is something that has already happened.

But speaking of an abstraction being "verified" by previous history is a tricky thing.  There is this little problem of underconstraint - of there being more than one possible abstraction that the data "verifies".

In "Cascades, Cycles, Insight" I said that economics does not seem to me to deal much in the origins of novel knowledge and novel designs, and said, "If I underestimate your power and merely parody your field, by all means inform me what kind of economic study has been done of such things."  This challenge was answered by comments directing me to some papers on "endogenous growth", which happens to be the name of theories that don't take productivity improvements as exogenous forces.

I've looked at some literature on endogenous growth.  And don't get me wrong, it's probably not too bad as economics.  However, the seminal literature talks about ideas being generated by combining other ideas, so that if you've got N ideas already and you're combining them three at a time, that's a potential N!/(3!(N - 3)!) new ideas to explore.  It then goes on to note that, in this case, there will be vastly more ideas than anyone can explore, so that the rate at which ideas are exploited will depend more on a paucity of explorers than a paucity of ideas.
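
For a sense of scale, here is a quick arithmetic check of that combinatorial claim; the idea counts below are purely illustrative, not numbers taken from the endogenous growth literature:

```python
from math import comb

# Number of 3-way combinations of N ideas, i.e. C(N, 3) = N! / (3! * (N - 3)!).
# This grows roughly as N**3 / 6, so it quickly dwarfs any plausible number of
# researchers available to explore the combinations.
for n in (100, 10_000, 1_000_000):
    print(f"{n:>9,} ideas -> {comb(n, 3):,} possible 3-way combinations")
```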

Well... first of all, the notion that "ideas are generated by combining other ideas N at a time" is not exactly an amazing AI theory; it is an economist looking at, essentially, the whole problem of AI, and trying to solve it in 5 seconds or less.  It's not as if any experiment was performed to actually watch ideas recombining.  Try to build an AI around this theory and you will find out in very short order how useless it is as an account of where ideas come from...

But more importantly, if the only proposition you actually use in your theory is that there are more ideas than people to exploit them, then this is the only proposition that can even be partially verified by testing your theory.

Even if a recombinant growth theory can be fit to the data, the historical data still underconstrains the many possible abstractions that might describe the number of possible ideas available - any hypothesis that implies roughly "more ideas than people to exploit them" will fit the same data equally well.  You might as well simply say, "I assume there are more ideas than people to exploit them," rather than go so far into mathematical detail as to talk about N-choose-3 ideas.  It's not just that the dangling math here is underconstrained by the previous data - you're not even using it going forward.

(And does it even fit the data?  I have friends in venture capital who would laugh like hell at the notion that there's an unlimited number of really good ideas out there.  Some kind of Gaussian or power-law or something distribution for the goodness of available ideas seems more in order...  I don't object to "endogenous growth" simplifying things for the sake of having one simplified abstraction and seeing if it fits the data well; we all have to do that.  Claiming that the underlying math doesn't just let you build a useful model, but also has a fairly direct correspondence to reality, ought to be a whole 'nother story, in economics - or so it seems to me.)

(If I merely misinterpret the endogenous growth literature or underestimate its sophistication, by all means correct me.)

The further away you get from highly regular things like atoms, and the closer you get to surface phenomena that are the final products of many moving parts, the more history underconstrains the abstractions that you use.  This is part of what makes futurism difficult.  If there were obviously only one story that fit the data, who would bother to use anything else?

Is Moore's Law a story about the increase in computing power over time - the number of transistors on a chip, as a function of how far the planets have spun in their orbits, or how many times a light wave emitted from a cesium atom has changed phase?

Or does the same data equally verify a hypothesis about exponential increases in investment in manufacturing facilities and R&D, with an even higher exponent, showing a law of diminishing returns?

Or is Moore's Law showing the increase in computing power, as a function of some kind of optimization pressure applied by human researchers, themselves thinking at a certain rate?

That last one might seem hard to verify, since we've never watched what happens when a chimpanzee tries to work in a chip R&D lab.  But on some raw, elemental level - would the history of the world really be just the same, proceeding on just exactly the same timeline as the planets move in their orbits, if, for these last fifty years, the researchers themselves had been running on the latest generation of computer chip at any given point?  That sounds to me even sillier than having a financial model in which there's no way to ask what happens if real estate prices go down.

And then, when you apply the abstraction going forward, there's the question of whether there's more than one way to apply it - which is one reason why a lot of futurists tend to dwell in great gory detail on the past events that seem to support their abstractions, but just assume a single application forward.

E.g. Moravec in '88, spending a lot of time talking about how much "computing power" the human brain seems to use - but much less time talking about whether an AI would use the same amount of computing power, or whether using Moore's Law to extrapolate the first supercomputer of this size is the right way to time the arrival of AI. (Moravec thought we were supposed to have AI around now, based on his calculations - and he underestimated the size of the supercomputers we'd actually have in 2008.)

That's another part of what makes futurism difficult - after you've told your story about the past, even if it seems like an abstraction that can be "verified" with respect to the past (but what if you overlooked an alternative story for the same evidence?), that often leaves a lot of slack with regard to exactly what will happen with that abstraction, going forward.

So if it's not as simple as just using the one trick of finding abstractions you can easily verify on available data...

...what are some other tricks to use?

27 comments

So what exactly are you concluding from the fact that a seminal model has some unrealistic aspects, and that the connection between models and data in this field is not direct? That this field is useless as a source of abstractions? That it is no more useful than any other source of abstractions? That your abstractions are just as good?

Eliezer, is there some existing literature that has found "natural selection not running into a law of diminishing returns on genetic complexity or brain size", or are these new results of yours? These would seem to me quite publishable, though journals would probably want to see a bit more analysis than you have shown us.

Here's Hans Moravec on the time-of-arrival of just the computing power for "practical human-level AI":

Despite this, if you contrast the curves on page 64 of "Mind Children" and page 60 of "Robot" you will note the arrival time estimate for sufficient computer power for practical human-level AI has actually come closer, from 2030 in "Mind Children" to about 2025 in "Robot."

Robin, for some odd reason, it seems that a lot of fields in a lot of areas just analyze the abstractions they need for their own business, rather than the ones that you would need to analyze a self-improving AI.

I don't know if anyone has previously asked whether natural selection runs into a law of diminishing returns. But I observe that the human brain is only four times as large as a chimp brain, not a thousand times as large. And that most of the architecture seems to be the same; but I'm not deep enough into that field to know whether someone has tried to determine whether there are a lot more genes involved. I do know that brain-related genes were under stronger positive selection in the hominid line, but not so much stronger as to imply that e.g. a thousand times as much selection pressure went into producing human brains from chimp brains as went into producing chimp brains in the first place. This is good enough to carry my point.

I'm not picking on endogenous growth, just using it as an example. I wouldn't be at all surprised to find that it's a fine theory. It's just that, so far as I can tell, there's some math tacked on that isn't actually used for anything, but provides a causal "good story" that doesn't actually sound all that good if you happen to study idea generation on a more direct basis. I'm just using it to make the point - it's not enough for an abstraction to fit the data, to be "verified". One should actually be aware of how the data is constraining the abstraction. The recombinant growth notion is an example of an abstraction that fits, but isn't constrained. And this is a general problem in futurism.

If you're going to start criticizing the strength of abstractions, you should criticize your own abstractions as well. How constrained are they by the data, really? Is there more than one reasonable abstraction that fits the same data?

Talking about what a field uses as "standard" doesn't seem like a satisfying response. Leaving aside that this is also the plea of those whose financial models don't permit real estate prices to go down - "it's industry standard, everyone is doing it" - what's standard in one field may not be standard in another, and you should be careful when turning an old standard to a new purpose. Sticking with standard endogenous growth models would be one matter if you wanted to just look at a human economy investing a usual fraction of money in R&D; and another matter entirely if your real interest and major concern was how ideas scale in principle, for the sake of doing new calculations on what happens when you can buy research more cheaply.

There's no free lunch in futurism - no simple rule you can follow to make sure that your own preferred abstractions will automatically come out on top.

Moravec, "Mind Children", page 59: "I rashly conclude that the whole brain's job might be done by a computer performing 10 trillion (10^13) calculations per second."

But has that been disproved? I don't really know. But I would imagine that Moravec could always append, ". . . provided that we found the right 10 trillion calculations." Or am I missing the point?

When Robin wrote, "It is easy, way too easy, to generate new mechanisms, accounts, theories, and abstractions," he got it exactly right (it is not necessarily so easy to make good ones, though that isn't really the point).

This should have been clear from the sequence on the "timeless universe" -- just as that interesting abstraction is not going to convince more than a few credulous fans of the truth of that abstraction, the truth of the magical super-FOOM is not going to convince anybody without more substantial support than an appeal to a very specific way of looking at "things in general", which few are going to share.

On a historical time frame, we can grant pretty much everything you suppose and still be left with a FOOM that "takes" a century (a mere eyeblink in comparison to everything else in history). If you want to frighten us sufficiently about a FOOM of shorter duration, you're going to have to get your hands dirtier and move from abstractions to specifics.

PK:

"...what are some other tricks to use?" --Eliezer Yudkowsky "The best way to predict the future is to invent it." --Alan Kay

It's unlikely that a reliable model of the future could be made since getting a single detail wrong could throw everything off. It's far more productive to predict a possible future and implement it.

Eliezer, the factor of four between human and chimp brains seems to me to be far from sufficient to show that natural selection doesn't hit diminishing returns. In general I'm complaining that you mainly seem to ask us to believe your own new unvetted theories and abstractions, while I try when possible to rely on abstractions developed in fields of research (e.g., growth theory and research policy) where hundreds of researchers have worked full-time for decades to make and vet abstractions, confronting them with each other and data. You say your new approaches are needed because this topic area is far from previous ones, and I say test near, apply far; there is no free lunch in vetting; unvetted abstractions cannot be trusted just because it would be convenient to trust them. Also, note you keep talking about "verify", a very high standard, whereas I talked about the lower standards of "vet" and "validate".

Robin, suppose that 1970 was the year when it became possible to run a human-equivalent researcher in realtime using the computers of that year. Would the further progress of Moore's Law have been different from that in our own world, relative to sidereal time? Which abstractions are you using to answer this question? Have they been vetted and validated by hundreds of researchers?

Eliezer, my Economic Growth Given Machine Intelligence does use one of the simplest endogenous growth models to explore how Moore's Law changes with computer-based workers. It is an early and crude attempt, but it is the sort of approach I think promising.

I don't understand. If it is not known which model is correct, can't a Bayesian choose policies by the predictive distributions of consequences after marginalizing out the choice of model? Robin seems to be invoking an academic norm of only using vetted quantitative models on important questions, and he seems to be partly expecting that the intuitive force of this norm should somehow result in an agreement that his position is epistemically superior. Can't the intuitive force of the norm be translated into a justification in something like the game theory of human rhetoric? For example, perhaps the norm is popular in academia because everyone half-consciously understands that the norm is meant to stop people from using the strategy of selecting models which lead to emotionally compelling predictions? Is there a more optimal way to approximate the contributions (compelling or otherwise) of non-vetted models to an ideal posterior belief? If Eliezer is breaking a normal procedural safeguard in human rhetoric, one should clarify the specific epistemic consequences that should be expected when people break that safeguard, and not just repeatedly point out that he is breaking it.
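
For what it's worth, the marginalization step itself is mechanically simple. Here is a toy sketch of model-averaged forecasting; the two candidate models, the Gaussian likelihood, and the data are all made up for illustration:

```python
import math

# Toy Bayesian model averaging: weight two candidate growth models by how well
# each explains some (invented) past observations, then average their forecasts.
past = [1.0, 2.1, 4.2, 8.3]          # illustrative observations at t = 0..3

def loglik(model, data):
    # Gaussian log-likelihood around the model's prediction, sigma = 1 (assumed).
    return sum(-0.5 * (y - model(t)) ** 2 for t, y in enumerate(data))

models = {
    "exponential": lambda t: 2 ** t,
    "linear":      lambda t: 1 + 2 * t,
}
logw = {name: loglik(m, past) for name, m in models.items()}
z = max(logw.values())
weights = {n: math.exp(lw - z) for n, lw in logw.items()}
total = sum(weights.values())
weights = {n: w / total for n, w in weights.items()}

t_next = len(past)
forecast = sum(weights[n] * models[n](t_next) for n in models)
for name, w in weights.items():
    print(f"P({name} | past data) = {w:.2f}")
print(f"model-averaged forecast at t={t_next}: {forecast:.2f}")
```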

Moravec, "Mind Children", page 68: "Human equivalence in 40 years". There he is actually talking about human-level intelligent machines arriving by 2028 - not just the hardware you would theoretically require to build one if you had the ten million dollars to spend on it.

You can hire a human for less than ten million dollars. So there would be little financial incentive to use a more expensive machine instead. When the machine costs a thousand dollars things are a bit different.

I think it misrepresents his position to claim that he thought we should have human-level intelligent machines by now.

Robin, I just read through that paper. Unless I missed something, you do not discuss, or even mention as a possibility, the effect of having around minds that are faster than human. You're just making the supply of em labor cheaper over time, with Moore's Law treated as an exogenous growth factor. Do you see why I might not think that this model was even remotely on the right track?

So... to what degree would you call the abstractions in your model, standard and vetted?

How many new assumptions, exactly, are fatal? How many new terms are you allowed to introduce into an old equation before it becomes "unvetted", a "new abstraction"?

And if I devised a model that was no more different from the standard - departed by no more additional assumptions - than this one, which described the effect of faster researchers, would it be just as good, in your eyes?

Because there's a very simple and obvious model of what happens when your researchers obey Moore's Law, which makes even fewer new assumptions, and adds fewer terms to the equations...

You understand that if we're to have a standard that excludes some new ideas as being too easy to make up, then - even if we grant this standard - it's very important to ensure that standard is being applied evenhandedly, and not just selectively to exclude models that arrive at the wrong conclusions, because only in the latter case does it seem "obvious" that the new model is "unvetted". Do you know the criterion - can you say it aloud for all to hear - that you use to determine whether a model is based on vetted abstractions?

'How many new assumptions, exactly, are fatal? How many new terms are you allowed to introduce into an old equation before it becomes "unvetted", a "new abstraction"?'

Every abstraction is made by holding some things the same and allowing other things to vary. If it allowed nothing to vary it would be a concrete not an abstraction. If it allowed everything to vary it would be the highest possible abstraction - simply "existence." An abstraction can be reapplied elsewhere as long as the differences in the new situation are things that were originally allowed to vary.

That's not to say this couldn't be a black swan - there are no guarantees - but going purely on the evidence, what other choice do you have except to do it this way?

"as long as the differences in the new situation are things that were originally allowed to vary"

And all the things that were fixed are still present of course! (since these are what we are presuming are the causal factors)

Steve, how vetted any one abstraction is in any one context is a matter of degree, as is the distance of any particular application to its areas of core vetting. Models using vetted abstractions can also be more or less clean and canonical, and more or less appropriate to a context. So there is no clear binary line, nor any binary rule like "never use unvetted stuff." The idea is just to make one's confidence sensitive to these considerations.

Eliezer, the simplest standard model of endogenous growth is "learning by doing", where productivity increases with quantity of practice. That is the approach I tried in my paper. Also, while economists have many abstractions for modeling details of labor teams and labor markets, our standard is that the simplest versions should be of just a single aggregate quantity of labor. This one parameter of course implicitly combines the number of workers, the number of hours each works, how fast each thinks, how well trained they are, etc. If you instead have a one-parameter model that only considers how fast each worker thinks, you must be implicitly assuming all these other contributions stay constant. When you have only a single parameter for a sector in a model, it is best if that single parameter is an aggregate intended to describe that entire sector, rather than a parameter of one aspect of that sector.
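
For concreteness, a toy with the generic "learning by doing" structure - productivity rising with accumulated practice - might look like the following; the functional form and parameters here are illustrative guesses, not the model in the paper:

```python
# Toy "learning by doing": productivity rises with cumulative output (practice).
# gamma controls how strongly practice feeds back into productivity; it and the
# initial values are made up for illustration.
L = 1.0            # aggregate quantity of labor, held fixed
gamma = 0.5        # strength of the learning-by-doing feedback
experience = 1.0   # cumulative output to date
for year in range(1, 51):
    A = experience ** gamma        # productivity tracks accumulated practice
    Y = A * L                      # output this period
    experience += Y
    if year % 10 == 0:
        print(f"year {year:2d}: productivity {A:7.2f}, output {Y:7.2f}")
```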

Also, while economists have many abstractions for modeling details of labor teams and labor markets, our standard is that the simplest versions should be of just a single aggregate quantity of labor. This one parameter of course implicitly combines the number of workers, the number of hours each works, how fast each thinks, how well trained they are, etc.

If one woman can have a baby in nine months, nine women can have a baby in one month? Having a hundred times as many people does not seem to scale even close to the same way as the effect of working for a hundred times as many years. This is a thoroughly vetted truth in the field of software management.

In science, time scales as the cycle of picking the best ideas in each generation and building on them; population would probably scale more like the right end of the curve generating what will be the best ideas of that generation.

Suppose Moore's Law to be endogenous in research. If I have new research-running CPUs with a hundred times the speed, I can use that to run the same number of researchers a hundred times as fast, or I can use it to run a hundred times as many researchers, or any mix thereof which I choose. I will choose the mix that maximizes my speed, of course. So the effect has to be at least as strong as speeding up time by a factor of 100. If you want to use a labor model that gives results stronger than that, go ahead...
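
A crude numerical sketch of that comparison - the doubling time, step size, and cutoff below are illustrative, and the discrete update is only a rough approximation:

```python
# Compare Moore's Law with researchers at constant speed ("exogenous") against
# researchers who themselves run on the latest chips ("recursive").  Chip speed
# doubles once per T researcher-years; in the recursive case, researcher-years
# accumulate at the current chip speed.  All numbers are illustrative.
T = 2.0            # doubling time in researcher-years
dt = 0.001         # step size in sidereal years
s_exo = s_endo = 1.0
t = 0.0
while t < 20.0:
    s_exo *= 2 ** (dt / T)          # constant-speed researchers
    step = dt * s_endo / T          # researcher-years elapsed this step
    if step > 30:                   # the recursive curve has gone effectively vertical
        break
    s_endo *= 2 ** step             # researchers thinking at speed s_endo
    t += dt
print(f"after {t:.2f} sidereal years: exogenous {s_exo:.2f}x, recursive {s_endo:.3g}x")
```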

Didn't Robin say in another thread that the rule is that only stars are allowed to be bold? Can anyone find this line?

Consider the following. Chimpanzees make tools. The first hominid tools were simple chipped stone from 2.5 million years ago. Nothing changed for a million years. Then Homo erectus came along with Acheulian tech, and nothing happened for another million years. Then two hundred thousand years ago H. sapiens appeared and tool use really diversified. The brains had been swelling from 3 million years ago.

If brains had been getting more generally intelligent during that time as they increased in size, the tool record doesn't show it. They may instead have been getting better at wooing women and looking attractive to men.

This info is cribbed from The Red Queen, page 313, hardback edition.

I would say this shows a discontinuous improvement in intelligence, where intelligence is defined as the ability to generally hit a small target in search space about the world, rather than the ability to get into another hominid's pants.

Also, while economists have many abstractions for modeling details of labor teams and labor markets, our standard is that the simplest versions should be of just a single aggregate quantity of labor.

Granted, but as long as we can assume that things like numbers of workers, hours worked, and level of training won't drop through the floor, then brain emulation or uploading should naturally lead to productivity going through the roof, shouldn't it?

Or is that just a wild abstraction with no corroborating features whatsoever?

Eliezer, it would be reasonable to have a model where the research sector of labor had a different function for how aggregate quantity of labor varied with the speed of the workers.

Ben, I didn't at all say that productivity can't go through the roof within a model with well-vetted abstractions.

Tom3:

Well... first of all, the notion that "ideas are generated by combining other ideas N at a time" is not exactly an amazing AI theory; it is an economist looking at, essentially, the whole problem of AI, and trying to solve it in 5 seconds or less. It's not as if any experiment was performed to actually watch ideas recombining. Try to build an AI around this theory and you will find out in very short order how useless it is as an account of where ideas come from...

But more importantly, if the only proposition you actually use in your theory is that there are more ideas than people to exploit them, then this is the only proposition that can even be partially verified by testing your theory.

This is a good idea though. Why doesn't someone combine economics and AI theory? You could build one of those agent-based computer simulations where each agent is an entrepreneur searching the (greatly simplified) space of possible products and trading the results with other agents. Then you could tweak parameters of one of the agents' intelligences and see what sort of circumstances lead to explosive growth and what ones lead to flatlining.
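
A skeletal version of such a simulation might look like the following; the toy product-value landscape, the treatment of "intelligence" as search breadth, and the omission of trade between agents are all simplifications made up for illustration:

```python
import random

# Toy agent-based economy: each entrepreneur searches a simplified product space,
# keeping the best design it finds each round.  "Intelligence" is modeled crudely
# as the number of candidate designs an agent can evaluate per round.
random.seed(0)

def product_value(design):
    # A toy landscape: value grows superlinearly in how many "good" features a
    # design happens to combine, so most designs are mediocre and a few are great.
    return sum(design) ** 2 / len(design)

def run_economy(n_agents=20, intelligence=3, rounds=50, design_len=16):
    total_output = 0.0
    for _ in range(rounds):
        for _ in range(n_agents):
            candidates = [[random.randint(0, 1) for _ in range(design_len)]
                          for _ in range(intelligence)]
            total_output += product_value(max(candidates, key=product_value))
    return total_output

for smarts in (1, 3, 10, 30):
    print(f"search breadth {smarts:2d}: total output {run_economy(intelligence=smarts):8.1f}")
```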

". . . economics does not seem to me to deal much in the origins of novel knowledge and novel designs, and said, "If I underestimate your power and merely parody your field, by all means inform me what kind of economic study has been done of such things."

A popular professor at Harvard Business School told me that economists are like accountants--they go out on the field of battle, examine the dead and maimed, tabulate the fallen weapons, study the prints on the ground, and try and figure out what happened. Real people, the actors who fought the battle, are rarely consulted. However, the economists and accountants try to summarize what happened in the past. They often do that with some degree of accuracy. However, experience has taught us that asking them what will happen in the future begets less accuracy than found in weather forecasts. And yet the economists have constructed extensive abstract theories that presume to predict outcomes.

I don't believe that applying more brain power or faster calculations will ever improve on this predictive ability. Such super-computations could only work in a controlled environment. But a controlled environment eliminates the genius, imagination, persistence, and irrational exuberance of individual initiative. The latter is unpredictable, spontaneous, opportunistic. All attempts to improve on that type of common and diversified genius by central direction from on high have failed.

"Ideas" are great in the hard sciences, but as Feynman observed, almost every idea you can come up with will prove wrong. Super computational skills should alleviate the problem of sorting through the millions of possible ideas in the physical sciences to look for the good ones. But when dealing with human action, it is best to look, not at the latest idea, or concept, but at the "principles" we can see from the past 4,000 years of human societal activity. Almost every conceivable mechanism has been tested and those that worked are "near" and at hand. That record reveals the tried and true lessons of history. In dealing with the variables of human motivation and governance, those principles provide a sounder blueprint for the future than any supercomputer could compute.

This reminds me of the bit in Steven Landsburg's (excellent) book "The Armchair Economist" in which he makes the point that data on what happens on third down in football games is a very poor guide to what would happen on third down if you eliminated fourth down.

ces:

Eliezer -- To a first approximation, the economy as a whole is massively, embarrassingly, parallel. It doesn't matter if you have a few very fast computers or lots of very slow computers. Processing is processing, and it doesn't matter if it is centralized or distributed. Anecdotal evidence for this abounds. The Apollo program involved hundreds of thousands of distributed human-scale intelligences. And that was just one program in a highly distributed economy. We're going to take artificial intelligences and throw them at a huge number of problems: biology (heart attacks, cancer, strokes, Alzheimer's, HIV, ...), computers (cloud computing, ...), transportation, space, energy, ... In this economy, we don't care that 9 women can't produce a baby in a month. We want a gazillion babies, and we're gloriously happy that 9 women can produce 9 babies in 9 months.

ces:

Robin -- But Eliezer's basic question of whether the general models you propose are sufficient seems to remain an open question. For example, you suggest that simple jobs can be performed by simple computers leaving the complicated jobs to humans (at the current time). A more accurate view might be that employers spend insignificant amounts of money on computers (1% to 10% of the human's wages) in order to optimize the humans. Humans assisted by computers have highly accurate long term memories, and they are highly synchronized. New ideas developed by one human are rapidly transmitted throughout society. But humans remain sufficiently separated to maintain diversity.

So, what about a model where human processing is qualitatively different from computer processing, and we spend money on computers in order to augment the human processing? We spend a fixed fraction of a human's wages on auxiliary computers to enhance that human. But that sorta sounds like the first phase of your models: human wages skyrocket along with productivity until machines become self-aware.

A welfare society doesn't seem unreasonable. Agriculture is a few percent of the U.S. economy. We're close to being able to pay a small number of people a lot of money to grow, process, and transport food and give the food away for free -- paid for by an overall tax on the economy. As manufacturing follows the path agriculture took over the past century and drops from being around 30% of our economy to 3%, we'll increasingly be able to give manufactured goods away for free -- paid for out of taxes on the research economy.