When I lecture on the Singularity, I often draw a graph of the "scale of intelligence" as it appears in everyday life:

[Figure: Mindscaleparochial — the everyday scale of intelligence, running from "Village Idiot" to "Einstein"]

But this is a rather parochial view of intelligence.  Sure, in everyday life, we only deal socially with other humans—only other humans are partners in the great game—and so we only meet the minds of intelligences ranging from village idiot to Einstein.  But to talk about Artificial Intelligence, or theoretical optima of rationality, what we really need is this intelligence scale:

[Figure: Mindscalereal — the wider scale of intelligence, with mouse, chimp, village idiot, and Einstein clustered together at the left]

For us humans, it seems that the scale of intelligence runs from "village idiot" at the bottom to "Einstein" at the top.  Yet the distance from "village idiot" to "Einstein" is tiny, in the space of brain designs.  Einstein and the village idiot both have a prefrontal cortex, a hippocampus, a cerebellum...

Maybe Einstein has some minor genetic differences from the village idiot, engine tweaks.  But the brain-design-distance between Einstein and the village idiot is nothing remotely like the brain-design-distance between the village idiot and a chimpanzee.  A chimp couldn't tell the difference between Einstein and the village idiot, and our descendants may not see much of a difference either.

Carl Shulman has observed that some academics who talk about transhumanism, seem to use the following scale of intelligence:

[Figure: Mindscaleacademic — the scale some academics seem to use, with Einstein far off to the right]

Douglas Hofstadter actually said something like this, at the 2006 Singularity Summit.  He looked at my diagram showing the "village idiot" next to "Einstein", and said, "That seems wrong to me; I think Einstein should be way off on the right."

I was speechless.  Especially because this was Douglas Hofstadter, one of my childhood heroes.  It revealed a cultural gap that I had never imagined existed.

See, for me, what you would find toward the right side of the scale, was a Jupiter Brain.  Einstein did not literally have a brain the size of a planet.

On the right side of the scale, you would find Deep Thought—Douglas Adams's original version, thank you, not the chessplayer.  The computer so intelligent that even before its stupendous data banks were connected, when it was switched on for the first time, it started from I think therefore I am and got as far as deducing the existence of rice pudding and income tax before anyone managed to shut it off.

Toward the right side of the scale, you would find the Elders of Arisia, galactic overminds, Matrioshka brains, and the better class of God.  At the extreme right end of the scale, Old One and the Blight.

Not frickin' Einstein.

I'm sure Einstein was very smart for a human.  I'm sure a General Systems Vehicle would think that was very cute of him.

I call this a "cultural gap" because I was introduced to the concept of a Jupiter Brain at the age of twelve.

Now all of this, of course, is the logical fallacy of generalization from fictional evidence.

But it is an example of why—logical fallacy or not—I suspect that reading science fiction does have a helpful effect on futurism.  Sometimes the alternative to a fictional acquaintance with worlds outside your own, is to have a mindset that is absolutely stuck in one era:  A world where humans exist, and have always existed, and always will exist.

The universe is 13.7 billion years old, people!  Homo sapiens sapiens have only been around for a hundred thousand years or thereabouts!

Then again, I have met some people who never read science fiction, but who do seem able to imagine outside their own world.  And there are science fiction fans who don't get it.  I wish I knew what "it" was, so I could bottle it.

Yesterday, I wanted to talk about the efficient use of evidence, i.e., Einstein was cute for a human but in an absolute sense he was around as efficient as the US Department of Defense.

So I had to talk about a civilization that included thousands of Einsteins, thinking for decades.  Because if I'd just depicted a Bayesian superintelligence in a box, looking at a webcam, people would think: "But... how does it know how to interpret a 2D picture?"  They wouldn't put themselves in the shoes of the mere machine, even if it was called a "Bayesian superintelligence"; they wouldn't apply even their own creativity to the problem of what you could extract from looking at a grid of bits.

It would just be a ghost in a box, that happened to be called a "Bayesian superintelligence".  The ghost hasn't been told anything about how to interpret the input of a webcam; so, in their mental model, the ghost does not know.

As for whether it's realistic to suppose that one Bayesian superintelligence can "do all that"... i.e., the stuff that occurred to me on first sitting down to the problem, writing out the story as I went along...

Well, let me put it this way:  Remember how Jeffreyssai pointed out that if the experience of having an important insight doesn't take more than 5 minutes, this theoretically gives you time for 5760 insights per month?  Assuming you sleep 8 hours a day and have no important insights while sleeping, that is.
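(For concreteness, a quick back-of-the-envelope check of that figure, assuming sixteen waking hours a day and a thirty-day month:)

```python
# 5-minute insights, 16 waking hours a day, 30 days a month
waking_hours_per_day = 24 - 8        # sleep 8 hours
insights_per_hour = 60 // 5          # one insight per 5-minute block
days_per_month = 30

insights_per_month = waking_hours_per_day * insights_per_hour * days_per_month
print(insights_per_month)            # 5760
```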

Now humans cannot use themselves this efficiently.  But humans are not adapted for the task of scientific research.  Humans are adapted to chase deer across the savanna, throw spears into them, cook them, and then—this is probably the part that takes most of the brains—cleverly argue that they deserve to receive a larger share of the meat.

It's amazing that Albert Einstein managed to repurpose a brain like that for the task of doing physics.  This deserves applause.  It deserves more than applause, it deserves a place in the Guinness Book of Records.  Like successfully building the fastest car ever to be made entirely out of Jello.

How poorly did the blind idiot god (evolution) really design the human brain?

This is something that can only be grasped through much study of cognitive science, until the full horror begins to dawn upon you.

All the biases we have discussed here should at least be a hint.

Likewise the fact that the human brain must use its full power and concentration, with trillions of synapses firing, to multiply out two three-digit numbers without a paper and pencil.

No more than Einstein made efficient use of his sensory data, did his brain make efficient use of his neurons firing.

Of course I have certain ulterior motives in saying all this.  But let it also be understood that, years ago, when I set out to be a rationalist, the impossible unattainable ideal of intelligence that inspired me, was never Einstein.

Carl Schurz said:

"Ideals are like stars. You will not succeed in touching them with your hands. But, like the seafaring man on the desert of waters, you choose them as your guides and following them you will reach your destiny."

So now you've caught a glimpse of one of my great childhood role models—my dream of an AI.  Only the dream, of course, the reality not being available.  I reached up to that dream, once upon a time.

And this helped me to some degree, and harmed me to some degree.

For some ideals are like dreams: they come from within us, not from outside.  Mentor of Arisia proceeded from E. E. "Doc" Smith's imagination, not from any real thing.  If you imagine what a Bayesian superintelligence would say, it is only your own mind talking.  Not like a star, that you can follow from outside.  You have to guess where your ideals are, and if you guess wrong, you go astray.

But do not limit your ideals to mere stars, to mere humans who actually existed, especially if they were born more than fifty years before you and are dead.  Each succeeding generation has a chance to do better. To let your ideals be composed only of humans, especially dead ones, is to limit yourself to what has already been accomplished.  You will ask yourself, "Do I dare to do this thing, which Einstein could not do?  Is this not lèse majesté?"  Well, if Einstein had sat around asking himself, "Am I allowed to do better than Newton?" he would not have gotten where he did.  This is the problem with following stars; at best, it gets you to the star.

Your era supports you more than you realize, in unconscious assumptions, in subtly improved technology of mind.  Einstein was a nice fellow, but he talked a deal of nonsense about an impersonal God, which shows you how well he understood the art of careful thinking at a higher level of abstraction than his own field.  It may seem less like sacrilege to think that, if you have at least one imaginary galactic supermind to compare with Einstein, so that he is not the far right end of your intelligence scale.

If you only try to do what seems humanly possible, you will ask too little of yourself.  When you imagine reaching up to some higher and inconvenient goal, all the convenient reasons why it is "not possible" leap readily to mind.

The most important role models are dreams: they come from within ourselves.  To dream of anything less than what you conceive to be perfection, is to draw on less than the full power of the part of yourself that dreams.

63 comments

Did Hofstadter explain the remark?

Maybe he felt that the difference between Einstein and a village idiot was larger than between a village idiot and a chimp. Chimps can be pretty clever.

Or, maybe he thought that the right end of the scale, where the line suddenly becomes dotted, should be the location of the rightmost point that represents something real. It's very conventional to switch from a solid to a dotted line to represent a switch from confirmed data to projections.

But I don't buy the idea of intelligence as a scalar value.

I really think you have undersold your point, especially when one considers that the distance from mouse to Einstein is tiny, in the space of brain designs. Einstein and mice both have a prefrontal cortex, a hippocampus, a cerebellum...

>>> This is something that can only be grasped through much study of cognitive science, until the full horror begins to dawn upon you.

I haven't yet studied much cognitive science (though I definitely plan to), but horror is precisely what I felt when I finally comprehended the process which produced humans and the human brain.

While I agree with the general sentiment expressed, I think the argument is nevertheless a bit weak.

The problem is that you have not based this on any concrete definition or test of intelligence that spans grass to Jupiter brains. We can all agree on the order of the points on the x axis, but what is the scale? You don't say because you don't know.

Is Einstein 20% smarter than me? Or 3x smarter? Or 10x? Any of these answers could be considered correct according to some scale of intelligence.

It's nice to know I'm closer to Einstein than to chimps. People should read this post, so they stop picking on me and start picking on chimps. Or pick on Einstein, for that matter. He's dead, what's he going to do about it?

Oh my impersonal God, I just realized I'm smarter than Einstein. Dead brains are dumber than idiotic brains. Isn't that neat?

In fact I'm smarter than a Bayesian superintelligence. At least until one is invented. Nonexistent brains are also much dumber than idiotic brains.

I just realized I'm a genius!!!

But I don't buy the idea of intelligence as a scalar value.
Do you have a better suggestion for specifying how effective a system is at manipulating its environment into specific future states? Unintelligent systems may work much better in specific environments than others, but any really intelligent system should be able to adapt to a wide range of environments. Which important aspect of intelligence do you think can't be expressed in a scalar rating?

@Eli: thanks for a great post again, you speak to my heart's content :-)) I have also encountered hero worship of Einstein in science (especially in physics circles) - this is not a good thing, as it hinders progress: people think "I can't contribute anything important because I'm not a genius like Einstein" instead of sitting down, starting to think, and solving some problems.

@Shane: I think the sentience quotient is a nice scale/quantification which can give some precision to otherwise vague talk about plant/chimp/human/superhuman intelligence.

http://en.wikipedia.org/wiki/Sentience_Quotient (The wikipedia article also gives a link to the "Xenobiology"-article by Freitas, who proposed the SQ idea)

According to SQ we humans (Einstein, village idiot and all) are around +13, whereas superintelligence can soar up to 50 (log scale!).
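(For readers who want the formula behind those numbers: Freitas defines SQ as the base-10 log of information-processing rate per unit mass. A minimal sketch, using rough illustrative figures for the human brain; the specific neuron count and bit rate below are assumptions, not measurements:)

```python
import math

def sentience_quotient(bits_per_second: float, mass_kg: float) -> float:
    # Freitas's SQ: log10 of information-processing rate (bits/s) per kilogram
    return math.log10(bits_per_second / mass_kg)

# Rough, illustrative figures: ~1e10 neurons at ~1e3 bits/s each, ~1.5 kg brain
human_sq = sentience_quotient(1e10 * 1e3, 1.5)
print(round(human_sq))   # ~13, matching the "+13" quoted above
```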

@Günther: The problem with SQ is that it's not a measure of intelligence, but of information processing efficiency. Thus a Josephson junction has an SQ of around 23, but that doesn't mean that it's very smart.

1) We can't put ourselves in the place of a superintelligence, and 2) you're handwaving away deep problems of knowledge and data processing by attributing magical thought powers to your AI.

Your statements about what a superintelligence could infer are just speculation - at least, they *would* be speculation if you bothered to actually work out a semi-plausible explanation for how it could infer such things, as opposed to simply stating that it could obviously do those things because it's a superintelligence.

I have to admit to some skepticism as well, Caledonian, but it seems clear to me that it should be possible with P > .99 to make an AI which is much smarter but slower than a human brain. And even if increasing the effective intelligence goes as O(exp(N)) or worse, a Manhattan-project-style parallel-brains-in-cooperation AI is still not ruled out.

but it seems clear to me that it should be possible with P > .99 to make an AI which is much smarter but slower than a human brain
'Smarter' in what way? It isn't simply a function of the total amount of data processed, because we can already go beyond the human brain in that. It can't be pure speed, either.

A mere collection of neurons won't do it, because disorganized neurons don't spontaneously self-organize into a working system. There needs to be enough of a system to start with for the system to build itself - and then, build itself how? Precisely what functions would be improved?

We haven't even established how to measure most aspects of cognitive function - one of the few things we know about how our brains work is that we don't possess tools to measure most of the things it does. How, in the midst of our profound ignorance, do we start proclaiming the results of comparisons?

Eliezer is pulling his conclusions out of thin air... and almost no one is calling him on it.

I'm not denying your point, Caledonian - right now, our best conception of a test for smarts in the sense we want is the Turing test, and the Turing test is pretty poor. If we actually understood intelligence, we could answer your questions. But as long as we're all being physicalists, here, we're obliged to believe that the human brain is a computing machine - special purpose, massively parallel, but almost certainly Turing-complete and no more. And by analogy with the computing machines we should expect to be able to scale the algorithm to bigger problems.

I'm not saying it's practical. It could be the obvious scalings would be like scaling the Bogosort. But it would seem to be special pleading to claim it was impossible in theory.

To sum up: a bird in the hand is worth two in the bush!

our best conception of a test for smarts in the sense we want is the Turing test

The Turing test doesn't look for intelligence. It looks for 'personhood' - and it's not even a definitive test, merely an application of the point that something that can fool us into thinking its a person is due the same regard we give people.

"Ideals are like stars". All Schurz is doing is defining, yet again, desire. Desire is metonymic by definition, and I think it is one of the most important evolutionary traits of the human mind. This permanent dissatisfaction of the mind must have proven originally very useful in going after more game that we could consume, and it is still useful in scientific pursuits. How would AI find its ideals? What would be the origin of the desire of AI that would make it spend energy for finding something utterly useless like general knowledge? If AI evolves it would be focused on energy problems (how to think more and faster with lower energy consumption) and it may find interesting answers, but only on that practical area. If you don't solve the problem of AI desire (and this is the path of solving friendliness) AI will evolve very fast on a single direction and will reach real fast the limits of its own "evolutionary destiny". I still think the way to go is to replace biological mass with replaceable material in humans, not the other way around.


Likewise the fact that the human brain must use its full power and concentration, with trillions of synapses firing, to multiply out two three-digit numbers without a paper and pencil.

Some people can do it without much effort at all, and not all of them are autistic, so you can't just say that they've repurposed part of their brain for arithmetic. Furthermore, other people learn to multiply with less effort through tricks. So, I don't think it's really a flaw in our brains, per se.

The Turing test doesn't look for intelligence. It looks for 'personhood' - and it's not even a definitive test, merely an application of the point that something that can fool us into thinking its a person is due the same regard we give people.

I said the Turing test was weak - in fact, I linked an entire essay dedicated to describing exactly why the Turing test was weak. In fact, I did so entirely to accent your point that we don't know what we're looking for. What we are looking for, however, is, by the Church-Turing thesis, an algorithm, an information-processing algorithm, and I invite the computer scientists et al. here to name any known information-processing algorithm which doesn't scale.


I remember reading GEB in High School and being fiercely disappointed the first few times I had the chance to hear Hofstadter talk in person. He seems to have focussed in on minutiae of what he considers essential to intelligence (like the ability to recognize letters in different fonts, a project he spent years working on at IU), and let the big ideas he explored in GEB go by the wayside.

I said the Turing test was weak - in fact, I linked an entire essay dedicated to describing exactly why the Turing test was weak.
Irrelevant. It doesn't address the relevant concept - whether it is 'weak' or 'strong' makes no difference.

He seems to have focussed in on minutiae of what he considers essential to intelligence (like the ability to recognize letters in different fonts, a project he spent years working on at IU), and let the big ideas he explored in GEB go by the wayside.
Can someone who wants to work on the big ideas get funding? Can someone who wants to work on the big ideas find a place to start? The important questions are often Gordian knots that don't offer convenient starting points for progress.

Academics need to demonstrate that they've accomplished things to retain their positions. Even once tenure is attained, you don't get the rewards of academia without publication.


By the way, when the best introduction to a supposedly academic field is works of science fiction, it sets off alarm bells in my head. I know that some of the best ideas come from sci-fi and yada, yada, but just throwing that out there. I mean, when your response to an AI researcher's disagreement is "Like, duh! Go read some sci-fi and then we'll talk!" who is really in the wrong here?


I don't see any knock-down evidence for general intelligence. Talk about "manipulating the environment" just assumes what you're trying to prove. If we're going to include the "evidence" that you can imagine humans doing something then we've already stepped well beyond the bounds of reasonable inquiry and are skipping gleefully through the lush green meadows of fantasy land; you don't need fictional superintelligent AIs to cross that line. The data we have is what humans actually do, not what we imagine they could do. It's nice to think we can do anything, which is probably why these ideas have so much appeal, but what we're actually experiencing in such flights of fancy is simply ignorance of our own constrained behavioral repertoire.

He looked at my diagram showing the "village idiot" next to "Einstein", and said, "That seems wrong to me; I think Einstein should be way off on the right."

We can distinguish between system I and system II abilities (http://web.cenet.org.cn/upfile/37554.pdf). Einstein and the village idiot share most of their system I abilities. For example: They learned the complex syntax and semantics of their respective native languages effortlessly as children without needing explicit tuition. They both mastered basic human folk psychology / theory of mind including reasoning about desire and belief ascriptions and motivation. They both are competent with standard human folk physics (involving recognition of objects as discrete, crude mechanics, etc.). They both have a basic competence in terms of picking up their native culture (e.g. etiquette, moralistic/religious taboos, hierarchy, simple arts and religion).

Now, non-human animals possess some of these system I abilities. However, a fair amount of the human language, folk psychology and culture abilities may be well beyond those of chimps, bonobos, etc.

Einstein and the village idiot may differ more significantly in system II abilities, i.e. conscious reasoning. My experience of people good at conscious reasoning in multiple domains is that they can do more good conscious reasoning (both performing analysis and synthesis) in 30 minutes than an average-IQ person (NOT a village idiot) does in a lifetime. Thus, in terms of system II abilities, it might be that Einstein is further from the village idiot (relative to the distance between the idiot and the chimp) than Eliezer's diagram suggests.

Evolutionary Psychology stresses the uniformity of human cognitive abilities, suggesting something like Eliezer's diagram. But I'm skeptical that this uniformity extends to system II. The system II abilities of the best rationalists of today may depend significantly on their having learned a set of reasoning skills developed by their culture over a long period of time. The learning of these skills requires more basic abilities (g factor, etc.) but once these skills have been mastered the resulting difference in system II analytical and creative reasoning is much larger than the difference in Spearman's g. Another reason for an objectively huge range of human abilities in system II comes from human general learning capacities (which may significantly exceed those of our primate relatives). Top rationalists can spend hours a day (every day) engaged in focused system II reasoning. They probably do as much in a day as the idiot does in six months.

If arguing from fictional evidence is OK as long as you admit you're doing it, somebody should write the novelization.

Bayesian Ninja Army contacted by secret government agency due to imminent detonation of Logic Bomb* in evil corporate laboratory buried deep beneath some exotic location. Hijinks ensue; they fail to stop Logic Bomb detonation but do manage to stuff in a Friendliness supergoal at the last minute. Singularity ensues, with lots of blinky lights and earth-rending. Commentary on the human condition follows, ending in a sequel-preparing twist.

  • see commentary on yesterday's post

You keep repeating how much information an AI could derive from a very small measurement (the original example was an apple falling), and the last story was supposed to be an analogy to it, but the idea of an entire civilization's worth of physical evidence already available makes the job of the AI much easier. The original assertion of deriving modern physics from a falling apple looks ridiculous because you never specified the prior knowledge the AI had or the amount of non-redundant information available in the falling-apple scenario.

If we are rigorous enough with the definitions, we end up with a measure of how efficiently an intelligence can observe new information from a certain piece of evidence and how efficiently it can update its own theories in the face of this new evidence. I agree that a self-improving AI could reach the theoretical limits of efficiency in updating its own theories, but the efficiency of information observation from an experiment is more related to what the experiment is measuring and the resolution of the measurements. The assertion that an AI could see an apple falling and theorize general relativity is meaningless without saying how much prior knowledge it has; in a tabula rasa state almost nothing could come from this observation, and it would need much more evidence before anything meaningful started to arise. The resolution of the evidence is also very important: it's absurd to believe that there aren't local maxima in the theory-space search that would be favored by the AI because the resolution isn't sufficient to show that the theories are dead wrong. The AI would have no way to accurately assess this impact (if we assume it's unable to improve the resolution and manipulate the environment).

That is the essence of what I think is wrong with your belief about how much an AI could learn from certain forms of evidence: I agree with the idea, but your reasoning is much less formal than it should be, and it ends up looking like magical thinking. With sufficient resolution and a good set of sensors, there's sufficient evidence today to believe that an AI could use a very small number of orthogonal experiments to derive all of modern science (I would bet the actual number is smaller than one hundred), but if the resolution is insufficient, no amount of experiments would do.

I'm deeply puzzled by Hofstadter's response, but I don't imagine a culture gap explains it. The only thing I can think of is that Hofstadter must have gotten a LOT more pessimistic about the prospects for a robust AI since the days of GEB.

The following link is quite illuminative on Hofstadter's feelings on things: Interview

He's rather skeptical of the sort of transhumanist claims that are common among certain sorts of futurists.

Eliezer's scale is more logarithmic, Carl Shulman's academics' is more linear, but neither quite makes its mind up which it is. Please take your origin point away from that poor mouse.

I wonder how much confusion and miscommunication comes from people being unaware they're comparing in different scales. I still remember being shocked when I realized 60 decibels was a thousand times louder than 30.
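(The decibel surprise is just the log scale at work: each 10 dB step is a factor of ten in sound intensity, so a 30 dB gap is a factor of a thousand. A one-line check:)

```python
# 10 dB per factor of ten in intensity; 60 dB vs 30 dB is a 30 dB gap
intensity_ratio = 10 ** ((60 - 30) / 10)
print(intensity_ratio)   # 1000.0
```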

I've read A Fire Upon the Deep and Diaspora. I have some idea of the power of the intelligence you envisage, and I do think something like the flowering of the Blight is possible. However, it is very, very unlikely that anyone on Earth will do it.

I think it is about as likely as people working on vaccines creating a highly virulent strain of a virus that incubates for 2 years and has a 100 percent mortality rate. Such a virus seems to me possible but highly unlikely.

It boils down to the fact that I don't think simple Bayes is powerful in a Blight fashion. Attempting to use a universal searcher is prohibitive resource-wise. The simplest best theory we have for precisely predicting an arbitrary 12 grams of carbon's behaviour over time requires Avogadro's number worth of data for the different degrees of freedom of the start state, the electron energy states, etc. To get one Avogadro's number worth of bits through a gigabit connection would take some 19,013,258 years.
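(Roughly checking that last figure, assuming one bit per degree of freedom and a 10^9 bit/s link:)

```python
# Avogadro's number of bits over a gigabit-per-second link
avogadro_bits = 6.022e23
link_rate = 1e9                      # bits per second
seconds_per_year = 3600 * 24 * 365.25

years = avogadro_bits / link_rate / seconds_per_year
print(f"{years:.2e}")                # ~1.91e7, i.e. roughly 19 million years
```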

So you need to make many short cuts. And have heuristics and biases so you can predict your environment reasonably. So you only run statistics over certain parts of your inputs and internal workings. Discovering new places to run statistics on is hard, as if you don't currently run statistics there, you have no reason to think running statistics over those variables is a good idea. It requires leaps of faith, and these can lead you down blind alleys. The development of intelligence, as far as I am concerned, is always ad hoc, slow, and reliant on complexity; intelligence is only very powerful after it has been developed.

Caledonian: The following link is quite illuminative on Hofstadter's feelings on things: Interview. He's rather skeptical of the sort of transhumanist claims that are common among certain sorts of futurists.

I'm a Hofstadter fan too, but look at your evidence again, bearing in mind how existing models and beliefs shape perception and judgment...

"I think it's very murky"

"the craziest sort of dog excrement mixed with very good food."

"Frankly, because it sort of disgusts me"

"The car driving across the Nevada desert still strikes me as being closer to the thermostat or the toilet that regulates itself"

"and the whole idea of humans is already down the drain?"

We might get a very different intelligence scale if we graph: (1) "how much mind-design effort is required to build that intelligence" than if we graph (2) "how much [complex problem solving / technology / etc.] that intelligence can accomplish".

On scale (1), it is obvious that the chimp-human distance is immensely larger than the human-human distance. (The genetic differences took immensely longer to evolve).

On scale (2), it is less obvious. Especially if we compared linear rather than log differences in what Einstein, the village idiot, and the chimp can accomplish in the way of technological innovation.

I'll echo Hofstadter and a few of the commenters. The mouse/chimp/VI/Einstein scale seems wrong to me; I think Einstein should be further off to the right. It all depends on what you mean by intelligence and how you define the scale, of course, but if intelligence is something like the generalized ability to learn, understand things, and solve problems, then the range of problems that Einstein is able to solve, and the set of things that Einstein is able to understand well, seem many times larger than what the village idiot is able to do.

The village idiot may be able to pull off some intellectual feats (like language) in specific contexts, but then again so can the mouse (like learning associations and figuring out the layout of its surroundings). When it comes to a general intellectual ability (rather than specialized abilities), Einstein can do much more than an idiot with a similar brain because he is much much better at thinking more abstractly, looking for and understanding the underlying logic of something, and thinking his way through more complex ideas and problems. The minor tweaks in brain design allowed enormous improvements in cognitive performance, and I think that the intelligence scale should reflect the performance differences rather than the anatomical ones. Even if it is a log scale, the village idiot should probably be closer to the chimp than to Einstein.

@Anna Salamon: Fair point, these are conceptually different scales. I don't think the graphs on those two scales would diverge so much as the commenters seem to think, but you're right that it's less obvious for scale (2) than scale (1).

By choosing a sufficiently biased scale, like "Numbers of times this intelligence has invented General Relativity", you can obviously generate arbitrarily sharp gradients between Einstein and VI.

The question is whether a natural scale from any perspective besides the human one - like the scale a chimpanzee, or a General Systems Vehicle, might use - would favor Einstein over the VI so highly.

Also, I don't think we can hammer the graph too hard into the realm of measuring (what humans consider to be impressive) accomplishments, because then it no longer talks about the thing I'm trying to discuss, which is the generator that spins to produce accomplishments. I.e., if you throw Einstein-1 into a lunatic asylum while Einstein-2 is allowed to do physics and write letters, Einstein-2 will look much more "intelligent" on the graph of historical accomplishments that humans consider impressive, even though the two are clones and got the same education up to age 23 and had many of the same ideas.

An intuition pump I sometimes offer is that if a virus killed off sufficiently many high-IQ folks to shift the entire curve one standard deviation to the left, with a correspondingly shaped right-side tail, then the next generation would still have physicists. They might be people who, in the old world, would have been managers at McDonalds. But in the new world they would be directed into university jobs and educated accordingly, to fill the now-empty ecological niche. The competence gap wouldn't be like taking an adult McDonalds manager and trying to reeducate them as a physicist. The one who would have been a McDonalds manager, would now have been perceived and stereotyped as "bright" from a young age, and developed a self-image accordingly.
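(A minimal sketch of that intuition pump, using an IQ-like normal curve with mean 100 and SD 15; the top-0.1% cutoff is an arbitrary illustrative stand-in for the "physicist" niche, not a claim about real physicists:)

```python
from statistics import NormalDist

before = NormalDist(mu=100, sigma=15)   # old-world curve
after = NormalDist(mu=85, sigma=15)     # same curve shifted one SD left

old_cutoff = before.inv_cdf(0.999)      # ~146: old-world top 0.1% niche
new_cutoff = after.inv_cdf(0.999)       # ~131: who fills that same niche now
print(round(old_cutoff), round(new_cutoff))

# In the old world, the new niche-fillers were merely ~98th percentile
print(round(before.cdf(new_cutoff), 3))  # ~0.982
```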

Einstein and the VI both live in societies that encourage economic specialization. Einstein finds himself studying physics, the VI ends up working as a janitor. If the average g-factor was sufficiently higher, Einstein might have been the "stupid kid" in class, developed a view of his own strengths and weaknesses accordingly, and ended up as a janitor - a much brighter janitor than our janitors, but he wouldn't think of himself as "bright" or of intelligence as one of his strengths. And the Village Idiot, born into the world of Idiocracy, might have been too relatively brainy to play well with the other kids, and ended up reading books during recess.

I usually tell this story with the moral of: "You are a Homo sapiens, not a lion: Your intelligence is the most important fact about you, and your greatest strength, regardless of whether other humans are smarter or dumber."

But the relevance to this debate should be obvious.

So you need to make many short cuts. And have heuristics and biases so you can predict your environment reasonably. So you only run statistics over certain parts of your inputs and internal workings. Discovering new places to run statistics on is hard, as if you don't currently run statistics there, you have no reason to think running statistics over those variables is a good idea. It requires leaps of faith, and these can lead you down blind alleys.

You can still do one heck of a lot better than a human. LOGI 3.1: Advantages of minds-in-general

"The car driving across the Nevada desert still strikes me as being closer to the thermostat or the toilet that regulates itself"

Sounds true to me; an ultra-narrow AI is more like a trivial optimization process like a thermostat than like a general intelligence.

Eliezer: good intuition pump. The level of argument about Einstein's intelligence relative to the village idiot, instead of discussion of the larger point, is odd.

It seems to me there are so, so, so many apparently trivial things a village idiot knows/can do that a chimp doesn't/can't, that the difference is indeed larger than between the VI and Einstein on most reasonable metrics. The point is not about any specific metric, but about the badness of our intuitions.

"And the Village Idiot, born into the world of Idiocracy, might have been too relatively brainy to play well with the other kids, and ended up reading books during recess."

Eliezer, I think this whole frame of analysis has an element of ego-stroking/sour grapes (stroking your ego and perhaps the ego of your reading audience that defines brainy as being Einstein-like, and that defines social success as being inversely correlated, because y'all are more Einstein-like than you're socially successful).

The empiricism-based seduction community indicates a braininess advantage in being able "to play well with the other kids".

I've resisted this thread, but I'm more interested in James Simon and the google founders as an example as the high end of braininess than the Albert Einsteins of today.

The most popular kid at recess is probably the smartest kid who cares about popularity, factoring in the different gradient that kids who care about popularity have to work against to achieve it. It's an open question whether or not they're smarter than the kid reading by themselves - that's best resolvable, in my opinion, when the two of them compete with each other for a scarce resource that they BOTH value.

there are so, so, so many apparently trivial things a village idiot knows/can do that a chimp doesn't/can't

There are many things chimps can do that village idiots can't, too. You just don't think about those things very often, so you don't value them.

Also: we don't compete with lions any more. We compete with other humans. It matters a LOT whether we're smarter than the humans around us or not. We still have to specify: smarter how? There isn't a single property that can be increased or decreased. This very simple point is still being ignored.

Something that nobody seems to be asking: is it valid to use a flat scale at all? Or does linearly increasing intelligence cause nonlinear or punctuated increases in capability?

First of all, to Eliezer: Great post, but I think you'll need a few more examples of how stupid chimps are compared to VIs and how stupid Einsteins are compared to Jupiter Brains to convince most of the audience.

"Maybe he felt that the difference between Einstein and a village idiot was larger than between a village idiot and a chimp. Chimps can be pretty clever."

We see chimps as clever because we have very low expectations of animal intelligence. If a chimp were clever in human terms, it would be able to compete with humans in at least some areas, which is clearly silly. How well would an adult chimp do, if he was teleported into a five-year-old human's body and thrown into kindergarten?

"But I don't buy the idea of intelligence as a scalar value."

Intelligence is obviously not a scalar, but there does seem to be a scalar component of intelligence, at least when dealing with humans. It has long been established that intelligence tests strongly correlate with each other, forming a single scalar known as Spearman's g (http://en.wikipedia.org/wiki/General_intelligence_factor), which correlates with income, education, etc.
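(A toy numerical illustration of what "strongly correlate with each other, forming a single scalar" looks like; the scores below are simulated from an assumed one-factor model, not real test data:)

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_tests = 2000, 6
g = rng.standard_normal(n_people)                          # latent common factor
scores = 0.7 * g[:, None] + 0.7 * rng.standard_normal((n_people, n_tests))

corr = np.corrcoef(scores, rowvar=False)                   # tests correlate ~0.5
top_eigenvalue = np.linalg.eigvalsh(corr)[-1]              # first principal factor
print(top_eigenvalue / n_tests)                            # ~0.58 of total variance
```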

"2) you're handwaving away deep problems of knowledge and data processing by attributing magical thought powers to your AI."

Yes. If you have a way to solve those problems, and it's formal and comprehensive enough to be published in a reputable journal, I will pay you $1,000. Other people on OB will probably pay you much more. Until then, we do the best we can.

"as opposed to simply stating that it could obviously do those things because it's a superintelligence."

See the previous post at http://lesswrong.com/lw/qk/that_alien_message/ for what simple overclocking can do.

"We haven't even established how to measure most aspects of cognitive function - one of the few things we know about how our brains work is that we don't possess tools to measure most of the things it does."

Er, yes, we do, actually. See http://lesswrong.com/lw/kj/no_one_knows_what_science_doesnt_know/.

"Some people can do it without much effort at all, and not all of them are autistic, so you can't just say that they've repurposed part of their brain for arithmetic."

Since when is autism necessary for brain repurposing? Autism specifically refers to difficulty in social interaction and communication. Savantism is actually an excellent example of what we could do with the brain if it worked efficiently.

"By the way, when the best introduction to a supposedly academic field is works of science fiction, it sets off alarm bells in my head. I know that some of the best ideas come from sci-fi and yada, yada, but just throwing that out there."

Sci-fi is useful for introducing the reader to the idea that there are possibilities for civilization other than 20th-century Earth. It's not meant to be technical material.

"But I'm skeptical that this uniformity extends to system II. The system II abilities of the best rationalists of today may depend significantly on their having learned a set of reasoning skills developed by their culture over a long period of time."

That's precisely the point; the biological difference between humans is not that great, so the huge differences we see in human accomplishment must be due in large part to other factors.

"The simplest best theory we have for precisely predicting an arbitrary 12 grams of carbons behaviour over time requires avogadros of data for the different degrees of freedom of the start state, the electron energy states etc."

No, it doesn't; the Standard Model only has eighteen adjustable parameters (physical constants) that must be found through experiment.

"The minor tweaks in brain design allowed enormous improvements in cognitive performance, and I think that the intelligence scale should reflect the performance differences rather than the anatomical ones."

The difference between humans and chimps is fairly small anatomically; we share 95-98% of our DNA and most of our brain architecture. The huge difference between a civilization inhabited entirely by village idiots and a civilization of chimps is obvious.

"Eliezer, I think this whole frame of analysis has an element of ego-stroking/sour grapes (stroking your ego and perhaps the ego of your reading audience that defines brainy as being Einstein-like, and that defines social success as being inversely correlated, because y'all are more Einstein-like than you're socially successful)."

Social success will gradually become more irrelevant as society develops further, because social success is a zero-sum game; it doesn't produce anything of value. Dogs, orangutans, and chimps all have complex social structures. Dogs, orangutans, and chimps would all currently be extinct if we didn't have domesticated animals and environmentalists.

"The empiricism based seduction community indicates a braininess advantage in being able "to play well with the other kids"."

If you define braininess as social success, social success is obviously going to correlate with braininess. The ability to find an optimal mate is not why people are successful. Monks, who were the closest thing to scholars during the medieval period, explicitly renounced the quest for a mate, and they didn't do too badly by the standards of their time period.

"I've resisted this thread, but I'm more interested in James Simon and the google founders as an example as the high end of braininess than the Albert Einsteins of today."

If you're referring to this James Simon (http://en.wikipedia.org/wiki/James_Simon), he is obviously less accomplished than Newton, Einstein, etc., by any reasonable metric. Larry Page and Sergey Brin are rich primarily because they were more interested in being rich than in publishing papers. They sure as heck didn't become rich because they knew how to win a high school popularity contest; Bill Gates, the most famous of the dot-com billionaires, is widely reputed to be autistic.

"The simplest best theory we have for precisely predicting an arbitrary 12 grams of carbons behaviour over time requires avogadros of data for the different degrees of freedom of the start state, the electron energy states etc."

No, it doesn't; the Standard Model only has eighteen adjustable parameters (physical constants) that must be found through experiment.

The Standard Model won't allow you to predict what 12g of carbon atoms will do unless you also know what the relative positions, accelerations, and bonding of the carbon atoms are. Is it gaseous, or solid in buckyballs, diamond, or some exotic MNT configuration?

Scientific theories are simple; actually getting them to make predictions about the world is very hard information-wise.

Apologies about the previous post; only the last two paragraphs belong to me. Nested quotes should be easier.

Nick: "You can still do one heck of a lot better than a human."

A lot better than humans can be done. Since humans are having lots of trouble getting our heads round this intelligence stuff, I find it very, very unlikely that the first one will be awe-inspiringly dangerous.

Humans are dumb, and there are no silver bullets for implementing a real-life AI, so real-life AI is likely to be only slightly better than humans, if not worse, for a long time.

AI for me is about doing the right processing at the right time, if you are doing the wrong processing it doesn't matter how much of it you are doing or how precisely you are doing it, you are not going to get much use out of it.

Shane,

I'm well aware that SQ is not a measure of intelligence, but I thought that it would be a nice heuristic (metaphor, whatever...) to intuit possible superintelligences. I was presupposing that they have an agent structure (sensors, actuators) and the respective cognitive algorithms (AIXI maybe?).

With this organizational backdrop, SQ becomes very interesting - after all, intelligent agents are bounded in space and time, and other things being equal (especially optimal cognitive algorithms) SQ is the way to go.

Robin Z: What we are looking for, however, is, by the Church-Turing thesis, an algorithm, an information-processing algorithm, and I invite the computer scientists et al. here to name any known information-processing algorithm which doesn't scale.

Assuming P != NP, no algorithm for an NP-hard problem scales. That is what makes them NP-hard.

Given that intelligent beings exist, that can be taken as evidence that AI does not require solving NP-hard problems. But NP-hard or not, nothing in the history of AI research has ever scaled up from toy problems to human level, never mind beyond, except for a few specialised party tricks like Deep Thought (the chess player). If it had, we would already have strong AI.

Richard Kennaway: I don't know what you mean - the subset-sum problem is NP-hard (and NP-complete) and the best known algorithms can - given infinite resources - be run on lists of any size with speed O(2^(N/2) N). It scales - it can be run on bigger sets - even if it is impractical to. Likewise, the traveling salesman problem can be solved in O(N^2 2^N). What I'm asking is if there are any problems where we can't change N. I can't conceive of any.
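To see the sense in which those bounds "scale", here is a small sketch tabulating the operation counts they give (constants ignored); they are defined for every N, they just grow hopelessly fast:

```python
# Operation counts for the two bounds quoted above
for n in (20, 40, 60, 80):
    subset_sum_ops = 2 ** (n / 2) * n    # meet-in-the-middle subset sum
    tsp_ops = n ** 2 * 2 ** n            # Held-Karp travelling salesman
    print(f"N={n}: subset-sum ~{subset_sum_ops:.1e} ops, TSP ~{tsp_ops:.1e} ops")
```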

"But I'm skeptical that this uniformity extends to system II. The system II abilities of the best rationalists of today may depend significantly on their having learned a set of reasoning skills developed by their culture over a long period of time."

That's precisely the point; the biological difference between humans is not that great, so the huge differences we see in human accomplishment must be due in large part to other factors.

Agreed. But I think that if you put up a scale of "intelligence" then people will take into account abilities other than those that are (in some ill-defined and crude sense) dependent only on biology. And if we're talking about building an AI, then I'm not sure how useful it is to attempt to distinguish biological and other factors. If Feynman or Von Neumann or Pauling really are a bigger distance from the VI than Eliezer allows in terms of their system II thinking ability, then that seems to me significant independently of whether that distance is best explained in terms of the interaction of powerful learned thinking tools (scientific method, analogical reasoning, logical argument) with high g factor rather than the g factor on its own.

Robin Z: the context was the feasibility of AI. We do not have infinite resources, and if P != NP, algorithms for NP-hard problems do not feasibly scale. Mathematically, the travelling salesman can be exactly solved in exponential time. Physically, exponential time is not available. Neither is exponential space, so parallelism doesn't help.

Richard Kennaway: I don't think we actually disagree about this. It's entirely possible that doubling the N of a brain - whatever the relevant N would be, I don't know, but we can double it - would mean taking up much more than twice as many processor cycles (how fast do neurons run?) to run the same amount of processing.

In fact, if it's exponential, the speed would drop by orders of magnitude for every constant increase. That would kill superintelligent AI as effectively as the laws of thermodynamics killed perpetual motion machines.

On the other hand, if you believe Richard Dawkins, Anatole France's brain was less than 1000 cc, and brains bigger than 2000 cc aren't unheard of (he lists Oliver Cromwell as an unverified potential example). Even if people are exchanging metaphorical clock rate for metaphorical instruction set size and vice-versa, and even if people have different neuron densities, this would seem to suggest the algorithm isn't particularly high-order, or if it is the high-order bottlenecks haven't kicked in at our current scale.

I struggle to get a handle on the "amount" of intelligence in something. For example, we have IQ scores for humans, but if we have a group of N humans with score X, what single human with score Y is that equivalent to? If we can't even answer this question, I don't see how we can hope to compare humans, mice, chimps, etc.


In the late 19th Century a baboon was employed as a (proved competent) railway signalman. I wouldn't trust the "village idiot" to be a competent signalman. source: http://www.earthfoot.org/lit_zone/signalmn.htm

The "earthfoot" name sets off alarm bells, but the article seems legit.

I think it would be better to illustrate intelligence as a three dimensional tree instead of a scalar chain of being. Intelligence evolved. Sometimes evolution figured out how to do the same thing in completely different ways. What is hardwired and softwired into each organism gives it a unique form of intelligence.

@Robin: Would you agree that what we label "intelligence" is essentially acting as a constructed neural category relating a bunch of cognitive abilities that tend to strongly correlate?

If so, it shouldn't be possible to get an exact handle on it as anything more than arbitrary weighted average of whatever cognitive abilities we chose to measure, because there's nothing else there to get a handle on.

But, because of that real correlation between measurable abilities that "intelligence" represents, it's still meaningful to make rough comparisons, certainly enough to say humans > chimps > mice.

Hero-worship of Einstein is something science will eventually be forced to get over.

Hero-worship of Feynman, though, is here to stay.

Hmm.
If you look at the basic physical structure of the brain, all mammals are quite close, mouse and Einstein alike.

If you look at basic computationally intensive abilities (object recognition, navigation, etc.) there is not much difference between humans and chimps at all; even a mouse is rather close. If you look at language and other social stuff, Einstein and some average dude are quite close and the chimp is significantly behind.
If you look at the ability to do theoretical physics, however, or perhaps at the complexity of concepts that an intelligence can come up with, Einstein is far ahead of some average dude. And some average dude is at the same level as the chimp and the mouse.
The thing is, humans, generally speaking, almost entirely lack any innate capacity for doing things such as theoretical physics, or any innate capacity to develop such abilities. A small restructuring of the brain, combined with years of training, translates into a huge difference compared to the typical ability (which is very close to zero).

It's entirely subjective how to weight those abilities to make the x coordinate for a chart.
Some people feel that a lot of weight must be given to the construction of extremely complex, highly consistent constructs (as in mathematics). That puts Einstein far to the right, and the typical human somewhere near the mouse and the chimp.

Some people feel that a lot of weight must be given to verbal abilities, which puts Einstein somewhere close to the typical human, and the chimp significantly to the left.

Some people feel that a lot of weight must be given to visual recognition and such. This puts Einstein, the typical human, and the chimp very close together.

There are a lot of good arguments that can be made for why each of those choices makes the most sense.
For example, human dominance as a species, and the power that humans have over the environment, is largely due to inventions made by some highly capable individuals; without those inventions we'd biologically be not much more powerful than other predators hunting in packs. That is an argument for putting Einstein and the like far to the right, and the chimp and the average human relatively close together. Or, for example, you can say that you're concerned only with some sort of "algorithmic complexity" or "computational complexity" and do not care about dominance as a species, and make an argument in favour of the second or third.

Dmytry: For example, human dominance as a species, and the power that humans have over the environment, is largely due to inventions made by some highly capable individuals; without those inventions we'd biologically be not much more powerful than other predators hunting in packs. That is an argument for putting Einstein and the like far to the right, and the chimp and the average human relatively close together.

An interesting point, but Einstein is famous because he's the guy who got there first - not necessarily the only guy who could have gotten there in principle.

This should be pretty obvious - but human intelligence varies considerably - and ranges way down below that of an average chimp or mouse. That is because humans have lots of ways to go wrong. Mutate the human genome enough, and you wind up with a low-grade moron. Mutate it a bit more, and you wind up with an agent in a permanent coma - with an intelligence probably similar to that of an amoeba.

The idea that humans are all of roughly similar intelligence, strikes me as being curiously wrong. I would guess that it probably arises from political correctness - and the strange idea that "everyone is equal".


My role model used to be some famous, sometimes historical, apparently perfect being. Since my role model has shifted to my older cousin, someone less unrealistic, I feel I've become a better leader and more alpha. I have found that lofty goals have their use, but lofty role models not so much.

When “Old One” and “the Blight” are mentioned: which characters are these?

They are characters in the well-known Vinge SF novel A Fire Upon the Deep.

I forget if I've said this elsewhere, but we should expect human intelligence to be just a bit above the bare minimum required to result in technological advancement. Otherwise, our ancestors would have been where we are now.

(Just a bit above, because there was the nice little overhang of cultural transmission: once the hardware got good enough, the software could be transmitted way more effectively between people and across generations. So we're quite a bit more intelligent than our basically anatomically equivalent ancestors of 500,000 years ago. But not as big a gap as the gap from that ancestor to our last common ancestor with chimps, 6-7 million years ago.)

This point is made in Superintelligence, right? It sounds really familiar. It's also a good addendum to this post, perhaps I'll add it into the print version, thanks!