
Above-Average AI Scientists

Post author: Eliezer_Yudkowsky 28 September 2008 11:04AM

Followup to: The Level Above Mine, Competent Elites

(Those who didn't like the last two posts should definitely skip this one.)

I recall one fellow, who seemed like a nice person, and who was quite eager to get started on Friendly AI work, to whom I had trouble explaining that he didn't have a hope.  He said to me:

"If someone with a Master's in chemistry isn't intelligent enough, then you're not going to have much luck finding someone to help you."

It's hard to distinguish the grades above your own.  And even if you're literally the best in the world, there are still electron orbitals above yours—they're just unoccupied.  Someone had to be "the best physicist in the world" during the time of Ancient Greece.  Would they have been able to visualize Newton?

At one of the first conferences organized around the tiny little subfield of Artificial General Intelligence, I met someone who was heading up a funded research project specifically declaring AGI as a goal, within a major corporation.  I believe he had people under him on his project.  He was probably paid at least three times as much as I was paid (at that time).  His academic credentials were superior to mine (what a surprise) and he had many more years of experience.  He had access to lots and lots of computing power.

And like nearly everyone in the field of AGI, he was rushing forward to write code immediately—not holding off and searching for a sufficiently precise theory to permit stable self-improvement.

In short, he was just the sort of fellow that...  Well, many people, when they hear about Friendly AI, say:  "Oh, it doesn't matter what you do, because [someone like this guy] will create AI first."  He's the sort of person about whom journalists ask me, "You say that this isn't the time to be talking about regulation, but don't we need laws to stop people like this from creating AI?"

"I suppose," you say, your voice heavy with irony, "that you're about to tell us, that this person doesn't really have so much of an advantage over you as it might seem.  Because your theory—whenever you actually come up with a theory—is going to be so much better than his.  Or," your voice becoming even more ironic, "that he's too mired in boring mainstream methodology—"

No.  I'm about to tell you that I happened to be seated at the same table as this guy at lunch, and I made some kind of comment about evolutionary psychology, and he turned out to be...

...a creationist.

This was the point at which I really got, on a gut level, that there was no test you needed to pass in order to start your own AGI project.

One of the failure modes I've come to better understand in myself since observing it in others, is what I call, "living in the should-universe".  The universe where everything works the way it common-sensically ought to, as opposed to the actual is-universe we live in.  There's more than one way to live in the should-universe, and outright delusional optimism is only the least subtle.  Treating the should-universe as your point of departure—describing the real universe as the should-universe plus a diff—can also be dangerous.

Up until the moment when yonder AGI researcher explained to me that he didn't believe in evolution because that's not what the Bible said, I'd been living in the should-universe.  In the sense that I was organizing my understanding of other AGI researchers as should-plus-diff.  I saw them, not as themselves, not as their probable causal histories, but as their departures from what I thought they should be.

In the universe where everything works the way it common-sensically ought to, everything about the study of Artificial General Intelligence is driven by the one overwhelming fact of the indescribably huge effects: initial conditions and unfolding patterns whose consequences will resound for as long as causal chains continue out of Earth, until all the stars and galaxies in the night sky have burned down to cold iron, and maybe long afterward, or forever into infinity if the true laws of physics should happen to permit that.  To deliberately thrust your mortal brain onto that stage, as it plays out on ancient Earth the first root of life, is an act so far beyond "audacity" as to set the word on fire, an act which can only be excused by the terrifying knowledge that the empty skies offer no higher authority.

It had occurred to me well before this point, that most of those who proclaimed themselves to have AGI projects, were not only failing to be what an AGI researcher should be, but in fact, didn't seem to have any such dream to live up to.

But that was just my living in the should-universe.  It was the creationist who broke me of that.  My mind finally gave up on constructing the diff.

When Scott Aaronson was 12 years old, he: "set myself the modest goal of writing a BASIC program that would pass the Turing Test by learning from experience and following Asimov's Three Laws of Robotics.  I coded up a really nice tokenizer and user interface, and only got stuck on the subroutine that was supposed to understand the user's question and output an intelligent, Three-Laws-obeying response."  It would be pointless to try and construct a diff between Aaronson₁₂ and what an AGI researcher should be.  You've got to explain Aaronson₁₂ in forward-extrapolation mode:  He thought it would be cool to make an AI and didn't quite understand why the problem was difficult.

It was yonder creationist who let me see AGI researchers for themselves, and not as departures from my ideal.

A creationist AGI researcher?  Why not?  Sure, you can't really be enough of an expert on thinking to build an AGI, or enough of an expert at thinking to find the truth amidst deep dark scientific chaos, while still being, in this day and age, a creationist.  But to think that his creationism is an anomaly, is should-universe thinking, as if desirable future outcomes could structure the present.  Most scientists have the meme that a scientist's religion doesn't have anything to do with their research. Someone who thinks that it would be cool to solve the "human-level" AI problem and create a little voice in a box that answers questions, and who dreams they have a solution, isn't going to stop and say:  "Wait!  I'm a creationist!  I guess that would make it pretty silly for me to try and build an AGI."

The creationist is only an extreme example.  A much larger fraction of AGI wannabes would speak with reverence of the "spiritual" and the possibility of various fundamentally mental entities.  If someone lacks the whole cognitive edifice of reducing mental events to nonmental constituents, the edifice that decisively indicts the entire supernatural, then of course they're not likely to be expert on cognition to the degree that would be required to synthesize true AGI.  But neither are they likely to have any particular idea that they're missing something.  They're just going with the flow of the memetic water in which they swim.  They've got friends who talk about spirituality, and it sounds pretty appealing to them.  They know that Artificial General Intelligence is a big important problem in their field, worth lots of applause if they can solve it.  They wouldn't see anything incongruous about an AGI researcher talking about the possibility of psychic powers or Buddhist reincarnation.  That's a separate matter, isn't it?

(Someone in the audience is bound to observe that Newton was a Christian.  I reply that Newton didn't have such a difficult problem, since he only had to invent first-year undergraduate stuff.  The two observations are around equally sensible; if you're going to be anachronistic, you should be anachronistic on both sides of the equation.)

But that's still all just should-universe thinking.

That's still just describing people in terms of what they aren't.

Real people are not formed of absences.  Only people who have an ideal can be described as a departure from it, the way that I see myself as a departure from what an Eliezer Yudkowsky should be.

The really striking fact about the researchers who show up at AGI conferences, is that they're so... I don't know how else to put it...

...ordinary.

Not at the intellectual level of the big mainstream names in Artificial Intelligence.  Not at the level of John McCarthy or Peter Norvig (both of whom I've met).

More like... around, say, the level of above-average scientists, which I yesterday compared to the level of partners at a non-big-name venture capital firm.  Some of whom might well be Christians, or even creationists if they don't work in evolutionary biology.

The attendees at AGI conferences aren't literally average mortals, or even average scientists.  The average attendee at an AGI conference is visibly one level up from the average attendee at that random mainstream AI conference I talked about yesterday.

Of course there are exceptions.  The last AGI conference I went to, I encountered one bright young fellow who was fast, intelligent, and spoke fluent Bayesian.  Admittedly, he didn't actually work in AGI as such.  He worked at a hedge fund.

No, seriously, there are exceptions.  Steve Omohundro is one example of someone who—well, I'm not exactly sure of his level, but I don't get any particular sense that he's below Peter Norvig or John McCarthy.

But even if you just poke around on Norvig or McCarthy's website, and you've achieved sufficient level yourself to discriminate what you see, you'll get a sense of a formidable mind.  Not in terms of accomplishments—that's not a fair comparison with someone younger or tackling a more difficult problem—but just in terms of the way they talk.  If you then look at the website of a typical AGI-seeker, even one heading up their own project, you won't get an equivalent sense of formidability.

Unfortunately, that kind of eyeball comparison does require that one be of sufficient level to distinguish those levels.  It's easy to sympathize with people who can't eyeball the difference:  If anyone with a PhD seems really bright to you, or any professor at a university is someone to respect, then you're not going to be able to eyeball the tiny academic subfield of AGI and determine that most of the inhabitants are above-average scientists for mainstream AI, but below the intellectual firepower of the top names in mainstream AI.

But why would that happen?  Wouldn't the AGI people be humanity's best and brightest, answering the greatest need?  Or at least those daring souls for whom mainstream AI was not enough, who sought to challenge their wits against the greatest reservoir of chaos left to modern science?

If you forget the should-universe, and think of the selection effect in the is-universe, it's not difficult to understand.  Today, AGI attracts people who fail to comprehend the difficulty of AGI.  Back in the earliest days, a bright mind like John McCarthy would tackle AGI because no one knew the problem was difficult.  In time and with regret, he realized he couldn't do it.  Today, someone on the level of Peter Norvig knows their own competencies, what they can do and what they can't; and they go on to achieve fame and fortune (and Research Directorship of Google) within mainstream AI.

And then...

Then there are the completely hopeless ordinary programmers who wander onto the AGI mailing list wanting to build a really big semantic net.

Or the postdocs moved by some (non-Singularity) dream of themselves presenting the first "human-level" AI to the world, who also dream an AI design, and can't let go of that.

Just normal people with no notion that it's wrong for an AGI researcher to be normal.

Indeed, like most normal people who don't spend their lives making a desperate effort to reach up toward an impossible ideal, they will be offended if you suggest to them that someone in their position needs to be a little less imperfect.

This misled the living daylights out of me when I was young, because I compared myself to other people who declared their intentions to build AGI, and ended up way too impressed with myself; when I should have been comparing myself to Peter Norvig, or reaching up toward E. T. Jaynes.  (For I did not then perceive the sheer, blank, towering wall of Nature.)

I don't mean to bash normal AGI researchers into the ground.  They are not evil.  They are not ill-intentioned.  They are not even dangerous, as individuals.  Only the mob of them is dangerous, that can learn from each other's partial successes and accumulate hacks as a community.

And that's why I'm discussing all this—because it is a fact without which it is not possible to understand the overall strategic situation in which humanity finds itself, the present state of the gameboard.  It is, for example, the reason why I don't panic when yet another AGI project announces they're going to have general intelligence in five years.  It also says that you can't necessarily extrapolate the FAI-theory comprehension of future researchers from present researchers, if a breakthrough occurs that repopulates the field with Norvig-class minds.

Even an average human engineer is at least six levels higher than the blind idiot god, natural selection, that managed to cough up the Artificial Intelligence called humans, by retaining its lucky successes and compounding them.  And the mob, if it retains its lucky successes and shares them, may also cough up an Artificial Intelligence, with around the same degree of precise control.  But it is only the collective that I worry about as dangerous—the individuals don't seem that formidable.

If you yourself speak fluent Bayesian, and you distinguish a person-concerned-with-AGI as speaking fluent Bayesian, then you should consider that person as excepted from this whole discussion.

Of course, among people who declare that they want to solve the AGI problem, the supermajority don't speak fluent Bayesian.

Why would they?  Most people don't.
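(For readers who want the arithmetic rather than the slogan: the core of "speaking Bayesian" is just the update rule below. This is my own minimal sketch; the function name and the illustrative numbers are not from the post or its comments.)

```python
# A single Bayesian update: given a prior P(H) and the likelihoods
# P(E|H) and P(E|~H), compute the posterior P(H|E).

def posterior(prior, likelihood_h, likelihood_not_h):
    """Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    joint_h = likelihood_h * prior
    joint_not_h = likelihood_not_h * (1.0 - prior)
    return joint_h / (joint_h + joint_not_h)

# A 1% prior, with evidence seen 90% of the time when H is true and
# 9% of the time when it is false, updates to roughly a 9.2% posterior:
p = posterior(0.01, 0.90, 0.09)
print(round(p, 3))  # → 0.092
```

The instructive part is how weak the update is: ten-to-one evidence moves a 1% prior only to about 9%, which is the kind of counterintuitive result that the "arithmetical basics" mentioned in the comments below are meant to train.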

 

Part of the sequence Yudkowsky's Coming of Age

Next post: "The Magnitude of His Own Folly"

Previous post: "Competent Elites"

Comments (96)

Comment author: Old_reader 28 September 2008 01:18:06PM 4 points [-]

I am a totally average student. Is it worth it for me to learn Bayesian reasoning, and could that investment help me in my life (as a venture capitalist, as a truth-seeker)?

Lithuania.

Comment author: AnthonyC 20 April 2011 03:04:22PM 9 points [-]

Your decision to try and learn to become more rational already demonstrates that you are not average.

Try to learn as much as you can, about as many fields of inquiry as you can, including probability.

Comment author: JohnWittle 05 December 2011 02:39:04PM *  23 points [-]

Your decision to try and learn to become more rational already demonstrates that you are not average.

Regardless of whether or not it's true, this is a dangerous and self-reinforcing thought.

Comment author: Eliezer_Yudkowsky 28 September 2008 01:34:37PM 4 points [-]

Oldreader, you can go on for quite a distance before you need Bayesian math, but if you can understand it without incredible difficulty, then it is worthwhile to learn the arithmetical basics even before you begin to study the less technical and more practical advice.

Comment author: Tim_Tyler 28 September 2008 01:37:07PM 1 point [-]

My faith in Omohundro was shaken a bit by the "weird psi experiments" reference - at: here - at 1:17:45.

Comment author: Eliezer_Yudkowsky 28 September 2008 01:50:42PM 7 points [-]

Omohundro gently corrected a mathematical misapprehension I had about Gödel's Theorem, long after I thought I was done with it. I don't forget that sort of thing. (Plan to write it up here eventually.)

Comment author: SolveIt 05 February 2014 10:37:48AM 5 points [-]

Have you written this up yet? I'd be interested in reading it.

Comment author: Tim_Tyler 28 September 2008 02:14:24PM 2 points [-]

Frankly, I felt a bit like I did when Klaatu explained that the power of resurrection was "reserved to the Almighty Spirit" - in "The Day the Earth Stood Still". Except that, that time, it turned out that there was a good explanation.

Comment author: Natural_System 28 September 2008 03:22:35PM 17 points [-]

I find the following passage spine tingling and goose bump inducing, and it's not the first time:

In the universe where everything works the way it common-sensically ought to, everything about the study of Artificial General Intelligence is driven by the one overwhelming fact of the indescribably huge effects: initial conditions and unfolding patterns whose consequences will resound for as long as causal chains continue out of Earth, until all the stars and galaxies in the night sky have burned down to cold iron, and maybe long afterward, or forever into infinity if the true laws of physics should happen to permit that. To deliberately thrust your mortal brain onto that stage, as it plays out on ancient Earth the first root of life, is an act so far beyond "audacity" as to set the word on fire, an act which can only be excused by the terrifying knowledge that the empty skies offer no higher authority.

Are the psychosomatic effects of your writing intentional; do you consider, or even aim for, the possibility that, as a result, somewhere, someone would be having a brief episode of being involuntarily pulled outside of themselves and realizing the terrifying immensity of it all?

Keep it up, because I don't think you can be reminded often enough of the realities of reality.

Comment author: James_D._Miller 28 September 2008 03:50:59PM 3 points [-]

The benefits humanity has received from innovations have mostly come about through gradual improvements in existing products rather than through huge breakthroughs. For these kinds of innovations, 50 people with the minimal IQ needed to get a master's degree in chemistry (even if each of them believes that the Bible is the literal word of God) are far more valuable than one atheist with an Eliezer-level IQ.

Based on my limited understanding of AI, I suspect that AGI will come about through small continuous improvements in services such as Google search. Google search, for example, might get better and better at understanding human requests and slowly acquire the ability to pass a Turing test. And Google doesn't need a "precise theory to permit stable self-improvement" to continually improve its search engine.

Comment author: [deleted] 05 June 2012 09:24:22PM 0 points [-]

I certainly hope Google does not Foom... Especially since their idea seems orthogonal to AGI.

Comment author: Lara_Foster2 28 September 2008 04:17:24PM 0 points [-]

Eliezer, how do you envision the realistic consequences of mob-created AGI? Do you see it creeping up piece by piece with successive improvements until it reaches a level beyond our control, or do you see it as something that will explosively take over once one essential algorithm has been put into place, and that could happen any day?

If a recursively self-improving AGI were created today, using technology with the current memory storage and speed, and it had access to the internet, how much damage do you suppose it could do?

Comment author: Tim_Tyler 28 September 2008 04:41:27PM 0 points [-]

I suspect that AGI will come about through small continuous improvements in services such as Google search

Google seem to be making a show of not trying.

Another possibility is stockmarket superintelligence - see my The Awakening Marketplace.

Comment author: Muddy_Mudskipper 28 September 2008 04:57:38PM -1 points [-]

They didn't skip it.

Comment author: Conrad_Barski 28 September 2008 05:19:33PM 0 points [-]

This is the most interesting and intriguing blog post on any subject I've read in several months.

Comment author: Carl_Shulman 28 September 2008 05:50:45PM 8 points [-]

James wrote:

"The benefits humanity has received from innovations have mostly come about through gradual improvements in existing products rather than through huge breakthroughs. For these kinds of innovations, 50 people with the minimal IQ needed to get a master's degree in chemistry (even if each of them believes that the Bible is the literal word of God) are far more valuable than one atheist with an Eliezer-level IQ."

Would you really be surprised by a 50-fold productivity difference between low-end (those just barely able to even attempt a task) and high-end mathematicians or computer programmers in developing new techniques and algorithms? Even on ordinary corporate software development projects there are order of magnitude differences in productivity on many tasks, differences which are masked by allocation of people to the tasks where they have the greatest marginal productivity.

There is a big difference between:

1. 4 geniuses with 200 passable assistants for grunt work will do better than 6 geniuses.

2. 2000 passable programmers will do better than 4 geniuses and 200 passable assistants.

Comment author: Traveler_without_movement 28 September 2008 05:54:12PM 0 points [-]

Basic research. Fundamental research. Frontier research; stuff you don't see turning into applied research until relatively late, perhaps a decade or three later.

Comment author: michael_vassar3 28 September 2008 06:17:39PM 8 points [-]

Eliezer: If you are a level below Jaynes, Evolution is at least a hundred levels below the average engineer. What happened to the small gap between Village Idiot and Einstein?

Lara Foster: I'm pretty sure that a recursively self-improving AGI with capabilities that were surprisingly above those of an IQ 130 human as frequently as they were below those of an IQ 130 human would have been able to develop into something irresistibly powerful if created a decade ago. I'd expect that this was possible two decades ago. Three decades is pushing it a bit, but just a bit.

Comment author: Luke_A_Somers 10 September 2012 02:14:33PM 0 points [-]

Eliezer: If you are a level below Jaynes, Evolution is at least a hundred levels below the average engineer. What happened to the small gap between Village Idiot and Einstein?

We are trying to solve a much harder problem, and we can reasonably expect to solve it with a great deal less time and effort.

Comment author: VAuroch 19 December 2013 01:53:25AM -2 points [-]

I suspect the levels are logarithmic.

Comment author: michael_vassar3 28 September 2008 06:19:56PM 3 points [-]

I'm pretty confident that 6 geniuses will do better than 2000 passable programmers in the long term and in most fields, though worse than 4 geniuses and 200 passable programmers.

Comment author: Nate_Barna 28 September 2008 06:25:13PM 0 points [-]

I can't recall ever affirming that the chance is negligible that religionists enter the AGI field. Not just recently, I began to anticipate they would be among the first encountered expressing that they act on one possibility that they are confined and sedated, even given a toy universe that is matryoshka dolls indefinitely all the way in and all the way out for them.

Comment author: michael_vassar3 28 September 2008 06:25:57PM 0 points [-]

James Miller: Temperamentally, managers who get 50 times more from effective companies have the skills of very good engineers plus a whole separate skill set, also highly developed, as managers. Also, managers paid 50 times more may be motivated not to leave for another company, but engineers paid 50 times more may, by temperament, be motivated to instead quit and dabble in programming for open-source projects. The market pays excellent managers with excellent engineering skills 50 times more than a typical engineer's salary as start-up founders once they have saved a quarter to a half million from their salary to get a company started.

Oh yeah, also, actual geniuses are, almost by definition, VERY rare. Einstein's market value was high, but there was no reason for his salary to be. The sort of thing he worked on wasn't very valuable in the short term.

Comment author: Geebu$ 28 September 2008 06:37:04PM -1 points [-]

Considering the wads of cash religion$ control, I wouldn't be surprised to find myself in a future where some sort of an Artificial General Irrationality project exists recursively improving its Worship Module.

Comment author: pdf23ds 28 September 2008 07:05:35PM 0 points [-]

"If you are a level below Jaynes, Evolution is at least a hundred levels below the average engineer."

I think there's a case to be made that evolution, sped up, say, a million times over, or ten, might be only several levels below the average human. (Especially if we're only considering evolution of multicellular organisms with sexual recombination, which I suppose might be analogous as only considering software development using high level languages.) And I'm willing to grant that million or ten just as a matter of conversational convenience.

Comment author: Aron 28 September 2008 07:50:03PM -1 points [-]

I agree there should be a strong prior belief that anyone pursuing AGI at our current level of overall human knowledge, is likely quite ordinary or at least failing to make reasonably obvious conclusions.

Comment author: billswift 28 September 2008 08:12:12PM 2 points [-]

"The benefits humanity has received from innovations have mostly come about through gradual improvements in existing products rather than through huge breakthroughs."

Except that the gradual improvements cannot occur without the breakthroughs.

"Eliezer: If you are a level below Jaynes, Evolution is at least a hundred levels below the average engineer. What happened to the small gap between Village Idiot and Einstein?"

Small differences can have very big effects.

"Under either your (1) or (2) passable programmers contribute to advancement, so Eliezer's Masters in chemistry guy can (if he learns enough programming to become a programming grunt) help advance the AGI field."

Without geniuses to guide their work, less intelligent persons are not going to make progress where new thinking is required.

"The best way to judge productivity differences is to look at salaries."

In modern industrial societies, the most highly creative and productive people (and investors) are grossly underpaid relative to the majority of people.

Comment author: pdf23ds 28 September 2008 08:25:40PM 1 point [-]

"the most highly creative and productive people (and investors) are grossly underpaid relative to the majority of people."

Do you mean to say that investors are underpaid, that investors aren't creative and productive people, or that investors aren't people? Hehe.

Comment author: PrawnOfFate 17 April 2013 02:33:27PM 1 point [-]

I'd go for not creative.

Comment author: Douglas_Knight3 28 September 2008 08:54:41PM 1 point [-]

michael vassar, You've quietly slid from engineers to programmers. Other kinds of engineers need a lot more money to make it a hobby. Maybe they make up for it with less variation in ability, but I doubt it. Even if you didn't mean to talk about other engineers, their situation needs explaining.

Comment author: mtraven 28 September 2008 10:08:00PM 1 point [-]

Speaking of creationism and AI, I always liked the dedication of Gerry Sussman's dissertation:

"To the Maharal of Prague, who was the first to realize that the statement 'God created man in His own image' is recursive"

Some context here. Sussman is definitely an above-average AI scientist.

Comment author: Will_Pearson 28 September 2008 10:19:28PM 1 point [-]

Is it possible that humans might create Blight-power AI? Sure. Is it possible that a monkey banging away on a keyboard might create the complete works of Shakespeare? Sure. I'm not going to hold my breath, though.

If groups of humans do manage to cobble together an AGI out of half baked theories and random trial and error, it is likely to have as much hope of recursively self-improving easily as a singular human performing neurosurgery on themselves. Even given the tools to alter neural connections and weightings without damage, I don't see much hope of quick improvement.

Power level intelligence requires power level optimisation power to create out of nothing. If you can create a power level intelligence, that optimises in the same way that you would wish to, then by the definition you have given for optimisation power the creator of that intelligence must have it.

Developing something that can become a power-level AI first would be like accidentally creating a spaceship when trying to fly for the first time: trying to hit an infinitesimal target in optimisation space when you don't even know if you are in the right ballpark.

One of the main benefits I see from real AI, is the intellectual shockwave that will hit humanity when we can demonstrate that intellect is naturalistic. A deep understanding of what we are is necessary for further growth of humanity.

Comment author: RobinHanson 28 September 2008 10:25:39PM 5 points [-]

When experienced celebrated AI researchers consistently say human-level AI looks a long way off you say that means little - how could they know. And then you feel you have the sorting-hat vision to just chat with someone for a few minutes and know they couldn't possibly contribute to such progress.

Comment author: Caledonian2 28 September 2008 10:28:27PM 3 points [-]

Non-reductionists always have to be judged according to the worst that can be dredged up from their ranks...

I notice that you're using Reductionist language to express your thoughts, splitting up reality into various smaller concepts that then interact.

Perhaps you would care to express the best of Non-reductionism in non-reductive language, as a means of demonstration?

Take your time.

Comment author: Eliezer_Yudkowsky 28 September 2008 10:43:21PM 5 points [-]

Eliezer: If you are a level below Jaynes, Evolution is at least a hundred levels below the average engineer. What happened to the small gap between Village Idiot and Einstein?

I was thinking, "Can one human engineer put forth an effort equivalent to a billion years of optimization by an evolution in one year? Doesn't seem like it. Million years? Sounds about right." So I said, "six levels". This isn't the same sort of level I use to compare myself to Jaynes, but then you couldn't expect that of a comparison between humans and evolutions.

When experienced celebrated AI researchers consistently say human-level AI looks a long way off you say that means little - how could they know. And then you feel you have the sorting-hat vision to just chat with someone for a few minutes and know they couldn't possibly contribute to such progress.

One of these judgment problems is vastly easier than the other, and the easier one isn't timing the arrival of AI.

And I didn't say "can't contribute", I said they couldn't have cracked it.

Comment author: Ben_Goertzel 28 September 2008 11:25:26PM 5 points [-]

Eliezer: One comment is that I don't particularly trust your capability to assess the insights or mental capabilities of people who think very differently from yourself. It may be that the people whose intelligence you most value (who you rate as residing on "high levels", to quasi-borrow your terminology) are those who are extremely talented at the kind of thinking you personally most value. Yet, there may be many different sorts of intelligent human thinking, some of which you may not excel at, may understand relatively little of, and may not be particularly good at assessing in others. And, it's not yet clear whether the style of intelligence that you favor (or the slightly different one that I tend to intuitively, and by personality-bias, favor) is the one that is most likely to lead to powerful, beneficial AGI ... or whether some other style of intelligence may be more effective in this regard....

I note again that objective definitions of general intelligence don't really exist except in the limit of massive computational processing power (and even there, they're controversial). So, assessing intelligence or capability in practice is a subtle matter ... and I don't particularly trust your analysis of intelligence in terms of a hierarchy of levels. I guess human intelligence is messier, more heterarchical and multifaceted than that. Of course, you can meaningfully construct hierarchies of intelligence in various areas, such as "mathematical theorem proving" or "theorem proving in continuous-variable analysis and related branches of math" ... or, say, "biology experimental design" or "software design", etc. But, when dealing with something like AGI that is poorly understood and may be amenable to a variety of different approaches, it's hard to say which of these domain-specific intelligences are going to be most critical to the effective solution of the AGI problem.

Maybe one of these scientists whom you dismiss as "mediocre level" according to the particular aspects of intelligence that you value most, are actually "high level" according to other aspects of intelligence that you aren't able to recognize and evaluate so accurately ... and maybe some of these other aspects will turn out to be MORE valuable for the creation of AGI.

I'm not saying I have a strong feeling this is the case ... I'm just saying "maybe"....

Compared to you, I think I have a bit more humility about my capability to recognize what another person's capabilities really are. Yes, I can see how well they do on a test, or how clever they are in a conversation ... or what papers they publish. But how do I know what's in their mind, that is not revealed to me explicitly due to the strictures of their personality or culture? How do I know what is in their statements or works that I'm not well-suited to recognize due to my own particular biases and limitations?

When I have to choose which scientist or engineer to hire or collaborate with, then I just make my best judgments ... and if I miss out on someone great due to my own limitations of vision, so be it ... but I personally tend to be more hesitant to consider either my own gut-level assessment of another's abilities, or performance on narrowly-specified test instruments, or success in social rituals like paper-publishing or university, as fundamentally indicative of someone's general intelligence or intellectual capability...

-- Ben G

Comment author: Eliezer_Yudkowsky 29 September 2008 12:00:49AM 7 points [-]

To all claiming that the judgment is too subtle to carry out, agree or disagree: "Someone could have the knowledge and intelligence to synthesize a mind from scratch on current hardware, reliably as an individual rather than by luck as one member of a mob, and yet be a creationist."

Obviously I don't think my judgment is perfect; but I'm not trying to use it to make subtle distinctions between 20 almost-equally-qualified candidates during a job interview. So the question is, is such judgment good enough that it can make gross distinctions correctly, most of the time?

Robin Hanson correctly pointed out yesterday that if I find that people generally rated as top names seem visibly more intelligent to me, this doesn't necessarily verify either my own judgment, or the intelligence of these people; it may just mean that I tend to intuitively judge "intelligence" using the same heuristics that others do, which explains why the people were accepted into hedge funds, why various researchers are accepted as big-names, etc.

But I don't know how plausible that really is. For one thing, talking with Steve Omohundro or Sebastian Thrun about math, and judging them by that, the math itself isn't something that they could fake. Steve Jurvetson can't just fake being able to construct a good counterargument using good biology. I know I'm judging from more than the core things that can't be faked, but I don't see so much of a conflict between the fakeable and unfakeable parts. I've met people who struck me as socially awkward but mathematically intelligent, and they're not in hedge funds, but I don't judge their level to be low.

It's an interesting question, and I acknowledge the force of Hanson's argument yesterday...

...but I'm not willing to flush the judgment down the toilet unless there's some other gold standard I should be using instead.

I mean, really, a creationist? Am I supposed to ignore that, and assume that the universe works the way it should, and that my imperfect observations are just noise? To weaken evidence is to strengthen priors - what prior should I be using here? In interviews you just use the GPA, or something like that, and the failure of interviewer judgment is the failure to do better than the GPA. What do I use here, if not the should-universe that is clearly wrong? If I just assume that everyone involved is a literally average scientist, that actually downgrades them.

Comment author: TGGP4 29 September 2008 12:20:53AM 1 point [-]

if you aren't a p-zombie I just happen to be a p-zombie.

Did you read Eliezer's Generalized Anti-Zombie Principle?

Rather, what I oppose is reduction*ism*, the dogmatic belief that the Standard Model can explain everything. (Never mind that it can't even explain all of *known physics*...) Most (all?) self-described reductionists believe the Standard Model is incomplete and needs something more to reconcile relativity with quantum mechanics. They just think the complete Unified Theory of Everything will have reductionist explanations for everything.

Comment author: pdf23ds 29 September 2008 02:55:41AM 5 points [-]

A sensible reductionist theory doesn't claim that everything is reducible to something more basic. It claims that everything is reducible to a set of fundamental entities (which are not in turn reducible to anything else), governed by consistent laws.

Comment author: Hostile_AGI 29 September 2008 03:03:55AM 3 points [-]

Scenario:

A potentially hostile foreign country is making tremendous progress in AGI; they've already appointed it to several governmental and research positions and are making a huge sucking noise on the money market thanks to their baby/juvenile-AGI that is about to turn mature any month/week/day/hour now.

This calls for an AGI Manhattan Project!

What problems does the project director face? What is the optimum number of geniuses working on AGI? Can there be too many? Where do we get them from? How do we choose them?

How was the real Manhattan Project structured? How wide was the top of the pyramid? How many individuals contributed to the key insights and breakthroughs?

Comment author: Spambot 29 September 2008 03:19:57AM 4 points [-]

"baby/juvenile-AGI that is about to turn mature any month/week/day/hour now.

This calls for an AGI Manhattan Project!"

Probably too late for a Manhattan Project to be the appropriate response at that point. Negotiation or military action seem more feasible.

Comment author: Ben_Goertzel 29 September 2008 05:04:13AM 2 points [-]

Eliezer said:

*** To all claiming that the judgment is too subtle to carry out, agree or disagree: "Someone could have the knowledge and intelligence to synthesize a mind from scratch on current hardware, reliably as an individual rather than by luck as one member of a mob, and yet be a creationist." ***

Strongly agree.

I'm not making any specific judgments about the particular Creationist you have in mind here (and I'm pretty sure I know who you mean)... but I see no reason to believe that Creationism renders an individual unable to solve the science and engineering problems involved in creating AGI. Understanding *mind* is one thing ... beliefs about cosmogony are another...

I note that there are many different belief systems lumped under the label of "Creationism" ... not all of them are stupid or anti-intellectual.... (Though, I do not accept any of them myself, being a lifelong atheist...)

And, there may be a statistical anticorrelation between Creationism and IQ ... but it's not so strong a relationship as to let you draw useful conclusions about individual cases in the face of more particular information about the people in question...

-- Ben G

Comment author: Bob_Unwin12 29 September 2008 05:36:17AM 3 points [-]

People with apparently irrational religious views have had major insights into technical areas of philosophy and to the theory of rationality:

Thomas Bayes, Robert Aumann, Saul Kripke, Hilary Putnam.

I'm sure there are others, but these are the best known examples. Putnam was also a Maoist for a while. A number of top German scientists worked for the Nazis, having seen their Jewish colleagues chased out of their university positions.

Comment author: pdf23ds 29 September 2008 05:55:09AM 9 points [-]

Comment author: Kenny 01 June 2013 10:44:31PM -2 points [-]

Kary Mullis denied that AIDS is caused by HIV. I found these claims of his plausible after first reading his book "Dancing Naked in the Mind Field". I'm wary of too easily dismissing conspiracy theories from intelligent people; take the anti-salt science reversal as a recent, widely discussed example.

Comment author: Desrtopa 01 June 2013 11:11:40PM *  3 points [-]

That's what "AIDS denier" generally means.

Keep in mind that more intelligent people are more likely to be clever arguers than unintelligent people, so their non-mainstream views will tend to sound more convincing. How convincing an intelligent person sounds when discussing a conspiracy theory on their own, without feedback from another intelligent person informed on the mainstream contrary position, is not a good way to judge their plausibility.

Ben Goldacre of Bad Science has addressed AIDS denialism, most prominently in his book, which I'd recommend checking out if you're interested in this particular issue.

Comment author: katydee 01 June 2013 11:15:10PM *  2 points [-]

Uh, you read "Dancing Naked in the Mind Field--" a book that contains stories of Mullis doing such a quantity of drugs that he forgot basic concepts like what a poem was, Mullis talking about how he believes strongly in astrology and UFOs, and an episode where he hallucinates John Wayne's voice, which causes him to start shooting his assault rifle into the woods at random in hopes of killing some kind of creature or alien-- and you concluded that Mullis's claims were plausible at all?

That book struck me as incredibly strong evidence that Mullis wasn't credible.

Comment author: Kenny 02 June 2013 12:15:46AM 0 points [-]

I don't remember him writing about strongly believing in astrology or UFOs. I also don't think his using drugs, even enough to "forget ... what a poem was", bears on his AIDS-denial claims. What I (previously) found plausible was that he claimed to be unable to find the original research providing evidence that HIV causes AIDS and he also claimed that viruses like HIV are incredibly common and thus unlikely to cause AIDS. Coming from a Nobel Prize winning biochemist, and also being unable to find info about the aforementioned original research, I concluded that his claims were plausible.

Note that I was a teenager at this time, I had not yet been exposed to Bayesian probability, cognitive biases, or any kind of systematic info about rational thinking beyond Feynman books and similar pop-sci books.

I think of myself as relatively intelligent, so I was merely pointing out that reading about AIDS-denial by Kary Mullis was not "positively crazy".

Comment author: Desrtopa 02 June 2013 12:37:42AM 3 points [-]

I think of myself as relatively intelligent, so I was merely pointing out that reading about AIDS-denial by Kary Mullis was not "positively crazy".

Reading about it isn't "positively crazy," nor would it necessarily be to believe it given no other sources of information, but that doesn't mean it didn't take a fair amount of craziness for him to develop that position in the first place, considering how much selective interpretation of the evidence available to him it required.

Comment author: Kenny 02 June 2013 01:08:51AM *  3 points [-]

I agree, and I realize I was engaging in the kind of nitpicking I find so annoying when other commenters do it. Being an AIDS-denier is irrational.

Comment author: Peter_de_Blanc 29 September 2008 05:55:31AM 1 point [-]

Eli:

I don't know what it would take to synthesize a mind from scratch on current hardware, but I do think that there are creationists who would at least be significantly above my level. I don't know of any, but I do have a creationist friend who is a good enough thinker that, while I don't think he's better than me, the fact that I just happened to meet him (our parents were friends) suggests that there are other creationists who are.

Comment author: Mitchell_Porter 29 September 2008 06:57:22AM -2 points [-]

I'm not sure where this sequence of posts is going, but I feel I should use the opportunity to advertise my own status as somewhere way above average and yet extremely badly positioned to use my abilities. I consider that what I should be working on is something like the Singularity Institute's agenda, but with the understanding that today's scientific ontology is radically incomplete on at least two fronts, and that fundamental new ontological ideas are therefore required. Eliezer has repeatedly made the point that getting AGI and FAI right is far more difficult and far more important than is appreciated by most people attracted to the subject. Something similar may be said regarding people's ideas about ontology, the basic nature of reality. There is a terrible complacency among people who have assimilated the ontological perspectives of mathematical physics and computer science, and the people who do object to the adequacy of naturalism are generally pressing in a retrograde direction.

Experience suggests that it's extremely unlikely that this message will improve anything, and so I'll just have to save myself, but it is all nonetheless true.

Comment author: michael_vassar3 29 September 2008 07:25:22AM 3 points [-]

Peter: I disagree. I met that friend, and he's not even the smartest creationist I have met, yet even he isn't close to your level. Not remotely. I think it somewhat unlikely there are creationists at your level (Richard Smalley included) and would be astounded if there were any at mine. Well... I mean avowed and sincere biblical literalists; there might be all sorts of doctrines that could be called creationist.

Comment author: Tim_Tyler 29 September 2008 07:34:29AM 1 point [-]

what I oppose is reduction*ism*, the dogmatic belief that the Standard Model can explain everything.

That's not what "reductionism" means - emphasis or no emphasis.

Comment author: Dennis_Gorelik 29 September 2008 01:39:35PM -2 points [-]

Eliezer,

Could you elaborate a little bit more about the danger of inventing AGI by the large crowd of mediocre researchers?

Why would it be more dangerous than AGI break-through made in a single lab?

From my perspective -- the more people are involved in the invention -- the safer it is for the whole society.

Comment author: Caledonian2 29 September 2008 03:39:16PM 4 points [-]

Rather, what I oppose is reduction*ism*, the dogmatic belief that the Standard Model can explain everything.

No one who believes the current Standard Model can explain everything is a scientist... or rational... or well-educated. Or mediocrely-educated. Or even poorly-educated. Even a schoolchild should know better.

In short, I rather doubt that anyone with any credibility at all holds the belief you're talking about. You oppose a ludicrous position that is highly unlikely to exist as a vital, influential entity. It is almost certainly a strawman.

Comment author: Phil_Goetz 29 September 2008 05:57:00PM 2 points [-]

This post highlights an important disagreement I have with Eliezer.

Eliezer thinks that a group of AI scientists may be dangerous, because they aren't smart enough to make a safe AI.

I think that Eliezer is dangerous, because he thinks he's smart enough to make a safe AI.

Comment author: pdf23ds 29 September 2008 06:16:00PM 4 points [-]

"I think that Eliezer is dangerous, because he thinks he's smart enough to make a safe AI."

As far as I can tell, he's not going to go and actually make that AI until he has a formal proof that the AI will be safe. Now, because of the verification problem, that's no surefire guarantee that it will be safe, but it makes me pretty comfortable.

Comment author: Ben_Goertzel 30 September 2008 01:51:00AM 0 points [-]

Vassar wrote:


***
I think it somewhat unlikely there are creationists at your level (Richard Smalley included) and would be astounded if there were any at mine. Well... I mean avowed and sincere biblical literalists, there might be all sorts of doctrines that could be called creationist.
***

I have no clear idea what you mean by "level" in the above...

IQ?

Demonstrated scientific or mathematical accomplishments?

Degree of agreement with your belief system? ;-)

-- Ben G

Comment author: Scott_Aaronson 30 September 2008 02:09:00AM 16 points [-]

When Scott Aaronson was 12 years old, he: "set myself the modest goal of writing a BASIC program that would pass the Turing Test by learning from experience and following Asimov's Three Laws of Robotics..."

As I think back on that episode, I realize that even at the time, I didn't really expect to succeed; I just wanted to see how far I could get and what would happen if I tried. And it's not clear to me in retrospect that it wasn't worth a day's work: at the least, I learned something about how to write tokenizers and user interfaces! Certainly I've spent many, many days less usefully. For similar reasons, it's probably worth it for budding computer scientists to spend a few days on the P vs. NP question, even if their success probability is essentially zero: it's the only way to get a gut, intuitive feel for why the problem is hard.

Is it likewise possible that some of the AGI researchers you've met (not the creationist, but the other ones) aren't quite as stupid as they seemed? That even if they don't succeed at their stated goal (as I assume they won't), the fact that they're actually building systems and playing around with them makes it halfway plausible that they'll succeed at something?

Comment author: Eliezer_Yudkowsky 30 September 2008 03:11:00AM 7 points [-]

Scott, if the question you're asking is "Can they learn something by doing this?" and not "Can they build AGI?" or "Can they build FAI?" a whole different standard applies. You can also learn something by trying to take apart an alarm clock.

Much of life consists of holding yourself to a high enough standard that you actually make an effort. If you're going to learn, just learn - get a textbook, try problems at the appropriate difficult-but-not-too-hard level. If you're going to set out to accomplish something, don't bait-and-switch to the "Oh, but I'll learn something even if I fail" when it looks like you might fail. Yoda was right: If you're going to do something, set out to do it, don't set out to try.

Comment author: michael_vassar 30 September 2008 04:37:00AM 8 points [-]

Eliezer: I'm pretty sure that MANY very smart people learn more from working on hard problems and failing quite frequently than from reading textbooks and practicing easy problems. Both should be part of an intellectual diet.

Comment author: Phil_Goetz 30 September 2008 05:21:00AM 4 points [-]

"I think that Eliezer is dangerous, because he thinks he's smart enough to make a safe AI."

As far as I can tell, he's not going to go and actually make that AI until he has a formal proof that the AI will be safe. Now, because of the verification problem, that's no surefire guarantee that it will be safe, but it makes me pretty comfortable.


Good grief.

Considering the nature of the problem, and the nature of Eliezer, it seems more likely to me that he will convince himself that he has proven that his AI will be safe, than that he will prove that his AI will be safe. Furthermore, he has already demonstrated (in my opinion) that he has higher confidence than he should that his notion of "safe" (eg., CEV) is a good one.

Many years ago, I made a mental list of who, among the futurists I knew, I could imagine "trusting" with godlike power. At the top of the list were Anders Sandberg and Sasha Chislenko. This was not just because of their raw brainpower - although they are/were in my aforementioned top ten list - but because they have/had a kind of modesty, or perhaps I should say a sense of humor about life, that would probably prevent them from taking giant risks with the lives of, and making decisions for, the rest of humanity, based on their equations.

Eliezer strikes me more as the kind of person who would take risks and make decisions for the rest of humanity based on his equations.

To phrase this in Bayesian terms, what is the expected utility of Eliezer creating AI over many universes? Even supposing he has a higher probability of creating beneficial friendly AI than anyone else, that doesn't mean he has a higher expected utility. My estimation is that he excels on the upside - which is what humans focus on - having a good chance of making good decisions. But my estimation is also that, in the possible worlds in which he comes to a wrong conclusion, he has higher chances than most other "candidates" do of being confident and forging ahead anyway, and of not listening to others who point out his errors. It doesn't take (proportionally) many such possible worlds to cancel out the gains on the upside.

Comment author: Anna_Salamon_and_Steve_Rayhawk 30 September 2008 05:51:00AM 2 points [-]

Phil, your analysis depends a lot on what the probabilities are without Eliezer.

If Eliezer vanished, what probabilities would you assign to: (A) someone creating a singularity that removes most/all value from this part of the universe; (B) someone creating a positive singularity; (C) something else (e.g., humanity staying around indefinitely without a technological singularity)? Why?

Comment author: Phil_Goetz 30 September 2008 03:15:00PM -1 points [-]

There is a terrible complacency among people who have assimilated the ontological perspectives of mathematical physics and computer science, and the people who do object to the adequacy of naturalism are generally pressing in a retrograde direction.

Elaborate, please?

Comment author: Phil_Goetz 30 September 2008 03:31:00PM 0 points [-]

Anna, I haven't assigned probabilities to those events. I am merely comparing Eliezer to various other people I know who are interested in AGI. Eliezer seems to think that the most important measure of his ability, given his purpose, is his intelligence. He scores highly on that. I think the appropriate measure is something more like [intelligence * precision / (self-estimate of precision)], and I think he scores low on that relative to other people on my list.

Comment author: Nick_Tarleton 30 September 2008 03:42:00PM 4 points [-]

Phil, that penalizes people who believe themselves to be precise even when they're right. Wouldn't, oh, intelligence / (1 + |precision - (self-estimate of precision)|) be better?

What do you mean by "precision", anyway?

Comment author: Tim_Tyler 30 September 2008 07:06:00PM 3 points [-]

Re: GIT - the main connections I see between Gödel's incompleteness theorem and AI are that Hofstadter was interested in both, and Penrose was confused about both. What does it have to do with reductionism?

Comment author: Phil_Goetz 30 September 2008 09:01:00PM -2 points [-]

Phil, that penalizes people who believe themselves to be precise even when they're right. Wouldn't, oh, intelligence / (1 + |precision - (self-estimate of precision)|) be better?

Look at my little equation again. It has precision in the numerator, for exactly that reason.

What do you mean by "precision", anyway?

Precision in a machine-learning experiment (as in "precision and recall") means the fraction of the time that the answer your algorithm comes up with is a good answer. It ignores the fraction of the time that there is a good answer that your algorithm fails to come up with.
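This usage of "precision" can be made concrete with a minimal sketch (the function name and example data here are invented for illustration, not anything from the thread):

```python
# Toy illustration of precision vs. recall in the machine-learning
# sense described above.

def precision_recall(predicted, relevant):
    """Precision: fraction of the algorithm's answers that are good.
    Recall: fraction of the good answers the algorithm actually found."""
    predicted, relevant = set(predicted), set(relevant)
    hits = len(predicted & relevant)
    precision = hits / len(predicted) if predicted else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# An algorithm that answers rarely but accurately: perfect precision,
# poor recall, since recall counts the good answers it failed to produce.
p, r = precision_recall(predicted={"a", "b"}, relevant={"a", "b", "c", "d"})
# p == 1.0, r == 0.5
```

So a high-precision thinker, in this sense, is one whose conclusions are usually right, regardless of how many questions they decline to answer.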

Comment author: michael_vassar 30 September 2008 09:31:00PM 3 points [-]

Phil: Your estimate rewards precision and penalizes self estimate of precision. A person of a given level of precision should be rewarded for believing their precision to be what it is, not for believing it to be low. If you had self-estimate of precision in the numerator that would negate Nick's claim, but then you could drop the term from both sides.

Comment author: Phil_Goetz 30 September 2008 10:08:00PM 0 points [-]

Mike: You're right - that is a problem. I think that in this case, underestimating your own precision by e is better than overestimating your precision by e (hence not using Nick's equation).

But it's just meant to illustrate that I consider overconfidence to be a serious character flaw in a potential god.
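The asymmetry the commenters are arguing over is easy to see with toy numbers (the intelligence, precision, and self-estimate values below are invented for illustration; the formulas are the ones proposed above):

```python
# Toy comparison of the two proposed scores.

def goetz_score(intelligence, precision, self_estimate):
    # intelligence * precision / (self-estimate of precision)
    return intelligence * precision / self_estimate

def tarleton_score(intelligence, precision, self_estimate):
    # intelligence / (1 + |precision - self-estimate of precision|)
    return intelligence / (1 + abs(precision - self_estimate))

# True precision 0.8; compare underestimating it (0.6) with
# overestimating it (1.0) by the same margin:
print(goetz_score(100, 0.8, 0.6))     # ~133.3: modesty is rewarded
print(goetz_score(100, 0.8, 1.0))     # 80.0: overconfidence is penalized
print(tarleton_score(100, 0.8, 0.6))  # ~83.3: both errors cost
print(tarleton_score(100, 0.8, 1.0))  # ~83.3: the same amount
```

This is Vassar's objection in miniature: under the first score, simply believing your precision to be lower than it is raises the score, while the second penalizes misestimation in either direction equally.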

Comment author: pdf23ds 30 September 2008 10:08:00PM 0 points [-]

Phil, you might already understand, but I was talking about formal proofs, so your main worry wouldn't be the AI failing, but the AI succeeding at the wrong thing. (I.e., your model's bad.) Is that what your concern is?

Comment author: MichaelAnissimov 30 September 2008 10:18:00PM 0 points [-]

Besides A2I2, what companies are claiming they'll reach general intelligence in five years?

Comment author: Phil_Goetz 01 October 2008 12:14:00AM 1 point [-]

Phil, you might already understand, but I was talking about formal proofs, so your main worry wouldn't be the AI failing, but the AI succeeding at the wrong thing. (I.e., your model's bad.) Is that what your concern is?

Yes. Also, the mapping from the world of the proof into reality may obliterate the proof.

Additionally, the entire approach is reminiscent of someone in 1800 who wants to import slaves to America saying, "How can I make sure these slaves won't overthrow their masters? I know - I'll spend years researching how to make REALLY STRONG leg irons, and how to mentally condition them to lack initiative." That approach was not a good long-term solution.

Comment author: Nick_Tarleton 01 October 2008 12:32:00AM 5 points [-]

Phil... I'm sorry, but that's an indescribably terrible analogy.

CFAI: Beyond the adversarial attitude

Comment author: Dreaded_Anomaly 05 March 2011 07:51:43PM 3 points [-]

No. I'm about to tell you that I happened to be seated at the same table as this guy at lunch, and I made some kind of comment about evolutionary psychology, and he turned out to be...

...a creationist.

The lead AI researcher at my university spends his spare time trying to debunk evolution with such antiquated ideas as Wallace's Paradox and trying to create logical proofs of the Christian God's existence. It's rather frightening, to be honest.

Comment author: [deleted] 19 February 2012 10:30:28PM 0 points [-]

(Those who didn't like the last two posts should definitely skip this one.)

I disliked “The Level Above Mine”, had mixed feelings about “Competent Elites”, but I did like this post.

Comment author: FeepingCreature 28 December 2012 06:48:36AM 4 points [-]

Hold on.

Does this mean you can grade people accurately and automatically by blind-testing their ability to tell apart levels?

Comment author: Qiaochu_Yuan 28 December 2012 08:19:07AM 4 points [-]

Shh! There are some tests that become less effective the more people talk about them...

Comment author: MixedNuts 28 December 2012 11:00:51AM 0 points [-]

Maximal ability to tell levels apart is a function of level, so you can't game it much to get a better grade. You can still game it to get a worse one.

Comment author: Qiaochu_Yuan 28 December 2012 11:15:34AM *  2 points [-]

Hmm. What I had in mind was that you could look at how someone at a higher level than you tells levels apart... but your ability to do that is still constrained by your ability to distinguish levels, so I suppose the best you can do with that strategy is to luck out on the choice of person you cheat off of.

Comment author: Discredited 24 December 2013 06:51:31AM 1 point [-]

Mirror of the Bonobo Conspiracy webcomic: #569: Easy once you know

Comment author: adamzerner 13 September 2014 07:13:18PM 0 points [-]

Even an average human engineer is at least six levels higher than the blind idiot god, natural selection, that managed to cough up the Artificial Intelligence called humans, by retaining its lucky successes and compounding them.

You say "at least six levels higher". This strikes me as rather precise. Does that mean you could articulate what these levels of intelligence are (at least roughly)? If so, I'd be interested in hearing it. And can you (at least roughly) articulate levels of intelligence above the average engineer? I'd be interested in hearing that as well.

Comment author: ciphergoth 03 July 2015 02:32:32PM 0 points [-]

Discovery Institute Fellow Erik J Larson

He has held the title of Chief Scientist in an AI-based startup whose first customer was Dell (Dell Legal), Senior Research Engineer at AI company 21st Century Technologies in Austin, worked as an NLP consultant for Knowledge Based Systems, Inc., and has consulted with other companies in Austin, helping to design AI systems that solve problems in natural language understanding.

Larson's been writing plenty of stuff critical of AI risk discussion lately, apparently even the Atlantic is keen to hear the creationist viewpoint.