Followup to: The Level Above Mine, Competent Elites

(Those who didn't like the last two posts should definitely skip this one.)

I recall one fellow, who seemed like a nice person, and who was quite eager to get started on Friendly AI work, to whom I had trouble explaining that he didn't have a hope.  He said to me:

"If someone with a Masters in chemistry isn't intelligent enough, then you're not going to have much luck finding someone to help you."

It's hard to distinguish the grades above your own.  And even if you're literally the best in the world, there are still electron orbitals above yours—they're just unoccupied.  Someone had to be "the best physicist in the world" during the time of Ancient Greece.  Would they have been able to visualize Newton?

At one of the first conferences organized around the tiny little subfield of Artificial General Intelligence, I met someone who was heading up a funded research project specifically declaring AGI as a goal, within a major corporation.  I believe he had people under him on his project.  He was probably paid at least three times as much as I was paid (at that time).  His academic credentials were superior to mine (what a surprise) and he had many more years of experience.  He had access to lots and lots of computing power.

And like nearly everyone in the field of AGI, he was rushing forward to write code immediately—not holding off and searching for a sufficiently precise theory to permit stable self-improvement.

In short, he was just the sort of fellow that...  Well, many people, when they hear about Friendly AI, say:  "Oh, it doesn't matter what you do, because [someone like this guy] will create AI first."  He's the sort of person about whom journalists ask me, "You say that this isn't the time to be talking about regulation, but don't we need laws to stop people like this from creating AI?"

"I suppose," you say, your voice heavy with irony, "that you're about to tell us, that this person doesn't really have so much of an advantage over you as it might seem.  Because your theory—whenever you actually come up with a theory—is going to be so much better than his.  Or," your voice becoming even more ironic, "that he's too mired in boring mainstream methodology—"

No.  I'm about to tell you that I happened to be seated at the same table as this guy at lunch, and I made some kind of comment about evolutionary psychology, and he turned out to be...

...a creationist.

This was the point at which I really got, on a gut level, that there was no test you needed to pass in order to start your own AGI project.

One of the failure modes I've come to better understand in myself since observing it in others, is what I call, "living in the should-universe".  The universe where everything works the way it common-sensically ought to, as opposed to the actual is-universe we live in.  There's more than one way to live in the should-universe, and outright delusional optimism is only the least subtle.  Treating the should-universe as your point of departure—describing the real universe as the should-universe plus a diff—can also be dangerous.

Up until the moment when yonder AGI researcher explained to me that he didn't believe in evolution because that's not what the Bible said, I'd been living in the should-universe.  In the sense that I was organizing my understanding of other AGI researchers as should-plus-diff.  I saw them, not as themselves, not as their probable causal histories, but as their departures from what I thought they should be.

In the universe where everything works the way it common-sensically ought to, everything about the study of Artificial General Intelligence is driven by the one overwhelming fact of the indescribably huge effects: initial conditions and unfolding patterns whose consequences will resound for as long as causal chains continue out of Earth, until all the stars and galaxies in the night sky have burned down to cold iron, and maybe long afterward, or forever into infinity if the true laws of physics should happen to permit that.  To deliberately thrust your mortal brain onto that stage, as it plays out on ancient Earth the first root of life, is an act so far beyond "audacity" as to set the word on fire, an act which can only be excused by the terrifying knowledge that the empty skies offer no higher authority.

It had occurred to me well before this point, that most of those who proclaimed themselves to have AGI projects, were not only failing to be what an AGI researcher should be, but in fact, didn't seem to have any such dream to live up to.

But that was just my living in the should-universe.  It was the creationist who broke me of that.  My mind finally gave up on constructing the diff.

When Scott Aaronson was 12 years old, he: "set myself the modest goal of writing a BASIC program that would pass the Turing Test by learning from experience and following Asimov's Three Laws of Robotics.  I coded up a really nice tokenizer and user interface, and only got stuck on the subroutine that was supposed to understand the user's question and output an intelligent, Three-Laws-obeying response."  It would be pointless to try and construct a diff between Aaronson12 and what an AGI researcher should be.  You've got to explain Aaronson12 in forward-extrapolation mode:  He thought it would be cool to make an AI and didn't quite understand why the problem was difficult.

It was yonder creationist who let me see AGI researchers for themselves, and not as departures from my ideal.

A creationist AGI researcher?  Why not?  Sure, you can't really be enough of an expert on thinking to build an AGI, or enough of an expert at thinking to find the truth amidst deep dark scientific chaos, while still being, in this day and age, a creationist.  But to think that his creationism is an anomaly, is should-universe thinking, as if desirable future outcomes could structure the present.  Most scientists have the meme that a scientist's religion doesn't have anything to do with their research. Someone who thinks that it would be cool to solve the "human-level" AI problem and create a little voice in a box that answers questions, and who dreams they have a solution, isn't going to stop and say:  "Wait!  I'm a creationist!  I guess that would make it pretty silly for me to try and build an AGI."

The creationist is only an extreme example.  A much larger fraction of AGI wannabes would speak with reverence of the "spiritual" and the possibility of various fundamental mentals. If someone lacks the whole cognitive edifice of reducing mental events to nonmental constituents, the edifice that decisively indicts the entire supernatural, then of course they're not likely to be expert on cognition to the degree that would be required to synthesize true AGI.  But neither are they likely to have any particular idea that they're missing something.  They're just going with the flow of the memetic water in which they swim.  They've got friends who talk about spirituality, and it sounds pretty appealing to them.  They know that Artificial General Intelligence is a big important problem in their field, worth lots of applause if they can solve it.  They wouldn't see anything incongruous about an AGI researcher talking about the possibility of psychic powers or Buddhist reincarnation.  That's a separate matter, isn't it?

(Someone in the audience is bound to observe that Newton was a Christian.  I reply that Newton didn't have such a difficult problem, since he only had to invent first-year undergraduate stuff.  The two observations are around equally sensible; if you're going to be anachronistic, you should be anachronistic on both sides of the equation.)

But that's still all just should-universe thinking.

That's still just describing people in terms of what they aren't.

Real people are not formed of absences.  Only people who have an ideal can be described as a departure from it, the way that I see myself as a departure from what an Eliezer Yudkowsky should be.

The really striking fact about the researchers who show up at AGI conferences, is that they're so... I don't know how else to put it...

...ordinary.

Not at the intellectual level of the big mainstream names in Artificial Intelligence.  Not at the level of John McCarthy or Peter Norvig (both of whom I've met).

More like... around, say, the level of above-average scientists, which I yesterday compared to the level of partners at a non-big-name venture capital firm.  Some of whom might well be Christians, or even creationists if they don't work in evolutionary biology.

The attendees at AGI conferences aren't literally average mortals, or even average scientists.  The average attendee at an AGI conference is visibly one level up from the average attendee at that random mainstream AI conference I talked about yesterday.

Of course there are exceptions.  The last AGI conference I went to, I encountered one bright young fellow who was fast, intelligent, and spoke fluent Bayesian.  Admittedly, he didn't actually work in AGI as such.  He worked at a hedge fund.

No, seriously, there are exceptions.  Steve Omohundro is one example of someone who—well, I'm not exactly sure of his level, but I don't get any particular sense that he's below Peter Norvig or John McCarthy.

But even if you just poke around on Norvig or McCarthy's website, and you've achieved sufficient level yourself to discriminate what you see, you'll get a sense of a formidable mind.  Not in terms of accomplishments—that's not a fair comparison with someone younger or tackling a more difficult problem—but just in terms of the way they talk.  If you then look at the website of a typical AGI-seeker, even one heading up their own project, you won't get an equivalent sense of formidability.

Unfortunately, that kind of eyeball comparison does require that one be of sufficient level to distinguish those levels.  It's easy to sympathize with people who can't eyeball the difference:  If anyone with a PhD seems really bright to you, or any professor at a university is someone to respect, then you're not going to be able to eyeball the tiny academic subfield of AGI and determine that most of the inhabitants are above-average scientists for mainstream AI, but below the intellectual firepower of the top names in mainstream AI.

But why would that happen?  Wouldn't the AGI people be humanity's best and brightest, answering the greatest need?  Or at least those daring souls for whom mainstream AI was not enough, who sought to challenge their wits against the greatest reservoir of chaos left to modern science?

If you forget the should-universe, and think of the selection effect in the is-universe, it's not difficult to understand.  Today, AGI attracts people who fail to comprehend the difficulty of AGI.  Back in the earliest days, a bright mind like John McCarthy would tackle AGI because no one knew the problem was difficult.  In time and with regret, he realized he couldn't do it.  Today, someone on the level of Peter Norvig knows their own competencies, what they can do and what they can't; and they go on to achieve fame and fortune (and Research Directorship of Google) within mainstream AI.

And then...

Then there are the completely hopeless ordinary programmers who wander onto the AGI mailing list wanting to build a really big semantic net.

Or the postdocs moved by some (non-Singularity) dream of themselves presenting the first "human-level" AI to the world, who also dream an AI design, and can't let go of that.

Just normal people with no notion that it's wrong for an AGI researcher to be normal.

Indeed, like most normal people who don't spend their lives making a desperate effort to reach up toward an impossible ideal, they will be offended if you suggest to them that someone in their position needs to be a little less imperfect.

This misled the living daylights out of me when I was young, because I compared myself to other people who declared their intentions to build AGI, and ended up way too impressed with myself; when I should have been comparing myself to Peter Norvig, or reaching up toward E. T. Jaynes.  (For I did not then perceive the sheer, blank, towering wall of Nature.)

I don't mean to bash normal AGI researchers into the ground.  They are not evil.  They are not ill-intentioned.  They are not even dangerous, as individuals.  Only the mob of them is dangerous, that can learn from each other's partial successes and accumulate hacks as a community.

And that's why I'm discussing all this—because it is a fact without which it is not possible to understand the overall strategic situation in which humanity finds itself, the present state of the gameboard.  It is, for example, the reason why I don't panic when yet another AGI project announces they're going to have general intelligence in five years.  It also says that you can't necessarily extrapolate the FAI-theory comprehension of future researchers from present researchers, if a breakthrough occurs that repopulates the field with Norvig-class minds.

Even an average human engineer is at least six levels higher than the blind idiot god, natural selection, that managed to cough up the Artificial Intelligence called humans, by retaining its lucky successes and compounding them.  And the mob, if it retains its lucky successes and shares them, may also cough up an Artificial Intelligence, with around the same degree of precise control.  But it is only the collective that I worry about as dangerous—the individuals don't seem that formidable.

If you yourself speak fluent Bayesian, and you distinguish a person-concerned-with-AGI as speaking fluent Bayesian, then you should consider that person as excepted from this whole discussion.

Of course, among people who declare that they want to solve the AGI problem, the supermajority don't speak fluent Bayesian.

Why would they?  Most people don't.

 

Part of the sequence Yudkowsky's Coming of Age

Next post: "The Magnitude of His Own Folly"

Previous post: "Competent Elites"

96 comments

I am a totally average student. Is it worth it for me to learn Bayesian reasoning, and might this investment help me in my life (as a venture capitalist, as a truth seeker)?

Lithuania.

Your decision to try and learn to become more rational already demonstrates that you are not average.

Try to learn as much as you can, about as many fields of inquiry as you can, including probability.

Your decision to try and learn to become more rational already demonstrates that you are not average.

Regardless of whether or not it's true, this is a dangerous and self-reinforcing thought.

Oldreader, you can go on for quite a distance before you need Bayesian math, but if you can understand it without incredible difficulty, then it is worthwhile to learn the arithmetical basics even before you begin to study the less technical and more practical advice.

My faith in Omohundro was shaken a bit by the "weird psi experiments" reference - here, at 1:17:45.

Omohundro gently corrected a mathematical misapprehension I had about Godel's Theorem, long after I thought I was done with it. I don't forget that sort of thing. (Plan to write it up here eventually.)

[anonymous] (10y):
Have you written this up yet? I'd be interested in reading it.

Frankly, I felt a bit like I did when Klaatu explained that the power of resurrection was "reserved to the Almighty Spirit" - in "The Day the Earth Stood Still". Except that, that time, it turned out that there was a good explanation.

I find the following passage spine tingling and goose bump inducing, and it's not the first time:

In the universe where everything works the way it common-sensically ought to, everything about the study of Artificial General Intelligence is driven by the one overwhelming fact of the indescribably huge effects: initial conditions and unfolding patterns whose consequences will resound for as long as causal chains continue out of Earth, until all the stars and galaxies in the night sky have burned down to cold iron, and maybe long afterward, or forever into infinity if the true laws of physics should happen to permit that. To deliberately thrust your mortal brain onto that stage, as it plays out on ancient Earth the first root of life, is an act so far beyond "audacity" as to set the word on fire, an act which can only be excused by the terrifying knowledge that the empty skies offer no higher authority.

Are the psychosomatic effects of your writing intentional; do you consider, or even aim for, the possibility that, as a result, somewhere, someone would be having a brief episode of being involuntarily pulled outside of themselves and realizing the terrifying immensity of it all?

Keep it up, because I don't think you can be reminded often enough of the realities of reality.

The benefits humanity has received from innovations have mostly come about through gradual improvements in existing products rather than through huge breakthroughs. For these kinds of innovations, 50 people with the minimal IQ needed to get a master's degree in chemistry (even if each of them believes that the Bible is the literal word of God) are far more valuable than one atheist with an Eliezer-level IQ.

Based on my limited understanding of AI, I suspect that AGI will come about through small continuous improvements in services such as Google search. Goo... (read more)

[anonymous] (12y):
I certainly hope Google does not Foom... Especially since their idea seems orthogonal to AGI.

Eliezer, how do you envision the realistic consequences of mob-created AGI? Do you see it creeping up piece by piece with successive improvements until it reaches a level beyond our control,

Or do you see it as something that will explosively take over once one essential algorithm has been put into place, and that could happen any day?

If a recursively self-improving AGI were created today, using technology with the current memory storage and speed, and it had access to the internet, how much damage do you suppose it could do?

I suspect that AGI will come about through small continuous improvements in services such as Google search

Google seem to be making a show of not trying.

Another possibility is stockmarket superintelligence - see my The Awakening Marketplace.

They didn't skip it.

This is the most interesting and intriguing blog post on any subject I've read in several months.

James wrote:

"The benefits humanity has received from innovations have mostly come about through gradual improvements in existing products rather then through huge breakthroughs. For these kinds of innovations 50 people with the minimal IQ needed to get a masters degree in chemistry (even if each of them believes that the Bible is the literal word of God) are far more valuable than one atheist with an Eliezer level IQ."

Would you really be surprised by a 50-fold productivity difference between low-end (those just barely able to even attempt a task) and high-end mathematicians or computer programmers in developing new techniques and algorithms? Even on ordinary corporate software development projects there are order of magnitude differences in productivity on many tasks, differences which are masked by allocation of people to the tasks where they have the greatest marginal productivity.

There is a big difference between:

  1. 4 geniuses with 200 passable assistants for grunt work will do better than 6 geniuses.

  2. 2000 passable programmers will do better than 4 geniuses and 200 passable assistants.

Basic research. Fundamental research. Frontier research; stuff you don't see turning into applied research until relatively late, perhaps a decade or three later.

Eliezer: If you are a level below Jaynes, Evolution is at least a hundred levels below the average engineer. What happened to the small gap between Village Idiot and Einstein?

Lara Foster: I'm pretty sure that a recursively self-improving AGI with capabilities that were surprisingly above those of an IQ 130 human as frequently as they were below those of an IQ 130 human would have been able to develop into something irresistibly powerful if created a decade ago. I'd expect that this was possible two decades ago. Three decades is pushing it a bit, but just a bit.

Luke_A_Somers (12y):
We are trying to solve a much harder problem, and we can reasonably expect to solve it with a great deal less time and effort.
VAuroch (10y):
I suspect the levels are logarithmic.

Carl Shulman,

Under either your (1) or (2) passable programmers contribute to advancement, so Eliezer's Masters in chemistry guy can (if he learns enough programming to become a programming grunt) help advance the AGI field.

The best way to judge productivity differences is to look at salaries. Would Google be willing to pay Eliezer 50 times more than what it pays its average engineer? I know that managers are often paid more than 50 times what average employees are, but do pure engineers ever get 50 times more? I really don't know.

taryneast (13y):
No, but not because they're not worth it; rather, because of market forces. Engineers are often willing to work for only a few times the average salary, even if they are worth ten times more. A classic article on this phenomenon, and also on the difference between "lots of average programmers" vs. "one or two awesome programmers", is Joel Spolsky's article, Hitting the high notes.

I'm pretty confident that 6 geniuses will do better than 2000 passable programmers in the long term and in most fields, though worse than 4 geniuses and 200 passable programmers.

I can't recall ever affirming that the chance is negligible that religionists enter the AGI field. Not just recently, I began to anticipate they would be among the first encountered expressing that they act on one possibility that they are confined and sedated, even given a toy universe that is matryoshka dolls indefinitely all the way in and all the way out for them.

James Miller: Temperamentally, managers who get 50 times more from effective companies have the skills of very good engineers plus a whole separate skill set, also highly developed, as managers. Also, managers paid 50 times more may be motivated not to leave for another company, but engineers paid 50 times more may, by temperament, be motivated to instead quit and dabble in programming for open-source projects. The market pays excellent managers with excellent engineering skills 50 times more than a typical engineer's salary as start-up founders once t... (read more)

Considering the wads of cash religion$ control, I wouldn't be surprised to find myself in a future where some sort of an Artificial General Irrationality project exists recursively improving its Worship Module.

"If you are a level below Jaynes, Evolution is at least a hundred levels below the average engineer."

I think there's a case to be made that evolution, sped up, say, a million times over, or ten, might be only several levels below the average human. (Especially if we're only considering evolution of multicellular organisms with sexual recombination, which I suppose might be analogous as only considering software development using high level languages.) And I'm willing to grant that million or ten just as a matter of conversational convenience.

I agree there should be a strong prior belief that anyone pursuing AGI at our current level of overall human knowledge, is likely quite ordinary or at least failing to make reasonably obvious conclusions.

"The benefits humanity has received from innovations have mostly come about through gradual improvements in existing products rather then through huge breakthroughs."

Except that the gradual improvements cannot occur without the breakthroughs.

"Eliezer: If you are a level below Jaynes, Evolution is at least a hundred levels below the average engineer. What happened to the small gap between Village Idiot and Einstein?"

Small differences can have very big effects.

"Under either your (1) or (2) passable programmers contribute to advanceme... (read more)

"the most highly creative and productive people (and investors) are grossly underpaid relative to the majority of people."

Do you mean to say that investors are underpaid, that investors aren't creative and productive people, or that investors aren't people? Hehe.

PrawnOfFate (11y):
I'd go for not creative.

Michael Vassar, you've quietly slid from engineers to programmers. Other kinds of engineers need a lot more money to make it a hobby. Maybe they make up for it with less variation in ability, but I doubt it. Even if you didn't mean to talk about other engineers, their situation needs explaining.

Speaking of creationism and AI, I always liked the dedication of Gerry Sussman's dissertation:

"To the Maharal of Prague, who was the first to realize that the statement 'God created man in His own image' is recursive"

Some context here. Sussman is definitely an above-average AI scientist.

Is it possible that humans might create blight power AI? Sure. Is it possible that a monkey banging away on a keyboard might create the complete works of Shakespeare? Sure. I'm not going to hold my breath, though.

If groups of humans do manage to cobble together an AGI out of half baked theories and random trial and error, it is likely to have as much hope of recursively self-improving easily as a singular human performing neurosurgery on themselves. Even given the tools to alter neural connections and weightings without damage, I don't see much hope of quic... (read more)

When experienced, celebrated AI researchers consistently say human-level AI looks a long way off, you say that means little - how could they know? And then you feel you have the sorting-hat vision to just chat with someone for a few minutes and know they couldn't possibly contribute to such progress.

Non-reductionists always have to be judged according to the worst that can be dredged up from their ranks...
I notice that you're using Reductionist language to express your thoughts, splitting up reality into various smaller concepts that then interact.

Perhaps you would care to express the best of Non-reductionism in non-reductive language, as a means of demonstration?

Take your time.

Eliezer: If you are a level below Jaynes, Evolution is at least a hundred levels below the average engineer. What happened to the small gap between Village Idiot and Einstein?

I was thinking, "Can one human engineer put forth an effort equivalent to a billion years of optimization by an evolution in one year? Doesn't seem like it. Million years? Sounds about right." So I said, "six levels". This isn't the same sort of level I use to compare myself to Jaynes, but then you couldn't expect that of a comparison between humans and evol... (read more)

Eliezer: One comment is that I don't particularly trust your capability to assess the insights or mental capabilities of people who think very differently from yourself. It may be that the people whose intelligence you most value (who you rate as residing on "high levels", to quasi-borrow your terminology) are those who are extremely talented at the kind of thinking you personally most value. Yet, there may be many different sorts of intelligent human thinking, some of which you may not excel at, may understand relatively little of, and may not... (read more)

To all claiming that the judgment is too subtle to carry out, agree or disagree: "Someone could have the knowledge and intelligence to synthesize a mind from scratch on current hardware, reliably as an individual rather than by luck as one member of a mob, and yet be a creationist."

Obviously I don't think my judgment is perfect; but I'm not trying to use it to make subtle distinctions between 20 almost-equally-qualified candidates during a job interview. So the question is, is such judgment good enough that it can make gross distinctions correctly, most of the time?

Robin Hanson correctly pointed out yesterday that if I find that people generally rated as top names seem visibly more intelligent to me, this doesn't necessarily verify either my own judgment, or the intelligence of these people; it may just mean that I tend to intuitively judge "intelligence" using the same heuristics that others do, which explains why the people were accepted into hedge funds, why various researchers are accepted as big-names, etc.

But I don't know how plausible that really is. For one thing, talking with Steve Omohundro or Sebastian Thrun about math, and judging them by that, th... (read more)

if you aren't a p-zombie I just happen to be a p-zombie.

Did you read Eliezer's Generalized Anti-Zombie Principle?

Rather, what I oppose is reductionism, the dogmatic belief that the Standard Model can explain everything. (Never mind that it can't even explain all of known physics...) Most (all?) self-described reductionists believe the Standard Model is incomplete and needs something more to reconcile relativity with quantum mechanics. They just think the complete Unified Theory of Everything will have reductionist explanations for everything.

A sensible reductionist theory doesn't claim that everything is reducible to something more basic. It claims that everything is reducible to a set of fundamental entities, (which are not in turn reducible to anything else,) governed by consistent laws.

Scenario:

A potentially hostile foreign country is making tremendous progress in AGI; they've already appointed it to several governmental and research positions and are making a huge sucking noise on the money market thanks to their baby/juvenile-AGI that is about to turn mature any month/week/day/hour now.

This calls for an AGI Manhattan Project!

What problems does the project director face? What is the optimum number of geniuses working on AGI? Can there be too many? Where do we get them from? How do we choose them?

How was the real Manhattan Project structured? How wide was the top of the pyramid? How many individuals contributed to the key insights and breakthroughs?

"baby/juvenile-AGI that is about to turn mature any month/week/day/hour now.

This calls for an AGI Manhattan Project!"

Probably too late for a Manhattan Project to be the appropriate response at that point. Negotiation or military action seem more feasible.

Eliezer said:


To all claiming that the judgment is too subtle to carry out, agree or disagree: "Someone could have the knowledge and intelligence to synthesize a mind from scratch on current hardware, reliably as an individual rather than by luck as one member of a mob, and yet be a creationist."


Strongly agree.

I'm not making any specific judgments about the particular Creationist you have in mind here (and I'm pretty sure I know who you mean)... but I see no reason to believe that Creationism renders an individual unable to solve the science a... (read more)

People with apparently irrational religious views have had major insights into technical areas of philosophy and into the theory of rationality:

Thomas Bayes, Robert Aumann, Saul Kripke, Hilary Putnam.

I'm sure there are others, but these are the best known examples. Putnam was also a Maoist for a while. A number of top German scientists worked for the Nazis, having seen their Jewish colleagues chased out of their university positions.

Kenny (11y):
Kary Mullis denied that AIDS is caused by HIV. I found these claims of his plausible after first reading his book "Dancing Naked in the Mind Field". I'm wary of too easily dismissing conspiracy theories from intelligent people; take the anti-salt science reversal as a recently widely-discussed example.
Desrtopa (11y):
That's what "AIDS denier" generally means. Keep in mind that more intelligent people are more likely to be clever arguers than unintelligent people, so their non-mainstream views will tend to sound more convincing. How convincing an intelligent person sounds when discussing a conspiracy theory on their own, without feedback from another intelligent person informed on the mainstream contrary position, is not a good way to judge their plausibility. Ben Goldacre of Bad Science has addressed AIDS denialism, most prominently in his book, which I'd recommend checking out if you're interested in this particular issue.
katydee (11y):
Uh, you read "Dancing Naked in the Mind Field--" a book that contains stories of Mullis doing such a quantity of drugs that he forgot basic concepts like what a poem was, Mullis talking about how he believes strongly in astrology and UFOs, and an episode where he hallucinates John Wayne's voice, which causes him to start shooting his assault rifle into the woods at random in hopes of killing some kind of creature or alien-- and you concluded that Mullis's claims were plausible at all? That book struck me as incredibly strong evidence that Mullis wasn't credible.
Kenny (11y):
I don't remember him writing about strongly believing in astrology or UFOs. I also don't think his using drugs, even enough to "forget ... what a poem was", bears on his AIDS-denial claims. What I (previously) found plausible was that he claimed to be unable to find the original research providing evidence that HIV causes AIDS, and he also claimed that viruses like HIV are incredibly common and thus unlikely to cause AIDS. Coming from a Nobel Prize-winning biochemist, and also being unable to find info about the aforementioned original research, I concluded that his claims were plausible. Note that I was a teenager at this time; I had not yet been exposed to Bayesian probability, cognitive biases, or any kind of systematic info about rational thinking beyond Feynman books and similar pop-sci books. I think of myself as relatively intelligent, so I was merely pointing out that reading about AIDS-denial by Kary Mullis was not "positively crazy".
Desrtopa (11y):
Reading about it isn't "positively crazy," nor would it necessarily be to believe it given no other sources of information, but that doesn't mean it didn't take a fair amount of craziness for him to develop that position in the first place, considering how much selective interpretation of the evidence available to him it required.
Kenny (11y):
I agree, and I realize I was engaging in the kind of nitpicking I find so annoying when other commenters do it. Being an AIDS-denier is irrational.

Eli:

I don't know what it would take to synthesize a mind from scratch on current hardware, but I do think that there are creationists who would at least be significantly above my level. I don't know of any, but I do have a creationist friend who is a good enough thinker that, while I don't think he's better than me, the fact that I just happened to meet him (our parents were friends) suggests that there are other creationists who are.

I'm not sure where this sequence of posts is going, but I feel I should use the opportunity to advertise my own status as somewhere way above average and yet extremely badly positioned to use my abilities. I consider that what I should be working on is something like the Singularity Institute's agenda, but with the understanding that today's scientific ontology is radically incomplete on at least two fronts, and that fundamental new ontological ideas are therefore required. Eliezer has repeatedly made the point that getting AGI and FAI right is far more di... (read more)

Peter: I disagree. I met that friend, and he's not even the smartest creationist I have met, but he isn't even close to your level. Not remotely. I think it somewhat unlikely there are creationists at your level (Richard Smalley included) and would be astounded if there were any at mine. Well... I mean avowed and sincere biblical literalists, there might be all sorts of doctrines that could be called creationist.

what I oppose is reductionism, the dogmatic belief that the Standard Model can explain everything.

That's not what "reductionism" means - emphasis or no emphasis.

Eliezer,

Could you elaborate a little more on the danger of AGI being invented by a large crowd of mediocre researchers?

Why would it be more dangerous than an AGI breakthrough made in a single lab?

From my perspective, the more people involved in the invention, the safer it is for the whole society.

Rather, what I oppose is reductionism, the dogmatic belief that the Standard Model can explain everything.

No one who believes the current Standard Model can explain everything is a scientist... or rational... or well-educated. Or mediocrely-educated. Or even poorly-educated. Even a schoolchild should know better.

In short, I rather doubt that anyone with any credibility at all holds the belief you're talking about. You oppose a ludicrous position that is highly unlikely to exist as a vital, influential entity. It is almost certainly a strawman.

This post highlights an important disagreement I have with Eliezer.

Eliezer thinks that a group of AI scientists may be dangerous, because they aren't smart enough to make a safe AI.

I think that Eliezer is dangerous, because he thinks he's smart enough to make a safe AI.

"I think that Eliezer is dangerous, because he thinks he's smart enough to make a safe AI."

As far as I can tell, he's not going to go and actually make that AI until he has a formal proof that the AI will be safe. Now, because of the verification problem, that's no surefire guarantee that it will be safe, but it makes me pretty comfortable.

Vassar wrote:



I think it somewhat unlikely there are creationists at your level (Richard Smalley included) and would be astounded if there were any at mine. Well... I mean avowed and sincere biblical literalists, there might be all sorts of doctrines that could be called creationist.

I have no clear idea what you mean by "level" in the above...

IQ?

Demonstrated scientific or mathematical accomplishments?

Degree of agreement with your belief system? ;-)

-- Ben G

When Scott Aaronson was 12 years old, he: "set myself the modest goal of writing a BASIC program that would pass the Turing Test by learning from experience and following Asimov's Three Laws of Robotics..."

As I think back on that episode, I realize that even at the time, I didn't really expect to succeed; I just wanted to see how far I could get and what would happen if I tried. And it's not clear to me in retrospect that it wasn't worth a day's work: at the least, I learned something about how to write tokenizers and user interfaces! Certainly I've spent many, many days less usefully. For similar reasons, it's probably worth it for budding computer scientists to spend a few days on the P vs. NP question, even if their success probability is essentially zero: it's the only way to get a gut, intuitive feel for why the problem is hard.

Is it likewise possible that some of the AGI researchers you've met (not the creationist, but the other ones) aren't quite as stupid as they seemed? That even if they don't succeed at their stated goal (as I assume they won't), the fact that they're actually building systems and playing around with them makes it halfway plausible that they'll succeed at something?

Scott, if the question you're asking is "Can they learn something by doing this?" and not "Can they build AGI?" or "Can they build FAI?" a whole different standard applies. You can also learn something by trying to take apart an alarm clock.

Much of life consists of holding yourself to a high enough standard that you actually make an effort. If you're going to learn, just learn - get a textbook, try problems at the appropriate difficult-but-not-too-hard level. If you're going to set out to accomplish something, don't bait-and-switch to the "Oh, but I'll learn something even if I fail" when it looks like you might fail. Yoda was right: If you're going to do something, set out to do it, don't set out to try.

Eliezer: I'm pretty sure that MANY very smart people learn more from working on hard problems and failing quite frequently than from reading textbooks and practicing easy problems. Both should be part of an intellectual diet.

"I think that Eliezer is dangerous, because he thinks he's smart enough to make a safe AI."

As far as I can tell, he's not going to go and actually make that AI until he has a formal proof that the AI will be safe. Now, because of the verification problem, that's no surefire guarantee that it will be safe, but it makes me pretty comfortable.


Good grief.

Considering the nature of the problem, and the nature of Eliezer, it seems more likely to me that he will convince himself that he has proven that his AI will be safe, than that he will prove th... (read more)

Phil, your analysis depends a lot on what the probabilities are without Eliezer.

If Eliezer vanished, what probabilities would you assign to: (A) someone creating a singularity that removes most/all value from this part of the universe; (B) someone creating a positive singularity; (C) something else (e.g., humanity staying around indefinitely without a technological singularity)? Why?

There is a terrible complacency among people who have assimilated the ontological perspectives of mathematical physics and computer science, and the people who do object to the adequacy of naturalism are generally pressing in a retrograde direction.
Elaborate, please?

Anna, I haven't assigned probabilities to those events. I am merely comparing Eliezer to various other people I know who are interested in AGI. Eliezer seems to think that the most important measure of his ability, given his purpose, is his intelligence. He scores highly on that. I think the appropriate measure is something more like [intelligence * precision / (self-estimate of precision)], and I think he scores low on that relative to other people on my list.

Phil, that penalizes people who believe themselves to be precise even when they're right. Wouldn't, oh, intelligence / (1 + |precision - (self-estimate of precision)|) be better?

What do you mean by "precision", anyway?

Re: GIT - the main connections I see between Godel's incompleteness theorem and AI are that Hofstadter was interested in both, and Penrose was confused about both. What does it have to do with reductionism?

Phil, that penalizes people who believe themselves to be precise even when they're right. Wouldn't, oh, intelligence / (1 + |precision - (self-estimate of precision)|) be better?
Look at my little equation again. It has precision in the numerator, for exactly that reason.
What do you mean by "precision", anyway?

Precision in a machine-learning experiment (as in "precision and recall") means the fraction of the time that the answer your algorithm comes up with is a good answer. It ignores the fraction of the time that there is a good answer that your algorithm fails to come up with.
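To spell that out with the usual confusion-matrix terms (TP, FP, and FN for true positives, false positives, and false negatives):

\[
\text{precision} = \frac{TP}{TP + FP}, \qquad \text{recall} = \frac{TP}{TP + FN}
\]

Precision never looks at FN, the good answers the algorithm failed to produce; that term only enters through recall.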

Phil: Your estimate rewards precision and penalizes self-estimate of precision. A person of a given level of precision should be rewarded for believing their precision to be what it is, not for believing it to be low. If you had self-estimate of precision in the numerator that would negate Nick's claim, but then you could drop the term from both sides.

Mike: You're right - that is a problem. I think that in this case, underestimating your own precision by e is better than overestimating your precision by e (hence not using Nick's equation).

But it's just meant to illustrate that I consider overconfidence to be a serious character flaw in a potential god.

Phil, you might already understand, but I was talking about formal proofs, so your main worry wouldn't be the AI failing, but the AI succeeding at the wrong thing. (I.e., your model's bad.) Is that what your concern is?

Besides A2I2, what companies are claiming they'll reach general intelligence in five years?

Phil, you might already understand, but I was talking about formal proofs, so your main worry wouldn't be the AI failing, but the AI succeeding at the wrong thing. (I.e., your model's bad.) Is that what your concern is?
Yes. Also, the mapping from the world of the proof into reality may obliterate the proof.

Additionally, the entire approach is reminiscent of someone in 1800 who wants to import slaves to America saying, "How can I make sure these slaves won't overthrow their masters? I know - I'll spend years researching how to make REALLY STRONG leg irons, and how to mentally condition them to lack initiative." That approach was not a good long-term solution.

Phil... I'm sorry, but that's an indescribably terrible analogy.

CFAI: Beyond the adversarial attitude

No. I'm about to tell you that I happened to be seated at the same table as this guy at lunch, and I made some kind of comment about evolutionary psychology, and he turned out to be...

...a creationist.

The lead AI researcher at my university spends his spare time trying to debunk evolution with such antiquated ideas as Wallace's Paradox and trying to create logical proofs of the Christian God's existence. It's rather frightening, to be honest.

(Those who didn't like the last two posts should definitely skip this one.)

I disliked “The Level Above Mine”, had mixed feelings about “Competent Elites”, but I did like this post.

Hold on.

Does this mean you can grade people accurately and automatically by blind-testing their ability to tell apart levels?

Qiaochu_Yuan (11y):
Shh! There are some tests that become less effective the more people talk about them...
MixedNuts (11y):
Maximal ability to tell levels apart is a function of level, so you can't game it much to get a better grade. You can still game it to get a worse one.
Qiaochu_Yuan (11y):
Hmm. What I had in mind was that you could look at how someone at a higher level than you tells levels apart... but your ability to do that is still constrained by your ability to distinguish levels, so I suppose the best you can do with that strategy is to luck out on the choice of person you cheat off of.

Mirror of the Bonobo Conspiracy webcomic: #569: Easy once you know

Even an average human engineer is at least six levels higher than the blind idiot god, natural selection, that managed to cough up the Artificial Intelligence called humans, by retaining its lucky successes and compounding them.

You say "at least six levels higher". This strikes me as rather precise. Does that mean you could articulate what these levels of intelligence are (at least roughly)? If so, I'd be interested in hearing it. And can you (at least roughly) articulate levels of intelligence above the average engineer? I'd be interested in hearing that as well.

Discovery Institute Fellow Erik J Larson

He has held the title of Chief Scientist in an AI-based startup whose first customer was Dell (Dell Legal), Senior Research Engineer at AI company 21st Century Technologies in Austin, worked as an NLP consultant for Knowledge Based Systems, Inc., and has consulted with other companies in Austin, helping to design AI systems that solve problems in natural language understanding.

Larson's been writing plenty of stuff critical of AI risk discussion lately, apparently even the Atlantic is keen to hear the creationist ... (read more)

I don't mean to bash normal AGI researchers into the ground. They are not evil. They are not ill-intentioned. They are not even dangerous, as individuals. Only the mob of them is dangerous, that can learn from each other's partial successes and accumulate hacks as a community.

And that's why I'm discussing all this—because it is a fact without which it is not possible to understand the overall strategic situation in which humanity finds itself, the present state of the gameboard. It is, for example, the reason why I don't panic when yet another AGI

... (read more)