One thing that most puzzles me about Eliezer's writings on AI is his apparent belief that a small organization like MIRI is likely to be able to beat larger organizations like Google or the US Department of Defense to building human-level AI. In fact, he seems to believe such larger organizations may have no advantage at all over a smaller one, and perhaps will even be at a disadvantage. In his 2011 debate with Robin Hanson, he said:

As far as I can tell what happens when the government tries to develop AI is nothing. But that could just be an artifact of our local technological level and it might change over the next few decades. To me it seems like a deeply confusing issue whose answer is probably not very complicated in an absolute sense. Like we know why it’s difficult to build a star. You’ve got to gather a very large amount of interstellar hydrogen in one place. So we understand what sort of labor goes into a star and we know why a star is difficult to build. When it comes to building a mind, we don’t know how to do it so it seems very hard. We like query our brains to say “map us a strategy to build this thing” and it returns null so it feels like it’s a very difficult problem. But in point of fact we don’t actually know that the problem is difficult apart from being confusing. We understand the star-building problem so we know it’s difficult. This one we don’t know how difficult it’s going to be after it’s no longer confusing.

So to me the AI problem looks like a—it looks to me more like the sort of thing that the problem is finding bright enough researchers, bringing them together, letting them work on that problem instead of demanding that they work on something where they’re going to produce a progress report in two years which will validate the person who approved the grant and advance their career. And so the government has historically been tremendously bad at producing basic research progress in AI, in part because the most senior people in AI are often people who got to be very senior by having failed to build it for the longest period of time. (This is not a universal statement. I’ve met smart senior people in AI.)

But nonetheless, basically I’m not very afraid of the government because I don’t think it’s a throw warm bodies at the problem and I don’t think it’s a throw warm computers at the problem. I think it’s a good methodology, good people selection, letting them do sufficiently blue sky stuff, and so far historically the government has been tremendously bad at producing that kind of progress. (When they have a great big project to try to build something it doesn’t work. When they fund long-term research it works.)

I admit, I don't feel like I fully grasp all the reasons for the disagreement between Eliezer and myself on this issue. Some of the disagreement, I suspect, comes from slightly different views on the nature of intelligence, though I'm having trouble pinpointing what those differences might be. But some of the difference, I think, comes from the fact that I've become convinced humans suffer from a Lone Genius Bias—a tendency to over-attribute scientific and technological progress to the efforts of lone geniuses.

Disclaimer: My understanding of Luke's current strategy for MIRI is that it does not hinge on whether or not MIRI itself eventually builds AI. It seems to me that as long as MIRI keeps publishing research that could potentially help other people build FAI, MIRI is doing important work. Therefore, I wouldn't advocate anything in this post being taken as a reason not to donate to MIRI. I've donated recently, and will probably [edit: see below] continue to do so in the future.

Intelligence Explosion Microeconomics has an interesting section labeled "Returns on Population" (section 3.4) where, among other things, Eliezer says:

Although I expect that this section of my analysis will not be without controversy, it appears to the author to also be an important piece of data to be explained that human science and engineering seem to scale over time better than over population—an extra decade seems much more valuable than adding warm bodies.

Indeed, it appears to the author that human science scales ludicrously poorly with increased numbers of scientists, and that this is a major reason there hasn’t been more relative change from 1970–2010 than from 1930–1970 despite the vastly increased number of scientists. The rate of real progress seems mostly constant with respect to time, times a small factor more or less. I admit that in trying to make this judgment I am trying to summarize an overwhelmingly distant grasp on all the fields outside my own handful. Even so, a complete halt to science or a truly exponential (or even quadratic) speedup of real progress both seem like they would be hard to miss, and the exponential increase of published papers is measurable. Real scientific progress is continuing over time, so we haven’t run out of things to investigate; and yet somehow real scientific progress isn’t scaling anywhere near as fast as professional scientists are being added.

The most charitable interpretation of this phenomenon would be that science problems are getting harder and fields are adding scientists at a combined pace which produces more or less constant progress. It seems plausible that, for example, Intel adds new researchers at around the pace required to keep up with its accustomed exponential growth...

Eliezer goes on to suggest, however, that Intel is not at all typical, and proposes some other explanations, two of which ("science is inherently bounded by serial causal depth" and that scientific progress is limited by the need to wait for the last generation to die) suggest that progress doesn't scale at all with added researchers, at least past a certain point.

I'm inclined to think that Eliezer's basic claim here—that research progress scales better with time than population—is probably correct. Doubling the number of researchers working on a problem rarely means solving the problem twice as fast. However, I doubt the scaling is as ludicrously bad as Eliezer suggests. I suspect the case of Intel is fairly typical, and the "science problems are getting harder" theory of the history of science has a lot more going for it than Eliezer wants to grant.

For one thing, there seems to be a human bias in favor of attributing scientific and technological progress to lone geniuses—call it the Lone Genius Bias. In fiction, it's common for the cast to have a single "smart guy," a Reed Richards type, who does everything important in the science and technology area, pulling off miraculous achievements all by himself. (If you're lucky, this role will be shared by two characters, like Fitz-Simmons on Joss Whedon's new S.H.I.E.L.D. TV show.) Similarly, villainous plots often hinge on kidnapping one single scientist who will be able to fulfill all of the villain's technical know-how needs.

There's some reason to chalk this up to peculiarities of fiction (see the TVTropes articles on the Omnidisciplinary Scientist and, more generally, The Main Characters Do Everything). But it often seems to bleed over into perceptions of real-life scientists and engineers. Saul Kripke, in the course of making a point about proper names, once claimed that he often met people who identified Einstein as the inventor of the atom bomb.

Of course, in reality, Einstein just provided the initial theoretical basis for the atom bomb. Not only did the bomb itself require the Manhattan Project (which involved over 100,000 people) to build, but there was a fair amount of basic science that had to take place after Einstein's original statement of mass-energy equivalence in 1905 before the Manhattan Project could even be conceived of.

Or: in the popular imagination, Thomas Edison was an amazingly brilliant inventor, almost on par with Reed Richards. A contrarian view, popular among tech geeks, says that actually Edison was a jerk who got famous taking credit for other people's work, and that he depended on having a lot of other people working for him at Menlo Park. But then there's a meta-contrarian view that argues that Menlo Park was "the first industrial research lab," and industrial research labs are very important, to the point that Menlo Park itself was Edison's "major innovation." On this view, it's not Edison's fault that Lone Genius Bias leads people to misunderstand what his true contribution was.

It's easy to see, in evolutionary terms, why humans might suffer from Lone Genius Bias. In the ancestral environment, major achievements would often have been the work of a single individual. Theoretically, there might have been the occasional achievement that required the cooperation of an entire hunter-gatherer band, but major achievements were never the work of Intel-sized R&D departments or 100,000-person Manhattan Projects. (This is an instance of the more general principle that humans have trouble fully grokking complex modern societies.)

Once you know about Lone Genius Bias, you should be suspicious when you find yourself gravitating towards future scenarios where the key innovations are the work of a few geniuses. Furthermore, it's not just that big projects are more common now than they were in the ancestral environment. The tendency of major advances to be the work of large groups seems to have noticeably increased over just the last century or so, and that trend may well continue in the future.

Consider Nobel Prizes. The first Nobel Prizes were awarded in 1901. When people think of Nobel Prize winners they tend to think of unshared Nobel Prizes, like Einstein's, but in fact a Nobel Prize can be shared by up to three people. And when you look at the list of Nobel Prize winners over the years, the tendency towards giving out more and more shared prizes as time goes on is obvious.

In fact, given the way science currently works, many people find the rule that no more than three people can share a prize too restrictive. The Nobel for the discovery of the Higgs boson, for example, went to two theoreticians who predicted the particle decades ago, while ignoring the contributions of the large number of experimental scientists whose work was required to confirm the particle's existence. An IEEE Spectrum headline went as far as to state the prize "ignores how modern science works."

You can reach the same conclusion just looking at the bylines on scientific papers. The single-author scientific paper "has all but disappeared." Some of that may be due to people gaming the citation-count-as-measure-of-scientific-productivity system, but my impression is that the typical university science lab's PI (principal investigator) really couldn't be nearly as productive without their miniature army of postdocs, grad students, and paid staff. (Consider also that gaming of citation counts hasn't led to an explosion of authors-per-paper in fields like philosophy, where there are obviously fewer benefits to collaboration.)

And if you need one more argument that scientific problems are getting harder, and increasingly unlikely to be solved by lone geniuses... what does anyone honestly think the chances are that the Next Big Thing in science will come in the form of some 26-year-old publishing a few single-author papers in the same year he got his PhD?

Update: Luke's comments on this post are awesome and I recommend people read them.

My understanding of Luke's current strategy for MIRI is that it does not hinge on whether or not MIRI itself eventually builds AI.

Correct. I doubt that the most likely winning scenario involves MIRI building most of an FAI itself. (By "MIRI" I mean to include MIRI-descendents.) I helped flip MIRI to a focus on technical research because:

  • Except to the degree that public FAI progress enables others to build uFAI, humanity is better off with more FAI research on hand (than will be the case without MIRI's efforts) when it becomes clear AGI is around the corner. And right now MIRI is focused on safely publishable FAI research.
  • Compared to tech forecasting or philosophy research, technical research is more able to attract the attention of the literally-smartest young people in the world. Once they're engaged, they sometimes turn their attention to the strategic issues as well, so in the long run a focus on technical research might actually be better for strategy research than a focus on strategy research is, at least now that the basics will be laid out in Bostrom's Superintelligence book next year. Example: Paul Christiano (an IMO silver medalist who authored a paper on quantum money with Scott Aaronson before he was 20) is now one of the most useful superintelligence strategists, but was initially attracted to the area because he found the issue of cryptographic boxes for AI intellectually stimulating, and then he spent 200+ hours talking to Carl Shulman and became a good strategist.
  • There are several important strategic questions about which I only expect to get evidence by trying to build Friendly AI. E.g. how hard is FAI relative to AGI? What's the distribution of eAIs around FAIs in mind design space, and how much extra optimization power do you have to apply to avoid them? How much better can we do on the value-loading problem than Paul's brain in a box approach? How parallelizable is FAI development? How much FAI research can be done in public? To what degree can we slap Friendliness onto arbitrary AGIs, vs. having to build systems from the ground up for Friendliness? I think we'll mostly learn about these important questions by making FAI progress.
  • It's much easier to get traction with academics with technical research. Technical research is also better for gaining prestige, which helps with outreach and recruitment. E.g. within a few hours of publishing our first math result, John Baez and Timothy Gowers were discussing it on Google+.
  • The startup founder's heuristic that you should do some strategy thinking up front, but then you have to get out of your armchair and try to build the thing, and that's how you'll learn what's going to work and what's not.
  • The heuristic that ethics & safety thinking is best done by people who know the details of what they're talking about, i.e. from within a science.
  • And, well, there's some chance that MIRI really will need to try to build FAI itself, if nobody bigger and better-funded will. (Right now, better-funded outfits seem lackadaisical about FAI. I expect that to change in at least a few cases, but not particularly quickly. If AGI comes surprisingly soon, say in 20 years, then MIRI+FHI+friends might be our only shot at winning.)

Also note that MIRI has in fact spent most of its history on strategic research and movement-building, and now that those things are also being done pretty well by FHI, CEA, and CFAR, it makes sense for MIRI to focus on (what we think is) the most useful object-level thing (FAI research), especially since we have a comparative advantage there: Eliezer.

I really like both of your comments in this thread, Luke.

Also note that MIRI has in fact spent most of its history on strategic research and movement-building, and now that those things are also being done pretty well by FHI, CEA, and CFAR, it makes sense for MIRI to do (what we think is) the most useful object-level thing (FAI research), especially since we have a comparative advantage there (Eliezer).

I'm glad you mentioned this. I should clarify that most of my uncertainty about continuing to donate to MIRI in the future is uncertainty about donating to MIRI vs. one of these other organizations. To the extent that it's really important to have people at Google, the DoD, etc. be safety-conscious, I think it's possible movement building might offer better returns than technical research right now... but I'm not sure about that, and I do think the technical research is valuable.

Right; I think it's hard to tell whether donations do more good at MIRI, FHI, CEA, or CFAR — but if someone is giving to AMF then I assume they must care only about beings who happen to be living today (a Person-Affecting View), or else they have a very different model of the world than I do, one where the value of the far future is somehow not determined by the intelligence explosion.

Edit: To clarify, this isn't an exhaustive list. E.g. I think GiveWell's work is also exciting, though less in need of smaller donors right now because of Good Ventures.

There is also the possibility that they believe that MIRI/FHI/CEA/CFAR will have no impact on the intelligence explosion or the far future.

He's talking specifically about people donating to AMF. There are more things people can do than donate to AMF and donate to one of MIRI, FHI, CEA, and CFAR.

Correct.

Or simply because the quality of research is positively correlated with the ability to secure funding, and thus research that would not be done without your donations generally has the lowest expected value of all research. In the case of malaria, we need quantity; in the case of AI research, we need quality.

[anonymous]

Given the mention of Christiano above, I want to shout out one of his more important blog posts.

Increasing the quality of the far future. In principle there may be some way to have a lasting impact by making society better off for the indefinite future. I tend to think this is not very likely; it would be surprising if a social change (other than a values change or extinction) had an impact lasting for a significant fraction of civilization’s lifespan, and indeed I haven’t seen any plausible examples of such a change.

...

I think the most promising interventions at the moment are:

  1. Increase the profile of effective strategies for decision-making, particularly with respect to policy-making and philanthropy.
[This comment is no longer endorsed by its author]

They could also reasonably believe that marginal donations to the organizations listed would not reliably influence an intelligence explosion in a way that would have significant positive impact on the value of the far future. They might also believe that AMF donations would have a greater impact on potential intelligence explosions (for example, because an intelligence explosion is so far into the future that the best way to help is to ensure human prosperity up to the point where GAI research actually becomes useful).

They might also believe that AMF donations would have a greater impact on potential intelligence explosions

It is neither probable nor plausible that AMF, a credible maximum of short-term reliable known impact on lives saved valuing all current human lives equally, should happen to also possess a maximum of expected impact on future intelligence explosions. It is as likely as that donating to your local kitten shelter should be the maximum of immediate lives saved. This kind of miraculous excuse just doesn't happen in real life.

OK. Granted. Even a belief that the AMF is better at affecting intelligence explosions is unlikely to justify the claim that it is the best, and thus would not justify the behavior described.

Amazing how even after reading all Eliezer's posts (many more than once), I can still get surprise, insight and irony at a rate sufficient to produce laughter for 1+ minute.

I'm curious as to why you include CEA - my impression was that GWWC and 80k both focus on charities like AMF anyway? Is that wrong, or does CEA do more than its component organizations?

Perhaps because GWWC's founder Toby Ord is part of FHI, and because CEA now shares offices with FHI, CEA is finding / producing new far future focused EAs at a faster clip than, say, GiveWell (as far as I can tell).

I'm currently donating to FHI for the UK tax advantages, so that's good to hear.

Bill Gates presents his rationale for attacking Malaria and Polio here.

I can't make much sense of it personally - but at least he isn't working on stopping global warming.

I’m not very afraid of the government because I don’t think it’s a throw warm bodies at the problem and I don’t think it’s a throw warm computers at the problem.

I suspect the easiest path to AGI is to just throw a ton of bodies and computing power at the problem, build a Kludge AI, and let it stumble its way into recursive self-improvement. This is what Larry Page is trying to do. I don't expect it to work this time, but if China or the NSA or Google or Goldman Sachs tries to do it with the computing power and AI researchers we'll have 35 years from now, they very well might succeed, even without any deep philosophical insights. After all, this is how evolution built general intelligence: no philosophical insight, just a bunch of specialized modules kludged together, some highly general learning algorithms, and lots of computing power. The problem is that this approach is very unlikely to yield something capable of Friendliness, and yet there are massive nearer-term incentives for China and the NSA and everyone else to race towards it.

Still, you might be able to outpace the Kludge AI approach via philosophical insight, as David Deutsch suggested. I think that's roughly Eliezer's hope. One reason for optimism about this approach is that top-notch philosophical skill looks to be extremely rare, and few computer scientists are encouraged to bother developing it, and even if they try to develop it, most of what's labeled "philosophy" in the bookstore will actively make them worse at philosophy, especially compared to someone who avoids everything labeled "philosophy" and instead studies the Sequences, math, logic, computer science, AI, physics, and cognitive science. Since hedge funds and the NSA and Google don't seem to be selecting for philosophical ability (in part because they don't know what it looks like), maybe MIRI+FHI+friends can grab a surprisingly large share of the best mathematician-philosophers, and get to FAI before the rest of the world gets to Kludge AI.

I suspect the easiest path to AGI is to just throw a ton of bodies and computing power at the problem, build a Kludge AI, and let it stumble its way into recursive self-improvement. This is what Larry Page is trying to do. I don't expect it to work this time, but if China or the NSA or Google or Goldman Sachs tries to do it with the computing power and AI researchers we'll have 35 years from now, they very well might succeed, even without any deep philosophical insights. After all, this is how evolution built general intelligence: no philosophical insight, just a bunch of specialized modules kludged together, some highly general learning algorithms, and lots of computing power. The problem is that this approach is very unlikely to yield something capable of Friendliness, and yet there are massive nearer-term incentives for China and the NSA and everyone else to race towards it.

Ah, yes, you expressed better than I could my other reason for thinking AI is most likely to be built by a big organization. I'd really been struggling with how to say that.

One thought I have, building on this comment of yours, is that while making kludge AI safe may look impossible, given that sometimes you have to shut up and do the impossible, I wonder if making kludge AI safe might be the less-impossible option here.

EDIT: I'm also really curious to know how Eliezer would respond to the paragraph I quoted above.

I wonder if making kludge AI safe might be the less-impossible option here.

Yeah, that's possible. But as I said here, I suspect that learning whether that's true mostly comes from doing FAI research (and from watching closely as the rest of the world inevitably builds toward Kludge AI). Also: if making Kludge AI safe is the less-impossible option, then at least some FAI research probably works just as well for that scenario — especially the value-loading problem stuff. MIRI hasn't focused on that lately but that's a local anomaly: some of the next several open problems on Eliezer's to-explain list fall under the value-loading problem.

[anonymous]

I'm not sure how value-loading would apply to that situation, since you're implicitly assuming a non-steadfast goal system as the default case of a kludge AI. Wouldn't boxing be more applicable?

Well, there are many ways it could turn out to be that making Kludge AI safe is the less-impossible option. The way I had in mind was that maybe goal stability and value-loading turn out to be surprisingly feasible with Kludge AI, and you really can just "bolt on" Friendliness. I suppose another way making Kludge AI safe could be the less-impossible option is if it turns out to be possible to keep superintelligences boxed indefinitely but also use them to keep non-boxed superintelligences from being built, or something. In which case boxing research would be more relevant.

[anonymous]

Wouldn't it be a more effective strategy to point out to China, the NSA, Goldman Sachs, etc that if they actually succeed in building a Kludge AI they'll paper-clip themselves and die? I would figure that knowing it's a deadly cliff would dampen their enthusiasm to be the first ones over it.

The issue is partially the question of AI Friendliness as such, but also the question of AI Controllability as such. They may well have the belief that they can build an agent which can safely be left alone to perform a specified task in a way that doesn't actually affect any humans or pose danger to humans. That is, they want AI agents that can predict stock-market prices and help humans allocate investments without caring about taking over the world, or stepping outside of the task/job/role given to them by humans.

Hell, ideally they want AI agents they can leave alone to do any job up to and including automated food trucks, and which will never care about using the truck to do anything other than serving humans kebab in exchange for money, and giving the money to their owners.

Admittedly, this is the role currently played by computer programs, and it works fairly well. The fact that extrapolating epistemically from Regular Computer Programs to Kludge AGIs is not sound reasoning needs to be pointed out to them.

(Or it could be sound reasoning, in the end, completely by accident. We can't actually know what kind of process, with what kind of conceptual ontology, and what values over that ontology, will be obtained by Kludge AI efforts, since the Kludge efforts almost all use black-box algorithms.)

Wouldn't it be a more effective strategy to point out to China, the NSA, Goldman Sachs, etc that if they actually succeed in building a Kludge AI they'll paper-clip themselves and die?

We've been trying, and we'll keep trying, but the response to this work so far is not encouraging.

[anonymous]

Yeah, you kind of have to deal with the handicap of being the successor-organization to the Singularity Institute, who were really noticeably bad at public relations. Note that I say "at public relations" rather than "at science".

Hopefully you got those $3 I left on your desk in September to encourage PUBLISHING MOAR PAPERS ;-).

Actually, to be serious a moment, there are some open scientific questions here.

  • Why should general intelligence in terms of potential actions correspond to general world optimization in terms of motivations? If values and intelligence are orthogonal, why can't we build a "mind design" for a general AI that would run a kebab truck as well as a human and do nothing else whatsoever?

  • Why is general intelligence so apparently intractable when we are a living example that provably manages to get up in the morning and act usefully each day without having to spend infinite or exponential time calculating possibilities?

  • Once we start getting into the realm of Friendliness research, how the hell do you specify an object-level ontology to a generally-intelligent agent, to deal with concepts like "humans are such-and-so agents and your purpose is to calculate their collective CEV"? You can't even build Clippy without ontology, though strangely enough, you may be able to build a Value Learner without it.

All of these certainly make a difference in probable outcomes of a Kludge AI between Clippy, FAI, and Kebab AI.

Hopefully you got those $3 I left on your desk

I did. :)

[V_V]

One reason for optimism about this approach is that top-notch philosophical skill looks to be extremely rare

Top-notch skill in any field is rare by definition, but philosophical skill seems more difficult to measure than skill in other fields. What makes you think that MIRI+FHI+friends are better positioned than, say, IBM, in this regard?

AFAICS, they have redefined "good philosopher" as "philosopher who does things our way".

The David Deutsch article seems silly - as usual :-(

Deutsch argues of "the target ability" that "the information for how to achieve it must be encoded in the relatively tiny number of differences between the DNA of humans and that of chimpanzees".

That makes no sense. Maybe a bigger brain alone would enable cumulative cultural evolution - and so all that would be needed is some more "add brain here" instructions. Yet: "make more of this" is hardly the secret of intelligence. So: Deutsch's argument here is not coherent.

I think some parts of the article are wrong, but not that part, and I can't parse your counterargument. Could you elaborate?

Looking into the difference between human genes and chimpanzee genes probably won't help much with developing machine intelligence. Nor would it be much help in deciding how big the difference is.

The chimpanzee gene pool doesn't support cumulative cultural evolution, while the human gene pool does. However, all that means is that chimpanzees are on one side of the cultural "tipping point" - while humans are on the other. Crossing such a threshold may not require additional complex machinery. It might just need an instruction of the form: "delay brain development" - since brains can now develop safely in baby slings.

Indeed, crossing the threshold might not have required gene changes at all - at the time. It probably just required increased population density - e.g. see: High Population Density Triggers Cultural Explosions.

I don't think Deutsch is arguing that looking at the differences between human and chimpanzee genomes is a promising path for AGI insights; he's just saying that there might not be all that much insight needed to get to AGI, since there don't seem to be huge differences in cognitive algorithms between chimpanzees and humans. Even a culturally-isolated feral child (e.g. Dani) has qualitatively more intelligence than a chimpanzee, and can be taught crafts, sports, etc. — and language, to a more limited degree (as far as we know so far; there are very few cases).

It is true that there might not be all that much insight needed to get to AGI on top of the insight needed to build a chimpanzee. The problem that Deutsch is neglecting is that we have no idea about how to build a chimpanzee.

Oh I see what you mean. Well, I certainly agree with that!

[V_V]

The point is that, almost paradoxically, computers so far have been good at doing tasks that are difficult for humans and impossible for chimps (difficult mathematical computations, chess, Jeopardy, etc.), yet they can't do well at tasks which are trivial for chimps or even for dogs.

[anonymous]

Which is actually not all that striking a revelation when you consider that when humans find something difficult, it is because it is a task or a procedure we were not built to do. It makes sense then that programs designed to do the hard things would not do them the way humans do. It's the mind projection fallacy to assume that easy tasks are in fact easy, and hard tasks hard.

[V_V]

Which is actually not all that striking a revelation when you consider that when humans find something difficult, it is because it is a task or a procedure we were not built to do.

But this doesn't explain why we can build and program computers to do it much better than we do.

We are actually quite good at inventing algorithmic procedures to solve problems that we find difficult, but we suck at executing them, while we excel at doing things we can't easily describe as algorithmic procedures.
In fact, inventing algorithmic procedures is perhaps the hardest task to describe algorithmically, due to various theoretical results from computability and complexity theory and some empirical evidence.

Some people consider this fact as evidence that the human mind is fundamentally non-computational. I don't share this view, mainly for physical reasons, but I think it might have a grain of truth:
While the human mind is probably computational in the sense that in principle (and maybe one day in practice) we could run low-level brain simulations on a computer, its architecture differs from the architecture of typical computer hardware and software in a non-trivial, and probably still poorly understood way. Even the most advanced modern machine learning algorithms are at best only crude approximations of what's going on inside the human brain.

Maybe this means that we are missing some insight, some non-trivial property that sets apart intelligent processes from other typical computations, or maybe there is just no feasible way of obtaining human-level, or even chimp-level, intelligence without a neuromorphic architecture, a substantially low-level emulation of a brain.

Anyway, I think that this counterintuitive observation is probably the source of the over-optimistic predictions about AI: people, even experts, consistently underestimate how difficult apparently easy cognitive tasks really are.

[anonymous]

But this doesn't explain why we can build and program computers to do it much better than we do.

It absolutely does. These are things that humans are not designed to do. (Things humans are designed to do: recognizing familiar faces, isolating a single voice in a crowded restaurant, navigating from point A to point B by means of transport, walking, language, etc.) Imagine hammering in a nail with a screwdriver. You could do it... but not very well. When we design machines to solve problems we find difficult, we create solutions that don't exist in the structure of our brain. It would be natural to expect that artificial minds would be better suited to solve some problems than us.

Other than that, I'm not sure what you're arguing, since that "counterintuitive observation" is exactly what I was saying.

After all, this is how evolution built general intelligence: no philosophical insight, just a bunch of specialized modules kludged together, some highly general learning algorithms, and lots of computing power.

The other thing evolution needed is a very complex and influenceable environment. I don't know how complex an environment a GAI needs, but it's conceivable that AIs have to develop outside a box. If that's true, then the big organizations are going to get a lot more interested in Friendliness.

[I]t's conceivable that AIs have to develop outside a box. If that's true, then the big organizations are going to get a lot more interested in Friendliness.

Or they'll just forgo boxes and friendliness to get the project online faster.

Maybe I'm just being pessimistic, but I wouldn't count on adequate safety precautions with a project half this complex if it's being run by a modern bureaucracy (government, corporate, or academic). There are exceptions, but it doesn't seem like most of the organizations likely to take an interest care more about long-term risks than immediate gains.

I suspect the easiest path to AGI is to just throw a ton of bodies and computing power at the problem, build a Kludge AI, and let it stumble its way into recursive self-improvement. This is what Larry Page is trying to do.

Oh, really. Both Google and MIRI are secretive organisations. Outsiders don't really have much idea about what goes on inside them - because that's classified. What does come out of them is PR material. When Peter Norvig says: "The goal should be superhuman partnership", that is propaganda.

I'm not basing my claim on publicly available information about what Google is doing.

I suspect... the "science problems are getting harder" theory of the history of science has a lot more going for it than Eliezer wants to grant.

It's interesting to glance at how much this differs by field. E.g. progress in physics, microprocessors, number theory, and some parts of biology seems much harder than it was in 1940 because so much low-hanging fruit has been scooped up, and one now has to study for a very long time to make advances on the current frontier. Progress in logic seems harder now than in 1940, but not incredibly so — there is still lots of low-hanging fruit. Progress in psychology might actually be easier today than it was in 1940, because we have better tools, and so few psychological phenomena have been studied with a good method and large-enough sample sizes. (But this doesn't mean I expect progress in psychology to accelerate.) Progress in robotics is vastly easier today than it was in 1980 because sensors, actuators, and microprocessors are vastly cheaper. Progress in genomics is probably easier today than in 1980 due to rapidly falling sequencing costs, and falling computation costs for data storage and processing.

Also, I might as well mention the three cites on this topic from IE:EI and When Will AI Be Created: Davis (2012); Arbesman (2011); Jones (2009).

microprocessors

Obligatory link.

I think that it used to be fun to be a hardware architect. Anything that you invented would be amazing, and the laws of physics were actively trying to help you succeed. Your friend would say, “I wish that we could predict branches more accurately,” and you’d think, “maybe we can leverage three bits of state per branch to implement a simple saturating counter,” and you’d laugh and declare that such a stupid scheme would never work, but then you’d test it and it would be 94% accurate, and the branches would wake up the next morning and read their newspapers and the headlines would say OUR WORLD HAS BEEN SET ON FIRE. You’d give your buddy a high-five and go celebrate at the bar, and then you’d think, “I wonder if we can make branch predictors even more accurate,” and the next day you’d start XOR’ing the branch’s PC address with a shift register containing the branch’s recent branching history, because in those days, you could XOR anything with anything and get something useful, and you test the new branch predictor, and now you’re up to 96% accuracy, and the branches call you on the phone and say OK, WE GET IT, YOU DO NOT LIKE BRANCHES, but the phone call goes to your voicemail because you’re too busy driving the speed boats and wearing the monocles that you purchased after your promotion at work. You go to work hung-over, and you realize that, during a drunken conference call, you told your boss that your processor has 32 registers when it only has 8, but then you realize THAT YOU CAN TOTALLY LIE ABOUT THE NUMBER OF PHYSICAL REGISTERS, and you invent a crazy hardware mapping scheme from virtual registers to physical ones, and at this point, you start seducing the spouses of the compiler team, because it’s pretty clear that compilers are a thing of the past, and the next generation of processors will run English-level pseudocode directly. Of course, pride precedes the fall, and at some point, you realize that to implement aggressive out-of-order execution, you need to fit more transistors into the same die size, but then a material science guy pops out of a birthday cake and says YEAH WE CAN DO THAT, and by now, you’re touring with Aerosmith and throwing Matisse paintings from hotel room windows, because when you order two Matisse paintings from room service and you get three, that equation is going to be balanced. It all goes so well, and the party keeps getting better. When you retire in 2003, your face is wrinkled from all of the smiles, and even though you’ve been sued by several pedestrians who suddenly acquired rare paintings as hats, you go out on top, the master of your domain. You look at your son John, who just joined Intel, and you rest well at night, knowing that he can look forward to a pliant universe and an easy life.

Unfortunately for John, the branches made a pact with Satan and quantum mechanics during a midnight screening of “Weekend at Bernie’s II.”...
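(For concreteness, here's a rough sketch, in Python, of the two tricks the essay is joking about: a per-branch saturating counter, and a "gshare"-style predictor that XORs the branch address with a global history register. This is my own illustration, not anything from the essay; the table sizes and bit widths are illustrative assumptions, not anyone's actual hardware.)

    # Minimal sketch of two classic branch predictors (illustrative parameters).

    class SaturatingCounterPredictor:
        """One small saturating counter per branch, indexed by the branch's PC."""
        def __init__(self, table_bits=10, counter_bits=3):
            self.size = 1 << table_bits
            self.max = (1 << counter_bits) - 1
            self.table = [self.max // 2] * self.size  # start each counter in a weak state

        def index(self, pc):
            return pc % self.size

        def predict(self, pc):
            # Predict "taken" when the counter is in its upper half.
            return self.table[self.index(pc)] > self.max // 2

        def update(self, pc, taken):
            i = self.index(pc)
            if taken:
                self.table[i] = min(self.table[i] + 1, self.max)  # saturate high
            else:
                self.table[i] = max(self.table[i] - 1, 0)         # saturate low

    class GsharePredictor(SaturatingCounterPredictor):
        """Same counters, but indexed by PC XOR recent global branch history."""
        def __init__(self, table_bits=10, counter_bits=2):
            super().__init__(table_bits, counter_bits)
            self.history = 0

        def index(self, pc):
            return (pc ^ self.history) % self.size

        def update(self, pc, taken):
            super().update(pc, taken)
            # Shift this branch's outcome into the global history register.
            self.history = ((self.history << 1) | int(taken)) % self.size

    # Example: after seeing a branch that is almost always taken,
    # the predictor learns to predict "taken" for it.
    p = SaturatingCounterPredictor()
    for _ in range(10):
        p.update(pc=0x400123, taken=True)
    print(p.predict(0x400123))  # True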

there seems to be a human bias in favor of attributing scientific and technological progress to lone geniuses—call it the Lone Genius Bias...

Yes. Let me add to this...

First, something I said earlier:

General history books compress time so much that they often give the impression that major intellectual breakthroughs result from sudden strokes of insight. But when you read a history of just one breakthrough, you realize how much "chance favors the prepared mind." You realize how much of the stage had been set by others, by previous advances, by previous mistakes, by a soup of ideas crowding in around the central insight made later.

Next, some related Wikipedia articles: Great Man Theory; Heroic theory of invention and scientific development; Multiple Discovery; List of multiple discoveries.

Finally, some papers & books: Merton (1961); Merton (1963); Brannigan (1981); Lamb & Easton (1984); Simonton (1988); Park (2000).

[V_V]

I think it’s a good methodology, good people selection, letting them do sufficiently blue sky stuff, and so far historically the government has been tremendously bad at producing that kind of progress. (When they have a great big project to try to build something it doesn’t work. When they fund long-term research it works.)

This appears to be factually wrong. Most major breakthroughs in science and technology have been done or funded by governments. This includes big projects, such as the Manhattan project, the space program (both Soviet and American), etc.

Some breakthroughs have been done by private companies, but they were usually very large, established companies like DuPont, AT&T, and IBM.

Progress in mathematics seems to depend a lot more on lone geniuses than progress in science...

It's easy to see, in evolutionary terms...

No. Stop.

I'm not sure I want to defend this particular use of ev-psych reasoning, but Bayesians can do better than a knee-jerk reaction against "just-so stories": Rational vs. Scientific Ev-Psych, A Failed Just-So Story, and Beyond 'Just-So Stories'.

[Cyan]

Care to spell that out?

I'm not saying you're wrong -- just that you're too elliptical to tell one way or the other.

I'm guessing that it's because it's easy to see, in evolutionary terms, why anything "might" be true. As easy as this.

Do you have any comments on the content of the article beyond this? The article makes a few claims:

  1. The existence of Lone Genius Bias -- do you think it exists?
  2. The relevance of Lone Genius Bias -- do you think that, given Lone Genius Bias, you might be underestimating the odds of the US government developing AI and overestimating the odds of some nerd in a basement developing AI?
  3. The source of Lone Genius Bias -- do you think Lone Genius Bias comes from the small group size in the ancestral environment?

Your comment makes it clear that you disagree with the third of these. And that's a pretty fair response: post hoc evo-psych is dangerous, prone to bias, and usually wrong. But your comment ALSO seems to say that you think this entitles you to completely disregard the first two points of the article: the author's arguments for the existence of Lone Genius Bias and his arguments that it's misleading us as to where AI is likely to come from.

If you agree with the rest of the article, but dislike the evo-psych part, you can probably find a more polite way to phrase that. If you disagree with the rest of the article as well, you should be counter-arguing the rest of the article on its own merits, rather than zeroing out the one weakest point and disregarding every other point the author made.

I actually like the rest of the article, and I agree with your first two points (especially in the context of EY's quote). I was just particularly annoyed by this paragraph and felt the need to comment.

I'm unsure whether I should have phrased my point more politely, and this is independent of my opinion of the rest of the piece.

If you agree with the rest of the article, but dislike the evo-psych part, you can probably find a more polite way to phrase that. If you disagree with the rest of the article as well, you should be counter-arguing the rest of the article on its own merits, rather than zeroing out the one weakest point and disregarding every other point the author made.

What if they have no opinion either way about the rest of the article?

If I pointed out a typo without commenting about the rest of the article, you wouldn't tell me that my comment seems to say that I think this entitles me to completely disregard the rest of the article, would you? If you wouldn't, which nits can be picked on their own and which can't?

But nonetheless, basically I’m not very afraid of the government because I don’t think it’s a throw warm bodies at the problem and I don’t think it’s a throw warm computers at the problem.

[...]

Although I expect that this section of my analysis will not be without controversy, it appears to the author to also be an important piece of data to be explained that human science and engineering seem to scale over time better than over population—an extra decade seems much more valuable than adding warm bodies.

Since I'm currently a software engineer for the DoD, I think I have a little bit of insight into this :)

Currently, the government is moving away from throw-more-bodies solutions and toward improving the engineering process overall. Throwing more bodies was discovered to be cost-inefficient, and since the budget of the DoD has been shrinking, they're looking for more cost-effective acquisition strategies.

I don't work anywhere near any AI-focused departments, so I don't know what the current state of things would be in that area. But I would assume that they're moving towards better process improvement just like we are. As an anecdote, our organization was passed over for a project because a "competitor" organization (within the DoD) had a higher CMMI level than we do. So process improvement definitely seems to be a big focus in the DoD.

In general, I expect problems that require multiple "epiphanies" - bottlenecks which require a non-obvious approach - to benefit most from additional researchers, while problems that require holding a deep understanding of a complex and "large" problem in mind benefit most from individual geniuses.

This clearly has the flaw that true collaboration - group comprehension of a complex problem - is occasionally possible, which makes me think that focusing on solving and teaching collaboration might be more useful than focusing on geniuses.

[anonymous]

Nobel Prize nominations don't seem to be handled in the best way. Surely they'd get outcompeted by an alternative system? I imagine it's somewhat profitable to be such a prestigious prize-awarding body.

I admit, I don't feel like I fully grasp all the reasons for the disagreement between Eliezer and myself on this issue. Some of the disagreement, I suspect, comes from slightly different views on the nature of intelligence, though I'm having trouble pinpointing what those differences might be. But some of the difference, I think, comes from the fact that I've become convinced humans suffer from a Lone Genius Bias—a tendency to over-attribute scientific and technological progress to the efforts of lone geniuses.

David thinks - contrary to all the evidence - that Goliath will lose? Yawn: news at eleven.