While going through the list of arguments for why human-level AI should be expected or why it is impossible, I was struck by the same tremendously weak arguments that kept coming up again and again. The weakest argument in favour of AI was the perennial:

  • Moore's Law hence AI!

Lest you think I'm exaggerating how weakly the argument was used, here are some random quotes:

  • Progress in computer hardware has followed an amazingly steady curve in the last few decades [16]. Based largely on this trend, I believe that the creation of greater than human intelligence will occur during the next thirty years. (Vinge, 1993)
  • Computers aren't terribly smart right now, but that's because the human brain has about a million times the raw power of today's computers. [...] Since computer capacity doubles every two years or so, we expect that in about 40 years, the computers will be as powerful as human brains. (Eder 1994)
  • Suppose my projections are correct, and the hardware requirements for human equivalence are available in 10 years for about the current price of a medium large computer.  Suppose further that software development keeps pace (and it should be increasingly easy, because big computers are great programming aids), and machines able to think as well as humans begin to appear in 10 years. (Moravec, 1977)

At least Moravec gives a glance towards software, even though it is merely to say that software "keeps pace" with hardware. What is the common scale for hardware and software that he seems to be using? I'd like to put Starcraft II, Excel 2003 and Cygwin on a hardware scale - do these correspond to Pentiums, Ataris, and Colossus? I'm not particularly ripping into Moravec, but if you realise that software is important, then you should attempt to model software progress!

But very rarely do any of these predictors try to show why having computers with, say, the memory capacity or the FLOPS of a human brain will suddenly cause an AI to emerge.
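
(For what it's worth, the arithmetic behind such projections is a one-liner: at one doubling every two years, 40 years gives 20 doublings, and 2^20 ≈ 10^6, which is Eder's "million times". Nothing in that calculation says what is supposed to happen once the threshold is reached.)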

The weakest argument against AI was the standard:

  • Free will (or creativity) hence no AI!

Some of the more sophisticated go "Gödel, hence no AI!". If the crux of your whole argument is that only humans can do X, then you need to show that only humans can do X - not assert it and spend the rest of your paper talking in great detail about other things.

Comments

The weakest argument against AI was the standard: Free will (or creativity) hence no AI!

I am most appalled by "philosophical externalism about mental content, therefore no AI." Another silly one is "humans can be produced for free with unskilled labor, so AGI will never be cost-effective."

The weakest argument in favour of AI was the perennial: Moore's Law hence AI!

On the other hand, imagine that computer hardware had stagnated at 1970s levels. It would be quite plausible that the most efficient algorithms for human-level AI we could find would simply be too computationally demanding to experiment with or make practical use of. Hardware on its own isn't sufficient, but it's certainly important for the plausibility of human-level AI, given that performance on so many problems scales with hardware and that our only existence proof of human-level intelligence has high hardware demands.

Also, you occasionally see weak arguments for human-level AI from people who are especially invested in some particular narrow AI field that has reached superhuman performance, and who assume the difficulty of that field is highly representative of all the remaining problems in AI.

..."humans can be produced for free with unskilled labor, so AGI will never be cost-effective".

Not only is this argument inductively weak, the premise seems obviously false, since childcare is actually quite expensive.

Yes, it's quite annoying, and also neglects runtime costs.

Also the argument applies equally well to lots of non-intellectual tasks where a cheap human could well be a replacement for an expensive machine.

[TimS]

Before the recent normalization of women in the workforce, I'm not sure that it was intuitive that raising children was expensive, since the childcare was not paid for in money. From a certain perspective, that makes those offering the premise look bad.

[anonymous]

http://en.wikipedia.org/wiki/AI_effect seems to detail a weak argument against AI. I was going to sum it up, but the Wikipedia page was doing a better job than I was, so I'll just mention a few quotes from the beginning of the article.

The AI effect occurs when onlookers discount the behavior of an artificial intelligence program by arguing that it is not real intelligence.

Pamela McCorduck writes: "It's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, 'that's not thinking'."[1] AI researcher Rodney Brooks complains "Every time we figure out a piece of it, it stops being magical; we say, Oh, that's just a computation."[2]

[djcb]

That argument is primarily about what the word AI means, rather than an argument against AI as a phenomenon.

[anonymous]

That's true, and unfortunately you could link it back to an argument about the phenomenon relatively straightforwardly, by saying something like "AI will never be developed because anything technology does is just a computation, not thinking."

In fact, laying out the argument explicitly just shows how weak it is, since it's essentially asserting that AI is impossible by definition. Yet there are still people who would agree with the argument anyway. For instance, I was looking up an example of a debate about the possibility of AI (linked here: http://www.debate.org/debates/Artificial-Intelligence-is-impossible/1/ ) and one side said:

"Those are mere programs, not AI." Now, later, the person said "Yes but in your case, Gamecube or Debate.org is simply programming, not AI. There is a difference between simple programming and human-like AI." and then: "This is not learning. These devices are limited by their programming, they cannot learn."

But I suppose my point is that this gets summed up first with an extremely weak lead-in argument, which is essentially "You are wrong by definition!", and which then has to be peeled back to get to a content argument like "learning", "Gödel" or "free will".

And that it happens so often it has its own name, rather than just being an example of a No True Scotsman.

[djcb]

The most famous proponent of this "those are mere programs" view may be John Searle and his Chinese Room. I wouldn't call that the weakest argument against AI, although I think his argument is flawed.

[Dentin]

Many years ago when I first became interested in strong AI, my boss encouraged me to read Searle's Chinese Room paper, saying that it was a critically important criticism and that any attempt at AI needed to address it.

To this day, I'm still shocked that anyone considers Searle's argument meaningful. It was pretty clear, even back then with my lesser understanding of debate tactics, that he had simply 'defined away' the problem. That I had been told this was a 'critically important criticism' was even more shocking.

I've since read critical papers with what I would consider a much stronger foundation, such as those claiming that without whole-body and experience simulation, you won't be able to get something sufficiently human. But the Searle category of argument still seems to be the most common, in spite of its lack of content.

He didn't define away the problem; his flaw wasn't a tautological one. The fatal flaw was creating a computational process and then substituting himself in for that process when it came time to evaluate whether it "understood" Chinese. Since he's only a component of the process, it doesn't matter whether *he* understands Chinese, only whether the *process* understands Chinese.

Every time I read something by Searle, my blood pressure rises a couple of standard deviations.

[djcb]

One has to commend Searle, though, for coming up with such a clear example of what he thinks is wrong with the then-current model of AI. I wish all people could formulate their philosophical ideas, right or wrong, in such a fashion. Even when they are wrong, they can be quite fruitful, as can be seen in the many papers still referring to Searle and his Chinese Room, or even more famously in the EPR paradox paper.

[Irgy]

You know what I'd like to see? A strong argument for or against human level AI.

I read the SI's brief on the issue, and all I discovered was that they did a spectacularly good job of listing all of the arguments for and against and demonstrating why they're all manifestly rubbish. The overall case seemed to boil down to "There's high uncertainty on the issue, so we should assume there's some reasonable chance". I'm not saying they're wrong, but it's a depressing state of affairs.

[djcb]

Perhaps in the introduction (or title?) it should be mentioned that AI in the context of the article means human-level AI.

Thanks, added clarification.

But very rarely do any of these predictors try to show why having computers with, say, the memory capacity or the FLOPS of a human brain will suddenly cause an AI to emerge.

Legg put the matter this way:

more computer power makes solving the AGI design problem easier. Firstly, more powerful computers allow us to search larger spaces of programs looking for good algorithms. Secondly, the algorithms we need to find can be less efficient, thus we are looking for an element in a larger subspace.

Trivially true, but beyond "we are closer to AGI today than we were yesterday", what does that give us?

It gives us... not much.

It gives less informed audiences information that is actually novel to them, because they do not already master the previous inferential step of "Doing the same amount/quality of stuff on a more powerful computer is easier", seeing as that depends on understanding the very idea that programs can and do get optimized to work on weaker machines or do things faster on current machines.

That's even assuming the audience already has all the inferential steps before that, e.g. "Programming does not involve using an arcane language to instruct electron-monsters to work harder on maths so that you can be sure they won't make mistakes while doing multiplication" and "Assigning x the same value twice in a row just to make sure the computer did it correctly is not how programming is supposed to work".

For this, I'll refer to insanely funny anecdotal evidence. I've seen cases just as bad as this happen personally, so I'm weighing in favor of those cases being true, which together form relevant evidence that people do, in fact, know very little about this and often fail to close the inferential gap. People like hitting the Ignore button, I suppose.

Well, it illuminates the shape of the trajectory, showing a "double whammy" effect of better hardware. Indeed, there's something of a third "whammy", since these processes apply iteratively as we go along, producing smarter search algorithms that better prune the junk out of the search space.
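
As a toy sketch of Legg's two points (purely illustrative, not anything from the thread: the "programs" here are just bitstrings and the success test is a stand-in):

```python
import itertools

def is_solution(candidate, step_budget):
    """Stand-in success test. A real search would run `candidate` as a program
    for at most `step_budget` steps and score its behaviour; here we simply
    pretend one particular bitstring is the program we are looking for."""
    return candidate == "101101"

def brute_force_search(total_compute, per_candidate_steps):
    """Enumerate bitstring 'programs' in order of length.

    A bigger `total_compute` lets us either try more candidates (search a
    larger space of programs) or raise `per_candidate_steps` (tolerate less
    efficient programs): the two points from the Legg quote above."""
    max_candidates = total_compute // per_candidate_steps
    tried = 0
    for length in itertools.count(1):
        for bits in itertools.product("01", repeat=length):
            if tried >= max_candidates:
                return None, tried  # compute budget exhausted
            tried += 1
            candidate = "".join(bits)
            if is_solution(candidate, per_candidate_steps):
                return candidate, tried

print(brute_force_search(20_000, 100))  # finds "101101" after 108 tries
print(brute_force_search(10_000, 100))  # budget too small: (None, 100)
```

Doubling `total_compute` lets you either enumerate twice as many candidates or give each candidate twice the step budget, which is all the quote is claiming.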

Moore's Law hence AI!

How is that such a weak argument? I'm all for smarter algorithms - as opposed to just increasing raw computing power - but given the algorithms already in existence (e.g. AIXItl, among others), we'd strongly expect - based on theoretical results - there to be some hardware threshold that, once crossed, would empower even current algorithms sufficiently for an AGI-like phenomenon to emerge.

Since we know that exponential growth is, well, quite fast, it seems a sensible conclusion to say "(If) Moore's Law, (then eventually) AGI", without even mandating more efficient programming. That, or dispute the established machine learning algorithms and theoretical models. While the software side is the bottleneck, it is one that scales with computing power and can thus be compensated for.

Of course smarter algorithms would greatly lower the aforementioned threshold, but if (admittedly a big if) Moore's Law were to hold true for a few more iterations, that might not be as relevant as we assume it to be.

The number of steps for current algorithms/agents to converge on an acceptable model of their environment may still be very large, but compared to future approaches, we'd expect that to be a difference in degree, not in kind. Nothing that some computronium shouldn't be able to compensate for.

This may be important because as long as there's any kind of consistent hardware improvement - not even exponential - that argument would establish that AGI is just a matter of time, not some obscure eventuality.

(e.g. AIXItl,

Moore's Law is not enough to make AIXI-style brute force work. A few more orders of magnitude won't beat combinatorial explosion.

Assuming the worst case on the algorithmic side - a standstill - the computational cost, even that of a combinatorial explosion, remains constant. The gap can only narrow. That makes it a question of how many doubling cycles it would take to close it. We're not necessarily talking about desktop computers here (disregarding their goal predictions).

Exponential growth with such a short doubling time with some unknown goal threshold to be reached is enough to make any provably optimal approach work eventually. If it continues.
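
A minimal back-of-the-envelope sketch of that "how many doubling cycles" question (the gap sizes below are illustrative assumptions, not estimates of anything):

```python
import math

def years_to_close(gap, doubling_time_years=2.0):
    """Years of sustained hardware doubling needed to gain a constant factor `gap`."""
    return doubling_time_years * math.log2(gap)

print(years_to_close(1e6))     # an Eder-style 10^6 gap: ~40 years
print(years_to_close(2**300))  # a combinatorial gap of 2^300: 600 years
```

Whether the relevant gap looks more like the first figure or the second is essentially what is in dispute here.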

There is probably not enough computational power in the entire visible universe (assuming maximal theoretical efficiency) to power a reasonable AIXI-like algorithm. A few steps of combinatorial growth make mere exponential growth look like standing very, very still.

[TimS]

Changing the topic slightly, I always interpreted the Gödel argument as saying there weren't good reasons to expect faster algorithms - thus, no super-human AI.

As you implied, the argument that Gödelian issues prevent human-level intelligence is obviously disproved by the existence of actual humans.

Who would you re-interpret as making this argument?

[TimS]

It's my own position - I'm not aware of anyone in the literature making this argument (I'm not exactly up on the literature).

Then why write "I...interpreted the Gödel argument" when you were not interpreting others, and had in mind an argument that is unrelated to Gödel?

And there you've given a better theory than most AI experts. It's not Moore's Law + reasonable explanation hence AI that's weak, it's just Moore's Law on its own...

While the internal complexity of software has increased in pace with hardware, the productive complexity has increased only slightly; I am much more impressed by what was done in software twenty years ago than what is being done today, with a few exceptions. Too many programmers have adopted the attitude that the efficiency of their code doesn't matter because hardware will improve enough to offset the issue in the timeframe between coding and release.

At least the Eder quote refers to sheer power - you'll have to provide more for it to come across as an argument for AI.

When someone claims that computers will be as powerful as a brain, what could this be referring to if not intelligence? What "power" has the brain got otherwise?

Raw processing power. In the computer analogy, intelligence is the combination of enough processing power with software that implements the intelligence. When people compare computers to brains, they usually seem to be ignoring the software side.

This is true, but possibly not quite exactly the way you intended. "Most people" (i.e. everyone I've talked to about this who isn't a programmer or doesn't have related IT experience) will automatically associate computing power with "power".

Humans have intellectual "power", since their intellect allows them to build incredible tools, like computers. If we give computers more ((computing) power => "power" => ability to affect the environment, reason and build useful tools), they will "obviously become more intelligent".

It seems to me like a standard symbol problem, unfortunately much too common even among people who should know better.