Today's post, Ghosts in the Machine, was originally published on 17 June 2008. A summary (taken from the LW wiki):


There is a way of thinking about programming a computer that conforms well to human intuitions: telling the computer what to do. The problem is that the computer isn't going to understand you, unless you program the computer to understand. If you are programming an AI, you are not giving instructions to a ghost in the machine; you are creating the ghost.


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Grasping Slippery Things, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.


AI cannot be just "programmed" as, for example, a chess game. When we talk about computers, programming, languages, hardware, compilers, source code, etc., - we're, essentially implying a Von Neumann architecture. This architecture represents a certain principle of information processing, which has its fundamental limitations. That ghost that makes an intelligence cannot be programmed inside a Von Neumann machine. It requires a different type of information processing, similar to that implemented in humans. The real progress in building AI will be achieved only after we understand the fundamental principal that lies behind information processing in our brains. And it`s not only us, even primitive nervous systems of simple creatures use this principle and benefit from it. A simple kitchen cockroach is infinitely smarter than the most sophisticated robot that we have built so far.

AI cannot be just "programmed" as, for example, a chess game.

Yes it can. It's just harder. An AI can be "just programmed" in Conway's Life if you really want to.
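
For concreteness, here is a minimal sketch of Life's update rule in Python (the `life_step` helper and the glider demo are illustrative additions, not anything from the thread). The point is that the substrate is trivially simple, yet Life is known to be Turing-complete, so in principle any program, including an AI, could be encoded as an initial pattern of live cells.

```python
from collections import Counter

def life_step(live_cells):
    """Advance one Life generation. `live_cells` is a set of (x, y) tuples."""
    # Count live neighbours of every cell adjacent to at least one live cell.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 live neighbours; survival on 2 or 3.
    return {
        cell
        for cell, count in neighbour_counts.items()
        if count == 3 or (count == 2 and cell in live_cells)
    }

# A glider: the classic moving pattern used as a signal carrier in the
# known Turing-complete constructions within Life.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = life_step(glider)
print(sorted(glider))  # same glider shape, shifted diagonally by one cell
```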

Just to be clear, there isn't strong direct evidence of that, is there? My understanding is that there just isn't evidence of it being impossible and a whole lot of evidence that simulating most things is computable.

Just to be clear, there isn't strong direct evidence of that, is there?

What does 'direct' mean? Does it mean "has already been done"? If so then no. The evidence is more of the kind "either it is possible or everything we know about reductionism, physics and human biology is bullshit".

"either it is possible or everything we know about reductionism, physics and human biology is bullshit"

That seems too strong. If intelligence really did turn out to rely on quantum computing or some other non-Turing computation, that would mean you couldn't program intelligence on a classical computer in a remotely efficient way. Though presumably you could program it on a quantum computer (or whatever the special feature of physics is that lets you do this fancy computation). Of course, this doesn't seem too likely given what we know about neurons.

Of course this doesn't seem too likely given what we know about neurons.

Yes, for a suitable instantiation of "not too likely" this is a rough translation of what I meant by "either it is possible or everything we know about reductionism, physics and human biology is bullshit".

We agree then.

That's true if and only if some aspect of biological neural architecture (as opposed to the many artificial neural network architectures out there) turns out to be Turing irreducible; all computing systems meeting some basic requirements are able to simulate each other in a pretty strong and general way. As far as I'm aware, we don't know about any physical processes which can't be simulated on a von Neumann (or any other Turing-complete) architecture, so claiming natural neurology as part of that category seems to be jumping the gun just a little bit.
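
As a small illustration of one direction of that mutual simulation, here is a toy Turing machine interpreter running on ordinary von Neumann hardware (the `run_tm` helper and the bit-flipping machine are made-up examples, not anything from the discussion):

```python
def run_tm(rules, tape, state="start", max_steps=100):
    """rules maps (state, symbol) -> (new_state, new_symbol, move in {-1, +1})."""
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol, blank is "_"
    pos = 0
    for _ in range(max_steps):
        key = (state, tape.get(pos, "_"))
        if key not in rules:  # no applicable rule: halt
            break
        state, tape[pos], move = rules[key]
        pos += move
    return "".join(tape[i] for i in sorted(tape))

# Example machine: invert every bit, halting on the first blank cell.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
}
print(run_tm(flip, "1011"))  # -> "0100"
```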

That's not my point. Of course everything is reducible to a Turing machine, in theory. But that does not mean you can make the reduction practically, or that it would be efficient. The von Neumann architecture implies its own hierarchy of information processing, which is good for programming various kinds of formal algorithms. However, IMHO, it does not support the hierarchy of information processing required for AI, which should be a neural network similar to a human brain. You cannot program, on a von Neumann computer, each and every algorithm or mode of behavior that a neural network is capable of producing. To me, many decades of futile attempts to build AI along these lines have already proven its practical impossibility. Only understanding how neural networks operate in nature, and implementing that type of behavior, can finally make a difference.

And how does the von Neumann architecture fit in here? I see only one possible application: modelling the work of neurons. Given the complexity of a human brain (100 billion neurons, 100 trillion connections), this is a challenge for even the most advanced modern supercomputers. You can count on further performance improvements, of course, since Moore's law is still in effect, but this is not the kind of solution that's going to be practical. Perhaps neural circuits printed directly on microchips would be the hardware for future AI brains.
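
As a rough sanity check on those numbers, here is a back-of-envelope sketch; the average firing rate and per-synapse cost are assumed round figures, not measured values:

```python
# Throughput needed to step the synapse count quoted in the comment.
synapses = 1e14        # ~100 trillion connections (figure from the comment)
avg_rate_hz = 1.0      # assumed mean firing rate, order of magnitude only
ops_per_event = 10     # assumed arithmetic cost of one synaptic update

ops_per_second = synapses * avg_rate_hz * ops_per_event
print(f"~{ops_per_second:.0e} ops/s")  # ~1e+15 ops/s: petascale territory
```

Under those (loud) assumptions the requirement lands around a petaflop, which is indeed top-end supercomputer territory at the time of this discussion.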

(if you respond by clicking "reply" at the bottom of comments, the person to whom you're responding will be notified and it will organize your comment better)

I am pretty sure that simulating one architecture on another can generally be done with a mere multiplicative penalty. I'm not under the impression that simulating neural networks is terribly challenging.
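
To illustrate how mechanically simple such a simulation is, here is a minimal sketch of a leaky integrate-and-fire network stepped with plain array arithmetic; the network size, weights, and constants are illustrative placeholders, not biological values:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000                          # number of neurons (placeholder)
w = rng.normal(0.0, 0.1, (n, n))  # assumed random synaptic weights
v = np.zeros(n)                   # membrane potentials
threshold, leak = 1.0, 0.95       # assumed firing threshold and leak factor

for step in range(100):
    spikes = v >= threshold       # which neurons fire this step
    v[spikes] = 0.0               # reset the neurons that fired
    drive = rng.normal(0.1, 0.05, n)   # assumed external input current
    v = leak * v + w @ spikes + drive  # integrate synaptic and external input
```

Each step is dominated by a single matrix-vector product, so the cost scales with the number of connections, which is exactly the multiplicative kind of penalty described above.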

Also, most neurons are redundant (since there's a lot of noise in a neuron). If you're simulating something along the lines of a human brain, the very first simulations might be very challenging when you don't know what the important parts are, but I think there's good reason to expect dramatic simplification once you understand what the important parts are.

I would be cautious about claims of noise or redundancy until we know exactly what's going on in there. Maybe we don't understand some key aspects of neural activity and think of them as just noise. I read somewhere that the old idea about only a fraction of brain capacity being used is not actually true. I partially agree with you: modern computers can cope with neural network simulations, but IMO only of limited network size. And I don't expect dramatic simplifications here (rather complications :) ). It will all start with simple neural networks modeled on computers. Forget about AI for now; it is in the rather distant future. The first robots will be insect-like creatures. As they grow in complexity, real-time performance problems will become an issue, and that will be a driving force to consider other architectures to improve performance. Non-von Neumann solutions will emerge, paving the way for further progress. This is what I think is going to happen.