I've just been through the proposal for the Dartmouth AI conference of 1956, and it's a surprising read. All I really knew about it was its absurd optimism, as typified by the quote:

An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.

But then I read the rest of the document, and was... impressed. Go ahead and read it, and give me your thoughts. Given what was known in 1955, they were grappling with the right issues, seemed to be making progress in the right directions, and had plans and models for how to progress further. Seeing the phenomenally smart people who were behind this (McCarthy, Minsky, Rochester, Shannon), and given the impressive progress that computers had been making in what seemed like very hard areas of cognition (remember that this was before we discovered Moravec's paradox)... I have to say that had I read this back in 1955, I think the rational belief would have been "AI is probably imminent". Some overconfidence, no doubt, but there was no good reason to expect these prominent thinkers to be so spectacularly wrong on something they were experts in.

19 comments

If one wanted to write an interesting counterfactual story, it could be about how the 1956 conference succeeded beyond anyone's wildest dreams, but before they implemented the theory they realized UFAI would destroy them all (or maybe the limited hardware of the day didn't allow for a hard takeoff and they managed to stop it; more dramatic!), and they had to plan a vast conspiracy to bury the results and lead other researchers down blind alleys for decades. These were many of the best minds in the field, after all, and they had the hindsight of Einstein and atomic energy.

Right, you're the next target!


And, perhaps unsurprisingly, about as little concern about the impacts of AGI as doubt about the ability of a few grad students to crack vision over the summer.

Ah, but of course. It's a technical challenge; hence there's no moral dimension at all ;-)

Interesting note: by the time the conference was taking place, Gödel had already articulated the correct approach to item 4 (the theory of the size of a calculation) in his letter to von Neumann (http://rjlipton.wordpress.com/the-gdel-letter/).

An entire field, statistics, dealing with uncertainty and learning from data, had already been in existence for two hundred years. This field already knew that logic was hopeless for addressing the complexities of the real world. Physicists had already invented primitive graphical models by then (http://en.wikipedia.org/wiki/Ising_model), a working theory of "neural nets" already existed in the guise of the theory of non-linear regression (http://www.tinbergen.nl/discussionpapers/02119.pdf), etc.

The lesson seems to me to be this: big gains lurk in having the humility to engage in comprehensive scholarship and in integrating existing advances - being widely read, and speaking and translating many specialized languages.

Interesting data points, thanks!

What did people really have in mind as a "significant advance", though?

Well, reading the proposal, it seems they were really hoping for significant advances - proper theories of abstractions, theories of natural language use, resiliency to randomness and error...

And remember that they had had great success in teaching computers to perform what had previously been seen as highly skilled tasks - mental arithmetic and calculating the values of advanced functions.

low paid labourers

The jobs were relatively high status (especially for women).

They were at the beginning of the field and didn't know how to make the computer do much of anything. They were focusing on abstraction and language, and were optimistic about accomplishing things there; they would probably have seen 'acting like a cat' as a strictly easier problem that they hadn't had the time or interest to work on.

It should be more troubling if it looks like they were grappling with the right issues.

With regard to Moravec's paradox: if you can't see how your AI idea would replicate the behaviour of a cat (provided you have a cat), your notion of 'general intelligence' is probably thoroughly confused. Likewise if your AI needs to do something very superhuman to survive as a cat in the wild. The next step could be inventing a stone tied to a stick, from scratch. If you think of AI as doing advanced superhuman technology, your standard of understanding is too low.

if you can't see how your AI idea would replicate the behaviour of a cat (provided you have a cat), your notion of 'general intelligence' is probably thoroughly confused

If you didn't know Moravec's paradox, and ranked the difficulty of cognitive tasks by their perceived human difficulty, then you'd conclude that any AI that could play chess could trivially behave as a cat, once you gave it the required body.

That's wrong, but there was no evidence that it was wrong back in 1955.

Moravec's “paradox” has always been obvious to me, even before I knew it had a name. Now, I did get 25 on the AQ test, and I don't think that Moravec's paradox would also be obvious to a more neurotypical person (otherwise it wouldn't be called a paradox in the first place), but we're talking about an AI conference, so I would've expected that at least some participants would have sensed that.

How do you separate this from hindsight bias?

even before I knew it had a name.

Now I might be misremembering things, but...

The world is a different place now. Unless your time frame for "before I knew it had a name" is ~1970?

The world is a different place now.

In terms of how complex natural languages etc. are in an abstract sense? I'd expect that to have been more or less the same for the past few tens of millennia... And the argument that catching a baseball (or something like that) is easy for humans, but explicitly writing down and solving the differential equations that govern its motion would be much harder, is something that IIRC dates back to the mid-20th century.
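
For concreteness, here's a minimal sketch of what those equations look like; this is the standard textbook model (projectile motion with quadratic air drag), not anything specific from the thread:

$$
m\,\ddot{\mathbf{x}} = -m g\,\hat{\mathbf{z}} - \tfrac{1}{2}\,\rho\, C_d\, A\,\lVert\dot{\mathbf{x}}\rVert\,\dot{\mathbf{x}}
$$

where $\rho$ is the air density, $C_d$ the ball's drag coefficient, and $A$ its cross-sectional area. With the drag term included there is no closed-form solution, so the trajectory has to be integrated numerically - while a fielder gets the catch right in real time without anything resembling explicit integration.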

EDIT: And BTW...

If people do not believe that mathematics is simple, it is only because they do not realize how complicated life is.

-- John von Neumann in 1947. (Actually I was looking for a different quote, but this one will do. EDIT 2: That was “You insist that there is something that a machine can't do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that.”)

The world is a different place now.

In terms of how complex natural languages etc. are in an abstract sense?

I suspect he means that the knowledge base around Moravec's paradox has seeped into many sciences and stories and our implicit understanding of the world.

In a sense, all we had to know, to suspect that something non-intuitive was going on here, was that after 50 years of trying there were no marching robots, but there were decent computer chess programs.