In June 2012, the Association for Computing Machinery—a professional society of computer scientists, best known for conferring the prestigious ACM Turing Award, often called the "Nobel Prize of Computer Science"—celebrated the 100th birthday of Alan Turing.

The event was attended by luminaries including, in no particular order: Donald Knuth, Vint Cerf, Bob Kahn, Marvin Minsky, Judea Pearl, Ron Rivest, Adi Shamir, Leonard Adleman, and, of particular relevance here, Ken Thompson: co-creator (with Dennis Ritchie) of the UNIX operating system and a computer chess pioneer.

In all, some 33 Turing Award winners were scheduled to be in attendance. There were no parallel tracks or simultaneous panels going on.

Image credit: Joshua Lock


So today, while idly watching one of the panel debates on YouTube, I was struck by an amusing (and horrifying) example of a failure of foresight and predictive accuracy by these world-leading computer scientists, regarding the advancement of the state of the art in artificial intelligence.

In this case, as it pertains to the ancient board game of go.

https://www.youtube.com/watch?v=dsMKJKTOte0&t=54m

"When does a computer crack go?"
"And I will start at 100 years, and then count down by ten-year intervals."
"And [at] 90 years? I count about 4% of the audience." (...)

Perhaps my internet searching skills are weak, but as best as I can tell, the incident has not been noted anywhere other than in a few bemused YouTube comments on the video linked above.


Given ten options, ten buckets in which to place their bet, world-leading experts in computer science, as a group, managed to perform much worse than one would expect given their vast and wide-ranging expertise.

Worse, even, than one would expect of a group guessing completely at random.

Given ten options, one would expect one out of every ten to land on the right answer, if nobody knew anything and everyone made a blind guess.

Three years and three months later, three-time European champion Fan Hui was defeated. Half a year after that, world champion Lee Sedol was defeated by AlphaGo.


Interestingly, Ken Thompson, to whom the query was first directed, starts his answer by sharing an experience from a World Computer Chess Championship around 1980. Participants were asked if and when computers would beat the world champion at the game of chess. And, he explains, everyone except him had been exceedingly optimistic.

Thereby priming the audience in several ways for the informal poll which followed.

By demonstrating authority, by establishing a track record of sorts, by providing an anchor, and by warning against optimism. (Thompson had predicted that computers would beat human champions at chess by 2011.)

"Most of the predictions were like next year or five years.
You know, way, way optimistic.
If you look at Moore's law, on computers, and you look at the increase in speed. Ah, with strength, with speed, on computer chess,
never is just, you know, you can't predict never.
You just can't predict never. 
It had to happen."
Moderator: "Do you think go then, is in the targets, of computers?"
Thompson: "Ah no, I don't think go is in... Ahh... If I had to predict go, I'd predict way, way out."

And then the audience was polled, by a simple show of hands.

One member of the audience stood alone, red-faced, to laughter and ridicule from the most esteemed peers in his field.

One single member out of the whole audience got it right.


13 comments

In June 2012, Zen19 had a rating of 5 dan (on KGS), from Katja's paper.

Go algorithms had been improving about 1 stone / year for decades.

The difference between 5 dan and the best player in the world is less than 5 stones: professional dan ranks are much closer together than amateur dan ranks, whether you measure by handicap or Elo. (In March 2012 Zen also won a game with a 4-stone handicap against one of the world's best players, with 30-minute time controls, which basically lines up.)

There are 1-2 stones of error in those estimates, e.g. because of differing interpretations of the various rating systems.

If you extrapolated that trend literally, your prediction would be ~5 years for Zen19 to beat the best humans. I think the truth is more like 6 years for Zen, and 4 years for AlphaGo.
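The back-of-the-envelope extrapolation above can be sketched as follows; the numbers are the rough estimates quoted in this thread (a ~5-stone gap in 2012, improving at ~1 stone/year), not precise ratings:

```python
# Naive linear extrapolation of Go engine strength, using this thread's
# rough estimates: Zen19 was about 5 dan (KGS) in mid-2012, roughly
# 5 stones below the best humans, with engines gaining ~1 stone/year.
STONES_BEHIND_BEST = 5   # estimated gap to top humans in 2012
STONES_PER_YEAR = 1      # assumed long-run rate of improvement
START_YEAR = 2012

years_needed = STONES_BEHIND_BEST / STONES_PER_YEAR
predicted_year = START_YEAR + years_needed
print(predicted_year)  # 2017.0 -- i.e. ~5 years out; AlphaGo beat Lee Sedol in 2016
```

The point of the sketch is only that a straight-line reading of the public trend already put the answer within a decade, not ninety years out.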

I believe Zen19 is implemented by one person hacking away at the problem.

(All that said, I can easily imagine someone in 2040 making a similar comment about AGI timelines.)

If taken literally as a scale of Go skill, the stone difference interestingly implies that pros are clustered near a "ceiling" of (human) play. Given that Zen19 has in fact kept getting better, maybe that interpretation has more merit than I would have thought.

The pros certainly believe that they are clustered that way.

It would be interesting to see how many stones AlphaGo could currently give a pro and still win, but unfortunately Google doesn't seem to be interested in finding out.

gjm:

It seems, from looking at the video, that there was some misunderstanding about exactly what question was being asked -- e.g., I think that when the person asking the question said "90 years" he meant "within the next 90 years" but at least some in the audience interpreted it as something more like "about 90 years from now". (Observe the reactions when some new hands go up at the "50 years" mark.) Then they went down in 10 year intervals, as far as "10 years"; a reasonable number of people picked 10 years, and again I think some took it to mean "about 10 years" and some to mean "within 10 years". And then he asked "who thinks less than 10 years?". If I'd been in the audience, at this point I'd have been quite confused about how any given answer was being interpreted.

Although it doesn't really make much sense, I strongly suspect that at least some people in the audience (1) thought the likely time before computers started beating good humans at go was <=10 years, (2) raised their hands at the "10 years" mark, and then (3) put them down again when the questioner asked "less than 10 years", perhaps thinking that he must really mean something like "too soon to be rounded off to 10 years".

The audience was, even so, clearly too pessimistic, and I think OP is right to suggest that that's at least partly because Ken Thompson went out of his way to remind people that some predictions for this kind of thing are overoptimistic, and to give his own highly pessimistic opinion -- but I think confusion as well as pessimism is responsible for the specific outcome reported here.

(Also, it looks to me as if a lot of people in the audience never raised their hands. That might mean that they thought computers wouldn't be competitive in go even after a century of progress -- but it might also mean that for whatever reason they didn't care to make a public prediction.)

thought the likely time before computers started beating good humans at go was <=10 years,

Crazy Stone was already beating good humans at the time the event was held. It just wasn't beating professionals.

Yeah: when the person asking the question said, "90 years," and the Turing Award winners raised some hands, couldn't they be interpreted as specifying a wide confidence interval, which is what you should do when you know you don't have domain expertise with which to predict the future?

I also want to register that this kind of auction is known to bias upward for human participants. (I think it's a Dutch auction but not sure, and currently traveling, so can't properly look it up). There still seems to be an effect here, but I do think there were also a lot of biases upwards coming together.

gjm:

Do we know who that one member of the audience was?

I listened closely to the youtube snippet. Here is the transcript:

[moderator] So, tell us who you are and why.

[Alan] My name is Alan Cima. I just believe it'll be solved. I put together the things we are seeing, in terms of analysis of facial expressions, and all of those things, so it's not purely the logical problem, but it's also a human and logical problem. I think it'll be solved in less than 10 years.

[moderator] Spoken like a true poker player.

Obviously he didn't spell his name, but he had white hair and I could tell his general age. Based on that, plus a bit of googling (various spellings of "seema" plus ACM), this linkedin profile came up, which I think is probably him with 95% certainty: https://www.linkedin.com/in/alancima/

That guy was pretty spot on!

I had expected the game to remain intractable for decades.

Why? To me it seems that there was steady progress, with the Go programs getting better. In January 2012 Crazy Stone was already 5 dan. From the 2012 vantage point, I find any prediction that this would happen post-2030 very strange.

From memory, I think simple linear extrapolation would have given you a date maybe five years past AlphaGo's actual success.

(I did play a lot of go in the past and likely relate to it differently as a result; 5 dan is already stronger than any human I have played with.)

Hopefully, AGI is at least "90 years" out.

AlphaGo's victory over World Champion Lee Sedol made a (seemingly) deep impression on me at the time.

I had just NOT expected that. I had expected the game to remain intractable for decades. But the initial excitement and mild sense of doom that followed soon faded. I'm not a computer scientist, just a civilian interested for philosophical reasons.

But many people in attendance at the Alan Turing centenary celebration were world champions of computer science. And either none of them knew any better, or, if any did know or even suspect, then it seems any suspicion that humans would be, uh, defeated at go within the next decade was itself defeated by subtle snickering and mild peer pressure.