DeepMind's Go AI, called AlphaGo, has beaten the European champion with a score of 5-0. A match against the top-ranked human, Lee Se-dol, is scheduled for March.
Games are a great testing ground for developing smarter, more flexible algorithms that can tackle problems in ways similar to humans. Creating programs that play games better than the best humans has a long history.
[...]
But one game has thwarted A.I. research thus far: the ancient game of Go.
I'm not buying this.
There are tons of cases where people look at the current trend and predict it will continue unabated into the future. Occasionally they turn out to be right; mostly they turn out to be wrong. In retrospect it's easy to pick "winners", but do you have any reason to believe it was more than a random stab in the dark that got lucky?
The point of that comment wasn't to praise trend extrapolation. It was to show an example where experts are sometimes overly pessimistic and miss the big picture.
When people say that current AI sucks, that progress is really hard, and that they can't imagine how it will scale to human-level intelligence, I think it's a similar thing. They are overly focused on current methods and their shortcomings and difficulties. They aren't looking at the general trend: AI is rapidly making a lot of progress. Who knows what could be achieved in decades.
I'm not talking about specific extrapolations like Moore's law, or even ImageNet benchmarks - just the general sense of progress every year.