Demis Hassabis gives a great presentation on the state of DeepMind's work as of April 20, 2016. Skip to 23:12 for his statement of the goal of creating a rat-level AI -- "an AI that can do everything a rat can do," in his words. From his tone, it sounds like he considers this a short-term rather than a long-term goal.

I don't think Hassabis is prone to making unrealistic plans or overly bold predictions. I strongly encourage you to scan through DeepMind's publication list to get a sense of how quickly they're making progress. (In fact, I encourage you to bookmark that page, because they seem to add a new paper about twice a month.) The outfit appears to be systematically knocking down the "Holy Grail" milestones on the way to general AI, and this is just DeepMind. The papers they've put out in roughly the last year alone cover successful one-shot learning, continuous control, actor-critic architectures, novel memory architectures, policy learning, and bootstrapped gradient learning, and those are just the most stand-out achievements. There's even a paper on that list co-authored by Stuart Armstrong concerning Friendliness concepts.

If we really do get a genuinely rat-level AI within the next couple of years, I think that would justify radically moving expectations for AI development timetables forward. Speaking very naively, if we can go from "sub-nematode" to "mammal that can solve puzzles" in that timeframe, I would take it as strong evidence that "general" intelligence does not require some mysterious ingredient we haven't discovered yet.

Note that DeepMind's two big successes (Atari and Go) come from scenarios that are perfectly simulable in a computer. That means they can generate an arbitrarily large number of data points to train their massive neural networks. Real world ML problems almost all have strict limitations on the amount of training data that is available.

That is true. However, since those papers, they've published results demonstrating learning from only a handful of examples in certain contexts, using specialized memory networks that seem more analogous to human memory.
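
The core trick behind such memory-augmented models can be illustrated with a much simpler analogue (this is a toy sketch, not DeepMind's architecture): an external memory that stores one embedding per example and classifies new inputs by similarity to what it has already seen, so a single stored example per class is enough.

```python
import numpy as np

class ExternalMemory:
    """Toy analogue of a memory-augmented classifier: store one
    (embedding, label) pair per example, answer queries by cosine
    similarity to the stored keys."""
    def __init__(self):
        self.keys = []    # stored (normalized) embeddings
        self.labels = []  # stored class labels

    def write(self, embedding, label):
        self.keys.append(embedding / np.linalg.norm(embedding))
        self.labels.append(label)

    def read(self, embedding):
        query = embedding / np.linalg.norm(embedding)
        sims = [key @ query for key in self.keys]
        return self.labels[int(np.argmax(sims))]

rng = np.random.default_rng(0)
mem = ExternalMemory()

# "One-shot": exactly one stored example per class.
proto_a = rng.normal(size=8)
proto_b = rng.normal(size=8)
mem.write(proto_a, "A")
mem.write(proto_b, "B")

# A noisy variant of class A is recognized from that single example.
print(mem.read(proto_a + 0.1 * rng.normal(size=8)))
```

A real memory-augmented network would learn the embedding function end-to-end; here the "embedding" is just the raw vector, which is enough to show why one example per class can suffice.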

I'm not sure this is true. The internet contains billions of hours of video, trillions of images, and libraries' worth of text. If they can use unsupervised, semi-supervised, or weakly supervised learning, they can take advantage of nearly limitless data. And neural networks are well suited to this: features learned on one task can be transferred to another.
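
The unsupervised-then-transfer pattern can be sketched with a deliberately simple stand-in for feature learning (PCA rather than a neural network; the data and setup below are invented for illustration): learn a feature direction from plentiful unlabeled data, then classify with only one labeled example per class.

```python
import numpy as np

rng = np.random.default_rng(1)

# Plenty of unlabeled data from two clusters -- the "internet-scale" pool.
unlabeled = np.vstack([
    rng.normal(loc=[5.0, 0, 0, 0], scale=1.0, size=(500, 4)),
    rng.normal(loc=[-5.0, 0, 0, 0], scale=1.0, size=(500, 4)),
])

# Unsupervised step: learn a 1-D feature (the top principal component),
# using no labels at all.
centered = unlabeled - unlabeled.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
feature = vt[0]  # direction of greatest variance

def embed(x):
    return x @ feature

# Supervised step: just one labeled example per class.
x_a = np.array([5.0, 0, 0, 0])
x_b = np.array([-5.0, 0, 0, 0])

def classify(x):
    """Assign the label of the nearer labeled example in feature space."""
    if abs(embed(x) - embed(x_a)) < abs(embed(x) - embed(x_b)):
        return "A"
    return "B"

print(classify(np.array([4.0, 1.0, -1.0, 0.5])))
```

The unlabeled pool does the heavy lifting of finding the discriminative direction; the labels are only needed to name the two ends of it.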

DeepMind has also put out a paper on approximate Bayesian learning of neural network parameters. That would make their models much better at learning from limited data instead of overfitting.
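
The reason a Bayesian treatment helps with small datasets is easiest to see in a model where the posterior over parameters is available in closed form. The sketch below uses Bayesian linear regression as a simplified stand-in for Bayesian neural network weights (the hyperparameters and data are invented for illustration): the prior shrinks the estimate toward zero instead of fitting noise, and the posterior covariance reports how little the data pins the parameters down.

```python
import numpy as np

def bayesian_linear_posterior(X, y, alpha=1.0, noise=0.5):
    """Closed-form posterior over weights for Bayesian linear regression:
    prior w ~ N(0, alpha^-1 I), likelihood y ~ N(Xw, noise^2 I).
    Returns (posterior mean, posterior covariance)."""
    precision = alpha * np.eye(X.shape[1]) + (X.T @ X) / noise**2
    cov = np.linalg.inv(precision)
    mean = cov @ X.T @ y / noise**2
    return mean, cov

rng = np.random.default_rng(2)
true_w = np.array([2.0, -1.0])

# Only three noisy observations -- a classic overfitting regime.
X = rng.normal(size=(3, 2))
y = X @ true_w + 0.5 * rng.normal(size=3)

mean, cov = bayesian_linear_posterior(X, y)
print("posterior mean:", mean)
print("posterior variance per weight:", np.diag(cov))
```

Approximate Bayesian methods for neural networks aim for the same two effects -- regularization from the prior and explicit parameter uncertainty -- in models where this posterior has no closed form.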

Anyway, deep nets are not really going to displace traditional ML methods so much as open up a whole new set of problems that traditional methods can't handle, like processing audio and video data, or reinforcement learning.
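
For the reinforcement-learning case, the core loop that deep RL scales up can be shown in tabular form (a toy sketch: a five-state corridor with a reward at the far end, all parameters invented for illustration). Deep methods replace the lookup table below with a neural network, but the update rule is the same.

```python
import random

# Minimal tabular Q-learning on a 5-state corridor: start at state 0,
# reward 1.0 for reaching state 4, actions move left (-1) or right (+1).
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration
rng = random.Random(0)

for _ in range(500):
    s = 0
    while s != GOAL:
        # Epsilon-greedy action selection.
        if rng.random() < epsilon:
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Q-learning update: bootstrap from the best action at s2.
        best_next = max(q[(s2, act)] for act in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s2

# The greedy policy should point right everywhere short of the goal.
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(GOAL)}
print(policy)
```

Nothing in the update touches pixels or audio; the deep-learning contribution is making the value function work when the state is an image rather than an integer.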

On the other hand, it's straightforward to generate AI-complete problems for which you can also generate unlimited training data.
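
The principle -- when the problem generator is code, labeled training data comes free and is unlimited -- can be sketched with a deliberately simple domain (arithmetic word problems here are a stand-in; they are not themselves AI-complete):

```python
import random

def make_example(rng, max_operand=99):
    """Procedurally generate one (question, answer) training pair.
    Because the generator is code, the supply of examples is unlimited
    and every example arrives with its ground-truth label."""
    a = rng.randint(0, max_operand)
    b = rng.randint(0, max_operand)
    op = rng.choice(["+", "-", "*"])
    question = f"{a} {op} {b}"
    answer = str(eval(question))  # ground truth comes free with the generator
    return question, answer

rng = random.Random(42)
for question, answer in (make_example(rng) for _ in range(5)):
    print(f"{question} = {answer}")
```

The same pattern applies to richer generated domains -- procedurally built puzzles, games, or simulated environments -- which is exactly why simulable problems sidestep the training-data limitation raised above.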

I'm deeply skeptical, but let's see where this goes.