Houshalter comments on Nick Bostrom says Google is winning the AI arms race - Less Wrong

Post author: polymathwannabe 05 October 2016 06:50PM


Comment author: Houshalter 05 October 2016 08:43:00PM 2 points

That's not really surprising. Google employs by far the most AI researchers, and they have general AI as an explicit goal. DeepMind in particular has been pushing reinforcement learning and general game playing, which is a first step toward building AI agents that optimize utility functions in complex real-world environments, rather than just classifying images or text.

Which specific corporation is winning at the moment isn't that relevant. Facebook isn't far behind, with more of a focus on language learning, memory, and reasoning, which may be the critical pieces for reaching general intelligence. Microsoft just made headlines for founding a new AI division. Amazon just announced a big competition for the best conversational AIs. Almost every major tech company is trying to get in on this game.

I don't think we are that far away from AGI.

Comment author: rhaps0dy 06 October 2016 09:50:25AM 0 points

I don't think we are that far away from AGI.

At the very least 20 years. And yes, Alphabet is the closest, but in 20 years a lot can change.

Comment author: Houshalter 06 October 2016 06:06:13PM 5 points

I think it's well within the realm of possibility that it could happen a lot sooner than that. 20 years is a long time. 20 years ago the very first crude neural nets were just getting started. It's only in the past 5 years that the research has really taken off. And the rate of progress is only going to increase with so much funding and interest.

I recall notable researchers like Hinton making predictions that "X will take 5 years," only to see X accomplished within 5 months. Go is a good example: even a year ago, many experts thought it would take 10 years to beat, and few thought it would be beaten by 2016. In 2010 machine vision was so primitive that it was a joke how far AI still had to go:

[embedded image]

In 2015 the best machine vision systems exceeded human performance at object recognition by a significant margin.

Google recently announced a neural net chip that is 7 years ahead of Moore's law. Granted, that's only in terms of performance per watt, and it only runs already-trained models. But it is nevertheless an example of a sudden leap forward in capability. Before that, Google started using farms of GPUs hundreds of times larger than what university researchers have access to.

That's just hardware, though. I think the software is improving remarkably fast as well. We have tons of very smart people working on these algorithms: tweaking them, improving them bit by bit, gaining intuition about how they work, and testing crazy ideas to make them better. If evolution could develop human brains through nothing but stupid random mutations, then surely this process can work much faster. It feels like every week there is some amazing new advance, like Google's recent synthetic-gradients paper, or hypernetworks.

I think one of the biggest things holding the field back is that it's all focused on squeezing small improvements out of well-studied benchmarks like ImageNet. Machine vision is very interesting, of course, but at some point the improvements stop generalizing to other tasks. That is starting to change, though, as I mentioned in my comment above. DeepMind is focusing on playing games like StarCraft, which requires more emphasis on planning, recurrence, and reinforcement learning. There is also more focus now on natural language processing, which involves many of the features of general intelligence.

Comment author: gjm 06 October 2016 06:17:54PM -1 points

20 years ago the very first crude neural nets were just getting started

The very first artificial neural networks were in the 1940s. Perceptrons 1958. Backprop 1975. That was over 40 years ago.

In 1992 Gerry Tesauro made a neural-network-based computer program that played world-class backgammon. That was 25 years ago.

What's about 20 years old is "deep learning", which really just means neural networks of a kind that was generally too expensive to run in earlier decades and that became practical as a result of advances in hardware. (That's not quite fair: there has also been plenty of progress in the design and training of these NNs, as a result of having hardware fast enough to make experimenting with them worthwhile.)

Comment author: waveman 09 October 2016 08:40:56AM 1 point

Having followed this field for 40 years, I'd say things definitely seem to have sped up. Problems that seemed intractable, like the dog/cat problem, are now passé.

I see a confluence of three things: more powerful hardware allows more powerful algorithms to run; it makes testing those algorithms possible; and once testing is possible, it makes it much faster.

Researchers still don't have access to anywhere near the ~10^15 FLOPS that roughly corresponds to the human brain's computing capacity. Exciting times ahead.