Latest AI success implies that strong AI may be near.
"There's something magical about Recurrent Neural Networks (RNNs). I still remember when I trained my first recurrent network for Image Captioning. Within a few dozen minutes of training my first baby model (with rather arbitrarily-chosen hyperparameters) started to generate very nice looking descriptions of images that were on the edge of making sense. Sometimes the ratio of how simple your model is to the quality of the results you get out of it blows past your expectations, and this was one of those times. What made this result so shocking at the time was that the common wisdom was that RNNs were supposed to be difficult to train (with more experience I've in fact reached the opposite conclusion). Fast forward about a year: I'm training RNNs all the time and I've witnessed their power and robustness many times, and yet their magical outputs still find ways of amusing me. This post is about sharing some of that magic with you.
We'll train RNNs to generate text character by character and ponder the question "how is that even possible?"
By the way, together with this post I am also releasing code on Github that allows you to train character-level language models based on multi-layer LSTMs. You give it a large chunk of text and it will learn to generate text like it one character at a time. You can also use it to reproduce my experiments below. But we're getting ahead of ourselves; What are RNNs anyway?"
https://karpathy.github.io/2015/05/21/rnn-effectiveness/
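For concreteness, here is a minimal sketch of the kind of character-level language model the quoted passage describes: give the network a chunk of text, train it to predict the next character at every position, then sample from it one character at a time. This is written in PyTorch purely for illustration (the code actually released alongside the post, char-rnn, is Torch/Lua), and all class and variable names below are my own, not Karpathy's.

# Minimal character-level LSTM language model (illustrative sketch, not the released char-rnn).
import torch
import torch.nn as nn

class CharLSTM(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, x, state=None):
        # x: (batch, seq_len) of character indices -> logits over the next character
        emb = self.embed(x)
        out, state = self.lstm(emb, state)
        return self.head(out), state

# Toy corpus standing in for "a large chunk of text".
text = "hello world, hello rnn"
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in text])

model = CharLSTM(vocab_size=len(chars))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    x = data[:-1].unsqueeze(0)   # input characters
    y = data[1:].unsqueeze(0)    # target: the next character at each position
    logits, _ = model(x)
    loss = loss_fn(logits.reshape(-1, len(chars)), y.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Sampling: feed the model's own output back in, one character at a time.
with torch.no_grad():
    idx = torch.tensor([[stoi["h"]]])
    state = None
    out_chars = ["h"]
    for _ in range(40):
        logits, state = model(idx, state)
        probs = torch.softmax(logits[:, -1], dim=-1)
        idx = torch.multinomial(probs, 1)
        out_chars.append(chars[idx.item()])
    print("".join(out_chars))

On a real corpus you would batch the text into chunks and train far longer, but the point stands: the whole setup fits in a few dozen lines.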
If you're being generous, you might take the apparent wide applicability of simple techniques plus moderate-to-massive computing power as a sign that AGI might not be as hard as we think, given that this is the exact opposite of old-style, hand-engineered approaches. It also matches better with how brains seem to work.
But this particular result is in no way a step towards strong AI. It's one person playing around with well-known techniques that are already being used far more effectively elsewhere, e.g. in Google's image labelling. This article should only push your posteriors around if you were unaware of that previous work.