If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
This is a selection of papers put out by DeepMind in just the first half of this year.
One-shot learning with Memory Augmented Neural Networks:
By augmenting a network with an external memory module, it learns to recognize arbitrary new symbols after just a handful of examples. This by itself is a landmark. "One-shot learning" is one of the holy grails on the path to AGI.
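The core mechanism can be caricatured as an external key-value memory: store an embedding for each labelled example seen so far, and classify a new input by similarity lookup. A toy sketch follows; the embedding function and class names here are stand-ins for illustration, not the paper's learned controller.

```python
import numpy as np

class ToyMemoryClassifier:
    """Caricature of memory-augmented one-shot learning: store
    (embedding, label) pairs, classify by the most similar stored key.
    The real MANN learns the embedding and read/write weights end to end."""

    def __init__(self):
        self.keys = []    # stored embeddings
        self.labels = []  # stored class labels

    def embed(self, x):
        # Stand-in embedding: just L2-normalise the raw input.
        x = np.asarray(x, dtype=float)
        return x / (np.linalg.norm(x) + 1e-8)

    def write(self, x, label):
        self.keys.append(self.embed(x))
        self.labels.append(label)

    def read(self, x):
        # Cosine-similarity lookup over memory, echoing the paper's read head.
        q = self.embed(x)
        sims = [float(q @ k) for k in self.keys]
        return self.labels[int(np.argmax(sims))]

clf = ToyMemoryClassifier()
clf.write([1.0, 0.0, 0.0], "circle")   # one example per class
clf.write([0.0, 1.0, 0.0], "square")
print(clf.read([0.9, 0.1, 0.0]))       # -> circle
```

One stored example per class suffices for this lookup, which is the "one-shot" property the learned version achieves on real symbol images.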
Continuous Control with Deep Reinforcement Learning:
Extension of deep Q-learning to continuous action spaces, via a deterministic actor-critic (DDPG).
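For reference, here is the tabular Q-learning update that this line of work builds on, run on a hypothetical two-state toy problem. The key term is the max over actions in the bootstrap target, which is infeasible when actions are continuous; DDPG's move is to train an actor network to supply that argmax instead.

```python
import numpy as np

# Tabular Q-learning on a 2-state, 2-action toy MDP (made up for
# illustration). DDPG's contribution is the continuous-action case,
# where max_a Q(s, a) is replaced by a learned actor mu(s).
n_states, n_actions = 2, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.5, 0.9

def step(s, a):
    # Toy dynamics: action 1 in state 0 pays off and moves to state 1.
    if s == 0 and a == 1:
        return 1, 1.0
    return 0, 0.0

rng = np.random.default_rng(0)
s = 0
for _ in range(500):
    a = int(rng.integers(n_actions))            # random exploration
    s2, r = step(s, a)
    # Core update: bootstrap from the greedy value of the next state.
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
    s = s2

print(Q[0].argmax())  # -> 1 (the rewarding action)
```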
Unifying Count-Based Exploration and Intrinsic Motivation:
Q-learning modified to include an incentive for exploration makes the first real progress on Montezuma's Revenge, an Atari game previously intractable for deep RL.
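The incentive is easy to state: augment the environment reward with a bonus that shrinks as a state's visit count grows. A minimal sketch, assuming a bonus of the form beta / sqrt(N(s)); the paper's actual contribution is deriving a *pseudo*-count from a density model so this works over raw pixels, but plain visit counts show the shape of it.

```python
from collections import defaultdict
import math

class ExplorationBonus:
    """Count-based exploration: novel states earn extra reward.
    The paper generalises N(s) to a pseudo-count from a density model;
    raw visit counts are used here for illustration only."""

    def __init__(self, beta=1.0):
        self.beta = beta
        self.counts = defaultdict(int)

    def shaped_reward(self, state, env_reward):
        self.counts[state] += 1
        bonus = self.beta / math.sqrt(self.counts[state])
        return env_reward + bonus

shaper = ExplorationBonus(beta=1.0)
print(shaper.shaped_reward("room_1", 0.0))  # first visit: bonus of 1.0
print(shaper.shaped_reward("room_1", 0.0))  # repeat visit: bonus shrinks
```

In a sparse-reward game like Montezuma's Revenge the environment reward is almost always zero, so this bonus is what drives the agent toward rooms it has not yet seen.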
Asynchronous Methods for Deep Reinforcement Learning:
Asynchronous actor-critic architectures (A3C) yield improved performance over the previous Q-learning architectures.
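The actor-critic idea in one line: a critic estimates expected return, and the actor's policy is pushed in the direction of the *advantage*, the amount by which an outcome beat that estimate. A toy sketch on a made-up two-armed bandit; A3C's asynchrony, neural networks, and multi-step returns are all stripped away here.

```python
import numpy as np

# Minimal advantage actor-critic on a 2-armed bandit (single state,
# made-up rewards). A3C runs many such learners asynchronously with
# shared networks; this keeps only the core actor/critic updates.
rng = np.random.default_rng(0)
logits = np.zeros(2)        # actor: preferences over the two arms
value = 0.0                 # critic: baseline estimate of reward
alpha_pi, alpha_v = 0.1, 0.1
true_rewards = [0.2, 0.8]   # arm 1 is better

for _ in range(2000):
    probs = np.exp(logits) / np.exp(logits).sum()
    a = int(rng.choice(2, p=probs))
    r = true_rewards[a]
    advantage = r - value              # how much better than expected
    value += alpha_v * advantage       # critic tracks expected reward
    grad = -probs
    grad[a] += 1.0                     # d log pi(a) / d logits
    logits += alpha_pi * advantage * grad  # actor follows the advantage

print(int(np.argmax(logits)))  # -> 1 (the better arm)
```

The baseline subtraction is the point of having a critic at all: it reduces the variance of the policy updates, which is much of why these methods train faster than plain policy gradient or Q-learning.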
Learning gradient descent by gradient descent:
An LSTM network learns the "learning algorithm" itself, rather than relying on gradient descent or some other hand-designed update rule, and obtains what appears to be remarkably superior performance.
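The framing is that the update rule is just a function with parameters, and those parameters can themselves be trained by descending through the unrolled optimisation. The sketch below shrinks the paper's LSTM optimiser to a single learned step size, purely to show the meta-gradient idea; the function and constants are made up.

```python
# "Learning to learn", shrunk to the simplest case: meta-learn a scalar
# step size eta for gradient descent on f(x) = x^2, by differentiating
# the loss obtained after one unrolled inner step. (The paper's
# optimiser is an LSTM emitting per-parameter updates.)

def inner_loss(x):
    return x * x

def grad_inner(x):
    return 2.0 * x

def grad_meta(eta, x0):
    # meta-loss is f(x0 - eta * f'(x0)); differentiate w.r.t. eta:
    # d/d_eta (x0 - 2*eta*x0)^2 = 2*(x0 - 2*eta*x0) * (-2*x0)
    x1 = x0 - eta * grad_inner(x0)
    return 2.0 * x1 * (-grad_inner(x0))

eta = 0.0                      # learned step size, starts useless
for _ in range(100):           # outer loop: gradient descent on eta
    eta -= 0.1 * grad_meta(eta, x0=1.0)

print(round(eta, 3))  # -> 0.5, the optimal step size for f(x) = x^2
```

Even this scalar version converges to the analytically optimal step size; the LSTM version generalises the same trick to update rules far richer than any fixed schedule.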
I'm not even going to bother linking the Go and general Atari papers. And the big Atari one was last year, anyway.
I'm getting a little bit concerned, folks.
One person on IRC asked me if this is what the Singularity might look like from the inside. I asked them: if this wasn't, how would the world look different? Neither they nor I knew.