OpenAI recently announced progress in NLP: a large transformer-based language model that tackles a variety of tasks and breaks performance records on many of them. It also generates synthetic short stories, which are surprisingly good.
How surprising are these results, given past models of how difficult language learning was and how far AI had progressed? Should we be significantly updating our estimates of AI timelines?
Something you learn pretty quickly in academia: don't trust the demos. Systems never work as well when you get to select the inputs freely (and if one seems to, expect thorough proof before believing it). So I wouldn't read too much into this yet; we don't know how good it actually is.
https://transformer.huggingface.co/ is a nice demonstration of GPT-2 that allows you to select the inputs freely.
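If you'd rather pick the inputs freely on your own machine instead of a web demo, here's a minimal sketch of sampling from the publicly released (small) GPT-2 checkpoint with the Hugging Face `transformers` library; the prompt and sampling parameters are my own illustrative choices, not anything from OpenAI's setup:

```python
# Minimal sketch: sample continuations of your own prompt from the released GPT-2
# checkpoint, using Hugging Face `transformers` (assumed installed: pip install transformers).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # the small public GPT-2 checkpoint

prompt = "Once upon a time, in a small village by the sea,"  # illustrative prompt
outputs = generator(
    prompt,
    max_length=100,          # total length in tokens, including the prompt
    do_sample=True,          # sample rather than greedy-decode
    top_k=50,                # assumption: a typical top-k sampling setting
    num_return_sequences=3,  # draw several continuations to judge quality and variance yourself
)

for i, out in enumerate(outputs):
    print(f"--- sample {i} ---")
    print(out["generated_text"])
```

Running a handful of prompts of your own choosing like this is a decent sanity check on how much of the impressiveness survives outside curated examples.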