OpenAI recently announced progress in NLP: a large transformer-based language model that tackles a variety of tasks and sets new performance records on many of them. The model also generates synthetic short stories, which are surprisingly good.
How surprising are these results, given past models of how difficult language learning was and how far AI had progressed? Should we be significantly updating our estimates of AI timelines?
This result doesn't move much of my probability mass to the very near term (i.e. 1 year or less), because neither this nor AlphaStar is really doing consequentialist reasoning; they achieve surprisingly strong performance with simpler tricks (exploiting the largely Markovian nature of human writing in the one case, and a good position evaluation function in the other) plus a whole lot of compute.
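To make the "Markovian" point concrete, here is a minimal sketch of a bigram Markov chain text generator. This is nothing like GPT-2's transformer (the corpus and function names here are invented purely for illustration), but it shows how far purely local next-word statistics can go toward producing plausible-looking text:

```python
# Illustrative sketch only: a bigram Markov chain, showing how much of text
# generation can be driven by local statistics rather than reasoning.
import random
from collections import defaultdict

def build_bigram_model(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, seed, length=20):
    """Sample a continuation by repeatedly picking a random observed successor."""
    out = [seed]
    for _ in range(length - 1):
        successors = model.get(out[-1])
        if not successors:  # dead end: no observed successor for this word
            break
        out.append(random.choice(successors))
    return " ".join(out)

# Hypothetical toy corpus; a real model would be trained on vastly more text.
corpus = "the cat sat on the mat and the dog sat on the rug"
model = build_bigram_model(corpus)
print(generate(model, "the"))
```

A model like this captures only the statistics of adjacent words; modern language models condition on far longer contexts, but the underlying task is still next-token prediction, not deduction.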
However, it does shift my probability mass earlier in time, in the sense that one new weird trick that enables deductive or consequentialist reasoning, plus a lot of compute, might get us there very quickly.