The recent publication of Gato spurred a lot of discussion on whether we may be witnessing the first example of AGI. Regardless of this debate, Gato builds on a recent development in reinforcement learning: applying supervised learning to reinforcement learning trajectories, exploiting the ability of transformer architectures to handle sequential data.
Reading the comments, it seems this point caused some confusion for readers not familiar with these techniques. Some time ago I wrote an introductory article on how transformers can be used in reinforcement learning, which may help clarify some of these doubts: https://lorenzopieri.com/rl_transformers/
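For readers who want a concrete picture before (or instead of) reading the article: below is a minimal, hypothetical PyTorch sketch of the idea, in the spirit of Decision Transformer rather than Gato's actual model. All class names and hyperparameters here are made up for illustration; the point is just that each trajectory becomes a token sequence (return-to-go, state, action per timestep) and the action head is trained with plain supervised regression on logged data.

```python
# Illustrative sketch only -- NOT Gato's code. Decision-Transformer-style
# sequence modeling: RL trajectories are tokenized and a causal transformer
# is trained to predict actions via ordinary supervised learning.
import torch
import torch.nn as nn

class TrajectoryTransformer(nn.Module):  # hypothetical name
    def __init__(self, state_dim, act_dim, d_model=128, n_layers=2,
                 n_heads=4, max_len=64):
        super().__init__()
        # Each timestep contributes three tokens: return-to-go, state, action.
        self.embed_rtg = nn.Linear(1, d_model)
        self.embed_state = nn.Linear(state_dim, d_model)
        self.embed_action = nn.Linear(act_dim, d_model)
        self.pos = nn.Embedding(3 * max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.predict_action = nn.Linear(d_model, act_dim)

    def forward(self, rtg, states, actions):
        # rtg: (B, T, 1), states: (B, T, state_dim), actions: (B, T, act_dim)
        B, T, _ = states.shape
        # Interleave (rtg_t, s_t, a_t) into one sequence of 3*T tokens.
        tokens = torch.stack(
            [self.embed_rtg(rtg), self.embed_state(states),
             self.embed_action(actions)], dim=2,
        ).reshape(B, 3 * T, -1)
        tokens = tokens + self.pos(torch.arange(3 * T))
        # Causal mask: each token may only attend to earlier tokens.
        mask = torch.triu(torch.full((3 * T, 3 * T), float("-inf")), diagonal=1)
        h = self.encoder(tokens, mask=mask)
        # Predict a_t from the hidden state at the s_t token (indices 1, 4, 7, ...),
        # so the prediction sees rtg_t and s_t but not a_t itself.
        return self.predict_action(h[:, 1::3])

# Supervised training on a batch of logged trajectories: simple regression
# of predicted actions onto the actions actually taken.
model = TrajectoryTransformer(state_dim=4, act_dim=2)
rtg = torch.randn(8, 10, 1)
states = torch.randn(8, 10, 4)
actions = torch.randn(8, 10, 2)
loss = nn.functional.mse_loss(model(rtg, states, actions), actions)
loss.backward()
```

The key design choice, and the source of much of the confusion, is that there is no TD learning or policy gradient anywhere: conditioning on the return-to-go token is what lets the model produce return-appropriate actions at inference time, while training is just next-token-style supervised regression.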
Wonderful – I'll keep that in mind when I get around to reviewing/skimming that outline. Thanks for sharing it.
I have an idiosyncratic set of reasons for the particular kind of 'yak shaving' I have in mind, but your advice, i.e. to NOT do any yak shaving, is noted and appreciated.