More evidence for the point "generative models can contain agents", or specifically "generative models trained to imitate agents can learn to behave agentically". However, it is not more evidence for the claim "generative models trained to be generators / generative models trained to be useful tools will spontaneously learn an internal agent". Does that seem right?
This post argues there is no inner mesa-optimizer here:
This model is a proof of concept of a powerful implicit mesa-optimizer, which is evidence towards "current architectures could be easily inner misaligned".
Notably, the model was trained across multiple episodes so that it could pick up on the RL algorithm's improvement.
Though the usual inner-misalignment story is that the model tries to gain more reward in future episodes by forgoing reward in earlier ones, and I don't think this is evidence for that.
The authors train transformers to imitate the trajectories of reinforcement learning (RL) algorithms. They find that the transformers learn to do in-context RL (that is, the transformers implement an RL algorithm)---the authors check this by having the transformers solve new RL tasks. Indeed, the transformers can sometimes do better than the RL algorithms they were trained to imitate.
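The key ingredient is that the imitation data spans whole *learning histories*, not single episodes, so next-action prediction forces the model to imitate the improvement itself. A minimal sketch of that data pipeline, using a toy epsilon-greedy bandit learner as a stand-in for the paper's source RL algorithms (the function names and the bandit setup here are my own illustration, not the paper's actual environments or training code):

```python
import random

def generate_learning_history(n_arms=5, n_episodes=40, steps_per_episode=10, seed=0):
    """Run a simple epsilon-greedy bandit learner and record its entire
    learning history as one flat (episode, action, reward) sequence.
    Training on such cross-episode histories is what lets the imitator
    pick up the improvement behaviour, not just a fixed policy."""
    rng = random.Random(seed)
    arm_means = [rng.random() for _ in range(n_arms)]
    counts = [0] * n_arms
    values = [0.0] * n_arms
    history = []
    for ep in range(n_episodes):
        eps = max(0.05, 1.0 - ep / n_episodes)  # decaying exploration
        for _ in range(steps_per_episode):
            if rng.random() < eps:
                a = rng.randrange(n_arms)       # explore
            else:
                a = max(range(n_arms), key=lambda i: values[i])  # exploit
            r = 1.0 if rng.random() < arm_means[a] else 0.0
            counts[a] += 1
            values[a] += (r - values[a]) / counts[a]  # incremental mean
            history.append((ep, a, r))
    return history, arm_means

def to_training_examples(history):
    """Turn a history into next-action-prediction examples: given the
    history so far, predict the learner's next action (the behavioural-
    cloning objective a sequence model would be trained on)."""
    return [(history[:t], history[t][1]) for t in range(1, len(history))]
```

Because the context at step t contains earlier, worse episodes, a sequence model fit to these examples must condition its action choice on accumulated reward information, which is exactly the in-context-RL behaviour the paper reports.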
Seems like more evidence for the "generative models can contain agents" point.
Abstract: