All of Ankesh Anand's Comments + Replies

My main takeaway from Gato: if we can build specialized AI agents for 100s/1000s of tasks, it's now pretty straightforward to make a general agent that can do it all in a single model. Just tokenize data from all the tasks and feed it into a transformer.
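A minimal sketch of that recipe, assuming hypothetical per-task tokenizers have already serialized episodes (image patches, binned continuous actions, text) into one shared vocabulary; this is an illustration of the idea, not Gato's actual architecture or preprocessing:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB = 32_000                     # shared token vocabulary across tasks (assumed size)
D, LAYERS, HEADS, CTX = 512, 8, 8, 1024

class TinyDecoder(nn.Module):
    """A small GPT-style decoder standing in for the single generalist transformer."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D)
        self.pos = nn.Embedding(CTX, D)
        layer = nn.TransformerEncoderLayer(D, HEADS, 4 * D, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, LAYERS)
        self.head = nn.Linear(D, VOCAB)

    def forward(self, tokens):                     # tokens: (batch, seq) int ids
        t = torch.arange(tokens.size(1), device=tokens.device)
        x = self.embed(tokens) + self.pos(t)
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        return self.head(self.blocks(x, mask=mask))

def train(model, task_batches, steps=1000, lr=1e-4):
    """`task_batches` is a stand-in for an iterator that interleaves episodes from
    many tasks (Atari, robotics, text, ...), each already tokenized into VOCAB."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _, tokens in zip(range(steps), task_batches):
        logits = model(tokens[:, :-1])             # next-token prediction on everything
        loss = F.cross_entropy(logits.reshape(-1, VOCAB),
                               tokens[:, 1:].reshape(-1))
        opt.zero_grad(); loss.backward(); opt.step()
```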

The fact that adding new tasks doesn't diminish performance on previous tasks is highly non-trivial!

It may be that there is a lot of room in the embedding space to store them. The wild thing is that nothing (apart from a few hardware iterations) stops us from increasing the embedding space if really needed.

gwern

And vice-versa: transfer Gato to the new task, then finetune and sparsify/distill (e.g. turn the Transformer into an RNN, or train with Transformer-XL instead of using it only at runtime) when a task becomes common enough to justify the amortized expense.
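A minimal sketch of the distillation half of that idea, with assumed `teacher`/`student` interfaces (any models mapping token ids to logits); this is not an actual Gato pipeline:

```python
import torch
import torch.nn.functional as F

def distill_step(teacher, student, tokens, opt, T=2.0):
    """One knowledge-distillation step: the (frozen) finetuned generalist acts as
    the teacher, and a smaller/cheaper student learns to match its token
    distribution on the now-common task's data."""
    with torch.no_grad():
        t_logits = teacher(tokens)                       # soft targets from the teacher
    s_logits = student(tokens)
    loss = F.kl_div(F.log_softmax(s_logits / T, dim=-1), # student log-probs
                    F.softmax(t_logits / T, dim=-1),     # teacher probs
                    reduction="batchmean") * T * T
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```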

Any plans on evaluating RETRO (the retrieval-augmented transformer from DeepMind) on TruthfulQA? I'm guessing it should perform similarly to WebGPT, but it would be nice to get a concrete number.

Owain_Evans
It would be interesting to evaluate RETRO as it works differently from all the models we've evaluated. WebGPT is finetuned to use a search engine and it uses this (at inference time) to answer questions. This seems more powerful than the retrieval system for RETRO (based on a simple nearest neighbor lookup). So my speculation is that WebGPT would do better. We don't have plans to evaluate it but are open to the possibility (if the RETRO team was interested).
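For contrast, a toy sketch of what a "simple nearest neighbor lookup" retrieval system looks like, with an assumed frozen `embed` encoder returning unit vectors; this illustrates the general RETRO-style setup, not DeepMind's implementation:

```python
import numpy as np

def build_index(corpus_chunks, embed):
    """Embed every corpus chunk once with a frozen encoder."""
    return np.stack([embed(c) for c in corpus_chunks])    # (N, d)

def retrieve(query_chunk, corpus_chunks, index, embed, k=4):
    """Fetch the k nearest chunks by cosine similarity -- no learned
    'how to search' behaviour, unlike WebGPT's use of a search engine."""
    q = embed(query_chunk)                                 # (d,)
    scores = index @ q
    top = np.argsort(-scores)[:k]
    return [corpus_chunks[i] for i in top]

# The retrieved chunks are then cross-attended to (as in RETRO) or, in a
# simpler setup, just prepended to the prompt before answering the question.
```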

Great post! I think you might wanna emphasize just how crucial ReAnalyse is for data-efficiency (the default MuZero is quite sample-inefficient), and how the reanalyse ratio can be tuned easily for any data budget using a log-linear scaling law. You can also interpret the off-policy correction as running ReAnalyse twice, so my TL;DR of EfficientZero would be "MuZero ReAnalyse + SPR".
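A back-of-the-envelope sketch of how one might pick the reanalyse ratio for a given data budget; the accounting below is my own simplification (fresh fraction = environment frames / samples consumed), not the paper's exact recipe:

```python
def reanalyse_ratio(env_frames, train_updates, batch_size):
    """Rough choice of reanalyse ratio for a data budget.

    Assumption: training consumes train_updates * batch_size samples, but only
    env_frames of them can be fresh environment data; the rest must come from
    reanalysing stored trajectories, so the reanalysed fraction is
    1 - fresh / consumed.
    """
    consumed = train_updates * batch_size
    fresh = min(env_frames / consumed, 1.0)
    return 1.0 - fresh

# Example with made-up update counts: a 100k-frame Atari budget trained with
# 120k updates of batch size 16 calls for reanalysing ~95% of samples;
# larger data budgets need proportionally less reanalysis.
print(reanalyse_ratio(100_000, 120_000, 16))   # ~0.948
```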

Regarding contrastive vs. SPR: I don't think you would find a performance boost using a contrastive loss compared to SPR, on Atari at least. We did an ablation …

1a3orn
Agreed, I added an extra paragraph emphasizing ReAnalyse. And thanks a ton for pointing out that ablation; I had totally missed it.

Thanks, glad you liked it; I really like the recent RL directions from OpenAI too! It would be interesting to see the use of model-based RL for the "RL as fine-tuning" paradigm: making large pre-trained models more aligned/goal-directed efficiently by simply searching against a reward function learned from humans.

John Schulman
Would you say Learning to Summarize is an example of this? https://arxiv.org/abs/2009.01325 It's model-based RL because you're optimizing against the model of the human (i.e. the reward model). And there are some results at the end on test-time search. Or do you have something else in mind?
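As a concrete illustration of the test-time-search part, here is a best-of-n sketch with assumed `policy.sample` / `reward_model.score` interfaces (not the actual Learning to Summarize code); the reward model stands in for the "model of the human" being optimized against:

```python
def best_of_n(policy, reward_model, prompt, n=16):
    """Test-time search against a learned reward model (best-of-n sampling).

    Draw n candidate completions from the policy and return the one the
    reward model predicts humans would prefer.
    """
    candidates = [policy.sample(prompt) for _ in range(n)]
    return max(candidates, key=lambda c: reward_model.score(prompt, c))
```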

I was eyeballing Figure 2 in the PPG paper and comparing it to our results on the full distribution (Table A.3). 

  • PPO: ~0.25
  • PPG: ~0.52
  • MuZero: 0.68
  • MuZero + Reconstruction: 0.93

Ankesh Anand

The Q-Learning baseline is a model-free control for MuZero. It shares MuZero's implementation details (network architecture, replay ratio, training details, etc.) while removing the model-based components (details in Sec. A.2). Some key differences you'd find vs. a typical Q-learning implementation:

  • Larger network architectures: a 10-block ResNet compared to a few conv layers in typical implementations.
  • Higher sample reuse: When using a reanalyse ratio of 0.95, both MuZero and Q-Learning use each replay buffer sample an average of 20 times (see the sketch below). …
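A quick sanity check on the 20x figure, under the (assumed) accounting that a reanalyse ratio r leaves a fraction 1 - r of each batch as fresh data:

```python
def average_sample_reuse(reanalyse_ratio):
    """If a fraction r of training data is reanalysed, only (1 - r) of it is
    fresh, so each stored transition is trained on about 1 / (1 - r) times."""
    return 1.0 / (1.0 - reanalyse_ratio)

print(average_sample_reuse(0.95))   # ~20, matching the ~20x reuse above
```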
John Schulman
Thanks, this is very insightful. BTW, I think your paper is excellent!

They do seem to cover SPR (an earlier version of SPR was called MPR). @flodorner, if you do decide to update the plot, maybe you could update the label as well?

We do actually train/evaluate on the full distribution (see Figure 5, rightmost). MuZero+SSL versions (especially reconstruction) continue to be a lot more sample-efficient even on the full distribution, and MuZero itself seems to be quite a bit more sample-efficient than PPO/PPG.

John Schulman
I'm still not sure how to reconcile your results with the fact that the participants in the Procgen contest ended up winning with modifications of our PPO/PPG baselines, rather than Q-learning and other value-based algorithms, whereas your paper suggests that Q-learning performs much better. The contest used 8M timesteps + 200 levels. I assume that your "QL" baseline is pretty similar to widespread DQN implementations. https://arxiv.org/pdf/2103.15332.pdf https://www.aicrowd.com/challenges/neurips-2020-procgen-competition/leaderboards?challenge_leaderboard_extra_id=470&challenge_round_id=662 Are there implementation-level changes that dramatically improve performance of your QL implementation? (Currently on vacation and I read your paper briefly while traveling, but I may very well have missed something.)
John Schulman
There's no PPO/PPG curve there -- I'd be curious to see that comparison. (Though I agree that QL/MuZero will probably be more sample-efficient.)

The raw neural network does use search during training, though; it is only at evaluation time that it doesn't rely on search.
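To make the distinction concrete, a toy sketch with assumed interfaces (`mcts.run`, `net.initial_inference`), not DeepMind's code: the network is trained on search-derived targets, and only the evaluation-time agent skips the search.

```python
import numpy as np

def act_with_search(net, obs, mcts):
    """Training-time acting: MCTS, guided by the network, produces visit
    counts that serve both to pick the action and as the policy target."""
    visit_counts = mcts.run(net, obs)
    return int(np.argmax(visit_counts)), visit_counts

def act_raw(net, obs):
    """Evaluation of the 'raw network': act straight from the policy head,
    with no search at inference time."""
    policy_logits, _value = net.initial_inference(obs)
    return int(np.argmax(policy_logits))
```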