All of Mark Schröder's Comments + Replies

Ah great point, regarding the comment you link to:

  • yes, some reward hacking is going on but at least in Claude (which I work with) this is a rare occurrence in daily practice, and usually follows repeated attempts to actually solve the problem.
  • I believe that both Deepseek R1-Zero as well as Grok thinking were RL-trained solely on math and code yet their reasoning seems to generalise somewhat to other domains as well.
  • So, while you’re absolutely right that we can’t do RL directly on the most important outcomes (research progress), I believe there will be si
... (read more)

Why specifically would you expect that RL on coding wouldn’t sufficiently advance coding abilities of LLM‘s to significantly accelerate the search for a better learning algorithm or architecture?


 

5Thane Ruthenis
Because "RL on passing precisely defined unit tests" is not "RL on building programs that do what you want", and is most definitely not "RL on doing novel useful research".

That seems to imply that:

  • If current levels are around GPT-4.5, the compute increase from GPT-4 would be either 10× or 50×, depending on whether we use a log or linear scaling assumption.
  • The completion of Stargate would then push OpenAI’s compute to around GPT-5.5 levels. However, since other compute expansions (e.g., Azure scaling) are also ongoing, they may reach this level sooner.
  • Recent discussions have suggested that better base models are a key enabler for the current RL approaches, rather than major changes in RL architecture itself. This suggests tha
... (read more)