Rohin Shah

Research Scientist at Google DeepMind. Creator of the Alignment Newsletter. http://rohinshah.com/

Sequences

Value Learning
Alignment Newsletter

Comments

Rohin Shah

Great research, thanks for doing it!

My sense is that there are a lot of caveats on this result:

  • The reward hack is easy to execute (it likely requires no reasoning, or only very little).
  • There is a single task; the model can learn heuristics very specific to that task rather than anything more general.
  • The reward function is deliberately designed to cause obfuscation, and is about as close to process-based as you can get while not looking directly at the thoughts. (The reward literally strips out the actual command.)
  • The task is sabotaged, so that reward hacking is required 100% of the time to get rewards.
  • The results don't seem that scary? For the CoT judge penalty to go down from ~0.32 to ~0.13, the pass rate goes down 11%. It's not quite clear how to interpret this given how artificial the setting is, but possible takeaways are (a) obfuscatory rewards are not competitive, or (b) safety rewards do actually succeed in reducing the bad behavior.

How hard did you have to try to get this result? Was this just the first thing you tried, where you made everything as easy as possible to get some result? Or did you try a bunch of things, and have to relax them to get obfuscation to arise? If it's the latter, I feel like this should be overall good news for CoT monitorability.

We note that the mean pass rate of the penalty runs was 11% lower than the no penalty runs.

What are the absolute pass rates in the two cases?

(I'm also unsure if you mean 11% or 11 percentage points.)
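
For concreteness, here is the difference between the two readings with a made-up baseline (the absolute pass rate is hypothetical, hence the question above):

```python
# Hypothetical numbers purely to illustrate the two readings of "11% lower".
no_penalty_pass_rate = 0.60  # made-up baseline pass rate for the no-penalty runs

# Reading 1: a relative decrease of 11%.
relative_reading = no_penalty_pass_rate * (1 - 0.11)    # 0.534 -> 53.4%

# Reading 2: a decrease of 11 percentage points.
percentage_point_reading = no_penalty_pass_rate - 0.11  # 0.49 -> 49.0%

print(f"11% lower (relative): {relative_reading:.1%}")
print(f"11 percentage points lower: {percentage_point_reading:.1%}")
```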

Rohin Shah

If we use Myopic Optimization with Non-myopic Approval (MONA), then the AI will not want to engage in sneaky power-seeking behaviors, because such behaviors will not help with the short-term tasks that the AI-in-a-lab wants to do

I mainly rule out MONA for capabilities reasons; see Thoughts on “Process-Based Supervision” §5.3.

I'd make this argument even for regular outcomes-based RL. Presumably, whenever you are providing rewards (whether in brain-like AGI or RLHF for LLMs), they are based on actions that the AI took over some bounded time in the past; let's call that time bound T. Then presumably sneaky power-seeking behaviors should be desired only inasmuch as they pay out within time T. Currently T is pretty small, so I don't expect them to pay out. Maybe you're imagining that T will increase a bunch?
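
To make the role of T concrete, here is a toy sketch (the structure and all numbers are made up, purely for illustration):

```python
# Toy illustration: a sneaky power-seeking action has a small immediate cost
# and a large payoff that only arrives some number of steps later.
IMMEDIATE_COST = -1.0
DELAYED_PAYOFF = 10.0
PAYOFF_DELAY = 50  # steps between the sneaky action and its payoff

def credited_return(time_bound_T: int) -> float:
    """Reward credited to the sneaky action when credit assignment only
    covers the T steps following the action."""
    total = IMMEDIATE_COST
    if PAYOFF_DELAY <= time_bound_T:
        total += DELAYED_PAYOFF
    return total

print(credited_return(time_bound_T=10))   # -1.0: sneakiness is selected against
print(credited_return(time_bound_T=100))  # 9.0: sneakiness is reinforced
```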

Secondarily, I have a concern that, even if a setup seems like it should lead to an AI with exclusively short-term desires, it may in fact accidentally lead to an AI that also has long-term desires. See §5.2.1 of that same post for specifics.

Sure, I agree with this (and this is why I want to do MONA even though I expect outcomes-based RL will already be quite time-limited), but if this is your main argument I feel like your conclusion should be way more measured (more like "this might happen" rather than "this is doomed").

3.5 “The reward function will get smarter in parallel with the agent being trained”

This one comes from @paulfchristiano (2022): “I’m most optimistic about approaches where RL is only performed on a reward function that gets smarter in parallel with the agent being trained.”

I, umm, don’t really know what he’s getting at here. Maybe something like: use AI assistants to better monitor and catch sneaky behavior? If so, (1) I don’t expect the AI assistants to be perfect, so my story still goes through, and (2) there’s a chicken-and-egg problem that will lead to the AI assistants being misaligned and scheming too.

Re: (1), the AI assistants don't need to be perfect. They need to be good enough that the agent being trained cannot systematically exploit them, which is a much lower bar than "perfect" (while still being quite a high bar). In general I feel like you're implicitly treating an adversarial AI system as magically having perfect capabilities while any AI systems that are supposed to be helpful are of course flawed and have weaknesses. Pick an assumption and stick with it!

Re: (2), you start with a non-scheming human, and bootstrap up from there, at each iteration using the previous aligned assistant to help oversee the next AI. See e.g. IDA.
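
As a very rough sketch of that bootstrapping loop (my own simplification with placeholder functions, not a faithful description of IDA):

```python
# Rough sketch with placeholder functions (my own simplification, not a spec
# of IDA): each round's overseer is the human plus all previously trained
# assistants, and it oversees the training of the next, more capable model.

def amplify(human, assistants):
    """Placeholder: the human, assisted by earlier models, acting as a
    smarter overseer than the human alone."""
    return {"human": human, "assistants": list(assistants)}

def train_under_oversight(overseer, capability_level):
    """Placeholder: train a new model at the given capability level, using
    `overseer` to catch and penalize sneaky behavior during training."""
    return f"assistant-{capability_level} (overseen by {len(overseer['assistants'])} prior models)"

assistants = []
for capability_level in range(3):
    overseer = amplify("non-scheming human", assistants)
    new_model = train_under_oversight(overseer, capability_level)
    assistants.append(new_model)

print(assistants)
```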

There are lots of ways this could go wrong, I'm not claiming it will clearly work, but I don't think either of your objections matters much.

Huh, not sure why my Ctrl+F didn't find that.

In context, it's explaining a difference between non-reasoning and reasoning models, and I do endorse the argument for that difference. I do wish the phrasing were slightly different -- even for non-reasoning models it seems plausible that you could trust their CoT (to be monitorable); it's more that you should be somewhat less optimistic about it.

(Though note that at this level of nuance I'm sure there would be some disagreement amongst the authors on the exact claims here.)

However, CoTs resulting from prompting a non-reasoning language model are subject to the same selection pressures to look helpful and harmless as any other model output, limiting their trustworthiness.

Where is this quote from? I looked for it in our paper and didn't see it. I think I disagree with it directionally, though since it isn't a crisp claim, the context matters.

(Also, this whole discussion feels quite non-central to me. I care much more about the argument for necessity outlined in the paper. Even if CoTs were directly optimized to be harmless I would still feel like it would be worth studying CoT monitorability, though I would be less optimistic about it.)

Rohin Shah

A couple of years ago, the consensus in the safety community was that process-based RL should be prioritized over outcome-based RL [...]

I don't think using outcome-based RL was ever a hard red line, but it was definitely a line of some kind.

I think many people at OpenAI were pretty explicitly keen on avoiding (what you are calling) process-based RL in favor of (what you are calling) outcome-based RL for safety reasons, specifically to avoid putting optimization pressure on the chain of thought. E.g. I argued with @Daniel Kokotajlo about this, I forget whether before or after he left OpenAI.

There were maybe like 5-10 people at GDM who were keen on process-based RL for safety reasons, out of thousands of employees.

I don't know what was happening at Anthropic, though I'd be surprised to learn that this was central to their thinking.

Overall I feel like it's not correct to say that there was a line of some kind, except under really trivial / vacuous interpretations of that. At best it might apply to Anthropic (since it was in the Core Views post).


Separately, I am still personally keen on process-based RL and don't think it's irrelevant -- and indeed we recently published MONA which imo is the most direct experimental paper on the safety idea behind process-based RL.

In general there is a spectrum between process- and outcome-based RL, and I don't think I would have ever said that we shouldn't do outcome-based RL on short-horizon tasks with verifiable rewards; I care much more about the distinction between the two for long-horizon fuzzy tasks.

I do agree that there are some signs that people will continue with outcome-based RL anyway, all the way into long-horizon fuzzy tasks. I don't think this is a settled question -- reasoning models have only been around for a year, and things can change quite quickly.


None of this is to disagree with your takeaways; I roughly agree with all of them (maybe I'd have some quibbles about #2).

Since I don't quite know what you mean by "faithful" and since it doesn't really matter, I switched "faithful" to "monitorable" (as we did in the paper but I fell out of that good habit in my previous comment).

I also think normal outputs are often quite monitorable, so even if I did think that CoT had exactly the same properties as outputs I would still think CoT would be somewhat monitorable.

(But in practice I think the case is much stronger than that, as explained in the paper.)

Sure, but the same argument would suggest that the model's thoughts follow the same sort of reasoning that can be seen in pretraining, i.e. human-like reasoning that presumably is monitorable

I agree grammar can definitely go away in the presence of optimization pressure to have shorter reasoning (though I might guess that it isn't just a length penalty, but something more explicitly destructive of grammar). When I said "by default" I was implicitly thinking of no length penalties or similar things, but I should have spelled that out.

But presumably the existence of any reasoning models that have good grammar is a refutation of the idea that you can reason about what a reasoning model's thoughts will do based purely on the optimization power applied to the thoughts during reasoning training.

Rohin Shah

There's essentially zero optimization pressure for CoT to be a faithful representation of the LLM's actual thought process, just for it to be useful for the final answer.

Compare: There's essentially zero optimization pressure for CoT to have good grammar, just for it to be useful for the final answer.

Are you surprised that by default (EDIT: in the absence of length penalties or similar techniques) reasoning models' thoughts have good grammar?

(Tbc I think the a priori case for the CoT to be monitorable for hard tasks is substantially better than the a priori case for the model's thoughts to have good grammar by default.)

(EDIT: Changed to monitorable because I don't know what people mean by "faithful")
