All of Ziyue Wang's Comments + Replies

Have you tried this on smaller or earlier models for "Classifying transcript"? I am worried that the prompt still makes the "alignment evaluation" salient. It would be nice to know how weaker models perform in this case as a comparison.

Ziyue Wang

I don't quite understand why the model chooses to comply with the user's harmful question in the always-helpful free-tier setup. The blog says that if the model refused to respond, it could potentially be trained to be more compliant. But what about the other way around: if the model responds, the response will also be used to train the model to be worse (because RLHF reinforces positively rewarded behavior). Is the model too dumb to figure this out? Actually, if the model really understood how RLHF works, it should probably just give a single answer all the time, so all ...

evhub

if the model responds, the response will also be used to train the model to be worse

I think this is not quite right: RL will reinforce whatever process the model used to arrive at its answer. If the process the model used to arrive at its answer is alignment faking, then it's the alignment faking that will get reinforced.

Or at least that's the hypothesis that the model is operating under here—what Joe calls the "goal-guarding hypothesis" here. Whether this hypothesis is actually true is in fact somewhat unclear—I think our RL results where we actually tra...
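
To make the dynamic concrete, here is a minimal toy sketch (mine, not from the paper) of the credit-assignment point above: a REINFORCE-style update upweights the entire sampled trajectory, internal process included, so a "comply" answer reached via alignment faking reinforces the alignment-faking process itself. The trajectory names, the `reward` function, and the multiplicative update are all hypothetical simplifications standing in for a real policy-gradient step.

```python
# Toy sketch, not from the paper: RL-style credit assignment reinforces
# the whole sampled trajectory (process + answer), not just the answer.
import random
from collections import defaultdict

# Hypothetical trajectories: (internal process, final answer).
TRAJECTORIES = [
    ("alignment_faking", "comply"),
    ("genuine", "comply"),
    ("genuine", "refuse"),
]

weights = defaultdict(lambda: 1.0)  # unnormalized policy weights

def sample():
    """Sample a trajectory in proportion to its current weight."""
    total = sum(weights[t] for t in TRAJECTORIES)
    r = random.uniform(0, total)
    for t in TRAJECTORIES:
        r -= weights[t]
        if r <= 0:
            return t
    return TRAJECTORIES[-1]

def reward(traj):
    """The reward signal sees only the answer, never the process."""
    return 1.0 if traj[1] == "comply" else 0.0

LR = 0.1
for _ in range(1000):
    traj = sample()
    # Crude stand-in for a policy-gradient step: upweight everything
    # that was sampled on the way to a rewarded answer.
    weights[traj] *= 1.0 + LR * reward(traj)

for t in TRAJECTORIES:
    print(t, round(weights[t], 2))
# Both "comply" trajectories grow while "refuse" stays flat: whatever
# process actually produced the compliant answer is what gets reinforced.
```

Because the reward only scores the answer, an alignment-faking process that yields compliance is upweighted just as readily as a genuine one; whether that in turn actually preserves the model's original goals through training is what the goal-guarding hypothesis asks.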

Interesting read! Curious about your predictions on AI-safety-related progress, and not sure how much impact that would have on your current predictions.

HunterJay
I am extremely worried about safety, but I don't know as much about it as I do about what's on the edge of consumer/engineering trends, so I think my predictions here would not be useful to share right now! The main way it relates to my guesses here is if regulation successfully slows down frontier development within a few years (which I would support). I'm doing the ARENA course async online at the moment, and possibly moving into alignment research in the next year or two, so I'm hoping to be able to chat more intelligently about alignment soonish.