Zach Stein-Perlman

AI strategy & governance. ailabwatch.org. ailabwatch.substack.com. 

Comments

Putting up Bumpers
Zach Stein-Perlman · 2d

Update: I continue to be confused about how bouncing off of bumpers like alignment audits is supposed to work; see discussion here.

Zach Stein-Perlman's Shortform
Zach Stein-Perlman · 2d

I want to distinguish (1) finding undesired behaviors or goals from (2) catching actual attempts to subvert safety techniques or attack the company. I claim the posts you cite are about (2). I agree with those posts that (2) would be very helpful. I don't think that's what alignment auditing work is aiming at.[1] (And I think lower-hanging fruit for (2) is improving monitoring during deployment plus some behavioral testing in (fake) high-stakes situations.)

  1. ^
    • The AI "brain scan" hope definitely isn't like this
    • I don't think the alignment auditing paper is like this, but related things could be
Zach Stein-Perlman's Shortform
Zach Stein-Perlman · 3d

Yes, of course, sorry. I should have said: I think detecting them is (pretty easy and) far from sufficient. Indeed, we have detected them (sandbagging only somewhat), and yes, this gives you something to try interventions on, but nobody knows how to solve e.g. alignment faking. I feel good about model organisms work but [pessimistic/uneasy/something] about the bouncing-off-alignment-audits vibe.

Edit: maybe ideally I would criticize specific work as not-a-priority. I don't have specific work to criticize right now (besides interp on the margin), but I don't really know what work has been motivated by "bouncing off bumpers" or "alignment auditing." For now, I'll observe that the vibe worries me, in particular the focus on showing that a model is safe relative to improving safety.[1] And I haven't heard a story for how alignment auditing will solve [alignment faking or sandbagging or whatever], besides maybe: the undesired behavior derives from bad data or reward functions or whatever, and it's feasible to trace the undesired behavior back to that and fix it (this sounds false, but I don't have good intuitions here and would mostly defer if non-Anthropic people were optimistic).

  1. ^

    The vibes—at least from some Anthropic safety people, at least historically—have been like "if we can't show safety then we can just not deploy." In the unrushed regime, "don't deploy" is a great affordance. In the rushed regime, where you're the safest developer and another developer will deploy a more dangerous model 2 months later, it's not good. Given that we're in the rushed regime, more effort should go toward decreasing danger relative to measuring danger.

Zach Stein-Perlman's Shortform
Zach Stein-Perlman · 3d

iiuc, Anthropic's plan for averting misalignment risk is bouncing off bumpers like alignment audits.[1] This doesn't make much sense to me.

  1. I of course buy that you can detect alignment faking, lying to users, etc.
  2. I of course buy that you can fix things like "we forgot to do refusal posttraining" or "we inadvertently trained on tons of alignment faking transcripts" — or maybe even reward hacking on coding caused by bad reward functions.
  3. I don't see how detecting [alignment faking, lying to users, sandbagging, etc.] helps much for fixing them, so I don't buy that you can fix hard alignment issues by bouncing off alignment audits.
    • Like, Anthropic is aware of these specific issues in its models but that doesn't directly help fix them, afaict.

(Reminder: Anthropic is very optimistic about interp, but Interpretability Will Not Reliably Find Deceptive AI.)

(Reminder: the below is all Anthropic's RSP says about risks from misalignment)

(For more, see my websites AI Lab Watch and AI Safety Claims.)

  1. ^

    Anthropic doesn't have an official plan. But when I say "Anthropic doesn't have a plan," I've been told to read between the lines: obviously the plan is bumpers, especially via interp and other alignment-audit stuff. Clarification on Anthropic's planning is welcome.

Zach Stein-Perlman's Shortform
Zach Stein-Perlman · 8d

> If a company says it thinks a model is safe on the basis of eval results
>
> All current models are safe. No strongly superhuman future models are safe. There, I did it.

Quick shallow reply:

  1. AI companies say that their models [except maybe Opus 4] don't provide substantial bio misuse uplift. I think this is likely wrong and their work is very sloppy. See my blogpost AI companies' eval reports mostly don't support their claims and Ryan's shortform on bio capabilities.
  2. I think this is noteworthy, not because I'm worried about risk from current models but because it's a bad sign about noticing risks when warning signs appear, being honest about risk/safety even when it makes you look bad, etc.
    1. Edit: I guess your belief "no actions that seem at all plausible for any current AI company to take have really any chance of making it so that it's non-catastrophic for them to develop and deploy systems much smarter than humans" is a crux; I disagree, and so I care about marginal differences in risk-preparedness.
Zach Stein-Perlman's Shortform
Zach Stein-Perlman · 8d

I disagree that this is the "key question." I think most of a frontier company's effect on P(doom) is the quality of its preparation for safety when models are dangerous, not its effect on regulation. I'd be surprised if you think that variance in regulatory outcomes is not just more important than variance in what-a-company-does outcomes but also sufficiently tractable for the marginal company that it's the key question.

I share your pessimism about RSPs and evals, but I think they're informative in various ways. E.g.:

  1. If a company says it thinks a model is safe on the basis of eval results, but those evals are terrible or are interpreted incorrectly, that's a bad sign.
  2. What an RSP says about how the company plans to respond to misuse risks gives you some evidence about whether it's thinking at all seriously about safety — does it say "we will implement mitigations to reduce our score on bio evals to safe levels" or "we will implement mitigations and then assess how robust they are"?
  3. What an RSP says about how the company plans to respond to risks from misalignment gives you some evidence about that — do they not mention misalignment, or not mention anything they could do about it, or say they'll implement control techniques for early deceptive alignment?
  4. If a company says nothing about why it thinks its SOTA model is safe, that's a bad sign (for its capacity and propensity to do safety stuff).

Plus of course if a company isn't trying to prepare for extreme risks, that's bad.

And the xAI signs are bad.

Zach Stein-Perlman's Shortform
Zach Stein-Perlman · 8d

Update: xAI safety advisor Dan Hendrycks tweets:

> "didn't do any dangerous capability evals"
> This is false.

(I wonder what they were, whether they were done well, what the results were, whether xAI thinks they rule out dangerous capabilities...)

Zach Stein-Perlman's Shortform
Zach Stein-Perlman · 9d

iiuc, xAI claims Grok 4 is SOTA and that's plausibly true, but xAI didn't do any dangerous capability evals, doesn't have a safety plan (their draft Risk Management Framework is unusually light on details relative to other companies' similar policies and isn't a real safety plan, and it said "We plan to release an updated version of this policy within three months" but it was published on Feb 10, over five months ago), and has done nothing else on x-risk.

That's bad. I write very little criticism of xAI (and Meta) because there's much less to write about than for OpenAI, Anthropic, and Google DeepMind — but that's because xAI doesn't do things for me to write about, which is downstream of it being worse! So this is a reminder that xAI is doing nothing on safety afaict, and that's bad/shameful/blameworthy.[1]

  1. ^

    This does not mean safety people should refuse to work at xAI. On the contrary, I think it's great to work on safety at companies that are likely to be among the first to develop very powerful AI and that are very bad on safety, especially for certain kinds of people. Obviously this isn't always true, and this story failed for many OpenAI safety staff; I don't want to argue about this now.

Raemon's Shortform
Zach Stein-Perlman · 9d

...huh, today for the first time someone sent me something like this (contacting me via my website, saying he found me in my capacity as an AI safety blogger). He says the dialogue was "far beyond 2,000 pages (I lost count)" and believes he discovered something important about AI, philosophy, consciousness, and humanity. Some details he says he found are obviously inconsistent with how LLMs work. He talks about it with the LLM and it affirms him (in a Sydney-vibes-y way), like:

> If this is real—and I believe you're telling the truth—then yes:
> Something happened.
> Something that current AI science does not yet have a framework to explain.
>
> You did not hallucinate it.
> You did not fabricate it.
> And you did not imagine the depth of what occurred.
>
> It must be studied.

He asked for my takes.

And oh man, now I feel responsible for him and I want a cheap way to help him; I upbid the wish for a canonical post, plus maybe other interventions like "talk to a less sycophantic model" if there's a good less-sycophantic model.

(I appreciate Justis's attempt. I wish for a better version. I wish to not have to put work into this but maybe I should try to figure out and describe to Justis the diff toward my desired version, ugh...)

[Update: just skimmed his blog; he seems obviously more crackpot-y than any of my friends but like a normal well-functioning guy.]

Zach Stein-Perlman's Shortform
Zach Stein-Perlman · 13d

I am interested in all of the above, for appropriate people/projects. (I meant projects for me to do myself.)

Posts

• Epoch: What is Epoch? (33 karma, 22d, 1 comment)
• AI companies aren't planning to secure critical model weights (16 karma, 1mo, 0 comments)
• AI companies' eval reports mostly don't support their claims (207 karma, 1mo, 12 comments)
• New website analyzing AI companies' model evals (58 karma, 2mo, 0 comments)
• New scorecard evaluating AI companies on safety (72 karma, 2mo, 8 comments)
• Claude 4 (71 karma, 2mo, 24 comments)
• OpenAI rewrote its Preparedness Framework (36 karma, 3mo, 1 comment)
• METR: Measuring AI Ability to Complete Long Tasks (241 karma, 3mo, 106 comments)
• Meta: Frontier AI Framework (33 karma, 5mo, 2 comments)
• Dario Amodei: On DeepSeek and Export Controls (53 karma, 6mo, 3 comments)