Zach Stein-Perlman

AI strategy & governance. ailabwatch.org. Looking for new projects.

Sequences

Slowing AI


Comments

The commitment—"20% of the compute we've secured to date" (as of July 2023), to be used "over the next four years"—may amount to quite little by 2027, since compute use is increasing exponentially. I'm confused about why people think it's a big commitment.

Full quote:

We’ve evaluated GPT-4o according to our Preparedness Framework and in line with our voluntary commitments. Our evaluations of cybersecurity, CBRN, persuasion, and model autonomy show that GPT-4o does not score above Medium risk in any of these categories. This assessment involved running a suite of automated and human evaluations throughout the model training process. We tested both pre-safety-mitigation and post-safety-mitigation versions of the model, using custom fine-tuning and prompts, to better elicit model capabilities.

GPT-4o has also undergone extensive external red teaming with 70+ external experts in domains such as social psychology, bias and fairness, and misinformation to identify risks that are introduced or amplified by the newly added modalities. We used these learnings to build out our safety interventions in order to improve the safety of interacting with GPT-4o. We will continue to mitigate new risks as they’re discovered.

[Edit after Simeon replied: I disagree with your interpretation that they're being intentionally very deceptive. But I am annoyed by (1) them saying "We’ve evaluated GPT-4o according to our Preparedness Framework" when the PF doesn't contain specific evals and (2) them taking credit for implementing their PF when they're not meeting its commitments.]

How can you make the case that a model is safe to deploy? For now, you can do risk assessment and notice that it doesn't have dangerous capabilities. What about in the future, when models do have dangerous capabilities? Here are four options:

  1. Implement safety measures as a function of risk assessment results, such that the measures feel like they should be sufficient to abate the risks
    1. This is mostly what Anthropic's RSP does (at least so far — maybe it'll change when they define ASL-4)
  2. Use risk assessment techniques that evaluate safety given deployment safety practices
    1. This is mostly what OpenAI's PF is supposed to do (measure "post-mitigation risk"), but the details of their evaluations and mitigations are very unclear
  3. Do control evaluations
  4. Achieve alignment (and get strong evidence of that)

Related: RSPs, safety cases.

Maybe lots of risk comes from the lab using AIs internally to do AI development. The first two options are fine for preventing catastrophic misuse from external deployment, but I worry that they struggle to measure risks related to scheming and internal deployment.

Safety-wise, they claim to have run it through their Preparedness framework and the red-team of external experts.

I'm disappointed and I think they shouldn't get much credit PF-wise: they haven't published their evals, published a report on results, or even published a high-level "scorecard." They are not yet meeting the commitments in their beta Preparedness Framework — some things are unclear, but publishing the scorecard, at least, is an explicit commitment.

(It's now been six months since they published the beta PF!)

[Edit: not to say that we should feel much better if OpenAI were successfully implementing its PF; the thresholds are way too high and it says nothing about internal deployment.]

There should be points for how the organizations act wrt legislation. On the SB 1047 bill that CAIS co-sponsored, we've noticed that some AI companies are much more antagonistic than others. I think [this] is probably a larger differentiator for an organization's goodness or badness.

If there's a good writeup on labs' policy advocacy, I'll link to it and maybe defer to it.

Adding to the confusion: I've nonpublicly heard from people at UKAISI and [OpenAI or Anthropic] that the Politico piece is very wrong and that DeepMind isn't the only lab doing pre-deployment sharing (and that it's hard to say more because info about not-yet-deployed models is secret). But no clarification on commitments.

But everyone has lots of duties to keep secrets or preserve privacy, and the ones put in writing often aren't the most important. (E.g. in your case.)

I've signed ~3 NDAs. Most of them are irrelevant now and useless for people to know about, like yours.

I agree that in special cases it would be good to flag such things — like agreements not to share your opinions on a person/org/topic, rather than just keeping trade secrets private.

Related: maybe a lab should get full points for a risky release if the lab says it's releasing because the benefits of [informing / scaring / waking-up] people outweigh the direct risk of existential catastrophe and other downsides. It's conceivable that a perfectly responsible lab would do such a thing.

Capturing all nuances can trade off against simplicity and legibility. (But my criteria are not yet on the efficient frontier or whatever.)

Thanks. I agree you're pointing at something flawed in the current version and generally thorny. Strong-upvoted and strong-agreevoted.

Generally, the deployment criteria should be gated behind "has a plan to do this when models are actually powerful and their implementation of the plan is credible".

I didn't put much effort into clarifying this kind of thing because it's currently moot—I don't think it would change any lab's score—but I agree.[1] I think e.g. a criterion "use KYC" should technically be replaced with "use KYC OR say/demonstrate that you're prepared to implement KYC and have some capability/risk threshold to implement it and [that threshold isn't too high]."

Don't pass cost-benefit for current models, which pose low risk. (And it seems the criterion is "do you have them implemented right now?") . . . .

(A general problem with this project is somewhat arbitrarily requiring specific countermeasures. I think this is probably intrinsic to the approach, I'm afraid.)

Yeah. The criteria can be like "implement them or demonstrate that you could implement them and have a good plan to do so," but it would sometimes be reasonable for the lab not to have done this yet. (Especially for non-frontier labs; the deployment criteria mostly don't work well for evaluating non-frontier labs. Also, demonstrating that you could implement something can be difficult even if you could in fact implement it.)

I get the sense that these criteria don't quite handle the edge cases necessary to accommodate reasonable choices orgs might make.

I'm interested in suggestions :shrug:

  1. ^

    And I think my site says some things that contradict this principle, like 'these criteria require keeping weights private.' Oops.
