Beth Barnes

Alignment researcher. Views are my own and not those of my employer. https://www.barnes.page/

Comments

I'm glad you brought this up, Zac - seems like an important question to get to the bottom of!

METR is somewhat capacity-constrained, and we can't currently commit to e.g. being available on short notice to do thorough evaluations for all the top labs - which is understandably annoying for labs.

Also, we don't want to discourage people from starting competing evaluation or auditing orgs, or to otherwise "camp the space".

We also don't want to accidentally safety-wash - that post was written in particular to dispel the idea that "METR has official oversight relationships with all the labs and would tell us if anything really concerning was happening".

All that said, I think labs' willingness to share access, information, etc. is a bigger bottleneck than METR's capacity or expertise. This is especially true for things that involve less intensive labor from METR (e.g. reviewing a lab's proposed RSP or evaluation protocol and giving feedback, going through a checklist of evaluation best practices, or having an embedded METR employee observing the lab's processes - as opposed to running a full evaluation ourselves).

I think "Anthropic would love to pilot third party evaluations / oversight more but there just isn't anyone who can do anything useful here" would be a pretty misleading characterization to take away, and I think there's substantially more that labs including Anthropic could be doing to support third party evaluations.

If we had a formalized evaluation/auditing relationship with a lab but sometimes evaluations didn't get run due to our capacity, I expect in most cases we and the lab would want to communicate something along the lines of "the lab is doing their part, any missing evaluations are METR's fault and shouldn't be counted against the lab".

Beth Barnes

This is a great post, very happy it exists :)

Quick rambling thoughts:

I have some instinct that a promising direction might be showing that it's only possible to construct obfuscated arguments under particular circumstances, and that we can identify those circumstances. The structure of the obfuscated argument is quite constrained - it needs to spread out the error probability over many different leaves. This happens to be easy in the prime case, but it seems plausible it's often knowably hard. Potentially an interesting case to explore would be trying to construct an obfuscated argument for primality testing and seeing if there's a reason that's difficult. OTOH, as you point out, "I learnt this from many relevant examples in the training data" seems like a central sort of argument. Though even when I think of some of the worst offenders here (e.g. translating after learning on a monolingual corpus), it does seem like constructing a lie that isn't going to contradict itself pretty quickly might be pretty hard.

Beth Barnes

I'd be surprised if this was employee-level access. I'm aware of a red-teaming program that gave early API access to specific versions of models, but not anything like employee-level.

Also, FWIW, I'm very confident Chris Painter has never been under any non-disparagement obligation to OpenAI.

Beth Barnes

I signed the secret general release containing the non-disparagement clause when I left OpenAI. From more recent legal advice, I understand that the whole agreement is unlikely to be enforceable, especially a strict interpretation of the non-disparagement clause like the one in this post. IIRC, at the time I assumed that such an interpretation (e.g. where OpenAI could sue me for damages for saying some true/reasonable thing) was so absurd that it couldn't possibly be what it meant.[1]

I sold all my OpenAI equity last year, to minimize real or perceived CoI with METR's work. I'm pretty sure it never occurred to me that OAI could claw back my equity or prevent me from selling it.[2]

OpenAI recently informally notified me by email that they would release me from the non-disparagement and non-solicitation provisions in the general release (but not, as in some other cases, from the entire agreement). They also said OAI "does not intend to enforce" these provisions in other documents I have signed. It is unclear what the legal status of this email is, given that the original agreement states it can only be modified in writing signed by both parties.

As far as I can recall, concern about financial penalties for violating non-disparagement provisions was never a consideration that affected my decisions. I think having signed the agreement probably had some effect, but more via something like "I want to have a reputation for abiding by things I signed so that e.g. labs can trust me with confidential information". And I still assumed that it didn't cover reasonable/factual criticism.

That being said, I do think many researchers and lab employees, myself included, have felt restricted from honestly sharing their criticisms of labs beyond small numbers of trusted people.  In my experience, I think the biggest forces pushing against more safety-related criticism of labs are:

(1) confidentiality agreements (any criticism based on something you observed internally would be prohibited by non-disclosure agreements - so the disparagement clause is only relevant in cases where you're criticizing based on publicly available information) 
(2) labs' informal/soft/not legally-derived powers (ranging from "being a bit less excited to collaborate on research" or "stricter about enforcing confidentiality policies with you" to "firing or otherwise making life harder for your colleagues or collaborators" or "lying to other employees about your bad conduct" etc)
(3) general desire to be researchers / neutral experts rather than an advocacy group.

To state what is probably obvious: I don't think labs should have non-disparagement provisions. I think they should have very clear protections for employees who want to report safety concerns, including when this requires disclosing confidential information. I think something like the asks here is a reasonable start, and I also like Paul's idea (which I can't now find the link for) of having labs make specific "underlined statements" to which employees can anonymously add caveats or contradictions that will be publicly displayed alongside the statements. I think this would be especially appropriate for commitments about red lines for halting development (e.g. Responsible Scaling Policies) - a statement that a lab will "pause development at capability level x until they have implemented mitigation y" is an excellent candidate for an underlined statement.

  1.

    Regardless of legal enforceability, it also seems like it would be totally against OpenAI's interests to sue someone for making some reasonable safety-related criticism.

  2.

    I would have sold sooner but there are only intermittent opportunities for sale. OpenAI did not allow me to donate it, put it in a DAF, or gift it to another employee. This maybe makes more sense given what we know now. In lieu of actually being able to sell, I made a legally binding pledge in Sep 2021 to donate 80% of any OAI equity.

Beth Barnes

If anyone wants to work on this or knows people who might, I'd be interested in funding work on this (or helping secure funding - I expect that to be pretty easy to do).

I'm super excited for this - hoping to get a bunch of great tasks and improve our evaluation suite a lot!

Beth Barnes

Good point!
Hmm, I think it's fine to use OpenAI/Anthropic APIs for now. If it becomes an issue, we can set up our own Llama or whatever to serve all the tasks that need another model. It should be easy to switch out one model for another.
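
To illustrate what I mean by "easy to switch out" - purely a hypothetical sketch, not our actual harness (the class names, default model name, and serving setup below are all made up) - the idea is that a task only ever sees a generic model interface, and the provider behind it can be swapped without touching the task:

```python
# Hypothetical sketch (not METR's actual code): a thin provider-agnostic
# wrapper so a task that needs "another model" doesn't care whether it's
# backed by the OpenAI API, the Anthropic API, or a self-hosted Llama.
from abc import ABC, abstractmethod


class ChatModel(ABC):
    @abstractmethod
    def generate(self, prompt: str) -> str:
        """Return a completion for a single user prompt."""


class OpenAIChatModel(ChatModel):
    def __init__(self, model: str = "gpt-4o"):  # model name is illustrative
        from openai import OpenAI  # assumes the openai v1+ Python package
        self._client = OpenAI()
        self._model = model

    def generate(self, prompt: str) -> str:
        resp = self._client.chat.completions.create(
            model=self._model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content


class LocalLlamaModel(ChatModel):
    """Stub for a self-hosted model behind an HTTP endpoint (e.g. vLLM)."""

    def __init__(self, endpoint: str):
        self._endpoint = endpoint

    def generate(self, prompt: str) -> str:
        raise NotImplementedError("wire this up to your local serving stack")


def run_task_step(task_prompt: str, model: ChatModel) -> str:
    # The task only depends on ChatModel, so swapping providers is a
    # one-line change where the model object is constructed.
    return model.generate(task_prompt)
```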

Beth Barnes

Yep, that's right. And we also need it to be possible to check the solutions without needing to run physical experiments, etc.!

Beth Barnes

I'm pretty skeptical of the intro quotes without actual examples; I'd love to see the actual text! Seems like the sort of thing that gets a bit exaggerated when the story is repeated, selection bias for being story-worthy, etc., etc.

I wouldn't be surprised by the Drexler case if the prompt mentions something to do with e.g. nanotech and writing, or implies he's a (nonfiction) author - he's the first Google search result for "nanotechnology writer". I'd be very impressed if it's something where e.g. I wouldn't be able to quickly identify the author even if I'd read lots of Drexler's writing (i.e. it's about some unrelated topic and doesn't use especially distinctive idioms).

More generally I feel a bit concerned about general epistemic standards or something if people are using third-hand quotes about individual LLM samples as weighty arguments for particular research directions.


Another way in which it seems like you could achieve this task however, is to refer to a targeted individual’s digital footprint, and make inferences of potentially sensitive information - the handle of a private alt, for example - and use that to exploit trust vectors. I think current evals could do a good job of detecting and forecasting attack vectors like this one, after having identified them at all. Identifying them is where I expect current evals could be doing much better.

I think the way I'm imagining the more targeted, 'working-with-finetuning' version of evals handling this kind of case is that you do your best to train the model to use its full capabilities, and to approach tasks in a model-idiomatic way, when given a particular target like scamming someone. Currently, models seem really far from being able to do this in most cases. The hope would be that, once you've ruled out exploration hacking, if you can't elicit the model to utilise its crazy text prediction superskills in service of a goal, then the model can't do this either.

But I agree it would definitely be nice to know that the crazy text prediction superskills are there and it's just a matter of utilization. I think that looking at the elicitation gap might be helpful for this type of thing.
