All of emile delcourt's Comments + Replies

Really appreciate this post! The recommendation "Evaluators should ensure that effective capability elicitation techniques are used for their evaluations" is especially important. Zero-shot, single-turn prompts with no transformations no longer seem representative of a model's impact on the public, who (in aggregate, or with only modest determination) will be inflicting many variants of unsanctioned prompts across many shots or many turns.

I'm curious why, in example 1, the ability to manipulate ("persuade") is called a capability evaluation, making limited results eligible for the sandbagging label, whereas in example 6 (under the name "blackmail") it is called an alignment evaluation, making limited results ineligible for that label?

In both examples, the model toned down the manipulation enough to hide it in testing, with worse results in production. Can someone help me better understand the nuances we'd like to impose on sandbagging? Benchmark evasion is an area I only started getting into in November.

1Teun van der Weij
Maybe reading this post will help! The beginning especially discusses the difference between capability and alignment/propensity evaluations.

Hi! Just introducing myself to this group. I'm a cybersecurity professional who has enjoyed various deep-learning adventures over the last six years and inevitably manages AI-related risks in my information security work. I went through BlueDot's AI Safety Fundamentals last spring with lots of curiosity and (re?)discovered LessWrong. Looking forward to visiting more often and engaging with the intelligence of this community to sharpen how I think.

4habryka
Welcome! Glad to have you around, and hope you have a good time. Also, always feel free to complain about anything that is making you sad about the site, either in threads like this or privately in our Intercom chat (the bubble in the bottom right corner).