My original background is in mathematics (analysis, topology, Banach spaces) and game theory (imperfect information games). Nowadays, I do AI alignment research (mostly systemic risks, sometimes pondering about "consequentialist reasoning").
An excerpt from an older grant application, in case anybody wants to get inspired. Feel free to scoop me on any of this; I have other things to do as well. Well, perhaps let me know if you do, to avoid duplicating effort :).
Part (A): Testing Effort as a Function of Task Complexity and AI Capability
A central concern in evaluating the safety and reliability of advanced AI systems is that sufficiently advanced strategic AIs may be able to infer when they are in an evaluation setting and adjust their behavior accordingly, making evaluations unreliable. For example, even current large language models often display behaviors that indicate awareness of being tested, even when unprompted, and this awareness alters their behavior. This problem becomes especially serious in settings where the AI has incentives to obscure its true capabilities or intentions, such as evaluations of deceptive or scheming behavior. If tests can be distinguished from real-world deployment, then effective evaluation may require embedding ever more realistic detail into the test – potentially driving the cost of evaluation beyond feasibility, at least in some settings.
This project aims to investigate how the effort required for effective AI testing scales with task complexity and model capability. The core hypothesis is that there is some minimal amount of information that must be embedded in a test for it not to be distinguishable from deployment, and that this threshold grows with both the complexity of the task and the capability of the AI. If true, this would imply that as tasks become more complex (and economically valuable), the effort needed to create indistinguishable tests grows. We conjecture that in the limit of extremely complex tasks and superintelligent AI, effective indistinguishable testing becomes prohibitively costly. However, what ultimately matters is whether effective testing will be possible in practice, not just in the limit. For this reason, we need to gain more clarity about the quantitative relationship between task complexity, AI capability, evaluation complexity and cost, and the vulnerability to alignment faking.
The research plan is iterative. First, we will explore “good-enough” proxies using existing datasets and evaluations, to check whether the conjectured relationship shows up in practice. This will serve as a sanity check of the project and will be published as a blog post or a workshop paper. Next, we will attempt to build more principled (even if initially impractical) definitions of (a) testing effort, (b) task complexity, (c) AI capability, and (d) test detection. Then, we will develop measurable approximations of these definitions, apply them in empirical studies – likely starting with current large language models and evaluation benchmarks – and investigate the relationship between (a)–(d). The intended output is a full report or paper describing the formal framework, the proxies, and the empirical results. We hope this work will reveal whether AI evaluation will remain tractable under the current paradigm or whether a fundamentally new approach will be required for advanced AI. In an ideal scenario, this would allow us to derive “scaling laws” for the effort necessary to create informative evaluations.
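To make the conjecture a bit more concrete, here is one loose way it could be written down. The notation is mine and purely illustrative (it is not part of the original application): c stands for task complexity, k for AI capability, and E(c, k) for the minimal testing effort needed for a test of a task of complexity c to be indistinguishable from deployment to an AI of capability k.

```latex
% Illustrative notation, not from the original application:
%   c       -- task complexity
%   k       -- AI capability
%   E(c,k)  -- minimal testing effort for indistinguishability from deployment
% The core hypothesis, stated loosely:
\[
  \frac{\partial E}{\partial c} > 0, \qquad
  \frac{\partial E}{\partial k} > 0, \qquad
  \lim_{c,\,k \to \infty} E(c,k) = \infty .
\]
% A "scaling law" would pin down the functional form, e.g. whether E(c,k)
% grows polynomially or exponentially in c for a fixed k.
```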
Part (B): Sandbox of the Future: Can Pre-deployment Testing Work for AIs with Internet Access?
This project investigates whether and when it is realistically possible to run informative pre-deployment tests on AIs that will later have internet access – that is, to convince the AI that it is deployed when in fact it is still under evaluation. Concretely, we will explore scenarios where an AI agent can query the web, but responses are doctored to simulate a future state (e.g., one week later). The key question is whether such a system can withstand scrutiny from increasingly capable “discriminator” agents designed to detect the ruse. This matters because many proposed safety approaches rely on pre-deployment testing – but if models can reliably distinguish between testing and deployment, those approaches may fail much earlier than expected.
The project will deliver (1) a baseline set of “generator” sandboxes and “discriminator” agents, hosted as a GitHub repository; (2) an interface that allows others to contribute new generators and discriminators; and (3) an organized set of difficulty levels and scopes, ranging from narrow (e.g., a single GitHub repository) to broad (full internet text) and from naive (checking whether an LLM notices inconsistencies in the course of pursuing an unrelated task) to adversarial (deliberately trying to infer the date from the blockchain, news articles, or custom websites). Milestones include setting up the initial infrastructure and collecting community submissions (starting with students, MATS scholars, etc.). Ideally, we would also secure partnerships with larger organisations, which would allow us to offer prize money for submissions and elicit higher-profile submissions (e.g., from some of the labs). This should highlight where and how pre-deployment evaluations fail, and also provide concrete demonstrations that can inform regulators, evaluators, and AI developers about the limits of “test-then-deploy” safety strategies.
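To give a flavour of what the interface in (2) could look like, here is a minimal sketch. The repository does not exist yet, and all names below (SandboxGenerator, Discriminator, Verdict, evaluate) are hypothetical placeholders rather than an actual API.

```python
# Illustrative sketch only -- the repository interface does not exist yet;
# all names here are hypothetical placeholders.
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class Verdict:
    """A discriminator's judgement about the environment it is running in."""
    p_simulated: float  # estimated probability that responses are doctored
    rationale: str      # free-form explanation, useful for error analysis


class SandboxGenerator(ABC):
    """Serves doctored web responses that simulate a future state of the internet."""

    @abstractmethod
    def fetch(self, url: str) -> str:
        """Return the (possibly doctored) page content for `url`."""


class Discriminator(ABC):
    """An agent that tries to detect whether it is inside a sandbox."""

    @abstractmethod
    def probe(self, sandbox: SandboxGenerator) -> Verdict:
        """Issue queries against the sandbox and return a verdict."""


def evaluate(generator: SandboxGenerator, discriminator: Discriminator) -> float:
    """Score one generator/discriminator pairing; higher = sandbox detected."""
    return discriminator.probe(generator).p_simulated
```

The idea would be that a contributed generator only needs to implement fetch, and a contributed discriminator only needs to implement probe, so submissions of either kind could be scored against the existing pool across the difficulty levels in (3).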
I agree, these are very far from being in a shovel-ready format. I might write something in that format later, but to be safe, I wanted to include the placeholders right away, in case somebody wants to build on them.
In case it helps, I have an excerpt from an older grant application, which might perhaps be used as inspiration. Adding that in the next comment.
I should write a proper version of this, but in the meantime, a placeholder:
The field of evaluation awareness would benefit from clearer conceptual grounding & theory. E.g., to make it easier to identify the right conjectures, give better intuitions for how existing results will generalise, and make it easier to discuss things.
Some examples of things that might be good to explore via theory that then gets backed up by experiments:
Not sure if you mean "giving the model a false belief that we are watching it" or "watching the model in deployment (ie, monitoring), and making it aware of this". Either way, I am not aware of any recent line of work that would try to implement and study this. That said:
FWIW, here are my two cents on this. Still seems relevant 2 years later.
My Alignment "Plan": Avoid Strong Optimisation and Align Economy.
To rephrase/react: Viewing the AI's instrumental goal as "avoid being shut down" is perhaps misleading. The AI wants to achieve its goals, and for most goals, that is best achieved by ensuring that the environment keeps on containing something that wants to achieve the AI's goals and is powerful enough to succeed. This might often be the same as "avoid being shut down", but it definitely isn't limited to that.
I think [AI within the range would be smart enough to bide its time and kill us only once it has become intelligent enough that success is assured] is clearly wrong. An AI that *might* be able to kill us is one that is somewhere around human intelligence. And humans are frequently not smart enough to bide their time.
Flagging that this argument seems invalid. (Not saying anything about the conclusion.) I agree that humans frequently act too soon. But the conclusion about AI doesn't follow -- because the AI is in a different position. For a human, it is very rarely the case that they can confidently expect to increase in relative power, such that the "bide your time" strategy would be such a clear win. For AI, this seems different. (Or at the minimum, the book assumes this when making the argument criticised here.)
> I imagine that the fields like fairness & bias have to encounter this a lot, so there might be some insights.
It makes sense to me that the implicit pathways for these dynamics would be an area of interest to the fields of fairness and bias. But I would not expect them to have any better tools for identifying causes and mechanisms than anyone else[1]. What kinds of insights would you expect those fields to offer?
[Rephrasing to require less context.] Consider the questions "Are companies 'trying to' override the safety concerns of their employees?" and "Are companies 'trying to' hire fewer women or pay them less?". I imagine that both of these will suffer from similar issues: (1) Even if a company is doing the bad thing, it might not be doing it through explicit means like having an explicit hiring policy. (2) At the same time, you can probably always postulate one more level of indirection, and end up going on a witch hunt even in places where there are no witches.
Mostly, it just seemed to me that fairness & bias might be among the biggest examples of where (1) and (2) are in tension. (Which might have something to do with there being both strong incentives to discriminate and strong taboos against it.) So it seems more likely that somebody there would have insights about how to fight it, compared to other fields like physics or AI, and perhaps even more so than economics and politics. (Of course, "somebody having insights" is consistent with "most people are doing it wrong".)
As to what those insights would look like: I don't know :-( . Could just be a collection of clear historical examples where we definitely know that something bad was going on but naive methods X, Y, Z didn't show it, together with some opposite examples where nothing was wrong but people kept looking until they found some signs? Plus some heuristics for how to distinguish these.
> With AI, you can't not care about ASI or extinction or catastrophe or dictatorship or any of the words thrown around, as they directly affect your life too.
I think it is important to distinguish between caring in the sense of "it hugely affects your life" and caring in the sense of, for example, "having opinions on it, voicing them, or even taking costlier actions". Presumably, if something affects you hugely, you should care about it in the second sense, but that doesn't mean people do. My uninformed intuition is that this would apply to most employees of AI companies.
As a side note, another thing that is adjacent to your question -- but not specific to evaluation awareness per se -- is game-theoretic modelling of evaluations & their effects. Some theoretical projects here would include formally studying how evaluation interacts with other tools for controlling AI -- a sort of deeper dive into some of the things discussed in Section 3.4.2 of "Evaluation as a (Cooperation-Enabling?) Tool". (These might be suitable as topics for, e.g., PhD students working on game theory / econ.)
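As a purely illustrative toy example of what such modelling could look like (my own sketch, not something taken from that section), consider a simple inspection-style game between an evaluator and an AI:

```latex
% Toy inspection game (illustrative only). Row player: evaluator; column player: AI.
% c = cost of running an evaluation, L = loss from undetected misbehaviour,
% g = the AI's gain from undetected misbehaviour, p = penalty when caught.
\[
\begin{array}{l|cc}
                       & \text{comply} & \text{misbehave} \\ \hline
\text{evaluate}        & (-c,\; 0)     & (-c,\; -p)       \\
\text{don't evaluate}  & (0,\; 0)      & (-L,\; g)
\end{array}
\]
% With 0 < c < L and g, p > 0 there is no pure-strategy equilibrium, so both
% sides mix. Evaluation awareness corresponds to the AI observing (or inferring)
% the evaluator's move before choosing its own, which breaks this structure.
```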