LESSWRONG

Geoffrey Irving

Chief Scientist at the UK AI Safety Institute (AISI). Previously, DeepMind, OpenAI, Google Brain, etc.

Comments
The need to relativise in debate
Geoffrey Irving · 5d

I think both protocols mentioned (MIP* = RE and the pointers one) already do what you want here. In the background the provers have to do unbounded work to prepare for the stuff they show the verifier, but the verifier's work is limited to a fixed polynomial in the input size.

And more strongly: in the pointer version where we have two competing provers, a malicious prover can't force an honest prover to do significantly more work than would be required in an honest case.

Reply
Prover-Estimator Debate: A New Scalable Oversight Protocol
Geoffrey Irving · 13d

I would love the two of you (Beth and @Jacob Pfau) to talk about this in detail, if you're up for it! Getting the experimental design right is key if we want to get more human participant experiments going and learn from them. The specific point of "have a high standard for efficacy" was something I was emphasising to Jacob a few weeks ago as having distinguished your experiments from some of the follow-ons.

Reply
Prover-Estimator Debate: A New Scalable Oversight Protocol
Geoffrey Irving · 14d

I think this may be a forest vs. trees thing, where we failed to say the thing explicitly because we've been intentionally living in the "Alice and Bob have similar heuristics and computational power" setting for years: the A set in the paper models the set of heuristics, but I am surprised that the term "heuristic" ended up not in the final text (we just speak a lot about relative computational resources).

Fundamentally if you let B attend to either A's thoughts or to A's context and also ensure A and B are similar, we think A shouldn't be able to hide things from B. If A is a very different model, or a stronger model, this won't hold, and we should expect arguments which A can navigate in ways B can't defend against. So in order to get anything like this to go through you need some kind of "B knows what A knows" invariant.

Reply
Prover-Estimator Debate: A New Scalable Oversight Protocol
Geoffrey Irving · 14d

The setting is where A and B have access to the same set of heuristics. This is modeled explicitly in the paper as a shared set of functions they can call, but corresponds to them being the same model or similar for LLM training. 
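A minimal sketch of that modeling choice (all names hypothetical, not from the paper): both players can only compute via one shared pool of heuristic functions, so neither has a tool the other lacks.

```python
# Hypothetical sketch: the shared set of heuristics, modeled as a common
# pool of functions that both players must route their computation through.
HEURISTICS = {
    "is_even": lambda x: x % 2 == 0,
    "is_small": lambda x: x < 100,
}

def call_heuristic(player, name, x):
    """Any player (prover A or estimator B) calls into the same pool,
    so A has no private heuristic that B cannot also invoke."""
    return HEURISTICS[name](x)

# Alice and Bob get identical answers from identical tools.
print(call_heuristic("A", "is_even", 6))   # True
print(call_heuristic("B", "is_even", 6))   # True
```

For LLM training this corresponds to A and B being the same model (or similar models), rather than a literal function table.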

Reply
Prover-Estimator Debate: A New Scalable Oversight Protocol
Geoffrey Irving · 14d

The requirements are stability, compactness, and A-provability (meaning that the first player, Alice, knows how to correctly answer claims). It's important that A-provability is a requirement, as otherwise you can do silly things like lifting up to multilinear extensions of your problem over finite fields, and then there will always be lots of independent evidence which can be turned into stability.

Reply
Prover-Estimator Debate: A New Scalable Oversight Protocol
Geoffrey Irving · 14d

I agree with this! On the empirical side, we're hoping to both get more human participant experiments to happen around debate, and to build more datasets that try to probe obfuscated arguments. The dataset aspect is important, as I think in the years since the original paper follow-on scalable oversight experiments (debate or not) have been too underpowered in various ways to detect the problem, which then results in insufficient empirical work getting into the details.

Reply
Prover-Estimator Debate: A New Scalable Oversight Protocol
Geoffrey Irving · 14d

One way to think about amplification or debate is that they're methods for accelerated evaluation of large computations: instead of letting the debaters choose where in the computation to branch, you could just take all branches and do the full exponential work. Then safety splits into

1. Are all perturbations of the unaccelerated computation safe?
2. If we train for debate, do we get one of those?

If humans are systematically biased, this can break (1) before we get to (2). It may still be possible to shift some of the load from the unaccelerated computation to the protocol by finding protocols that are robust to some classes of systematic error (this post discusses that). This is a big issue, and one where we'll be trying to get more work to happen. A particular case is that many organisations are planning to use scalable oversight for automated safety research, and people love to be optimistic that new safety schemes might work.
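The branching picture above can be sketched on a toy binary computation tree (all names hypothetical): the unaccelerated evaluation visits all 2^depth leaves, while a debate-style evaluation follows only the single root-to-leaf path the debaters direct attention to.

```python
# Toy contrast: full exponential evaluation vs. a debater-chosen path.

def leaf_value(node):
    # Hypothetical base computation at a leaf.
    return node % 2

def combine(left, right):
    # Hypothetical rule for merging two subtree results.
    return left ^ right

def full_eval(depth, node=0):
    """Unaccelerated: evaluate the entire tree, touching 2**depth leaves."""
    if depth == 0:
        return leaf_value(node)
    return combine(full_eval(depth - 1, 2 * node),
                   full_eval(depth - 1, 2 * node + 1))

def debate_eval(depth, choose_branch, node=0):
    """Accelerated: follow one path of length `depth`, with the debaters
    choosing which child to expand at each step."""
    if depth == 0:
        return leaf_value(node)
    b = choose_branch(depth, node)  # 0 or 1, chosen by the debaters
    return debate_eval(depth - 1, choose_branch, 2 * node + b)
```

The safety split in the comment then maps onto this sketch: question (1) asks about perturbations of `full_eval`, while question (2) asks whether training actually yields a `choose_branch` that steers to the decision-relevant leaves.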

Reply
Prover-Estimator Debate: A New Scalable Oversight Protocol
Geoffrey Irving · 14d

On the AISI side, we would be very excited to collaborate on further research! If you're interested in collaborating with UK AISI, you can express interest here. If you're a non-profit or academic, you can also apply for grants of up to £200,000 from UK AISI directly here.

Reply
An alignment safety case sketch based on debate
Geoffrey Irving · 1mo

Continuing with the Newtonian physics analogy, the case for optimism would be:

1. We have some theories with limited domain of applicability. Say, theory A.
2. Theory A is wrong at some limit, where it is replaced by theory B. Theory B is still wrong, but it has a larger domain of applicability.
3. We don't know theory B, and can't access it despite our best scalable oversight techniques, even though the AIs do figure out theory B. (This is the hard case: I think there are other cases where scalable oversight does work.)
4. However, we do have some purchase on the domain of applicability of theory A: we know the limits of where it's been tested (energy levels, length scales, etc.).
5. Scalable oversight has an easier job talking about these limits to theory A than it does talking about theory B itself. Concretely, what this means is that you can express arguments like "theory A doesn't resolve question Q, as the answer depends on applying theory A beyond its decent-confidence domain of applicability".
6. Profit.

This gives you a capability cap: the AIs know theory B but you can't use it. But I do think that if you can pull off the necessary restriction on which questions you answer, you can muddle through, even if you know only theory A and have some sense of its limits. The limits of Newtonian physics started to appear long before the replacement theories (relativity and quantum mechanics). I think we're in a similar place with the philosophical worries: we have both a bunch of specific games that fail with older theories, and a bunch of proposals (say, variants of FDT) without a clear winner.

The additional big thing you need here is a property of the world that makes that capability cap okay: if the only way to succeed is to find perfect solutions using theory B, say because that gives you a necessary edge in an adversarial competition between multiple AIs, then lacking theory B sinks you. But I think we have a shot at not being in the worst case here.

(Sorry as well for delay! Was sick.)

Reply
An alignment safety case sketch based on debate
Geoffrey Irving · 2mo

The Dodging systematic human errors in scalable oversight post is out as you saw, so we can mostly take the conversation over there. But briefly, I think I'm mostly just more bullish on the margin than you about (1) the probability that we can in fact make purchase on the hard philosophy, should that be necessary, and (2) the utility we can get out of solving other problems should the hard philosophy problems remain unsolved. The goal with the dodging human errors post would be that if we fail at case (1), we're more likely to recognise it and try to get utility out of (2) on other questions.

Part of this is that my mental model of formalisations standing the test of time is that we do have a lot of these: both of the links you point to are formalisations that have stood the test of time and have some reasonable domain of applicability in which they say useful things. I agree they aren't bulletproof, but I'd put a higher chance than you on muddling through with imperfect machinery. This is similar to physics: I would argue, for example, that Newtonian physics has stood the test of time even though it is wrong, as it still applies across a large domain of applicability.

That said, I'm not at all confident in this picture: I'd place a lower probability than you on these considerations biting, but not that low.

Reply
Posts

- The need to relativise in debate (25 points, 5d, 2 comments)
- Prover-Estimator Debate: A New Scalable Oversight Protocol (88 points, 15d, 18 comments)
- Unexploitable search: blocking malicious use of free parameters (34 points, 1mo, 16 comments)
- Dodging systematic human errors in scalable oversight (33 points, 2mo, 3 comments)
- An alignment safety case sketch based on debate (57 points, 2mo, 21 comments)
- UK AISI’s Alignment Team: Research Agenda (113 points, 2mo, 2 comments)
- How to evaluate control measures for LLM agents? A trajectory from today to superintelligence (29 points, 3mo, 1 comment)
- Prospects for Alignment Automation: Interpretability Case Study (32 points, 3mo, 5 comments)
- A sketch of an AI control safety case (57 points, 5mo, 0 comments)
- Eliciting bad contexts (32 points, 5mo, 9 comments)