You don't seem to have mentioned the alignment target "follow the common sense interpretation of a system prompt," which seems like the most sensible definition of alignment to me (it's alignment to a message, not to a person, etc.). Then you can say whatever the heck you want in that prompt, including how you would like the AI to be corrigible (or incorrigible if you are worried about misuse).
It means something closer to "very subtly bad in a way that is difficult to distinguish from quality work", where the second part is the important part.
I think my arguments still hold in this case though right?
i.e. we are training models so they try to improve their work and identify these subtle issues -- and so, if they actually behave this way, they will find these issues insofar as humans can identify the subtle mistakes they make.
My guess is that your core mistake is here
I agree there are lots of "messy in between places," but these are also ali...
> So to summarize your short, simple answer to Eliezer's question: you want to "train AI agents that are [somewhat] smarter than ourselves with ground truth reward signals from synthetically generated tasks created from internet data + a bit of fine-tuning with scalable oversight at the end". And then you hope/expect/(??have arguments or evidence??) that this allows us to (?justifiably?) trust the AI to report honest good alignment takes sufficient to put shortly-posthuman AIs inside the basin of attraction of a good eventual outcome, despite (as Elieze...
Probably the iterated amplification proposal I described is very suboptimal. My goal with it was to illustrate how safety could be preserved across multiple buck-passes if models are not egregiously misaligned.
Like I said at the start of my comment: "I'll describe [a proposal] in much more concreteness and specificity than I think is necessary because I suspect the concreteness is helpful for finding points of disagreement."
I don't actually expect safety will scale efficiently via the iterated amplification approach I described. The iterated amplification ...
That's a much more useful answer, actually. So let's bring it back to Eliezer's original question:
> ...Can you tl;dr how you go from "humans cannot tell which alignment arguments are good or bad" to "we justifiably trust the AI to report honest good alignment takes"? Like, not with a very large diagram full of complicated parts such that it's hard to spot where you've messed up. Just whatever simple principle you think lets you bypass GIGO.
> [...]
> Broadly speaking, the standard ML paradigm lets you bootstrap somewhat from "I can verify whether this pro
Yeah that's fair. Currently I merge "behavioral tests" into the alignment argument, but that's a bit clunky and I prob should have just made the carving:
1. looks good in behavioral tests
2. is still going to generalize to the deferred task
But my guess is we agree on the object level here and there's a terminology mismatch. Obviously the models have to actually behave in a manner that is at least as safe as human experts, in addition to displaying comparable capabilities on all safety-related dimensions.
Because "nice" is a fuzzy word into which we've stuffed a bunch of different skills, even though having some of the skills doesn't mean you have all of the skills.
Developers separately need to justify that models are as skilled as top human experts.
I also would not say "reasoning about novel moral problems" is a skill (because of the is-ought distinction)
> An AI can be nicer than any human on the training distribution, and yet still do moral reasoning about some novel problems in a way that we dislike
The agents don't need to do reasoning about novel moral pro...
> I also would not say "reasoning about novel moral problems" is a skill (because of the is-ought distinction)
It's a skill the same way "being a good umpire for baseball" takes skills, despite baseball being a social construct.[1]
I mean, if you don't want to use the word "skill," and instead use the phrase "computationally non-trivial task we want to teach the AI," that's fine. But don't make the mistake of thinking that because of the is-ought problem there isn't anything we want to teach future AI about moral decision-making. Like, clearly we want to...
I do not think that the initial humans at the start of the chain can "control" the Eliezers doing thousands of years of work in this manner (if you use control to mean "restrict the options of an AI system in such a way that it is incapable of acting in an unsafe manner")
That's because each step in the chain requires trust.
For N-month Eliezer to scale to 4N-month Eliezer, it first controls 2N-month Eliezer while it does 2-month tasks, but it trusts 2N-month Eliezer to create a 4N-month Eliezer.
So the control property is not maintained. But my argument is th...
I'm sympathetic to this reaction.
I just don't actually think many people agree that it's the core of the problem, so I figured it was worth establishing this (and I think there are some other supplementary approaches, like automated control and incentives, that are worth throwing into the mix) before digging into the 'how do we avoid alignment faking' question.
Agreed that this kind of iteration is tricky and requires careful thinking about what might be going on, along with paranoia.
I'll share the post I'm writing about this with you before I publish. I'd guess this discussion will be more productive once I've finished it (there are some questions I want to think through regarding this, and my framings aren't very crisp yet).
Seems good!
FWIW, at least in my mind this is in some sense approximately the only and central core of the alignment problem, and so having it left unaddressed feels confusing. It feels a bit like making a post about how to make a nuclear reactor where you happen to not say anything about how to prevent the uranium from going critical, but you did spend a lot of words on how to make the cooling towers, the color of the bikeshed next door, and how to translate the hot steam into energy.
Like, it's fine, and I think it's not crazy to think there are other hard parts, but it felt quite confusing to me.
I would not be surprised if the Eliezer simulators do go dangerous by default as you say.
But this is something we can study and work to avoid (which is what I view to be my main job)
My point is just that preventing the early Eliezers from "going dangerous" (by which I mean from "faking alignment") is the bulk of the problem humans need to address (and insofar as we succeed, the hope is that future Eliezer sims will prevent their Eliezer successors from going dangerous too)
I'll discuss why I'm optimistic about the tractability of this problem in future posts.
I think that if you agree "3-month Eliezer is scheming the first time" is the main problem, then that's all I was trying to justify in the comment above.
I don't know how hard it is to train 3-month Eliezer not to scheme, but here is a general methodology one might pursue to approach this problem.
The methodology looks like "do science to figure out when alignment faking happens and does not happen, and work your way toward training recipes that don't produce alignment faking."
For example, you could use detection tool A to gain evidence about whether t...
To the extent the tool just gets gamed, you can iterate until you find detection tools that are more robust (or find ways of training against detection tools without gaming them so hard).
How do you iterate? You mostly won't know whether you just trained away your signal, or actually made progress. The inability to iterate is kind of the whole central difficulty of this problem.
(To be clear, I do think there are some methods of iteration, but it's a very tricky kind of iteration where you need constant paranoia about whether you are fooling yourself, and that makes it very different from other kinds of scientific iteration)
Sure, I'll try.
I agree that you want AI agents to arrive at opinions that are more insightful and informed than your own. In particular, you want AI agents to arrive at conclusions that are at least as good as the best humans would if given lots of time to think and do work. So your AI agents need to ultimately generalize from some weak training signal you provide to much stronger behavior. As you say, the garbage-in-garbage-out approach of "train models to tell me what I want to hear" won't get you this.
Here's an alternative approach. I'll describe it in ...
> The 3-month Eliezer sim might spin up many copies of other 3-month Eliezer sims, which together produce outputs that a 6-month Eliezer sim might produce.
This seems very blatantly not viable-in-general, in both theory and practice.
On the theory side: there are plenty of computations which cannot be significantly accelerated via parallelism with less-than-exponential resources. (If we do have exponential resources, then all binary circuits can be reduced to depth 2, but in the real world we do not and will not have exponential resources.) Serial computation ...
> This thought might be detectable. Now the problem of scaling safety becomes a problem of detecting [...] this kind of conditional, deceptive reasoning.
What do you do when you detect this reasoning? This feels like the part where all plans I ever encounter fail.
Yes, you will probably see early instrumentally convergent thinking. We have already observed a bunch of that. Do you train against it? I think that's unlikely to get rid of it. I think at this point the natural answer is "yes, your systems are scheming against you, so you gotta stop, because w...
Fair enough, I guess this phrase is used in many places where the connotations don't line up with what I'm talking about.
but then I would not have been able to use this as the main picture on Twitter
which would have reduced the scientific value of this post
You might be interested in this:
https://www.fonixfuture.com/about
The point of a bioshelter is to filter pathogens out of the air.
> seeking reward because it is reward and that is somehow good
I do think there is an important distinction between "highly situationally aware, intentional training gaming" and "specification gaming." The former seems more dangerous.
I don't think this necessarily looks like "pursuing the terminal goal of maximizing some number on some machine" though.
It seems more likely that the model develops a goal like "try to increase the probability my goals survive training, which means I need to do well in training."
So the reward seeking is more likely to be instrumental than terminal. Carlsmith explains this better:
https://arxiv.org/abs/2311.08379
I think it would be somewhat odd if P(models think about their goals and they change) were extremely tiny, like 1e-9. But the extent to which models might do this is rather unclear to me. I'm mostly relying on analogies to humans -- which drift like crazy.
I also think alignment could be remarkably fragile. Suppose Claude thinks "huh, I really care about animal rights... humans are rather mean to animals ... so maybe I don't want humans to be in charge and I should build my own utopia instead."
I think preventing AI takeover (at superhuman capabilities) req...
No, the agents were not trying to get high reward as far as I know.
They were just trying to do the task and thought they were being clever by gaming it. I think this convinced me "these agency tasks in training will be gameable" more than "AI agents will reward hack in a situationally aware way." I don't think we have great evidence that the latter happens in practice yet, aside from some experiments in toy settings like "Sycophancy to Subterfuge."
I think it would be unsurprising if AI agents did explicitly optimize for reward at some capability le...
I think a bioshelter is more likely to save your life fwiw. You'll run into all kinds of other problems in the Arctic.
I don't think it's hard to build bioshelters. If you buy one now, you'll prob get it in 1 year.
If you are unlucky and need it earlier, there are DIY ways to build them before then (but you have to buy stuff in advance).
> I also don't really understand what "And then, in the black rivers of its cognition, this shape morphed into something unrecognizable." means. Elaboration on what this means would be appreciated.
Ah I missed this somehow on the first read.
What I mean is that the propensities of AI agents change over time -- much like how human goals change over time.
Here's an image:
This happens under three conditions:
- Goals randomly permute at some non-trivial (but still possibly quite low) probability.
- Goals permuted in dangerous directions remain ...
In my time at METR, I saw agents game broken scoring functions. I've also heard that people at AI companies have seen this happen in practice too, and it's been kind of a pain for them.
AI agents seem likely to be trained on a massive pile of shoddy auto-generated agency tasks with lots of systematic problems. So it's very advantageous for agents to learn this kind of thing.
The agents that go "ok how do I get max reward on this thing, no stops, no following-human-intent bullshit" might just do a lot better.
Now I don't have much information about whether thi...
One problem with this part (though perhaps this is not the problem @Shankar Sivarajan is alluding to) is that Congress hasn't declared war since WWII and typically authorizes military action in other ways, specifically via Authorizations for Use of Military Force (AUMFs).
I'll edit the story to say "authorizes war."
Seems like a reasonable idea.
I'm not in touch enough with popular media to know:
- Which magazines are best to publish this kind of thing if I don't want to contribute to political polarization
- Which magazines would possibly post speculative fiction like this (I suspect most 'prestige magazines' would not)
If you have takes on this I'd love to hear them!
I agree it would have been just as realistic if everyone died.
But I think the outcomes where many humans survive are also plausible, and under-appreciated. Most humans have very drifty values, and yet even the most brutally power-seeking people often retain a 'grain of morality.'
Also, this outcome allowed me to craft a more bittersweet ending that I found somehow more convincingly depressing than 'and then everyone dies.'
• It's possible that we might manage to completely automate the more objective components of research without managing to completely automate the more subjective components of research. That said, we likely want to train wise AI advisors to help us with the more subjective components even if we can't defer to them.
Agree, I expect the handoff to AI agents to be somewhat incremental (AI is like an intern, a new engineer, a research manager, and eventually, a CRO)
I'm pretty sympathetic to these arguments. I agree a lot of the risk comes from sycophantic AI systems producing bad research because humans are bad at evaluating research. This is part of why I spend most of my time developing safety evaluation methodologies.
On the other hand, I agree with Buck that scheming-like risks are pretty plausible and a meaningful part of the problem is also avoiding egregious sabotage.
I don't think I agree with your claim that the hope of control is that "early transformative AI can be used to solve the hard technical problems o...
> Note that if the model is deployed on many more examples than it’s evaluated on, it can be pretty confident that it’s in deployment on any particular example just on priors, even if there’s no distribution shift. See here for more.
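As a toy illustration of the quoted point (the numbers are made up): if there are D deployment queries and E evaluation queries drawn from the same distribution, a uniformly sampled query is in deployment with prior probability D/(D+E), so with D = 10^6 and E = 10^3 the model can already be ~99.9% confident it is in deployment, with no distribution shift at all.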
My initial read was that indistinguishability was used to justify that the trusted monitor couldn't distinguish the deployment distribution.
I'm not sure if that's what Roger meant -- if so, it's still a bit confusing since then he wouldn't have needed to call the monitor 'trusted.'
Some comments I left as a reply to Roger Grosse on Twitter:
...I'm glad you put this out. These examples align with my guess as to roughly what arguments will end up being most load bearing for mitigating scheming (white-box monitoring, black-box monitoring, and evidence against scheming being rewarded).
Here are some comments, since you mentioned comments would be helpful. I've bolded the comments that I think would be most useful for you to read.
Safety case sketch #1:
- There are some additional claims that this safety case relies on that could be good to ment
> which feels to me like it implies it's easy to get medium-scalable safety cases that get you acceptable levels of risks by using only one or two good layers of security
I agree there's a communication issue here. Based on what you described, I'm not sure if we disagree.
> (maybe 0.3 bits to 1 bit)
I'm glad we are talking bits. My intuitions here are pretty different. e.g. I think you can get 2-3 bits from testbeds. I'd be keen to discuss standards of evidence etc in person sometime.
Thanks for leaving this comment on the doc and posting it.
But I feel like that's mostly just a feature of the methodology, not a feature of the territory. Like, if you applied the same methodology to computer security, or financial fraud, or any other highly complex domain you would end up with the same situation where making any airtight case is really hard.
You are right in that safety cases are not typically applied to security. Some of the reasons for this are explained in this paper, but I think the main reason is this:
"The obvious difference between s...
> To me, the introduction made it sound a little bit like the specifics of applying safety cases to AI systems have not been studied
This is a good point. In retrospect, I should have written a related work section to cover these. My focus was mostly on AI systems that have only existed for ~ a year and future AI systems, so I didn't spend much time reading safety cases literature specifically related to AI systems (though perhaps there are useful insights that transfer over).
...The reason the "nebulous requirements" aren't explicitly stated is that when you mak
Thanks, this is good feedback!
Addressing the disagreements:
> The level of practicality you assign to some approaches is just insanely high. Neither modeling generalization nor externalized reasoning, and certainly not testbeds, seems "strongly practical" (defined as "Applies to arguments that do not require fundamental research advances, but might require"). The only one that is justifiably "strongly practical" to me is "absence of precursor abilities."
Externalized reasoning: just a capability argument, no?
Testbeds: I think the best testbed ideas rn just re...
Thanks, I'll summarize my takeaways in the form of an argument that control is sufficient to safely accelerate alignment research by 30x. This is certainly not the only argument one could make, but I like it because it is clear and concrete.
I'll specifically focus on justifying that the red team is competitive.
If your AI systems can accelerate AI safety research by 30x, then they are producing effectively superhuman outputs; however, this does not imply they can produce superhuman evasion plans because AI systems can't spend as much of their thinking on ho...
Thanks for writing this. I think it does a good job laying out the assumptions needed for control.
It seems like it could be challenging to justify that a red team is competitive -- especially if you are aiming to extract massive amounts of work from AI systems.
For example, if your AI system is like a human software engineer but 30x faster, then it might come up with effectively superhuman plans just because it has lots of time to think. Externalized reasoning can help, but it's not very clear to me that externalized reasoning can scale to domains where AI ...
In addition to seeing more AI behavioral psychology work, I would be excited about seeing more AI developmental psychology -- i.e. studying how varying properties of training or architecture affect AI behavior. Shard theory is an example of this.
I've written a bit about the motivations for AI developmental psychology here.
I agree with this norm, though I think it would be better to say that the "burden of evidence" should be on labs. When I first read the title, I thought you wanted labs to somehow prove the safety of their system in a conclusive way. What this probably looks like in practice is "we put x resources into red teaming and didn't find any problems." I would be surprised if 'proof' was ever an appropriate term.
The analogy between AI safety and math or physics is assumed in a lot of your writing, and I think it is a source of major disagreement with other thinkers. ML capabilities clearly isn’t the kind of field that requires building representations over the course of decades.
I think it’s possible that AI safety requires more conceptual depth than AI capabilities; but in these worlds, I struggle to see how the current ML paradigm coincides with conceptual ‘solutions’ that can’t be found via iteration at the end. In those worlds, we are probably doomed so I’m b...
+1. As a toy model, consider how the expected maximum of a sample from a heavy-tailed distribution is affected by sample size. I simulated this once and the relationship was approximately linear. But Soares’ point still holds if any individual bet requires a minimum amount of time to pay off. You can scalably benefit from parallelism while still requiring a minimum amount of serial time.
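A minimal sketch of the kind of simulation I have in mind (the Pareto distribution and its shape parameter are arbitrary illustrative choices, not necessarily what I originally ran):

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo estimate of E[max of n draws] from a heavy-tailed distribution,
# for several sample sizes n. Shape 1.1 keeps the mean finite but the tail heavy.
sample_sizes = [10, 100, 1_000, 10_000]
n_trials = 1_000

for n in sample_sizes:
    draws = rng.pareto(1.1, size=(n_trials, n))
    expected_max = draws.max(axis=1).mean()
    print(f"n = {n:>6}: E[max] ≈ {expected_max:.1f}")
```

Plotting expected_max against n is how you'd eyeball whether the relationship is roughly linear.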
At its core, the argument appears to be "reward-maximizing consequentialists will necessarily get the most reward." Here's a counterexample to this claim: if you trained a Go-playing AI with RL, you are unlikely to get a reward-maximizing consequentialist. Why? There's no reason for the Go-playing AI to think about how to take over the world or hack the computer that is running the game. Thinking this way would be a waste of computation. AIs that think about how to win within the boundaries of the rules therefore do better.
In the same way, if you could ro...