Here are some techniques that could be used in practice if no further technical safety progress were made but we were working with powerful models. (I'm not including some of the things you already said.)
- For constructing powerful reward signals:
- For preventing high-stakes failures:
- Adversarial training, probably using model-generated inputs, as you noted.
- Using trusted dumb models to review the output of smart models. E.g. every time GPT-6 produces a bash command, you have GPT-4 read the whole context and flag it if it thinks the bash command is suspicious; when flagged, the command is reviewed by some slower catastrophe prevention process.
- Using smart models to review their own outputs for potentially catastrophic actions, combined with various tricks to make collusion harder.
- Limiting the power of smart models. E.g. if you're worried about GPT-6 backdooring your code but you still want the benefit of it being smarter about code than GPT-4, you might have GPT-6 write instructions for the code, which GPT-4 then implements. The hope here is that it's easier for GPT-6 to backdoor code directly than to write code-writing instructions that are themselves backdoored.
- Another proposal here is to have GPT-4 write the first draft of the code and have GPT-6 write diffs.
- For all of these, you can do various paraphrasing and censorship tricks to make it harder for the models to collude. See this ancient Paul Christiano post for descriptions of a bunch of ideas for techniques here.
- Redwood is hoping to release a paper evaluating many of these techniques in the code-backdooring setting in a month or so, DM me if you want to read an early draft.
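The trusted-monitoring setup described above (a dumb trusted model flagging the smart model's bash commands, with flagged commands escalated to a slower review process) can be sketched as follows. All model calls are toy stubs with hypothetical names; in practice `trusted_monitor` and `slow_review` would be LLM API calls and a human/automated review pipeline, not keyword checks.

```python
# Toy sketch of trusted monitoring: a weaker "trusted" model reviews each
# bash command the stronger model produces; flagged commands go to a
# slower catastrophe-prevention process before (maybe) being executed.

SUSPICIOUS_PATTERNS = ["curl", "rm -rf", "chmod 777", "nc -l"]  # toy stand-in

def trusted_monitor(context: str, command: str) -> bool:
    """Stand-in for the dumb trusted model (e.g. GPT-4) reading the whole
    context and deciding whether the command looks suspicious."""
    return any(p in command for p in SUSPICIOUS_PATTERNS)

def slow_review(context: str, command: str) -> bool:
    """Stand-in for the slower review process (human review, a more
    expensive model, etc.). Returns True iff the command is approved."""
    return False  # toy: nothing that reaches this stage is approved

def run_command_with_monitoring(context: str, command: str) -> str:
    """Execute unflagged commands; escalate flagged ones to slow review."""
    if trusted_monitor(context, command):
        return "executed" if slow_review(context, command) else "blocked"
    return "executed"
```

The design choice here is that the cheap monitor sees everything, while the expensive review process only sees the (hopefully rare) flagged cases.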
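The "limiting the power of smart models" protocol, combined with a paraphrasing step of the kind mentioned above, might look roughly like this. Again, every function is a hypothetical stub standing in for a model call; the point is only the shape of the pipeline, not the implementations.

```python
# Toy sketch: the strong untrusted model only writes natural-language
# instructions; a trusted paraphrase step re-expresses them (destroying
# any covert signal hidden in exact wording); the weaker trusted model
# then implements the paraphrased instructions.

def smart_model_instructions(task: str) -> str:
    """Stand-in for GPT-6 producing code-writing instructions."""
    return f"Write a function that {task}. Use a simple loop."

def paraphrase(text: str) -> str:
    """Stand-in for a trusted paraphraser that rewords instructions so
    steganographic messages in the exact phrasing don't survive."""
    return text.replace("Use a simple loop.", "Keep the implementation simple.")

def weak_model_implements(instructions: str) -> str:
    """Stand-in for GPT-4 writing the actual code from instructions."""
    return f"def solve():\n    # implements: {instructions}\n    ..."

def decomposed_codegen(task: str) -> str:
    """Full pipeline: untrusted plan -> paraphrase -> trusted implementation."""
    return weak_model_implements(paraphrase(smart_model_instructions(task)))
```

The diff-based variant would have the same shape with the roles reversed on the first step: the weak model drafts, the strong model's diffs are paraphrased or reviewed before being applied.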
What safety techniques exist? What techniques should labs be using, now or if they soon develop powerful models? What techniques should be considered industry-best-practices? "How might we align transformative AI if it’s developed very soon?" What's our current bag of tricks? What alignment research is ready to go?
Some general sources:
Some sources on particular (kinds of) techniques:
Some categories of safety techniques:
This all depends on threat models and how the leading lab would use its powerful AI to prevent others from training/deploying unaligned powerful AI (or make the world robust to unaligned powerful AI)...
Some notes or pointers-to-sources on what OpenAI and Anthropic say they do:
OpenAI: they use RLHF.[1] While working on GPT-4, they "used GPT-4 to help create training data for model fine-tuning and iterate on classifiers across training, evaluations, and monitoring"; see also Using GPT-4 for content moderation. "Risks & mitigations" in "GPT-4 Technical Report" discusses "Adversarial Testing via Domain Experts" and "Model-Assisted Safety Pipeline"; see also "Model Mitigations" in "GPT-4 System Card."
OpenAI ultimately has "Superalignment" goals, but they say "Our plan in the shorter term is to use AI to help humans evaluate the outputs of more complex models and monitor complex systems."[2][3]
Anthropic: they use Constitutional AI. "Alignment Capabilities" in "Core Views on AI Safety" (Anthropic 2023) mentions "debate, scaling automated red-teaming, Constitutional AI, debiasing, and RLHF," at least as research topics if not mature techniques. Similarly, their paper Red Teaming Language Models to Reduce Harms (Anthropic 2022) uses red-teaming to create data for RL; it's not clear whether they use this technique in practice.
This is a broad question; narrow answers would be helpful, e.g. "tagging pre-training data based on human preferences and filtering out some content about AI (especially takeover)."
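As a concrete illustration of the narrow answer quoted above, a pre-training data pipeline step might tag documents with a preference label and drop some AI-takeover content. This is a toy sketch: the tag tokens, the regex, and the `preferred` scoring function are all illustrative assumptions, not any lab's actual pipeline.

```python
# Toy sketch: tag pre-training documents based on (a stand-in for) human
# preferences, and filter out some content about AI takeover.
import re

TAKEOVER_PATTERNS = re.compile(r"AI takeover|treacherous turn|seize control", re.I)

def tag_and_filter(docs, preferred):
    """Prefix each kept doc with a preference tag; drop takeover content.

    `preferred` is a doc -> bool function standing in for a preference
    model or human-preference classifier.
    """
    kept = []
    for doc in docs:
        if TAKEOVER_PATTERNS.search(doc):
            continue  # filter out some content about AI takeover
        tag = "<|good|>" if preferred(doc) else "<|bad|>"
        kept.append(tag + doc)
    return kept
```

Conditional-training schemes like this let the model see the preference signal at pre-training time rather than only during fine-tuning.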
Thanks to Aaron Scher for suggestions.
Related: Which possible AI systems are relatively safe?
Our approach to alignment research (OpenAI 2022) says "RL from human feedback is our main technique for aligning our deployed language models today." See also Aligning language models to follow instructions (OpenAI 2022) and GPT-4 (OpenAI 2023).
See also "Training models to assist human evaluation" in "Our approach to alignment research" (OpenAI 2022).
The list of techniques at the top of this post is from a comment Aaron Scher left on a draft (slightly edited).