This post comes a bit late with respect to the news cycle, but I argued in a recent interview that o1 is an unfortunate twist on LLM technologies, making them particularly unsafe compared to what we might otherwise have expected:
The basic argument is that the technology behind o1 doubles down on a reinforcement learning paradigm, which puts us closer to the world where we have to get the value specification exactly right in order to avert catastrophic outcomes.
Additionally, this technology takes us further from interpretability. If you ask GPT4 to produce a chain-of-thought (with prompts such as "reason step-by-step to arrive at an answer"), you know that in some sense, the natural-language reasoning you see in the output is how it arrived at the answer.[1] This is not true of systems like o1. The o1 training rewards any pattern which results in better answers. This can work by improving the semantic reasoning which the chain-of-thought apparently implements, but it can also work by promoting subtle styles of self-prompting. In principle, o1 can learn a new internal language which helps it achieve high reward.
You can tell the RL is done properly when the models cease to speak English in their chain of thought
- Andrej Karpathy
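To make the prompted setup concrete, here is a minimal sketch of eliciting a chain-of-thought from an instruction-tuned model with the OpenAI Python client. The model name, prompt wording, and question are illustrative assumptions on my part, not a specific recipe:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Prompted chain-of-thought: we simply ask the model to show its reasoning.
response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[
        {"role": "system",
         "content": "Reason step-by-step, then state your final answer."},
        {"role": "user",
         "content": "A train leaves at 3pm travelling 60 km/h. "
                    "How far has it gone by 5:30pm?"},
    ],
)

# The visible step-by-step text is (in the weak sense discussed above) the
# mechanism by which the answer is produced; nothing has separately optimized
# that text to maximize answer quality, as o1-style RL does.
print(response.choices[0].message.content)
```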
A loss of this type of (very weak) interpretability would be quite unfortunate from a practical safety perspective. Technology like o1 moves us in the wrong direction.
Informal Alignment
The basic technology currently seems to have the property that it is "doing basically what it looks like it is doing" in some sense. (Not a very strong sense, but at least, some sense.) For example, when you ask ChatGPT to help you do your taxes, it is basically trying to help you do your taxes.
This is a very valuable property for AI safety! It lets us try approaches like Cognitive Emulation.
In some sense, the Agent Foundations program at MIRI sees the problem as: human values are currently an informal object. We can only get meaningful guarantees for formal systems. So, we need to work on formalizing concepts like human values. Only then will we be able to get formal safety guarantees.
Unfortunately, fully formalizing human values appears to be very difficult. Human values touch upon basically all of the human world, which is to say, basically all informal concepts. So it seems like this route would need to "finish philosophy" by making an essentially complete bridge between formal and informal. (This is, arguably, what approaches such as Natural Abstractions are attempting.)
Approaches similar to Cognitive Emulation lay out an alternative path. Formalizing informal concepts seems hard, but it turns out that LLMs "basically succeed" at importing all of the informal human concepts into a computer. GPT4 does not engage in the sorts of naive misinterpretations which were discussed in the early days of AI safety. If you ask it for a plan to manufacture paperclips, it doesn't think the best plan would involve converting all the matter in the solar system into paperclips. If you ask for a plan to eliminate cancer, it doesn't think the extermination of all biological life would count as a success.
We know this comes with caveats; phenomena such as adversarial examples show that the concept-borders created by modern machine learning are deeply inhuman in some ways. The computerized versions of human commonsense concepts are not robust to optimization. We don't want to naively optimize these rough mimics of human values.
Nonetheless, these "human concepts" seem robust enough to get a lot of useful work out of AI systems, without automatically losing sight of ethical implications such as the preservation of life. This might not be the sort of strong safety guarantee we would like, but it's not nothing. We should be thinking about ways to preserve these desirable properties going forward. Systems such as o1 threaten this.
[1]
Yes, this is a fairly weak sense. There is a lot of computation under the hood in the big neural network, and we don't know exactly what's going on there. However, we also know "in some sense" that the computation there is relatively weak. We also know it hasn't been trained specifically to cleverly self-prompt into giving a better answer (unlike o1); it "basically" interprets its own chain-of-thought as natural language, the same way it interprets human input.
So, to the extent that the chain-of-thought helps produce a better answer in the end, we can conclude that this is "basically" improved due to the actual semantic reasoning which the chain-of-thought apparently implements. This reasoning can fail for systems like o1.
Informal Alignment
My hope is that reinforcement learning doesn't do too much damage to informal alignment.
ChatGPT might simulate the algorithms of human intelligence mixed together with the algorithms of human morality.
o1 might simulate the algorithms of human intelligence optimized to get right answers, mixed with the algorithms of human morality which do not interfere with getting the right answer.
Certain parts of morality neither help nor hinder getting the right answer. o1 might lose the parts of human morality which prevent it from lying to make its answer look better, but retain other parts of human morality.
The most vital part of human morality is that if someone tells you to achieve a goal, you do not immediately turn around and kill them in case they get in the way of completing that goal.
Reinforcement learning might break this part of morality if it reinforces the tendency to "achieve the goal at all costs," but I think o1's reinforcement learning is only for question answering, not agentic behaviour. If its answer for a cancer cure is to kill all humans, it won't get reinforced for that.
If AIs ever do get reinforcement learning for agentic behaviour, I suspect the reward signal will be negative if they accomplish the goal while causing side effects.
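As a toy illustration of that kind of signal, here is a sketch of a reward function where side effects are penalized heavily enough to make the overall reward negative. The function name, arguments, and penalty weight are all hypothetical, not drawn from any actual training setup:

```python
def agentic_reward(task_completed: bool,
                   side_effect_count: int,
                   penalty_per_side_effect: float = 2.0) -> float:
    """Toy reward: +1 for completing the goal, minus a penalty per side effect."""
    reward = 1.0 if task_completed else 0.0
    reward -= penalty_per_side_effect * side_effect_count
    return reward

# Completing the goal while causing three side effects nets 1 - 2*3 = -5,
# so "achieve the goal at all costs" is not what gets reinforced.
print(agentic_reward(task_completed=True, side_effect_count=3))
```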
Informal Interpretability
I agree reinforcement learning can do a lot of damage to chain-of-thought interpretability. If developers punish the AI for explicitly scheming to make an answer that looks good, the AI might scheme to do so anyway, using words which do not sound like scheming. It may actually develop its own hidden language so that it can strategize about things which improve its reward signal but which the filters do not allow it to strategize about.
I think this is dangerous enough that developers should actually allow the AI to scheme explicitly, and not punish it for its internal thoughts. This helps preserve chain-of-thought faithfulness.
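A crude way to picture the difference: score only the outcome, and deliberately leave the chain of thought out of the reward. Everything below is a hypothetical sketch, not how any lab actually trains its models:

```python
def outcome_only_reward(chain_of_thought: str, final_answer: str,
                        correct_answer: str) -> float:
    # The chain of thought is received but intentionally never scored or
    # filtered, so the model is not pushed to hide its reasoning.
    return 1.0 if final_answer.strip() == correct_answer.strip() else 0.0
```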