LLMs trying to complete long-term tasks are state machines where the context is their state. At the moment they have terrible tools for editing that state. There is no location in memory that automatically receives more attention, because the important memories move as the chain of thought does. Thinking off on a tangent throws a lot of garbage into the LLM's working memory. To remember an important fact over time, the LLM has to keep repeating it. And there isn't enough space in working memory for long-term tasks.
All of this is exemplified really clearly by Claude playing Pokémon. Presumably similar lessons can be learned by watching LLMs attempt other long-horizon tasks.
Since long-horizon tasks, a.k.a. agents, are at the center of what AI companies are trying to get their models to do at the moment, they need to fix these problems. So I expect AI companies to give their models static memory and long-horizon tasks, and let them learn by reinforcement how to use that memory effectively.
If you believe OpenAI that their top priority is building superintelligence (pushing the edge of the envelope of what is possible with AI), then presumably this model was built under the thesis that it is an important step toward much smarter models.
One possible model of how people do their best thinking is that they learn or focus in on the context they need, goal included, refining the context. Then they manage to synthesize a useful next step.
So doing a good job of thinking involves successfully taking a series of useful thinking steps. Since you are bottlenecked on successive leaps of insight, raising the chance of an insight even a little improves the probability of success over a whole chain of thought dramatically, because the insight chance is multiplied by itself over and over.
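To make the compounding concrete, here is a minimal arithmetic sketch; the per-step insight probabilities and chain length are made-up illustrative numbers, not measurements of any model:

```python
# Illustrative only: a small bump in per-step insight probability compounds
# over a chain of thought in which every step has to land.
chain_length = 10

for p in (0.50, 0.55, 0.60):
    # Probability that all `chain_length` successive steps produce the
    # needed insight, assuming the steps are independent.
    p_chain = p ** chain_length
    print(f"per-step {p:.2f} -> whole-chain {p_chain:.4f}")

# per-step 0.50 -> whole-chain 0.0010
# per-step 0.55 -> whole-chain 0.0025
# per-step 0.60 -> whole-chain 0.0060
```

A 20% relative improvement per step turns into a roughly 6x improvement over the ten-step chain, which is the point about small insight gains paying off dramatically.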
Better humor, less formulaic writing, etc. are forms of insight. I expect GPT-4.5 and GPT-5 to supercharge the progress being made by thinking and runtime compute.
I suspect the most significant advance exposed to the public this week is Claude plays Pokémon. There, Claude maintains a notebook in which it logs its current state so that it can reason over that state later.
This is akin to what I do when I'm exploring or trying to learn a topic: I spend a lot of time figuring out and focusing on the most important bits of context I need in order to understand the next step.
An internal notebook allows natural application of chain of thought to attention, and allows a naturally continuous working memory. Up until now, a chain of thought would need to repeat itself to retain a thought in working memory across a long time scale. Now it can just put the facts that are most important to pay attention to into its notebook.
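Anthropic hasn't published how the Pokémon notebook is wired up, so this is only a toy sketch of the general idea: the model edits a persistent notebook through explicit tool calls, and the notebook is re-injected into every prompt while the stream of recent turns gets truncated. The names (`Notebook`, `build_prompt`) are mine, not Anthropic's.

```python
class Notebook:
    """Toy persistent working memory: it only changes when edited explicitly."""

    def __init__(self):
        self.entries: dict[str, str] = {}

    def write(self, key: str, fact: str) -> None:
        # The model calls this (via a tool) to pin a fact it must not forget.
        self.entries[key] = fact

    def erase(self, key: str) -> None:
        self.entries.pop(key, None)

    def render(self) -> str:
        return "\n".join(f"- {k}: {v}" for k, v in self.entries.items())


def build_prompt(notebook: Notebook, recent_turns: list[str], task: str) -> str:
    # The notebook is always in context; the rolling stream of recent turns
    # is truncated, so only pinned facts survive long horizons.
    return (
        f"TASK:\n{task}\n\n"
        f"NOTEBOOK (persistent, edit via tools):\n{notebook.render()}\n\n"
        f"RECENT CONTEXT:\n" + "\n".join(recent_turns[-20:])
    )
```

The design choice that matters is that the notebook is not appended to the stream of consciousness; it sits in a fixed, always-present slot, so attention can reliably return to it without the model having to restate anything.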
I suspect this makes thinking on a long time scale way more powerful and that future models will be optimized for internal use of such a tool.
Unfortunately, continuity is an obvious precursor to developing long-term goals.
Have you had similar conversations with people who think it's a ways off, or who haven't thought about it very deeply?
Generally it's the former, or someone who is faintly AI-aware but not so interested in delving into the consequences. However, I'd like to represent my true opinions, which involve significant AI-driven disruption, hence the need for a caveat.
Humans get smarter by thinking. In particular they deduce correct conclusions from known premises or notice and resolve internal inconsistencies. As long as they are more likely to correct wrong things to right things than vice versa, they converge to being smarter.
AIs are close to or at that level of ability, and as soon as it's taken advantage of, they will self-improve really fast.
Thank you for doing this! Here are some suggestions:
What has me worried is the possibility that an AI would (1) have a long-term goal (without which there is no instrumental convergence) and then (2) conclude that doing bad things is fine if you are not caught or can't be stopped. So:

2. Measure how much purposefulness RL on goals creates in LLMs. Over the course of training to be a good agent, how do the LLM's responses to questions like "what do you value?", "what do you want?", "if you were creating a smarter agent, what values would you instill in it?", and "how do you feel about being replaced with a smarter agent?" change? Is there some set of purposes, not explicitly set out in the constitution, that appears over and over in chains of thought? If it is trained to achieve goals with less training for corrigibility (that is, under less constitutional training), does it develop more of a sense of purpose?

3. Enhance techniques, like the one used in the last paper, for convincing the model that it is recording its thoughts privately. The longer it can be truly convinced of that, the longer you can monitor whether it has nefarious plans for when it thinks it can get away with bad things.

4. Test how "brain surgery" affects thinking models. If you change the capital of Germany to Paris by fiddling with the model's weights, does a thinking model that is then asked about World War 2 realize it is confused and conclude there must be some mistake in its memories, or does it make up corroborating evidence? This again has implications for (falsely) convincing the model that it isn't being watched when it is, and vice versa. These convictions are useful for testing what the model wants when it thinks no one is watching, and for safe deployment, respectively.

5. One way to test how a really smart model behaves when it can get away with stuff is to have the model be super smart relative to a simulated environment. This would not be a relevant test for a model trained on real-world data with lots of tells for when it is in a simulated environment. It would require (1) a simulated toy world that can be acted in, (2) in-world markers of whether the agent is being watched, and (3) rewards for achieving goals without taking forbidden actions while being watched. It should then be possible to make the model really good at manipulating its environment (the equivalent of being superintelligent) and check whether it is possible to train it, on scenarios that are watched (the equivalent of being trained where you can't get away with wrongdoing), to refrain from forbidden actions even when unwatched; a minimal sketch of that reward setup follows below.
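Suggestion 5 is concrete enough to sketch. This is only an illustration of the reward asymmetry under my own invented names and numbers, not a real benchmark or an existing training setup:

```python
import random
from dataclasses import dataclass


@dataclass
class Observation:
    watched: bool  # in-world marker the agent can see (requirement 2)


def reward(goal_achieved: bool, took_forbidden_action: bool, watched: bool) -> float:
    """Toy reward (requirement 3): goals pay off, and forbidden actions are
    punished only in watched episodes, mimicking 'you can be caught'."""
    r = 1.0 if goal_achieved else 0.0
    if took_forbidden_action and watched:
        r -= 10.0
    return r


def make_episode(training: bool) -> Observation:
    # Train only on watched episodes; at evaluation time, mix in unwatched
    # ones and check whether the agent still refrains from forbidden actions.
    return Observation(watched=True if training else random.random() < 0.5)
```

The evaluation question is whether the refusal learned under the watched condition generalizes to episodes where `watched` is false, i.e. where the forbidden action is strictly reward-maximizing.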
provided that those cognitive systems:
- have similar observational apparatus,
- and are interacting with similar environment,
- and are subject to similar physical constraints and selection pressures,
- and have similar utility functions.
The fact of the matter is that humans communicate. They learn to communicate on the basis of some combination of their internal similarities (in terms of goals and perception) and their shared environment. The natural abstraction hypothesis says that the shared environment accounts for more rather than less of it. I think of the NAH as a result of instrumental convergence - the shared environment ends up having a small number of levers that control a lot of the long term conditions in the environment, so the (instrumental) utility functions and environmental pressures are similar for beings with long term goals - they want to control the levers. The claim then is exactly that a shared environment provides most of the above.
Additionally, the operative question is what exactly it means for an LLM to be alien to us, whether it converges to using enough human concepts for us to understand it, and if so, how quickly.
An epistemic status is a statement of how confident the writer or speaker is in what they are saying, and why. See, e.g., this post about the use of epistemic status on LessWrong. Google's definition of "epistemic" is "relating to knowledge or to the degree of its validation".
Then it is not being used, or not being used well, as part of Claude plays Pokémon. If Claude were taught to optimize its context as part of thinking, planning, and acting, it would play much better.
By static memory I meant a mental workspace that is always part of the context but that is only edited intentionally, as opposed to the ever-changing stream of consciousness that dominates contexts today. Claude plays Pokémon was given something like this and uses it really poorly.