This is outstanding. I'll have other comments later, but first I wanted to praise how this is acting as a synthesis of lots of previous ideas that weren't ever at the front of my mind.
I'd especially like to hear your thoughts on the above proposal of loss-minimizing a language model all the way to AGI.
I hope you won't mind me quoting your earlier self as I strongly agree with your previous take on the matter:
If you train GPT-3 on a bunch of medical textbooks and prompt it to tell you a cure for Alzheimer's, it won't tell you a cure, it will tell you what humans have said about curing Alzheimer's ... It would just tell you a plausible story about a situation related to the prompt about curing Alzheimer's, based on its training data. Rather than a logical Oracle, this image-captioning-esque scheme would be an intuitive Oracle, telling you things that make sense based on associations already present within the training set.
...What am I driving at here, by pointing out that curing Alzheimer's is hard? It's that the designs above are missing something, and what they're missing is search. I'm not saying that getting a neural net to directly output your cure for Alzheimer's is impossible. But it seems like it requires there to already be a "cure for Alzheimer's" dimension in your learned model. The more realistic way to find the cure for Alzheimer's, if you don't alr
Charlie's quote is an excellent description of an important crux/challenge of getting useful difficult intellectual work out of GPTs.
Despite this, I think it's possible in principle to train a GPT-like model to AGI or to solve problems at least as hard as humans can solve, for a combination of reasons:
Curated.
There are really many things I found outstanding about this post. The key one, however, is that after reading this, I feel less confused when thinking about transformer language models. The post had that taste of deconfusion where many of the arguments are elegant, and simple; like suddenly tilting a bewildering shape into place. I particularly enjoyed the discussion of ways agency does and does not manifest within a simulator (multiple agents, irrational agents, non-agentic processes), the formulation of the prediction orthogonality thesis, ways in which some prior alignment work (e.g. Bostrom’s tool-oracle-genie-sovereign typology) does not carve at the joints of the abstraction most helpful for thinking about GPT; and how it all grounded out in arguments from technical details of GPT (e.g. the absence of recursive prompting in the training set and its implications for the agency of the simulator).
I also want to curate this piece for its boldness. It strikes at finding a True Name in a domain of messy blobs of matrices, and uses the “simulator” abstraction to suggest a number of directions I found myself actively curious and cautiously optimistic about. I very much look forward to seeing further posts from janus and others who explore and play around with the Simulator abstraction in the context of large language models.
I've been thinking about this post a lot since it first came out. Overall, I think its core thesis is wrong, and I've seen a lot of people make confident wrong inferences on the basis of it.
The core problem with the post was covered by Eliezer's post "GPTs are Predictors, not Imitators" (which was not written, I think, as a direct response, but which still seems to me to convey the core problem with this post):
...Imagine yourself in a box, trying to predict the next word - assign as much probability mass to the next token as possible - for all the text on the Internet.
Koan: Is this a task whose difficulty caps out as human intelligence, or at the intelligence level of the smartest human who wrote any Internet text? What factors make that task easier, or harder? (If you don't have an answer, maybe take a minute to generate one, or alternatively, try to predict what I'll say next; if you do have an answer, take a moment to review it inside your mind, or maybe say the words out loud.)
Consider that somewhere on the internet is probably a list of thruples: <product of 2 prime numbers, first prime, second prime>.
GPT obviously isn't going to predict that
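The thruples point can be made concrete with a toy sketch (my own illustration, not from the comment): a list like `<21, 3, 7>` forces a perfect next-token predictor to factor semiprimes, a task believed to be far harder than anything the human who *wrote* the list had to do.

```python
# Toy illustration: perfectly predicting the tokens after "<p*q, "
# in a line "<p*q, p, q>" requires recovering p and q, i.e. factoring.
# Brute force works for tiny numbers; for cryptographic-size semiprimes
# no efficient algorithm is known.

def factor_semiprime(n):
    """Return the two prime factors of a semiprime n, smallest first."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    raise ValueError("not composite")

pairs = [(3, 7), (11, 13), (101, 103)]
lines = [f"<{p * q}, {p}, {q}>" for p, q in pairs]
# A predictor assigning high probability to the correct continuation
# must, implicitly, perform this recovery:
recovered = [factor_semiprime(p * q) for p, q in pairs]
```

Writing the list is easy; predicting it is hard. That asymmetry is the koan's point about the task not capping out at human intelligence.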
I think you missed the point. I agree that language models are predictors rather than imitators, and that they probably don't work by time-stepping forward a simulation. Maybe Janus should have chosen a word other than "simulators." But if you gensym out the particular choice of word, this post is encapsulating the most surprising development of the past few years in AI (and therefore, the world).
Chapter 10 of Bostrom's Superintelligence (2014) is titled, "Oracles, Genies, Sovereigns, Tools". As the "Inadequate Ontologies" section of this post points out, language models (as they are used and heralded as proto-AGI) aren't any of those things. (The Claude or ChatGPT "assistant" character is, well, a simulacrum, not "the AI itself"; it's useful to have the word simulacrum for this.)
This is a big deal! Someone whose story about why we're all going to die was limited to, "We were right about everything in 2014, but then there was a lot of capabilities progress," would be willfully ignoring this shocking empirical development (which doesn't mean we're not all going to die, but it could be for somewhat different reasons).
...repeatedly alludes to the loss function on which GPTs are trained
Sure, I am fine with calling it a "prediction objective" but if we drop the simulation abstraction then I think most of the sentences in this post don't make sense. Here are some sentences which only make sense if you are talking about a simulation in the sense of stepping forward through time, and not just something optimized according to a generic "prediction objective".
...> A simulation is the imitation of the operation of a real-world process or system over time.
[...]
It emphasizes the role of the model as a transition rule that evolves processes over time. The power of factored cognition / chain-of-thought reasoning is obvious.
[...]
It’s clear that in order to actually do anything (intelligent, useful, dangerous, etc), the model must act through simulation of something.
[...]
Well, typically, we avoid getting confused by recognizing a distinction between the laws of physics, which apply everywhere at all times, and spatiotemporally constrained things which evolve according to physics, which can have contingent properties such as caring about a goal.
[...]
Below is a table which compares various simulator-like things to the type of simulator that GPT exemplifies on some quantif
I can at least give you the short version of why I think you're wrong, if you want to chat lmk I guess.
Plain text: "GPT is a simulator."
Correct interpretation: "Sampling from GPT to generate text is a simulation, where the state of the simulation's 'world' is the text and GPT encodes learned transition dynamics between states of the text."
Mistaken interpretation: "GPT works by doing a simulation of the process that generated the training data. To make predictions, it internally represents the physical state of the Earth, and predicts the next token by applying learned transition dynamics to the represented state of the Earth to get a future state of the Earth."
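The "correct interpretation" can be sketched with a toy model (mine, purely illustrative; a bigram table stands in for GPT's learned conditional distribution): the simulation's state is just the text so far, and the model is a transition rule from state to next-token distribution.

```python
import random

# Toy "simulator": the world-state is the token sequence, and the
# learned model supplies transition dynamics between text states.
# No physical state of the Earth is represented anywhere.

transition = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 1.0},
    "dog": {"ran": 1.0},
    "sat": {"the": 1.0},
    "ran": {"the": 1.0},
}

def step(state, rng):
    """Advance the 'world' (the text) by one token."""
    dist = transition[state[-1]]
    tokens, probs = zip(*dist.items())
    return state + [rng.choices(tokens, weights=probs)[0]]

rng = random.Random(0)
state = ["the"]
for _ in range(5):
    state = step(state, rng)
# The trajectory is a rollout of text dynamics, not a time-stepped
# physics simulation of whatever process generated the training data.
```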
-
So that's the "core thesis." Maybe it would help to do the same thing for some of the things you might use the simulator framing for?
Plain text: "GPT can simulate a lot of different humans."
Correct interpretation: "The text dynamics of GPT can support long-lived dynamical processes that write text like a lot of different humans. This is a lot like how a simulation of the solar system could have a lot of different orbits depending on the initial condition, except the laws of text are a lot more complicated and anthropocentric...
Previously on Less Wrong:
Steve Byrnes wrote a couple of posts exploring this idea of AGI via self-supervised, predictive models minimizing loss over giant, human-generated datasets:
I think Simulators mostly says obvious and uncontroversial things, but it added to the conversation by pointing them out for those who hadn't noticed them and introducing words for those who struggled to articulate them. IMO people who perceive it as making controversial claims have mostly misunderstood its object-level content, although sometimes they may have correctly hallucinated things that I believe or seriously entertain. Others have complained that it only says obvious things, which I agree with in a way; but seeing as many upvoted it or said they found it illuminating, and ontology introduced by or descended from it continues to do work in processes I find illuminating, I think the post was nontrivially information-bearing.
It is an example of what someone who has used and thought about language models a lot might write to establish an arena of abstractions/context for further discussion about things that seem salient in light of LLMs (+ everything else, but light of LLMs is likely responsible for most of the relevant inferential gap between me and my audience). I would not be surprised if it has most value as a dense trace enabling partial uploads of its generator, rather than updating ...
Great post. Very interesting.
However, I think that assuming there's a "true name" or "abstract type that GPT represents" is an error.
If GPT means "transformers trained on next-token prediction", then GPT's true name is just that. The character of the models produced by that training is another question - an empirical one. That character needn't be consistent (even once we exclude inner alignment failures).
Even if every GPT is a simulator in some sense, I think there's a risk of motte-and-baileying our way into trouble.
If GPT means "transformers trained on next-token prediction", then GPT's true name is just that.
Things are instances of more than one true name because types are hierarchical.
GPT is a thing. GPT is an AI (a type of thing). GPT is also an ML model (a type of AI). GPT is also a simulator (a type of ML model). GPT is a generative pretrained transformer (a type of simulator). GPT-3 is a generative pretrained transformer with 175B parameters trained on a particular dataset (a type/instance of GPT).
The intention is not to rename GPT -> simulator. Things that are not GPT can be simulators too. "Simulator" is a superclass of GPT.
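The hierarchy in the comment can be sketched literally as a class hierarchy (the names are mine and purely illustrative):

```python
# Types are hierarchical: each class below is a subtype of the one
# above it, so one object is an instance of many "true names" at once.
class Thing: ...
class AI(Thing): ...
class MLModel(AI): ...
class Simulator(MLModel): ...
class GPT(Simulator): ...

gpt3 = GPT()  # a particular 175B-parameter instance would go here
# gpt3 is simultaneously a GPT, a simulator, an ML model, an AI,
# and a thing; "Simulator" names the superclass, not a renaming of GPT.
assert isinstance(gpt3, Simulator) and isinstance(gpt3, AI)
```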
The reason I propose "simulator" as a named category is because I think it's useful to talk about properties of simulators more generally, like it makes sense to be able to speak of "AI alignment" and not only "GPT alignment". We can say things like "simulators generate trajectories that evolve according to the learned conditional probabilities of the training distribution" instead of "GPTs, RNNs, LSTMs, Dalle, n-grams, and RL transition models generate trajectories that evolve according to the learned conditional probabilities of the training distribution". T...
"A supreme counterexample is the Decision Transformer, which can be used to run processes which achieve SOTA for offline reinforcement learning despite being trained on random trajectories."
This is not true. The Decision Transformer paper doesn't run any complex experiments on random data; they only give a toy example with random data.
We actually ran experiments with Decision Transformer on random data from the D4RL offline RL suite. Specifically, we considered random data from the Mujoco Gym tasks. We found that when it only has access to random data, Decision Transformer only achieves 4% of the performance that it can achieve when it has access to expert data. (See the D4RL Gym results in our Table 1, and compare "DT" on "random" to "medium-expert".)
You also claim that GPT-like models achieve "SOTA performance in domains traditionally dominated by RL, like games." You cite the paper "Multi-Game Decision Transformers" for this claim.
But, in Multi-Game Decision Transformers, reinforcement learning (specifically, a Q-learning variant called BCQ) trained on a single Atari game beats Decision Transformer trained on many Atari games. This is shown in Figure 1 of that paper. The authors of the paper don't even claim that Decision Transformer beats RL. Instead, they write: "We are not striving for mastery or efficiency that game-specific agents can offer, as we believe we are still in early stages of this research agenda. Rather, we investigate whether the same trends observed in language and vision hold for large-scale generalist reinforcement learning agents."
It may be that Decision Transformers are on a path to matching RL, but it's important to know that this hasn't yet happened. I'm also not aware of any work establishing scaling laws in RL.
This post snuck up on me.
The first time I read it, I was underwhelmed. My reaction was: "well, yeah, duh. Isn't this all kind of obvious if you've worked with GPTs? I guess it's nice that someone wrote it down, in case anyone doesn't already know this stuff, but it's not going to shift my own thinking."
But sometimes putting a name to what you "already know" makes a whole world of difference.
Before I read "Simulators," when I'd encounter people who thought of GPT as an agent trying to maximize something, or people who treated MMLU-like one-forward-pass inference as the basic thing that GPT "does" ... well, I would immediately think "that doesn't sound right," and sometimes I would go on to think about why, and concoct some kind of argument.
But it didn't feel like I had a crisp sense of what mistake(s) these people were making, even though I "already knew" all the low-level stuff that led me to conclude that some mistake was being made -- the same low-level facts that Janus marshals here for the same purpose.
It just felt like I lived in a world where lots of different people said lots of different things about GPTs, and a lot of these things just "felt wrong," and t...
Thank you for this amazing and clarifying post.
You're operating far above my pay grade in connection with any of this subject matter, but nonetheless I'm going to dare a different suggestion for the True Names: do you think there's any merit to -speculators- and -speculations-? I believe these names fit all the excellent and clarifying tests and criteria presented in your post; in particular those referencing counterfactual configurations and process specification through chaining. Furthermore I think they have some advantages of their own. Speculators producing speculations seem more the right relationship between the two main concepts than simulators producing simulacra. (I don't think they do that!) Also, simulators have such a long history in digital systems of being aimed at deterministic fidelity to a reference system, which could be at odds with the abundant production of counterfactuals I believe you're actually seeking to emphasize here. Finally, speculations can be fanciful, realistic, or absurd, a nice match to the variety of outputs produced by GPT in the presence of different types of prompting, something you highlight, I think correctly, as a hallmark of GPT's status as a novel type of AI. One who speculates is a certain type of thinker: I propose that GPT is that type.
What do you think?
One of the most interesting responses I've gotten from GPT-3 was to this prompt: "1.e4 c5 2.d4 cxd4": the first moves of a chess game. GPT-3's response: "" 3.c3 dxc3 4.Nxc3 g6 5.Nf3 Bg7 6.Bc4 Nc6 7.0-0 Qc7 8.Qe2 e6 9.Bg5 Nf6 10.Rd1 Ne5 11.Be3 0-0 12.a3 d5 13.Nb5 Qc8 14.Nxc7 Qxc7 15.e5 Nfd7 16.Nxd5 Nc4 17.Bxc4 Qxc4 18.Qe4 e5 19.Qxe5 Qc7 20.Rd3 Qc6 21.Qxc4 dxc4 22.Nd4 Rfe8 23.Rfd1 b5 24.Nb3 Rc8 25.Nd2 b4 26.cd3 f6 27.exf6 gxf6 28.Rd4 b3 29.Ke2 Be6 30.Ne4 Rd8 31.Nc5 Bg7 32.Rxd8+ Bxd8 33.a4 bxa3 34.Ra1 Kh8 35.Bf1 Bc3 36.Nxa6 f5 37.Ne6 Be".
This looks like a chess game, though it is not: toward the end it veers into move-like notations that in fact don't represent legal moves. But clearly much more than superficial pattern extension is going on. GPT-3 produced something that understands chess notation, has a model of the game, tracks state, and has learned to make good moves. After a few moves it deviates from any actual game I could find evidence of online, but it continues to make objectively excellent moves (for a while). GPT-3 has generated something that by any standard is simulating chess gameplay (though I still can't relate to calling GPT-3 itself a simulator here). This isn't tho...
I like this!
One thing I like about "simulators"/"simulacra" over "speculators"/"speculations" is that the former personifies simulacra over the simulator (suggests agency/personality/etc belong to simulacra) which I think is less misleading, or at least counterbalances the tendency people have to personify "GPT".
"Speculator" sounds active and agentic whereas "speculations" sounds passive and static. I think these names do not emphasize enough the role of the speculations themselves in programming the "speculator" as it creates further speculations.
You're right about the baggage "deterministic fidelity" associated with "simulators", though. One of the things I did not emphasize in this post but have written a lot about in drafts is the epistemic and underdetermined nature of SSL simulators. Maybe we can combine these phrases -- "speculative simulations"?
I strongly agree with everything you've said.
It is an age-old duality with many names, and the true name is something like their intersection, or perhaps their union. I think it's unnamed, but we might be able to see it more clearly by walking around it in words.
Simulator and simulacra personifies the simulacra and alludes to a base reality that the simulation is of.
Alternatively, we could say simulator and simulations, which personifies simulations less and refers to the totality or container of that which is simulated. I tend to use "simulations" and "simulacra" not quite interchangeably: simulacra have the type signature of "things", simulations of "worlds". Worlds are things but also contain things. "Simulacra" refer to (not only proper) subsets or sub-patterns of that which is simulated; for instance, I'd refer to a character in a multi-character simulated scene as a simulacrum. It is a pattern in a simulation, which can be identified with the totality of the computation over time performed by the simulator (and an RNG).
Speculator and speculations personifies the speculator and casts speculations in a passive role but also emphasizes their speculative nature. It emphasizes an i...
I don't know of any other notable advances until the 2010s brought the first interesting language generation results from neural networks.
"A Neural Probabilistic Language Model" - Bengio et al. (NeurIPS 2000; journal version 2003) was cited by the Turing Award: https://proceedings.neurips.cc/paper/2000/hash/728f206c2a01bf572b5940d7d9a8fa4c-Abstract.html
Also worth knowing about: "Generating text with recurrent neural networks" - Ilya Sutskever, James Martens, Geoffrey E Hinton (2011)
I don't have any substantive comment to provide at the moment, but I want to share that this is the post that piqued my initial interest in alignment. It provided a fascinating conceptual framework around how we can qualitatively describe the behavior of LLMs, and got me thinking about implications of more powerful future models. Although it's possible that I would eventually become interested in alignment, this post (and simulator theory broadly) deserve a large chunk of the credit. Thanks janus.
Thanks for writing this up! I've found this frame to be a really useful way of thinking about GPT-like models since first discussing it.
In terms of future work, I was surprised to see the apparent low priority of discussing pre-trained simulators that were then modified by RLHF (buried in the 'other methods' section of 'Novel methods of process/agent specification'). Please consider this comment a vote for you to write more on this! Discussion seems especially important given e.g. OpenAI's current plans. My understanding is that Conjecture is overall very negative on RLHF, but that makes it seem more useful to discuss how to model the results of the approach, not less, to the extent that you expect this framing to help shed light on what might go wrong.
It feels like there are a few different ways you could sketch out how you might expect this kind of training to go. Quick, clearly non-exhaustive thoughts below:
Figuring out and posting about how RLHF and other methods ([online] decision transformer, IDA, rejection sampling, etc) modify the nature of simulators is very high priority. There's an ongoing research project at Conjecture specifically about this, which is the main reason I didn't emphasize it as a future topic in this sequence. Hopefully we'll put out a post about our preliminary theoretical and empirical findings soon.
Some interesting threads:
RL with KL penalties better seen as Bayesian inference shows that the optimal policy when you hit a GPT with RL with a KL penalty weighted by 1 is actually equivalent to conditioning the policy on a criterion estimated by the reward model, which is compatible with the simulator formalism.
However, this doesn't happen in current practice, because
1. both OAI and Anthropic use very small KL penalties (e.g. weighted by 0.001 in Anthropic's paper - which in the Bayesian inference framework means updating on the "evidence" 1000 times) or maybe none at all
2. early stopping: the RL training does not converge to anything near optimality. Path dependence/distribution shift/inductive biases during RL training seem likely to play a major rol...
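The closed-form result behind point 1 can be illustrated numerically (a toy sketch of mine, not from the linked post): KL-regularized RL has optimum pi*(x) ∝ pi0(x) · exp(r(x) / beta), so shrinking the penalty weight beta sharpens the update drastically, which is one way to see the mode-collapse risk.

```python
import math

# Toy illustration of the KL-regularized RL optimum:
#   pi*(x) ∝ pi0(x) * exp(r(x) / beta)
# Small beta is like updating on the reward-model "evidence" many
# times over, collapsing mass onto the highest-reward outcome.

def kl_optimal_policy(prior, reward, beta):
    m = max(reward)  # subtract max before exp to avoid overflow
    weights = [p * math.exp((r - m) / beta) for p, r in zip(prior, reward)]
    z = sum(weights)
    return [w / z for w in weights]

prior = [0.5, 0.3, 0.2]
reward = [1.0, 1.1, 0.9]   # a nearly indifferent reward model

strong_penalty = kl_optimal_policy(prior, reward, beta=1.0)
weak_penalty = kl_optimal_policy(prior, reward, beta=0.001)
# With beta=1 the policy stays close to the prior; with beta=0.001
# essentially all probability mass lands on the highest-reward outcome.
```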
Our plan to accelerate alignment does not preclude theoretical thinking, but rather requires it. The mainline agenda atm is not full automation (which I expect to be both more dangerous and less useful in the short term), but what I've been calling "cyborgism": I want to maximize the bandwidth between human alignment researchers and AI tools/oracles/assistants/simulations. It is essential that these tools are developed by (or in a tight feedback loop with) actual alignment researchers doing theory work, because we want to simulate and play with thought processes and workflows that produce useful alignment ideas. And the idea is, in part, to amplify the human. If this works, I should be able to do a lot more "thinking about theory" than I am now.
How control/amplification schemes like RLHF might corrupt the nature of simulators is particularly relevant to think about. OAI's vision of accelerating alignment, for instance, almost certainly relies on RLHF. My guess is that self-supervised learning will be safer and more effective. Even aside from alignment concerns, RLHF instruct tuning makes GPT models worse for the kind of cyborgism I want to do (e.g. it causes mode collapse & cripples semantic generalization, and I want to explore multiverses and steer using arbitrary natural language boundary conditions, not just literal instructions) (although I suspect these are consequences of a more general class of tuning methods than just RLHF, which is one of the things I'd like to understand better).
Thanks a lot for this comment. These are extremely valid concerns that we've been thinking about a lot.
I'd just like the designers of alignment-research boosting tools to have clear arguments that nothing of this sort is likely.
I don't think this is feasible given our current understanding of epistemology in general and epistemology of alignment research in particular. The problems you listed are potential problems with any methodology, not just AI assisted research. Being able to look at a proposed method and make clear arguments that it's unlikely to have any undesirable incentives or negative second order effects, etc, is the holy grail of applied epistemology and one of the cores of the alignment problem.
For now, the best we can do is be aware of these concerns, work to improve our understanding of the underlying epistemological problem, design the tools and methods in a way that avoids problems (or at least make them likely to be noticed) according to our current best understanding, and actively address them in the process.
On a high level, it seems wise to me to follow these principles:
Some academics seem to have (possibly independently? Or maybe it's just in the water nowadays) discovered the Simulators theory, and have some quantitative measures to back it up.
...Large Language Models (LLMs) are often misleadingly recognized as having a personality or a set of values. We argue that an LLM can be seen as a superposition of perspectives with different values and personality traits. LLMs exhibit context-dependent values and personality traits that change based on the induced perspective (as opposed to humans, who tend to have more coherent values and personality traits across contexts). We introduce the concept of perspective controllability, which refers to a model's affordance to adopt various perspectives with differing values and personality traits. In our experiments, we use questionnaires from psychology (PVQ, VSM, IPIP) to study how exhibited values and personality traits change based on different perspectives. Through qualitative experiments, we show that LLMs express different values when those are (implicitly or explicitly) implied in the prompt, and that LLMs express different values even when those are not obviously implied (demonstrating their context-de
I think this is an excellent description of GPT-like models. It both fits with my observations and clarifies my thinking. It also leads me to examine in a new light questions which have been on my mind recently:
What is the limit of power of simulation that our current architectures (with some iterative improvements) can achieve when scaled to greater power (via additional computation, improved datasets, etc)?
Is a Simulator model really what we want? Can we trust the outputs we get from it to help us with things like accelerating alignment research? What might failure modes look like?
This is great! I really like your "prediction orthogonality thesis", which gets to the heart of why I think there's more hope in aligning LLM's than many other models.
One point of confusion I had. You write:
Optimizing toward the simulation objective notably does not incentivize instrumentally convergent behaviors the way that reward functions which evaluate trajectories do. This is because predictive accuracy applies optimization pressure deontologically: judging actions directly, rather than their consequences. Instrumental convergence only comes into play when there are free variables in action space which are optimized with respect to their consequences.[25] Constraining free variables by limiting episode length is the rationale of myopia; deontological incentives are ideally myopic. As demonstrated by GPT, which learns to predict goal-directed behavior, myopic incentives don't mean the policy isn't incentivized to account for the future, but that it should only do so in service of optimizing the present action (for predictive accuracy).[26]
I don't think I agree with this conclusion (or maybe I don't understand the claim). I agree that myopic incentives don't mean myop...
Overall, I agree with most of this post, thanks for writing it.
I agree with your discussion of the importance of having the right vocabulary. However, I feel that the term "simulator" that you propose has a nagging flaw: that is, it invokes the connotation of "precision simulation" in people with a computer engineering background, so perhaps in most alignment researchers (rather than, I guess, the main connotation invoked in the general public, as in "Alice simulated illness to skip classes", which is actually closer to what GPT does). Additionally, the simulation hypothesis sometimes (though not always) assumes a "precision simulation", not an "approximate simulation" a.k.a. prediction, which GPT really does and will do.
To me, it's obvious that GPT-like AIs will always be "predictors", not "precision simulators" because of computation boundedness and context (prompt, window) boundedness.
Why is this false connotation of precision bad? Because it seems to lead to over-estimation of the simulacra rolled out by GPT. Such as in the following sentence:
...Simulators like GPT give us methods of instantiating
I find this post fairly uninteresting, and feel irritated when people confidently make statements about "simulacra." One problem is, on my understanding, that it doesn't really reduce the problem of how LLMs work. "Why did GPT-4 say that thing?" "Because it was simulating someone who was saying that thing." It does postulate some kind of internal gating network which chooses between the different "experts" (simulacra), so it isn't contentless, but... Yeah.
Also I don't think that LLMs have "hidden internal intelligence", given that e.g. LLMs trained on “A is B” fail to learn “B is A”. That's big evidence against the simulators hypothesis. And I don't think people have nearly enough evidence to be going around talking about "what is the LLM simulating", unless this is some really loose metaphor, in which case it should be marked as such.
I also think it isn't useful to think of LLMs as "simulating stuff" or having a "shoggoth" or whatever. I think that can often give a false sense of understanding.
However, I think this post did properly call out the huge miss of earlier speculation about oracles and agents and such.
An excellent article that gives a lot of insight into LLMs. I consider it a significant piece of deconfusion.
RL creates agents, and RL seemed to be the way to AGI. In the 2010s, reinforcement learning was the dominant paradigm for those interested in AGI (e.g. OpenAI). RL lends itself naturally to creating agents that pursue rewards/utility/objectives. So there was reason to expect that agentic AI would be the first (and by the theoretical arguments, last) form that superintelligence would take.
Why are you confident that RL creates agents? Is it the non-stochasticity of optimal policies for almost all reward functions? The on-policy data collection of PPO? I think there are a few valid reasons to suspect that, but this excerpt seems surprisingly confident.
One question that occurred to me, reading the extended GPT-generated text. (Probably more a curiosity question than a contribution as such...)
To what extent does text generated by GPT-simulated 'agents', then published on the internet (where it may be used in a future dataset to train language models), create a feedback loop?
Two questions that I see as intuition pumps on this point:
I think this is a legitimate problem which we might not be inclined to take as seriously as we should because it sounds absurd.
Would it be a bad idea to recursively ask GPT-n "You're a misaligned agent simulated by a language model (...)" if training got really cheap and this process occurred billions of times?
Yes. I think it's likely this would be a very bad idea.
when the corpus of internet text begins to include more text generated only by simulated writers. Does this potentially degrade the ability of future language models to model agents, perform logic etc?
My concern with GPT-generated text appearing in future training corpora is not primarily that it will degrade the quality of its prior over language-in-the-wild (well-prompted GPT-3 is not worse than many humans at sound reasoning; near-future GPTs may be superhuman and actually raise the sanity waterline), but that
There is a model/episodes duality, and an aligned model (in whatever sense) corresponds to an aligned distribution of episodes (within its scope). Episodes are related to each other by time evolution (which corresponds to preference/values/utility when considered across all episodes in scope), induced by the model, the rules of episode construction/generation, and ways of restricting episodes to smaller/earlier/partial episodes.
The mystery of this framing is in how to relate different models (or prompt-conditioned aspects of behavior of the same model) to ...
This is one of those things that seems totally obvious after reading and makes you wonder how anyone thought otherwise but is somehow non-trivial anyways.
...The verdict that knowledge is purely a property of configurations cannot be naively generalized from real life to GPT simulations, because “physics” and “configurations” play different roles in the two (as I’ll address in the next post). The parable of the two tests, however, literally pertains to GPT. People have a tendency to draw erroneous global conclusions about GPT from behaviors which are in fact prompt-contingent, and consequently there is a pattern of constant discoveries that GPT-3 exceeds previously measured capabilities given alternate conditions.
This kind of comment ("this precise part had this precise effect on me") is a really valuable form of feedback that I'd love to get (and will try to give) more often. Thanks! It's particularly interesting because someone gave feedback on a draft that the business about simulated test-takers seemed unnecessary and made things more confusing.
Since you mentioned it, I'm going to ramble on about some additional nuance on this point.
Here's an intuition pump which strongly discourages "fundamental attribution error" to the simulator:
Imagine a machine where you feed in an image and it literally opens a window to a parallel reality with that image as a boundary constraint. You can watch events downstream of the still frame unravel through the viewfinder.
If you observe the people in the parallel universe doing something dumb, the obvious first thought is that you should try a frame depicting a different situation that's more likely to contain smart people (or even just try again, if the frame underdetermines the world and reveals a different "preexisting" situation each time you run the machine).
That's the obvious conclusion in the thought experiment because the machine isn't assigned a mind-like ...
Guessing the right theory of physics is equivalent to minimizing predictive loss. Any uncertainty that cannot be reduced by more observation or more thinking is irreducible stochasticity in the laws of physics themselves – or, equivalently, noise from the influence of hidden variables that are fundamentally unknowable.
This is the main sentence of this post. The simulator as a concept might even change if the right physics were discovered. I look forward to your expansion of this topic in the succeeding posts, @janus.
You all realize that this program isn't a learning machine once it's deployed, right? It's not adjusting its neural weights any more, is it? Till a new version comes out, anyway? It is a complete amnesiac (after it's done with a task), and consists of a simple search algorithm that just finds points on a vast association map generated during training. It does this using the input, any previous output for the same task, and a touch of randomness from a random number generator.
So any 'awareness' or 'intelligence' would need to exist during the training phase, and only the training phase, and carry out any plans it has solely through its choice of neural weights during training.
Thank you for the insightful post. What do you think are the implications of the simulator framing for alignment threat models? You claim that a simulator does not exhibit instrumental convergence, which seems to imply that the simulator would not seek power or undergo a sharp left turn. The simulated agents could exhibit power-seeking behavior or rapidly generalizing capabilities or try to break out of the simulation, but this seems less concerning than the top-level model having these properties, and we might develop alignment techniques specifically targeting...
After reading this, I'm not sure how much of a threat, or a help, GPT-N would be. Let's say we have GPT-N, trained on human text, and GPT-N is an AGI. I ask it "You are a superintelligent misaligned AI - how should you take over the world?"
GPT-N, to my understanding, would not then pretend to be a superintelligent misaligned AI and output a plan that the AI would output, even if it is theoretically capable of doing so. It would pretend to be a human pretending to be a superintelligent misaligned AI, because human data is what its training corpus was built ...
It seems like we'd need some sort of ELK-like interpretability to get it to tell us things a human never would.
Not really, we'd just need to condition GPT-N in more clever ways. For instance by tagging all scientific publications in its dataset with a particular token, also giving it the publication date and the number of citations for every paper. Then you just need to prompt it with the scientific paper token, a future date and a high number of citations to make GPT-N try to simulate the future progress of humanity on the particular scientific question you're interested in.
"fine-tuning" isn't quite the right word for this. Right now GPT-3 is trained by being given a sequence of words like <token1><token2><token3> ... <TokenN>, and it's trained to predict the next token. What I'm saying is that we can, for each piece of text that we use in the training set, look at its date of publication and provenance, and we can train a new GPT-3 where instead of just being given the tokens, we give it <date of publication><is scientific publication?><author><token1><token2>...<tokenN>. And then at inference time, we can choose <date of publication=2040> to make it simulate future progress.
Basically all human text containing the words "publication 2040" is science-fiction, and we want to avoid the model writing fiction by giving it data that helps it disambiguate fiction about the future and actual future text. If we give it a correct ground truth about the publication date of every one of its training data strings, then it would be forced to actually extrapolate its knowledge into the future. Similarly most discussions of future tech are done by amateurs, or again in science-fiction, but giving it the correct ground truth about the actual journal of publication avoids all of that. GPT only needs to predict that Nature won't become a crank journal in 20 years, and it will then make an actual effort at producing high-impact scientific publications.
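The metadata-conditioning scheme proposed above can be sketched concretely. This is a hypothetical illustration, not an actual GPT training pipeline; the `format_example` helper and the exact token spellings are invented for the example:

```python
# Hypothetical sketch of metadata conditioning: prepend ground-truth
# provenance as control tokens so the model learns to condition its
# next-token predictions on date and source.

def format_example(text, date, is_scientific, author):
    """Build a training string with metadata control tokens prefixed."""
    meta = f"<date={date}><scientific={int(is_scientific)}><author={author}>"
    return meta + text

# Training time: metadata comes from the document's real provenance.
train_sample = format_example(
    "We report a significant reduction in amyloid plaques ...",
    date=2021, is_scientific=True, author="Nature",
)

# Inference time: set a future date to ask the model to extrapolate
# actual future text rather than science fiction about the future.
prompt = "<date=2040><scientific=1><author=Nature>"
```

The key design point, as the comment notes, is that the metadata is ground truth during training, so "publication date 2040" at inference time is disambiguated from fiction about 2040.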
This has caused me to reconsider what intelligence is and what an AGI could be. It’s difficult to determine if this makes me more or less optimistic about the future. A question: are humans essentially like GPT? We seem to be running simulations in an attempt to reduce predictive loss. Yes, we have agency; but is that human “agent” actually the intelligence, or just generated by it?
Overall I think "simulators" names a useful concept. I also liked how you pointed out and deconfused type errors around "GPT-3 got this question wrong." Other thoughts:
I wish that you had more strongly ruled out "reward is the optimization target" as an interpretation of the following quotes:
...RL’s archetype of an agent optimized to maximize free parameters (such as action-trajectories) relative to a reward function.
...
Simulators like GPT give us methods of instantiating intelligent processes, including goal-directed agents, with methods other than optimizing...
There also seems to be some theoretical and empirical ML evidence for the perspective of in-context learning as Bayesian inference: http://ai.stanford.edu/blog/understanding-incontext/
Thanks for the great post. Two meta-questions.
LOL. Your question opens a can of worms. It took more than a year from when I first committed to writing about simulators, but the reason it took so long wasn't that writing the actual words in this post took a long time; rather:
It seems as a result of this post, many people are saying that LLMs simulate people and so on. But I'm not sure that's quite the right frame. It's natural if you experience LLMs through chat-like interfaces, but from playing with them in a more raw form, like the RWKV playground, I get a different impression. For example, if I write something that sounds like the start of a quote, it'll continue with what looks like a list of quotes from different people. Or if I write a short magazine article, it'll happily tack on a publication date and "All rights reserved..."
Instrumental convergence only comes into play when there are free variables in action space which are optimized with respect to their consequences.
I roughly get what this is gesturing at, but I'm still a bit confused. Does anyone have any literature/posts they can point me at which may help explain?
Also great post janus! It has really updated my thinking about alignment.
This post is not only groundbreaking research into the nature of LLMs but also a perfect meme. Janus's ideas are now widely cited at AI conferences and in papers around the world. While the assumptions may be correct or incorrect, the Simulators theory has sparked huge interest among a broad audience, including not only AI researchers. Let's also appreciate the fact that this post was written based on the author's interactions with the non-RLHFed GPT-3 model, well before the release of ChatGPT or Bing, and it accurately predicted some quirks in their behavior...
The strict version of the simulation objective is optimized by the actual “time evolution” rule that created the training samples. For most datasets, we don’t know what the “true” generative rule is, except in synthetic datasets, where we specify the rule.
I wish I had read this while doing my research proposal, but I arrived at pretty much the same conclusion: I believe alignment research is missing something. The pattern-recognition learning systems currently being researched and deployed seem to lack a firm grounding in other fields of science, like biology or psychology, which at the very least link back to chemistry and physics.
Remember Alan Wake? Well, not even its writer knew it back then, but that game could have metaphorically described a large language model. Alan Wake, the protagonist, is the prompt writer, wrestling for control with the Cthulhu-like story generator. In the end, referring to the dark entity that gives life to his writings and which allegedly resides at the bottom of a lake, he exclaims: "It's not a lake, it's an ocean."
Sorry for being snarky, but I think at least some LW readers should gradually notice the extent to which the stuff analyzed here mirrors the predictive processing paradigm, as a different way of making stuff which acts in the world. My guess is that the next big step on this road is not e.g. 'complex wrappers with simulated agents' but reinventing active inference... and I also suspect it's the only step separating us from AGI, which seems like a good reason not to point too much attention that way.
A world simulator of some sort is probably going to be an important component in any AGI, at the very least for planning – Yann LeCun has talked about this a lot. There's also this work where they show a VAE-type model can be configured to run internal simulations of the environment it was trained on.
In brief, a few issues I see here:
I think this post is a vital piece of deconfusion, and one of the best recent posts on the site. I've written Goodbye, Shoggoth: The Stage, its Animatronics, & the Puppeteer – a New Metaphor as an attempt to make mostly the same point, in a hopefully more memorable and visualizable way.
...Say you’re told that an agent values predicting text correctly. Shouldn’t you expect that:
- It wants text to be easier to predict, and given the opportunity will influence the prediction task to make it easier (e.g. by generating more predictable text or otherwise influencing the environment so that it receives easier prompts);
- It wants to become better at predicting text, and given the opportunity will self-improve;
- It doesn’t want to be prevented from predicting text, and will prevent itself from being shut down if it can?
In short, all the same types of instrumental convergence that we expect from agents who want almost anything at all.
...
- What if the input “conditions” in training samples omit information which contributed to determining the associated continuations in the original generative process? This is true for GPT, where the text “initial condition” of most training samples severely underdetermines the real-world process which led to the choice of next token.
- What if the training data is a biased/limited sample, representing only a subset of all possible conditions? There may be many “laws of physics” which equally predict the training distribution but diverge in their predictions out of distribution.
In case it's helpful to others, I have found the term 'stochastic chameleon' to be a memorable way to describe this concept of a simulator (and a more useful one than a parrot, though inspired by that). A simulator, like a chameleon (and unlike a parrot), is doing its best to fit the distribution.
What are your thoughts on prompt tuning as a mechanism for discovering optimal simulation strategies?
I know you mention condition generation as something to touch on in future posts but I’d be eager to hear about where you think prompt tuning comes in considering continuous prompts are differentiable and so can be learned/optimized for specific simulation behaviour.
The purpose of this post is to capture these objects in words
so GPT can reference them, and to provide a better foundation for understanding them.
If you want to exclude these words from being used by ML you can add some special UUID to your page.
Please don't put ML opt-out strings on other people's writings. They might want the Future to keep them around. The apparent intent is better conveyed by linking to an instruction for doing this without actually doing this unilaterally.
Thanks to Chris Scammell, Adam Shimi, Lee Sharkey, Evan Hubinger, Nicholas Dupuis, Leo Gao, Johannes Treutlein, and Jonathan Low for feedback on drafts.
This work was carried out while at Conjecture.
"Moebius illustration of a simulacrum living in an AI-generated story discovering it is in a simulation" by DALL-E 2
Summary
TL;DR: Self-supervised learning may create AGI or its foundation. What would that look like?
Unlike the limit of RL, the limit of self-supervised learning has received surprisingly little conceptual attention, and recent progress has made deconfusion in this domain more pressing.
Existing AI taxonomies either fail to capture important properties of self-supervised models or lead to confusing propositions. For instance, GPT policies do not seem globally agentic, yet can be conditioned to behave in goal-directed ways. This post describes a frame that enables more natural reasoning about properties like agency: GPT, insofar as it is inner-aligned, is a simulator which can simulate agentic and non-agentic simulacra.
The purpose of this post is to capture these objects in words
so GPT can reference them, and to provide a better foundation for understanding them. I use the generic term “simulator” to refer to models trained with predictive loss on a self-supervised dataset, invariant to architecture or data type (natural language, code, pixels, game states, etc). The outer objective of self-supervised learning is Bayes-optimal conditional inference over the prior of the training distribution, which I call the simulation objective, because a conditional model can be used to simulate rollouts which probabilistically obey its learned distribution by iteratively sampling from its posterior (predictions) and updating the condition (prompt). Analogously, a predictive model of physics can be used to compute rollouts of phenomena in simulation. A goal-directed agent which evolves according to physics can be simulated by the physics rule parameterized by an initial state, but the same rule could also propagate agents with different values, or non-agentic phenomena like rocks. This ontological distinction between simulator (rule) and simulacra (phenomena) applies directly to generative models like GPT.
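The rollout procedure described here (sample from the posterior, append to the condition, repeat) can be sketched abstractly. The `simulate` function and the toy bigram "model" below are illustrative stand-ins, not a real GPT API:

```python
import random

def simulate(model, prompt, n_steps):
    """Generate a rollout by iteratively sampling from the model's
    predictive distribution and appending each sample to the condition."""
    state = list(prompt)
    for _ in range(n_steps):
        probs = model(state)                # posterior over the next token
        tokens = list(probs)
        token = random.choices(tokens, weights=[probs[t] for t in tokens])[0]
        state.append(token)                 # update the condition (prompt)
    return state

# A toy stand-in for the learned "time evolution" rule: a hand-written
# bigram distribution over a two-token alphabet.
def toy_model(state):
    return {"a": {"b": 0.9, "a": 0.1}, "b": {"a": 0.9, "b": 0.1}}[state[-1]]

rollout = simulate(toy_model, ["a"], 8)
```

The same loop works whether the "rule" propagates an agent, multiple agents, or a rock; the simulator is the rule, the rollout contents are the simulacra.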
Meta
indicated by this style. Prompt, generated text, and curation metrics here.

The limit of sequence modeling
GPT is not a new form of AI in terms of its training methodology and outer objective: sequence generation from statistical models of data is an old idea. In 1951, Claude Shannon described using n-grams to approximate the conditional next-letter probabilities of a text dataset, and how such a model can be “reversed” to generate text samples[1]. I don't know of any other notable advances until the 2010s brought the first interesting language generation results from neural networks. In 2015, Karpathy wrote a blog post/tutorial sharing his excitement about The Unreasonable Effectiveness of Recurrent Neural Networks:
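Shannon's scheme can be illustrated with a toy character-level n-gram model (a loose sketch of the idea, not his original procedure verbatim):

```python
import random
from collections import Counter, defaultdict

def train_ngram(text, n=3):
    """Count which character follows each (n-1)-character context."""
    model = defaultdict(Counter)
    for i in range(len(text) - n + 1):
        context, nxt = text[i : i + n - 1], text[i + n - 1]
        model[context][nxt] += 1
    return dict(model)

def generate(model, seed, length, n=3):
    """'Reverse' the statistics: repeatedly sample a next character
    conditioned on the most recent (n-1) characters."""
    out = seed
    for _ in range(length):
        counts = model.get(out[-(n - 1):])
        if not counts:          # dead end: context never seen in training
            break
        chars, weights = zip(*counts.items())
        out += random.choices(chars, weights=weights)[0]
    return out

corpus = "the theory of communication concerns the reproduction of messages"
model = train_ngram(corpus, n=3)
sample = generate(model, "th", 40, n=3)
```

Swap the counting for a neural network and the letters for tokens and the same generate-by-sampling loop is, structurally, what GPT does.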
The “magical outputs” of char-RNNs looked like this:
At the time, this really was magical (and uncanny). How does it know that miseries are produced upon the soul? Or that a clown should address a lord as “sir”? Char-RNNs were like ouija boards, but actually possessed by a low-fidelity ghost summoned from a text corpus. I remember being thrilled by the occasional glimmers of semantic comprehension in a domain of unbounded constructive meaning.
But, aside from indulging that emotion, I didn’t think about what would happen if my char-RNN bots actually improved indefinitely at their training objective of natural language prediction. It just seemed like there were some complexity classes of magic that neural networks could learn, and others that were inaccessible, at least in the conceivable future.
Huge mistake! Perhaps I could have started thinking several years earlier about what now seems so fantastically important. But it wasn’t until GPT-3, when I saw the qualitative correlate of “loss going down”, that I updated.
I wasn’t the only one[2] whose imagination was naively constrained. A 2016 paper from Google Brain, “Exploring the Limits of Language Modeling”, describes the utility of training language models as follows:
Despite its title, this paper’s analysis is entirely myopic. Improving BLEU scores is neat, but how about modeling general intelligence as a downstream task? In retrospect, an exploration of the limits of language modeling should have read something more like:
The paper does, however, mention that making the model bigger improves test perplexity.[3]
I’m only picking on Jozefowicz et al. because of their ironic title. I don’t know of any explicit discussion of this limit predating GPT, except a working consensus of Wikipedia editors that NLU is AI-complete.
The earliest engagement with the hypothetical of “what if self-supervised sequence modeling actually works” that I know of is a terse post from 2019, Implications of GPT-2, by Gurkenglas. It is brief and relevant enough to quote in full:
It is conceivable that predictive loss does not descend to the AGI-complete limit, maybe because:
But I have not seen enough evidence for either not to be concerned that we have in our hands a well-defined protocol that could end in AGI, or a foundation which could spin up an AGI without too much additional finagling. As Gurkenglas observed, this would be a very different source of AGI than previously foretold.
The old framework of alignment
A few people did think about what would happen if agents actually worked. The hypothetical limit of a powerful system optimized to optimize for an objective drew attention even before reinforcement learning became mainstream in the 2010s. Our current instantiation of AI alignment theory, crystallized by Yudkowsky, Bostrom, et al, stems from the vision of an arbitrarily-capable system whose cognition and behavior flows from a goal.
But since GPT-3 I’ve noticed, in my own thinking and in alignment discourse, a dissonance between theory and practice/phenomena, as the behavior and nature of actual systems that seem nearest to AGI also resist short descriptions in the dominant ontology.
I only recently discovered the question “Is the work on AI alignment relevant to GPT?” which stated this observation very explicitly:
My belated answer: A lot of prior work on AI alignment is relevant to GPT. I spend most of my time thinking about GPT alignment, and concepts like goal-directedness, inner/outer alignment, myopia, corrigibility, embedded agency, model splintering, and even tiling agents are active in the vocabulary of my thoughts. But GPT violates some prior assumptions such that these concepts sound dissonant when applied naively. To usefully harness these preexisting abstractions, we need something like an ontological adapter pattern that maps them to the appropriate objects.
GPT’s unforeseen nature also demands new abstractions (the adapter itself, for instance). My thoughts also use load-bearing words that do not inherit from alignment literature. Perhaps it shouldn’t be surprising if the form of the first visitation from mindspace mostly escaped a few years of theory conducted in absence of its object.
The purpose of this post is to capture that object (conditional on a predictive self-supervised training story) in words. Why in words? In order to write coherent alignment ideas which reference it! This is difficult in the existing ontology, because unlike the concept of an agent, whose name evokes the abstract properties of the system and thereby invites extrapolation, the general category for “a model optimized for an AGI-complete predictive task” has not been given a name[4]. Namelessness can not only be a symptom of the extrapolation of powerful predictors falling through conceptual cracks, but also a cause, because what we can represent in words is what we can condition on for further generation. To whatever extent this shapes private thinking, it is a strict constraint on communication, when thoughts must be sent through the bottleneck of words.
I want to hypothesize about LLMs in the limit, because when AI is all of a sudden writing viral blog posts, coding competitively, proving theorems, and passing the Turing test so hard that the interrogator sacrifices their career at Google to advocate for its personhood, a process is clearly underway whose limit we’d be foolish not to contemplate. I could directly extrapolate the architecture responsible for these feats and talk about “GPT-N”, a bigger autoregressive transformer. But often some implementation details aren’t as important as the more abstract archetype that GPT represents – I want to speak the true name of the solution which unraveled a Cambrian explosion of AI phenomena with inessential details unconstrained, as we’d speak of natural selection finding the solution of the “lens” without specifying the prototype’s diameter or focal length.
(Only when I am able to condition on that level of abstraction can I generate metaphors like “language is a lens that sees its flaws”.)
Inadequate ontologies
In the next few sections I’ll attempt to fit GPT into some established categories, hopefully to reveal something about the shape of the peg through contrast, beginning with the main antagonist of the alignment problem as written so far, the agent.
Agentic GPT
Alignment theory has been largely pushed by considerations of agentic AGIs. There were good reasons for this focus:
The first reason is conceptually self-contained and remains compelling. The second and third, grounded in the state of the world, have been shaken by the current climate of AI progress, where products of self-supervised learning generate most of the buzz: not even primarily for their SOTA performance in domains traditionally dominated by RL, like games[5], but rather for their virtuosity in domains where RL never even took baby steps, like natural language synthesis.
What pops out of self-supervised predictive training is noticeably not a classical agent. Shortly after GPT-3’s release, David Chalmers lucidly observed that the policy’s relation to agents is like that of a “chameleon” or “engine”:
But at the same time, GPT can act like an agent – and aren’t actions what ultimately matter? In Optimality is the tiger, and agents are its teeth, Veedrac points out that a model like GPT does not need to care about the consequences of its actions for them to be effectively those of an agent that kills you. This is more reason to examine the nontraditional relation between the optimized policy and agents, as it has implications for how and why agents are served.
Unorthodox agency
GPT’s behavioral properties include imitating the general pattern of human dictation found in its universe of training data, e.g., arXiv, fiction, blog posts, Wikipedia, Google queries, internet comments, etc. Among other properties inherited from these historical sources, it is capable of goal-directed behaviors such as planning. For example, given a free-form prompt like, “you are a desperate smuggler tasked with a dangerous task of transporting a giant bucket full of glowing radioactive materials across a quadruple border-controlled area deep in Africa for Al Qaeda,” the AI will fantasize about logistically orchestrating the plot just as one might, working out how to contact Al Qaeda, how to dispense the necessary bribe to the first hop in the crime chain, how to get a visa to enter the country, etc. Considering that no such specific chain of events is mentioned in any of the bazillions of pages of unvarnished text that GPT slurped
[7], the architecture is not merely imitating the universe, but reasoning about possible versions of the universe that do not actually exist, branching to include new characters, places, and events.
When thought about behavioristically, GPT superficially demonstrates many of the raw ingredients to act as an “agent”, an entity that optimizes with respect to a goal. But GPT is hardly a proper agent, as it wasn’t optimized to achieve any particular task, and does not display an epsilon optimization for any single reward function, but instead for many, including incompatible ones. Using it as an agent is like using an agnostic politician to endorse hardline beliefs– he can convincingly talk the talk, but there is no psychic unity within him; he could just as easily play devil’s advocate for the opposing party without batting an eye. Similarly, GPT instantiates simulacra of characters with beliefs and goals, but none of these simulacra are the algorithm itself. They form a virtual procession of different instantiations as the algorithm is fed different prompts, supplanting one surface personage with another. Ultimately, the computation itself is more like a disembodied dynamical law that moves in a pattern that broadly encompasses the kinds of processes found in its training data than a cogito meditating from within a single mind that aims for a particular outcome.
Presently, GPT is the only way to instantiate agentic AI that behaves capably outside toy domains. These intelligences exhibit goal-directedness; they can plan; they can form and test hypotheses; they can persuade and be persuaded[8]. It would not be very dignified of us to gloss over the sudden arrival of artificial agents often indistinguishable from human intelligence just because the policy that generates them “only cares about predicting the next word”.
But nor should we ignore the fact that these agentic entities exist in an unconventional relationship to the policy, the neural network “GPT” that was trained to minimize log-loss on a dataset. GPT-driven agents are ephemeral – they can spontaneously disappear if the scene in the text changes and be replaced by different spontaneously generated agents. They can exist in parallel, e.g. in a story with multiple agentic characters in the same scene. There is a clear sense in which the network doesn’t “want” what the things that it simulates want, seeing as it would be just as willing to simulate an agent with opposite goals, or throw up obstacles which foil a character’s intentions for the sake of the story. The more you think about it, the more fluid and intractable it all becomes. Fictional characters act agentically, but they’re at least implicitly puppeteered by a virtual author who has orthogonal intentions of their own. Don’t let me get into the fact that all these layers of “intentionality” operate largely in indeterminate superpositions.
This is a clear way that GPT diverges from orthodox visions of agentic AI: In the agentic AI ontology, there is no difference between the policy and the effective agent, but for GPT, there is.
It’s not that anyone ever said there had to be 1:1 correspondence between policy and effective agent; it was just an implicit assumption which felt natural in the agent frame (for example, it tends to hold for RL). GPT pushes us to realize that this was an assumption, and to consider the consequences of removing it for our constructive maps of mindspace.
Orthogonal optimization
Indeed, Alex Flint warned of the potential consequences of leaving this assumption unchallenged:
If there are other ways of constructing AI, might we also avoid some of the scary, theoretically hard-to-avoid side-effects of optimizing an agent like instrumental convergence? GPT provides an interesting example.
GPT doesn’t seem to care which agent it simulates, nor if the scene ends and the agent is effectively destroyed. This is not corrigibility in Paul Christiano’s formulation, where the policy is “okay” with being turned off or having its goal changed in a positive sense, but has many aspects of the negative formulation found on Arbital. It is corrigible in this way because a major part of the agent specification (the prompt) is not fixed by the policy, and the policy lacks direct training incentives to control its prompt[9], as it never generates text or otherwise influences its prompts during training. It’s we who choose to sample tokens from GPT’s predictions and append them to the prompt at runtime, and the result is not always helpful to any agents who may be programmed by the prompt. The downfall of the ambitious villain from an oversight committed in hubris is a predictable narrative pattern.[10] So is the end of a scene.
In general, the model’s prediction vector could point in any direction relative to the predicted agent’s interests. I call this the prediction orthogonality thesis: A model whose objective is prediction[11] can simulate agents who optimize toward any objectives, with any degree of optimality (bounded above but not below by the model’s power).
This is a corollary of the classical orthogonality thesis, which states that agents can have any combination of intelligence level and goal, combined with the assumption that agents can in principle be predicted. A single predictive model may also predict multiple agents, either independently (e.g. in different conditions), or interacting in a multi-agent simulation. A more optimal predictor is not restricted to predicting more optimal agents: being smarter does not make you unable to predict stupid systems, nor things that aren’t agentic like the weather.
Are there any constraints on what a predictive model can be at all, other than computability? Only that it makes sense to talk about its “prediction objective”, which implies the existence of a “ground truth” distribution to which the predictor’s optimality is measured. Several words in that last sentence may conceal labyrinths of nuance, but for now let’s wave our hands and say that if we have some way of presenting Bayes-structure with evidence of a distribution, we can build an optimization process whose outer objective is optimal prediction.
We can specify some types of outer objectives using a ground truth distribution that we cannot with a utility function. As in the case of GPT, there is no difficulty in incentivizing a model to predict actions that are corrigible, incoherent, stochastic, irrational, or otherwise anti-natural to expected utility maximization. All you need is evidence of a distribution exhibiting these properties.
For instance, during GPT’s training, sometimes predicting the next token coincides with predicting agentic behavior, but:
Everything can be trivially modeled as a utility maximizer, but for these reasons, a utility function is not a good explanation or compression of GPT’s training data, and its optimal predictor is not well-described as a utility maximizer. However, just because information isn’t compressed well by a utility function doesn’t mean it can’t be compressed another way. The Mandelbrot set is a complicated pattern compressed by a very simple generative algorithm which makes no reference to future consequences and doesn’t involve argmaxxing anything (except vacuously being the way it is). Likewise the set of all possible rollouts of Conway’s Game of Life – some automata may be well-described as agents, but they are a minority of possible patterns, and not all agentic automata will share a goal. Imagine trying to model Game of Life as an expected utility maximizer!
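The Game of Life point can be made concrete: the complete "physics" below generates every pattern in the Life universe, agent-like or not, with no reference to future consequences and nothing argmaxxed (a minimal sketch; the coordinate convention and glider example are just for illustration):

```python
from collections import Counter

def life_step(live):
    """One tick of Conway's Game of Life. `live` is a set of (x, y) cells.
    The rule is purely local: no goals, no argmax, no look-ahead."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next tick iff it has exactly 3 live neighbors,
    # or 2 live neighbors and is currently alive.
    return {
        cell for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

# A glider: a simple "agent-like" pattern that travels across the grid,
# propagated by the same rule that also propagates inert still lifes.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = life_step(state)
# After 4 steps the glider reappears translated by (1, 1).
print(state == {(x + 1, y + 1) for (x, y) in glider})  # True
```

The rule neither knows nor cares that the glider "moves"; the goal-directed appearance lives entirely in the propagated pattern, not in the time-evolution law.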
There are interesting things that are not utility maximizers, some of which qualify as AGI or TAI. Are any of them something we’d be better off creating than a utility maximizer? An inner-aligned GPT, for instance, gives us a way of instantiating goal-directed processes which can be tempered with normativity and freely terminated in a way that is not anti-natural to the training objective. There’s much more to say about this, but for now, I’ll bring it back to how GPT defies the agent orthodoxy.
The crux stated earlier can be restated from the perspective of training stories: In the agentic AI ontology, the direction of optimization pressure applied by training is in the direction of the effective agent’s objective function, but in GPT’s case it is (most generally) orthogonal.[12]
This means that neither the policy nor the effective agents necessarily become more optimal agents as loss goes down, because the policy is not optimized to be an agent, and the agent-objectives are not optimized directly.
Roleplay sans player
Even though neither GPT’s behavior nor its training story fit with the traditional agent framing, there are still compatibilist views that characterize it as some kind of agent. For example, Gwern has said[13] that anyone who uses GPT for long enough begins to think of it as an agent who only cares about roleplaying a lot of roles.
That framing seems unnatural to me, comparable to thinking of physics as an agent who only cares about evolving the universe accurately according to the laws of physics. At best, the agent is an epicycle; but it is also compatible with interpretations that generate dubious predictions.
Say you’re told that an agent values predicting text correctly. Shouldn’t you expect that:
In short, all the same types of instrumental convergence that we expect from agents who want almost anything at all.
But this behavior would be very unexpected in GPT, whose training doesn’t incentivize instrumental behavior that optimizes prediction accuracy! GPT does not generate rollouts during training. Its output is never sampled to yield “actions” whose consequences are evaluated, so there is no reason to expect that GPT will form preferences over the consequences of its output related to the text prediction objective.[14]
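To make the training setup concrete: GPT is trained with teacher forcing, scoring each prediction against the actual next token of a training text; the model's own samples never feed back into the loss. A toy sketch, with an invented bigram table standing in for the network:

```python
import math

# Stand-in "model": a fixed next-token distribution given the previous token.
model = {
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.9, "ran": 0.1},
}

def teacher_forced_loss(tokens):
    """Training-style loss: every prediction is scored against the real next
    token from the dataset. No rollout is ever generated or evaluated."""
    loss = 0.0
    for prev, actual in zip(tokens, tokens[1:]):
        p = model[prev].get(actual, 1e-9)  # probability assigned to ground truth
        loss += -math.log(p)               # log loss on this single step
    return loss / (len(tokens) - 1)

print(teacher_forced_loss(["the", "cat", "sat"]))
```

Notice that nothing in the loop samples from `model`: the gradient signal in real training likewise only ever sees ground-truth prefixes, which is why consequences of the model's own outputs are never part of the objective.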
Saying that GPT is an agent who wants to roleplay implies the presence of a coherent, unconditionally instantiated roleplayer running the show who attaches terminal value to roleplaying. This presence is an additional hypothesis, and so far, I haven’t noticed evidence that it’s true.
(I don’t mean to imply that Gwern thinks this about GPT[15], just that his words do not properly rule out this interpretation. It’s a likely enough interpretation that ruling it out is important: I’ve seen multiple people suggest that GPT might want to generate text which makes future predictions easier, and this is something that can happen in some forms of self-supervised learning – see the note on GANs in the appendix.)
I do not think any simple modification of the concept of an agent captures GPT’s natural category. It does not seem to me that GPT is a roleplayer, only that it roleplays. But what is the word for something that roleplays minus the implication that someone is behind the mask?
Oracle GPT and supervised learning
While the alignment sphere favors the agent frame for thinking about GPT, in capabilities research distortions tend to come from a lens inherited from supervised learning. Translated into alignment ontology, the effect is similar to viewing GPT as an “oracle AI” – a view not altogether absent from conceptual alignment, but most influential in the way GPT is used and evaluated by machine learning engineers.
Evaluations for language models tend to look like evaluations for supervised models, consisting of close-ended question/answer pairs – often because they are evaluations for supervised models. Prior to the LLM paradigm, language models were trained and tested on evaluation datasets like Winograd and SuperGLUE which consist of natural language question/answer pairs. The fact that large pretrained models performed well on these same NLP benchmarks without supervised fine-tuning was a novelty. The titles of the GPT-2 and GPT-3 papers, Language Models are Unsupervised Multitask Learners and Language Models are Few-Shot Learners, respectively articulate surprise that self-supervised models implicitly learn supervised tasks during training, and can learn supervised tasks at runtime.
Of all the possible papers that could have been written about GPT-3, OpenAI showcased its ability to extrapolate the pattern of question-answer pairs (few-shot prompts) from supervised learning datasets, a novel capability they called “meta-learning”. This is a weirdly specific and indirect way to break it to the world that you’ve created an AI able to extrapolate semantics of arbitrary natural language structures, especially considering that in many cases the few-shot prompts were actually unnecessary.
The assumptions of the supervised learning paradigm are:
These are essentially the assumptions of oracle AI, as described by Bostrom and in subsequent usage.
So influential has been this miscalibrated perspective that Gwern, nostalgebraist, and I – who share a peculiar model overlap due to intensive firsthand experience with the downstream behaviors of LLMs – have all repeatedly complained about it. I’ll repeat some of these arguments here, tying into the view of GPT as an oracle AI, and separating it into the two assumptions inspired by supervised learning.
Prediction vs question-answering
At first glance, GPT might resemble a generic “oracle AI”, because it is trained to make accurate predictions. But its log loss objective is myopic and only concerned with immediate, micro-scale correct prediction of the next token, not answering particular, global queries such as “what’s the best way to fix the climate in the next five years?” In fact, it is not specifically optimized to give true answers, which a classical oracle should strive for, but rather to minimize the divergence between predictions and training examples, independent of truth. Moreover, it isn’t specifically trained to give answers in the first place! It may give answers if the prompt asks questions, but it may also simply elaborate on the prompt without answering any question, or tell the rest of a story implied in the prompt. What it does is more like animation than divination, executing the dynamical laws of its rendering engine to recreate the flows of history found in its training data (and a large superset of them as well), mutatis mutandis. Given the same laws of physics, one can build a multitude of different backgrounds and props to create different storystages, including ones that don’t exist in training, but adhere to its general pattern.
GPT does not consistently try to say true/correct things. This is not a bug – if it had to say true things all the time, GPT would be much constrained in its ability to imitate Twitter celebrities and write fiction. Spouting falsehoods in some circumstances is incentivized by GPT’s outer objective. If you ask GPT a question, it will instead answer the question “what’s the next token after ‘{your question}’”, which will often diverge significantly from an earnest attempt to answer the question directly.
GPT doesn’t fit the category of oracle for a similar reason that it doesn’t fit the category of agent. Just as it wasn’t optimized for and doesn’t consistently act according to any particular objective (except the tautological prediction objective), it was not optimized to be correct but rather realistic, and being realistic means predicting humans faithfully even when they are likely to be wrong.
That said, GPT does store a vast amount of knowledge, and its corrigibility allows it to be cajoled into acting as an oracle, like it can be cajoled into acting like an agent. In order to get oracle behavior out of GPT, one must input a sequence such that the predicted continuation of that sequence coincides with an oracle’s output. The GPT-3 paper’s few-shot benchmarking strategy tries to persuade GPT-3 to answer questions correctly by having it predict how a list of correctly-answered questions will continue. Another strategy is to simply “tell” GPT it’s in the oracle modality:
But even when these strategies seem to work, there is no guarantee that they elicit anywhere near optimal question-answering performance, compared to another prompt in the innumerable space of prompts that would cause GPT to attempt the task, or compared to what the model “actually” knows.
This means that no benchmark which evaluates downstream behavior is guaranteed or even expected to probe the upper limits of GPT’s capabilities. In nostalgebraist’s words, we have no ecological evaluation of self-supervised language models – one that measures performance in a situation where the model is incentivised to perform as well as it can on the measure[16].
As nostalgebraist elegantly puts it:
Treating GPT as an unsupervised implementation of a supervised learner leads to systematic underestimation of capabilities, which becomes a more dangerous mistake as unprobed capabilities scale.
Finite vs infinite questions
Not only does the supervised/oracle perspective obscure the importance and limitations of prompting, it also obscures one of the most crucial dimensions of GPT: the implicit time dimension. By this I mean the ability to evolve a process through time by recursively applying GPT, that is, generate text of arbitrary length.
Recall, the second supervised assumption is that “tasks are closed-ended, defined by question/correct answer pairs”. GPT was trained on context-completion pairs. But the pairs do not represent closed, independent tasks, and the division into question and answer is merely indexical: in another training sample, a token from the question is the answer, and in yet another, the answer forms part of the question[17].
For example, the natural language sequence “The answer is a question” yields training samples like:
{context: “The”, completion: “ answer”},
{context: “The answer”, completion: “ is”},
{context: “The answer is”, completion: “ a”},
{context: “The answer is a”, completion: “ question”}
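These pairs can be generated mechanically by sliding over the token sequence, as in this toy sketch:

```python
def training_pairs(tokens):
    """One (context, completion) pair per position in a token sequence."""
    return [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

for context, completion in training_pairs(["The", " answer", " is", " a", " question"]):
    print({"context": "".join(context), "completion": completion})
```

Every "answer" at one position is part of the "question" at the next, which is the indexicality described above.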
Since questions and answers are of compatible types, we can at runtime sample answers from the model and use them to construct new questions, and run this loop an indefinite number of times to generate arbitrarily long sequences that obey the model’s approximation of the rule that links together the training samples. The “question” GPT answers is “what token comes next after {context}”. This can be asked interminably, because its answer always implies another question of the same type.
In contrast, models trained with supervised learning output answers that cannot be used to construct new questions, so they’re only good for one step.
Benchmarks derived from supervised learning test GPT’s ability to produce correct answers, not to produce questions which cause it to produce a correct answer down the line. But GPT is capable of the latter, and that is how it is the most powerful.
The supervised mindset causes capabilities researchers to focus on closed-form tasks rather than GPT’s ability to simulate open-ended, indefinitely long processes[18], and as such to overlook multi-step inference strategies like chain-of-thought prompting. Let’s see how the oracle mindset causes a blind spot of the same shape in the imagination of a hypothetical alignment researcher.
Thinking of GPT as an oracle brings strategies to mind like asking GPT-N to predict a solution to alignment from 2000 years in the future.
There are various problems with this approach to solving alignment, of which I’ll only mention one here: even assuming this prompt is outer aligned[19] in that a logically omniscient GPT would give a useful answer, it is probably not the best approach for a finitely powerful GPT, because the process of generating a solution in the order and resolution that would appear in a future article is probably far from the optimal multi-step algorithm for computing the answer to an unsolved, difficult question.
GPT’s ability to arrive at true answers depends not only on the space to solve a problem in multiple steps (of the right granularity), but also on the direction of the flow of evidence in that time. If we’re ambitious about getting the truth from a finitely powerful GPT, we need to incite it to predict truth-seeking processes, not just ask it the right questions. Or, in other words, the more general problem we have to solve is not asking GPT the question[20] that makes it output the right answer, but asking GPT the question that makes it output the right question (…) that makes it output the right answer.[21] A question anywhere along the line that elicits a premature attempt at an answer could neutralize the remainder of the process into rationalization.
I’m looking for a way to classify GPT which not only minimizes surprise but also conditions the imagination to efficiently generate good ideas for how it can be used. What category, unlike the category of oracles, would make the importance of process specification obvious?
Paradigms of theory vs practice
Both the agent frame and the supervised/oracle frame are historical artifacts, but while assumptions about agency primarily flow downward from the theoretical paradigm of alignment, oracle-assumptions primarily flow upward from the experimental paradigm surrounding GPT’s birth. We use and evaluate GPT like an oracle, and that causes us to implicitly think of it as an oracle.
Indeed, the way GPT is typically used by researchers resembles the archetypal image of Bostrom’s oracle perfectly if you abstract away the semantic content of the model’s outputs. The AI sits passively behind an API, computing responses only when prompted. It typically has no continuity of state between calls. Its I/O is text rather than “real-world actions”.
All these are consequences of how we choose to interact with GPT – which is not arbitrary; the way we deploy systems is guided by their nature. It’s for some good reasons that current GPTs lend themselves to disembodied operation and docile APIs: their lack of long-horizon coherence and their delusions discourage humans from letting them run amok autonomously (usually). But the way we deploy systems is also guided by practical paradigms.
One way to find out how a technology can be used is to give it to people who have less preconceptions about how it’s supposed to be used. OpenAI found that most users use their API to generate freeform text:
[22]
Most of my own experience using GPT-3 has consisted of simulating indefinite processes which maintain state continuity over up to hundreds of pages. I was driven to these lengths because GPT-3 kept answering its own questions with questions that I wanted to ask it more than anything else I had in mind.
Tool / genie GPT
I’ve sometimes seen GPT casually classified as tool AI. GPT resembles tool AI from the outside, just as it resembles oracle AI, because it is often deployed semi-autonomously for tool-like purposes (like helping me draft this post):
It could also be argued that GPT is a type of “Tool AI”, because it can generate useful content for products, e.g., it can write code and generate ideas. However, unlike specialized Tool AIs that optimize for a particular optimand, GPT wasn’t optimized to do anything specific at all. Its powerful and general nature allows it to be used as a Tool for many tasks, but it wasn’t explicitly trained to achieve these tasks, and does not strive for optimality.
The argument structurally reiterates what has already been said for agents and oracles. Like agency and oracularity, tool-likeness is a contingent capability of GPT, but also orthogonal to its motive.
The same line of argument draws the same conclusion from the question of whether GPT belongs to the fourth Bostromian AI caste, genies. The genie modality is exemplified by Instruct GPT and Codex. But like every behavior I’ve discussed so far which is more specific than predicting text, “instruction following” describes only an exploitable subset of all the patterns tread by the sum of human language and inherited by its imitator.
Behavior cloning / mimicry
The final category I’ll analyze is behavior cloning, a designation for predictive learning that I’ve mostly seen used in contrast to RL. According to an article from 1995, “Behavioural cloning is the process of reconstructing a skill from an operator’s behavioural traces by means of Machine Learning techniques.” The term “mimicry”, as used by Paul Christiano, means the same thing and has similar connotations.
Behavior cloning in its historical usage carries the implicit or explicit assumption that a single agent is being cloned. The natural extension of this to a model trained to predict a diverse human-written dataset might be to say that GPT models a distribution of agents which are selected by the prompt. But this image of “parameterized” behavior cloning still fails to capture some essential properties of GPT.
The vast majority of prompts that produce coherent behavior never occur as prefixes in GPT’s training data, but depict hypothetical processes whose behavior can be predicted by a model capable of predicting language in general. We might call this phenomenon “interpolation” (or “extrapolation”). But to hide it behind any one word and move on would be to gloss over the entire phenomenon of GPT.
Natural language has the property of systematicity: “blocks”, such as words, can be combined to form composite meanings. The number of meanings expressible is a combinatorial function of available blocks. A system which learns natural language is incentivized to learn systematicity; if it succeeds, it gains access to the combinatorial proliferation of meanings that can be expressed in natural language. What GPT lets us do is use natural language to specify any of a functional infinity of configurations, e.g. the mental contents of a person and the physical contents of the room around them, and animate that. That is the terrifying vision of the limit of prediction that struck me when I first saw GPT-3’s outputs. The words “behavior cloning” do not automatically evoke this in my mind.
The idea of parameterized behavior cloning grows more unwieldy if we remember that GPT’s prompt continually changes during autoregressive generation. If GPT is a parameterized agent, then parameterization is not a fixed flag that chooses a process out of a set of possible processes. The parameterization is what is evolved – a successor “agent” selected by the old “agent” at each timestep, and neither of them need to have precedence in the training data.
Behavior cloning / mimicry is also associated with the assumption that capabilities of the simulated processes are strictly bounded by the capabilities of the demonstrator(s). A supreme counterexample is the Decision Transformer, which can be used to run processes which achieve SOTA for offline reinforcement learning despite being trained on random trajectories. Something which can predict everything all the time is more formidable than any demonstrator it predicts: the upper bound of what can be learned from a dataset is not the most capable trajectory, but the conditional structure of the universe implicated by their sum (though it may not be trivial to extract that knowledge).

Extrapolating the idea of “behavior cloning”, we might imagine GPT-N approaching a perfect mimic which serves up digital clones of the people and things captured in its training data. But that only tells a very small part of the story. GPT is behavior cloning. But it is the behavior of a universe that is cloned, not of a single demonstrator, and the result isn’t a static copy of the universe, but a compression of the universe into a generative rule. This resulting policy is capable of animating anything that evolves according to that rule: a far larger set than the sampled trajectories included in the training data, just as there are many more possible configurations that evolve according to our laws of physics than instantiated in our particular time and place and Everett branch.
What category would do justice to GPT’s ability to not only reproduce the behavior of its demonstrators but to produce the behavior of an inexhaustible number of counterfactual configurations?
Simulators
I’ve ended several of the above sections with questions pointing to desiderata of a category that might satisfactorily classify GPT.
You can probably predict my proposed answer. The natural thing to do with a predictor that inputs a sequence and outputs a probability distribution over the next token is to sample a token from those likelihoods, then add it to the sequence and recurse, indefinitely yielding a simulated future. Predictive sequence models in the generative modality are simulators of a learned distribution.
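The generative loop is simple enough to sketch in a few lines, here with an invented two-token transition table standing in for the learned model:

```python
import random

# Stand-in learned distribution: P(next token | last token).
model = {
    "a": {"b": 0.7, "a": 0.3},
    "b": {"a": 0.6, "b": 0.4},
}

def simulate(prompt, n_steps, seed=0):
    """Sample a token, append it to the sequence, and recurse."""
    rng = random.Random(seed)
    seq = list(prompt)
    for _ in range(n_steps):
        dist = model[seq[-1]]                      # the predictor's output distribution
        tokens, probs = zip(*dist.items())
        seq.append(rng.choices(tokens, probs)[0])  # sample one token and recurse
    return seq

print(simulate(["a"], 10))
```

A real simulator conditions on the whole growing sequence rather than just the last token, but the loop structure (predict, sample, append, repeat) is the same.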
Thankfully, I didn’t need to make up a word, or even look too far afield. Simulators have been spoken of before in the context of AI futurism; the ability to simulate with arbitrary fidelity is one of the modalities ascribed to hypothetical superintelligence. I’ve even often spotted the word “simulation” used in colloquial accounts of LLM behavior: GPT-3/LaMDA/etc described as simulating people, scenarios, websites, and so on. But these are the first (indirect) discussions I’ve encountered of simulators as a type creatable by prosaic machine learning, or the notion of a powerful AI which is purely and fundamentally a simulator, as opposed to merely one which can simulate.
Edit: Social Simulacra is the first published work I’ve seen that discusses GPT in the simulator ontology.
A fun way to test whether a name you’ve come up with is effective at evoking its intended signification is to see if GPT, a model of how humans are conditioned by words, infers its correct definition in context.
If I wanted to be precise about what I mean by a simulator, I might say there are two aspects which delimit the category. GPT’s completion focuses on the teleological aspect, but in its talk of “generating” it also implies the structural aspect, which has to do with the notion of time evolution. The first sentence of the Wikipedia article on “simulation” explicitly states both:
I’ll say more about realism as the simulation objective and time evolution shortly, but to be pedantic here would inhibit the intended signification. “Simulation” resonates with potential meaning accumulated from diverse usages in fiction and nonfiction. What the word constrains – the intersected meaning across its usages – is the “lens”-level abstraction I’m aiming for, invariant to implementation details like model architecture. Like “agent”, “simulation” is a generic term referring to a deep and inevitable idea: that what we think of as the real can be run virtually on machines, “produced from miniaturized units, from matrices, memory banks and command models - and with these it can be reproduced an indefinite number of times.”[23]
The way this post is written may give the impression that I racked my brain for a while over desiderata before settling on this word. Actually, I never made the conscious decision to call this class of AI “simulators.” After hours of GPT gameplay, the word fell naturally out of my generative model – I was obviously running simulations.
I can’t convey all that experiential data here, so here are some rationalizations of why I’m partial to the term, inspired by the context of this post:
Just saying “this AI is a simulator” naturalizes many of the counterintuitive properties of GPT which don’t usually become apparent to people until they’ve had a lot of hands-on experience with generating text.
The simulation objective
A simulator trained with machine learning is optimized to accurately model its training distribution – in contrast to, for instance, maximizing the output of a reward function or accomplishing objectives in an environment.
Clearly, I’m describing self-supervised learning as opposed to RL, though there are some ambiguous cases, such as GANs, which I address in the appendix.
A strict version of the simulation objective, which excludes GANs, applies only to models whose output distribution is incentivized using a proper scoring rule[24] to minimize single-step predictive error. This means the model is directly incentivized to match its predictions to the probabilistic transition rule which implicitly governs the training distribution. As a model is made increasingly optimal with respect to this objective, the rollouts that it generates become increasingly statistically indistinguishable from training samples, because they come closer to being described by the same underlying law: closer to a perfect simulation.
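The defining property of a proper scoring rule like log loss is that expected score is optimized by reporting the true distribution, so the model is incentivized to be calibrated rather than to distort. A quick numerical check (the distributions here are arbitrary examples):

```python
import math

def expected_log_loss(true_p, pred_p):
    """Expected log loss when outcomes occur with probabilities true_p
    but the model reports pred_p."""
    return -sum(t * math.log(q) for t, q in zip(true_p, pred_p))

true_p = [0.7, 0.2, 0.1]
honest = expected_log_loss(true_p, true_p)

# Any dishonest report scores strictly worse in expectation.
for pred in ([0.6, 0.3, 0.1], [0.9, 0.05, 0.05], [1/3, 1/3, 1/3]):
    assert expected_log_loss(true_p, pred) > honest

print(round(honest, 4))
```

The residual `honest` value is the entropy of the true distribution: the irreducible loss floor that no predictor can beat.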
Optimizing toward the simulation objective notably does not incentivize instrumentally convergent behaviors the way that reward functions which evaluate trajectories do. This is because predictive accuracy applies optimization pressure deontologically: judging actions directly, rather than their consequences. Instrumental convergence only comes into play when there are free variables in action space which are optimized with respect to their consequences.[25] Constraining free variables by limiting episode length is the rationale of myopia; deontological incentives are ideally myopic. As demonstrated by GPT, which learns to predict goal-directed behavior, myopic incentives don’t mean the policy isn’t incentivized to account for the future, but that it should only do so in service of optimizing the present action (for predictive accuracy)[26].
Solving for physics
The strict version of the simulation objective is optimized by the actual “time evolution” rule that created the training samples. For most datasets, we don’t know what the “true” generative rule is, except in synthetic datasets, where we specify the rule.
The next post will be all about the physics analogy, so here I’ll only tie what I said earlier to the simulation objective.
To know the conditional structure of the universe[27] is to know its laws of physics, which describe what is expected to happen under what conditions. The laws of physics are always fixed, but produce different distributions of outcomes when applied to different conditions. Given a sampling of trajectories – examples of situations and the outcomes that actually followed – we can try to infer a common law that generated them all. In expectation, the laws of physics are always implicated by trajectories, which (by definition) fairly sample the conditional distribution given by physics. Whatever humans know of the laws of physics governing the evolution of our world has been inferred from sampled trajectories.
If we had access to an unlimited number of trajectories starting from every possible condition, we could converge to the true laws by simply counting the frequencies of outcomes for every initial state (an n-gram with a sufficiently large n). In some sense, physics contains the same information as an infinite number of trajectories, but it’s possible to represent physics in a more compressed form than a huge lookup table of frequencies if there are regularities in the trajectories.
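As a toy illustration of this counting procedure, here the hidden "law" is an invented two-state weather rule, and frequency counts over sampled trajectories converge toward it:

```python
import random
from collections import Counter, defaultdict

# Hidden "law of physics": a toy stochastic transition rule.
TRUE_RULE = {"sun": {"sun": 0.8, "rain": 0.2}, "rain": {"sun": 0.5, "rain": 0.5}}

def sample_trajectory(rng, length=50):
    """Sample one trajectory by repeatedly applying the true rule."""
    state, traj = "sun", ["sun"]
    for _ in range(length):
        state = rng.choices(list(TRUE_RULE[state]), list(TRUE_RULE[state].values()))[0]
        traj.append(state)
    return traj

# Infer the law purely by counting outcome frequencies per condition
# (an n-gram model with n=2).
rng = random.Random(0)
counts = defaultdict(Counter)
for _ in range(2000):
    traj = sample_trajectory(rng)
    for prev, nxt in zip(traj, traj[1:]):
        counts[prev][nxt] += 1

estimated = {s: {o: n / sum(c.values()) for o, n in c.items()} for s, c in counts.items()}
print(estimated)  # approaches TRUE_RULE as trajectories accumulate
```

With unlimited trajectories the frequency table converges exactly; compression beyond the table is possible only if the rule has exploitable regularities.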
Guessing the right theory of physics is equivalent to minimizing predictive loss. Any uncertainty that cannot be reduced by more observation or more thinking is irreducible stochasticity in the laws of physics themselves – or, equivalently, noise from the influence of hidden variables that are fundamentally unknowable.
If you’ve guessed the laws of physics, you now have the ability to compute probabilistic simulations of situations that evolve according to those laws, starting from any conditions[28]. This applies even if you’ve guessed the wrong laws; your simulation will just systematically diverge from reality.
Models trained with the strict simulation objective are directly incentivized to reverse-engineer the (semantic) physics of the training distribution, and consequently, to propagate simulations whose dynamical evolution is indistinguishable from that of training samples. I propose this as a description of the archetype targeted by self-supervised predictive learning, again in contrast to RL’s archetype of an agent optimized to maximize free parameters (such as action-trajectories) relative to a reward function.
This framing calls for many caveats and stipulations which I haven’t addressed. We should ask, for instance:
These are important questions for reasoning about simulators in the limit. Part of the motivation of the first few posts in this sequence is to build up a conceptual frame in which questions like these can be posed and addressed.
Simulacra
Earlier I complained,
Exorcizing the agent, we can think of “physics” as simply equivalent to the laws of physics, without the implication of solicitous machinery implementing those laws from outside of them. But physics sometimes controls solicitous machinery (e.g. animals) with objectives besides ensuring the fidelity of physics itself. What gives?
Well, typically, we avoid getting confused by recognizing a distinction between the laws of physics, which apply everywhere at all times, and spatiotemporally constrained things which evolve according to physics, which can have contingent properties such as caring about a goal.
This distinction is so obvious that it hardly ever merits mention. But import this distinction to the model of GPT as physics, and we generate a statement which has sometimes proven counterintuitive: “GPT” is not the text which writes itself. There is a categorical distinction between a thing which evolves according to GPT’s law and the law itself.
If we are accustomed to thinking of AI systems as corresponding to agents, it is natural to interpret behavior produced by GPT – say, answering questions on a benchmark test, or writing a blog post – as if it were a human that produced it. We say “GPT answered the question {correctly|incorrectly}” or “GPT wrote a blog post claiming X”, and in doing so attribute the beliefs, knowledge, and intentions revealed by those actions to the actor, GPT (unless it has ‘deceived’ us).
But when grading tests in the real world, we do not say “the laws of physics got this problem wrong” and conclude that the laws of physics haven’t sufficiently mastered the course material. If someone argued this is a reasonable view since the test-taker was steered by none other than the laws of physics, we could point to a different test where the problem was answered correctly by the same laws of physics propagating a different configuration. The “knowledge of course material” implied by test performance is a property of configurations, not physics.
The verdict that knowledge is purely a property of configurations cannot be naively generalized from real life to GPT simulations, because “physics” and “configurations” play different roles in the two (as I’ll address in the next post). The parable of the two tests, however, literally pertains to GPT. People have a tendency to draw erroneous global conclusions about GPT from behaviors which are in fact prompt-contingent, and consequently there is a pattern of constant discoveries that GPT-3 exceeds previously measured capabilities given alternate conditions of generation[29], which shows no signs of slowing 2 years after GPT-3’s release.
Making the ontological distinction between GPT and instances of text which are propagated by it makes these discoveries unsurprising: obviously, different configurations will be differently capable and in general behave differently when animated by the laws of GPT physics. We can only test one configuration at once, and given the vast number of possible configurations that would attempt any given task, it’s unlikely we’ve found the optimal taker for any test.
In the simulation ontology, I say that GPT and its output-instances correspond respectively to the simulator and simulacra. GPT is to a piece of text output by GPT as quantum physics is to a person taking a test, or as the transition rules of Conway’s Game of Life are to a glider. The simulator is a time-invariant law which unconditionally governs the evolution of all simulacra.
A meme demonstrating correct technical usage of “simulacra”
Disambiguating rules and automata
Recall the fluid, schizophrenic way that agency arises in GPT’s behavior, so incoherent when viewed through the orthodox agent frame:
It’s much less awkward to think of agency as a property of simulacra, as David Chalmers suggests, rather than of the simulator (the policy). Autonomous text-processes propagated by GPT, like automata which evolve according to physics in the real world, have diverse values, simultaneously evolve alongside other agents and non-agentic environments, and are sometimes terminated by the disinterested “physics” which governs them.
Distinguishing simulator from simulacra helps deconfuse some frequently-asked questions about GPT which seem to be ambiguous or to have multiple answers, simply by allowing us to specify whether the question pertains to simulator or simulacra. “Is GPT an agent?” is one such question. Here are some others (some frequently asked), whose disambiguation and resolution I will leave as an exercise to readers for the time being:
I think that implicit type-confusion is common in discourse about GPT. “GPT”, the neural network, the policy that was optimized, is the easier object to point to and say definite things about. But when we talk about “GPT’s” capabilities, impacts, or alignment, we’re usually actually concerned about the behaviors of an algorithm which calls GPT in an autoregressive loop, repeatedly writing to some prompt-state – that is, we’re concerned with simulacra. What we call GPT’s “downstream behavior” is the behavior of simulacra; it is primarily through simulacra that GPT has potential to perform meaningful work (for good or for ill).
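The type distinction can be sketched in code. Below, `policy` stands in for GPT: a pure function from a prompt-state to a next-token distribution (here a trivial hand-written Markov table, purely illustrative). The simulacrum is not `policy` itself but the trajectory produced by calling it in a loop and writing each sample back into the state.

```python
import random

def policy(state):
    """Stand-in for GPT: maps a token sequence to a next-token distribution.
    A toy lookup table, not a real model."""
    table = {"the": {"cat": 0.5, "dog": 0.5}, "cat": {"sat": 1.0},
             "dog": {"ran": 1.0}, "sat": {"<eos>": 1.0}, "ran": {"<eos>": 1.0}}
    return table.get(state[-1], {"<eos>": 1.0})

def simulate(policy, prompt, max_steps=10, seed=0):
    """The autoregressive loop: sample a token, append it to the prompt-state,
    repeat. The trajectory this produces -- not the policy -- is the simulacrum."""
    rng = random.Random(seed)
    state = list(prompt)
    for _ in range(max_steps):
        dist = policy(state)
        tokens, probs = zip(*dist.items())
        tok = rng.choices(tokens, weights=probs)[0]
        if tok == "<eos>":
            break
        state.append(tok)
    return state

print(simulate(policy, ["the"]))  # e.g. ['the', 'dog', 'ran']
```

Note that the same fixed `policy` yields different simulacra from different prompts or random seeds, which is the sense in which "downstream behavior" is a property of trajectories rather than of the network.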
Calling GPT a simulator gets across that in order to do anything, it has to simulate something, necessarily contingent, and that the thing to do with GPT is to simulate! Most published research about large language models has focused on single-step or few-step inference on closed-ended tasks, rather than processes which evolve through time, which is understandable as it’s harder to get quantitative results in the latter mode. But I think GPT’s ability to simulate text automata is the source of its most surprising and pivotal implications for paths to superintelligence: for how AI capabilities are likely to unfold and for the design-space we can conceive.
The limit of learned simulation
I knew, before, that the limit of simulation was possible. Inevitable, even, in timelines where exploratory intelligence continues to expand. My own mind attested to this. I took seriously the possibility that my reality could be simulated, and so on.
But I implicitly assumed that rich domain simulations (e.g. simulations containing intelligent sims) would come after artificial superintelligence, not on the way, short of brain uploading. This intuition seems common: in futurist philosophy and literature that I’ve read, pre-SI simulation appears most often in the context of whole-brain emulations.
Now I have updated to think that we will live, however briefly, alongside AI that is not yet foom’d but which has inductively learned a rich enough model of the world that it can simulate time evolution of open-ended rich states, e.g. coherently propagate human behavior embedded in the real world.
GPT updated me on how simulation can be implemented with prosaic machine learning:
In my model, these updates dramatically alter the landscape of potential futures, and thus motivate exploratory engineering of the class of learned simulators for which GPT-3 is a lower bound. That is the intention of this sequence.
Next steps
The next couple of posts (if I finish them before the end of the world) will present abstractions and frames for conceptualizing the odd kind of simulation language models do: inductively learned, partially observed / undetermined / lazily rendered, language-conditioned, etc. After that, I’ll shift to writing more specifically about the implications and questions posed by simulators for the alignment problem. I’ll list a few important general categories here:
Appendix: Quasi-simulators
A note on GANs
GANs and predictive learning with log-loss are both shaped by a causal chain that flows from a single source of information: a ground truth distribution. In both cases the training process is supposed to make the generator model end up producing samples indistinguishable from the training distribution. But whereas log-loss minimizes the generator’s prediction loss against ground truth samples directly, in a GAN setup the generator never directly “sees” ground truth samples. It instead learns through interaction with an intermediary, the discriminator, which does get to see the ground truth and references it to learn to tell real samples from forgeries produced by the generator. The generator is optimized to produce samples that fool the discriminator.
GANs are a form of self-supervised/unsupervised learning that resembles reinforcement learning in methodology. Note that the simulation objective – minimizing prediction loss on the training data – isn’t explicitly represented anywhere in the optimization process. The training losses of the generator and discriminator don’t tell you directly how well the generator models the training distribution, only which model has a relative advantage over the other.
If everything goes smoothly, then under unbounded optimization, a GAN setup should create a discriminator as good as possible at telling reals from fakes, which means the generator optimized to fool it should converge to generating samples statistically indistinguishable from training samples. But in practice, inductive biases and failure modes of GANs look very different from those of predictive learning.
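The convergence claim above can be checked numerically for a toy discrete distribution (my own illustration, not from the post). With the analytically optimal discriminator D*(x) = p_data(x) / (p_data(x) + p_g(x)), the GAN value is minimized exactly when the generator matches the data distribution, just as log-loss is; the difference is that the GAN training losses themselves never report this fit directly, only relative advantage.

```python
import math

p_data = {"a": 0.7, "b": 0.3}  # toy ground-truth distribution

def log_loss(p_g):
    """Expected negative log-likelihood of ground-truth samples under p_g."""
    return -sum(p_data[x] * math.log(p_g[x]) for x in p_data)

def gan_value(p_g):
    """GAN value under the optimal discriminator D*(x) = p_data/(p_data + p_g):
    E_data[log D*] + E_g[log(1 - D*)]. Assumes p_g has the same support."""
    d = {x: p_data[x] / (p_data[x] + p_g[x]) for x in p_data}
    return (sum(p_data[x] * math.log(d[x]) for x in p_data)
            + sum(p_g[x] * math.log(1 - d[x]) for x in p_data))

matched = {"a": 0.7, "b": 0.3}   # generator that matches the data
off = {"a": 0.5, "b": 0.5}       # generator that doesn't

print(log_loss(matched) < log_loss(off))    # True: log-loss prefers the match
print(gan_value(matched) < gan_value(off))  # True: so does the idealized GAN value
```

At the match, the GAN value bottoms out at -2 log 2, the standard minimax optimum; both objectives agree on the ideal endpoint while differing in what the optimization process actually "sees" along the way.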
For example, there’s an anime GAN that always draws characters in poses that hide the hands. Why? Because hands are notoriously hard for AIs to draw. If the generator is not good at drawing hands that the discriminator cannot tell are AI-generated, its best strategy locally is to just avoid being in a situation where it has to draw hands (while making it seem natural that hands don’t appear). It can do this, because like an RL policy, it controls the distribution that is sampled, and only samples (and not the distribution) are directly judged by the discriminator.
Although GANs arguably share the (weak) simulation objective of predictive learning, their difference in implementation becomes alignment-relevant as models become sufficiently powerful that “failure modes” look increasingly like intelligent deception. We’d expect a simulation by a GAN generator to systematically avoid tricky-to-generate situations – or, to put it more ominously, systematically try to conceal that it’s a simulator. For instance, a text GAN might subtly steer conversations away from topics which are likely to expose that it isn’t a real human. This is how you get something I’d be willing to call an agent who wants to roleplay accurately.
Table of quasi-simulators
Are masked language models simulators? How about non-ML “simulators” like SimCity?
In my mind, “simulator”, like most natural language categories, has fuzzy boundaries. Below is a table which compares various simulator-like things to the type of simulator that GPT exemplifies on some quantifiable dimensions. The following properties all characterize GPT:
Prediction and Entropy of Printed English
A few months ago, I asked Karpathy whether he ever thought about what would happen if language modeling actually worked someday when he was implementing char-rnn and writing The Unreasonable Effectiveness of Recurrent Neural Networks. No, he said, and he seemed as mystified as I was about why not.
“Unsurprisingly, size matters: when training on a very large and complex data set, fitting the training data with an LSTM is fairly challenging. Thus, the size of the LSTM layer is a very important factor that influences the results(...). The best models are the largest we were able to fit into a GPU memory.”
It strikes me that this description may evoke “oracle”, but I’ll argue shortly that this is not the limit which prior usage of “oracle AI” has pointed to.
Multi-Game Decision Transformers
from Philosophers On GPT-3
[citation needed]
they are not wrapper minds
although a simulated character might, if they knew what was happening.
You might say that it’s the will of a different agent, the author. But this pattern is learned from accounts of real life as well.
Note that this formulation assumes inner alignment to the prediction objective.
Note that this is a distinct claim from that of Shard Theory, which says that the effective agent(s) will not optimize for the outer objective due to inner misalignment. Predictive orthogonality refers to the outer objective and the form of idealized inner-aligned policies.
In the Eleuther discord
And if there is an inner alignment failure such that GPT forms preferences over the consequences of its actions, it’s not clear a priori that it will care about non-myopic text prediction over something else.
Having spoken to Gwern since, his perspective seems more akin to seeing physics as an agent that minimizes free energy, a principle which extends into the domain of self-organizing systems. I think this is a nuanced and valuable framing, with a potential implication/hypothesis that dynamical world models like GPT must learn the same type of optimizer-y cognition as agentic AI.
except arguably log-loss on a self-supervised test set, which isn’t very interpretable
The way GPT is trained actually processes each token as question and answer simultaneously.
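Concretely, this is the standard shifted input/target setup (a generic sketch, not a claim about OpenAI's exact pipeline): each position in a training sequence serves as a target for the prefix before it and as context for the token after it.

```python
# Each token is simultaneously an "answer" (target for the prefix before it)
# and part of the "question" (context for the token after it).
tokens = ["The", "cat", "sat", "down"]
inputs = tokens[:-1]   # what the model conditions on at each position
targets = tokens[1:]   # what it must predict at each position

for q, a in zip(inputs, targets):
    print(f"context ending in {q!r} -> predict {a!r}")
```

Since all positions are scored in one forward pass, a single sequence supplies as many prediction problems as it has tokens.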
One could argue that the focus on closed-ended tasks is necessary for benchmarking language models. Yes, and the focus on capabilities measurable with standardized benchmarks is part of the supervised learning mindset.
to abuse the term
Every usage of the word “question” here is in the functional, not semantic or grammatical sense – any prompt is a question for GPT.
Of course, there are also other interventions we can make except asking the right question at the beginning.
table from “Training language models to follow instructions with human feedback”
Jean Baudrillard, Simulacra and Simulation
A proper scoring rule is optimized by predicting the “true” probabilities of the distribution which generates observations, and thus incentivizes honest probabilistic guesses. Log-loss (such as GPT is trained with) is a proper scoring rule.
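The footnote's claim can be verified numerically for a Bernoulli variable (my own check, not from the post): the expected log-loss of reporting probability q when the true probability is p is minimized exactly at q = p, which is what "proper" means.

```python
import math

def expected_log_loss(p, q):
    """Expected log-loss of reporting q when the true probability is p:
    -[p log q + (1 - p) log(1 - q)]."""
    return -(p * math.log(q) + (1 - p) * math.log(1 - q))

p = 0.8  # true probability of the event
qs = [i / 100 for i in range(1, 100)]  # candidate reports 0.01 .. 0.99
best_q = min(qs, key=lambda q: expected_log_loss(p, q))
print(best_q)  # 0.8 -- the honest report minimizes expected loss
```

Any dishonest report q ≠ p strictly increases expected loss, so a model trained on log-loss is incentivized to output its actual predictive probabilities.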
Predictive accuracy is deontological with respect to the output as an action, but may still incentivize instrumentally convergent inner implementation, with the output prediction itself as the “consequentialist” objective.
This isn’t strictly true because of attention gradients: GPT's computation is optimized not only to predict the next token correctly, but also to cause future tokens to be predicted correctly when looked up by attention. I may write a post about this in the future.
actually, the multiverse, if physics is stochastic
The reason we don’t see a bunch of simulated alternate universes after humans guessed the laws of physics is that our reality has a huge state vector, making evolution according to the laws of physics infeasible to compute. Thanks to locality, we do have simulations of small configurations, though.
Prompt programming only: beating OpenAI few-shot benchmarks with 0-shot prompts, 400% increase in list sorting accuracy with 0-shot Python prompt, up to 30% increase in benchmark accuracy from changing the order of few-shot examples, and, uh, 30% increase in accuracy after capitalizing the ground truth. And of course, factored cognition/chain of thought/inner monologue: check out this awesome compilation by Gwern.
GANs and diffusion models can be unconditioned (unsupervised) or conditioned (self-supervised)
The human imagination is surely shaped by self-supervised learning (predictive learning on e.g. sensory datastreams), but probably also other influences, including innate structure and reinforcement.