At some level I agree with this post---policies learned by RL are probably not purely described as optimizing anything. I also agree that an alignment strategy might try to exploit the suboptimality of gradient descent, and indeed this is one of the major points of discussion amongst people working on alignment in practice at ML labs.
However, I'm confused or skeptical about the particular deviations you are discussing and I suspect I disagree with or misunderstand this post.
As you suggest, in deep RL we typically use gradient descent to find policies that achieve a lot of reward (typically updating the policy based on an estimator for the gradient of the reward).
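(For concreteness, a minimal sketch of such an estimator, in the REINFORCE style with a tabular softmax policy and toy names of my own choosing; this illustrates the general shape of the update, not anyone's actual training setup.)

```python
import numpy as np

def softmax(logits):
    z = np.exp(logits - logits.max())
    return z / z.sum()

def reinforce_gradient(theta, episode):
    """Score-function (REINFORCE) estimate of the gradient of expected return.

    theta:   (n_states, n_actions) logits of a tabular softmax policy
    episode: list of (state, action, reward) tuples from one rollout
    """
    grad = np.zeros_like(theta)
    episode_return = sum(r for _, _, r in episode)
    for s, a, _ in episode:
        probs = softmax(theta[s])
        dlogpi = -probs                      # d/dtheta[s, :] of log pi(a | s)
        dlogpi[a] += 1.0
        grad[s] += episode_return * dlogpi   # the return scales the log-prob gradient
    return grad

# a single (hypothetical) policy-gradient ascent step:
# theta += learning_rate * reinforce_gradient(theta, episode)
```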
If you have a system with a sophisticated understanding of the world, then cognitive policies like "select actions that I expect would lead to reward" will tend to outperform policies like "try to complete the task," and so I usually expect them to be selected by gradient descent over time. (Or we could be more precise and think about little fragments of policies, but I don't think it changes anything I say here.)
It seems to me like you are saying that you think gradient descent will fail to find such policies because...
Thanks for the detailed comment. Overall, it seems to me like my points stand, although I think a few of them are somewhat different than you seem to have interpreted.
policies learned by RL are probably not purely described as optimizing anything. I also agree that an alignment strategy might try to exploit the suboptimality of gradient descent
I think I believe the first claim, which I understand to mean "early-/mid-training AGI policies consist of contextually activated heuristics of varying sophistication, instead of e.g. a globally activated line of reasoning about a crisp inner objective." But that wasn't actually a point I was trying to make in this post.
in deep RL we typically use gradient descent to find policies that achieve a lot of reward (typically updating the policy based on an estimator for the gradient of the reward).
Depends. This describes vanilla PG but not DQN. I think there are lots of complications which throw serious wrenches into the "and then SGD hits a 'global reward optimum'" picture. I'm going to have a post explaining this in more detail, but I will say some abstract words right now in case it shakes something loose / clarifies my thoughts.
Critic-ba...
It sounded like OP was saying: using gradient descent to select a policy that gets a high reward probably won't produce a policy that tries to maximize reward. After all, look at humans, who aren't just trying to get a high reward.
And I am saying: this analogy seems like pretty weak evidence, because human brains seem to have a lot of things going on other than "search for a policy that gets high reward," and those other things seem like they have a massive impact on what goals I end up pursuing.
ETA: as a simple example, it seems like the details of humans' desire for their children's success, or their fear of death, don't seem to match well with the theory that all human desires come from RL on intrinsic reward. I guess you probably think they do? If you've already written about that somewhere it might be interesting to see. Right now the theory "human preferences are entirely produced by doing RL on an intrinsic reward function" seems to me to make a lot of bad predictions and not really have any evidence supporting it (in contrast with a more limited theory about RL-amongst-other-things, which seems more solid but not sufficient for the inference you are trying to make in this post).
I didn’t write the OP. If I were writing a post like this, I would (1) frame it as a discussion of a more specific class of model-based RL algorithms (a class that includes human within-lifetime learning), (2) soften the claim from “the agent won’t try to maximize reward” to “the agent won’t necessarily try to maximize reward”.
I do think the human (within-lifetime) reward function has an outsized impact on what goals humans end up pursuing, although I acknowledge that it’s not literally the only thing that matters.
(By the way, I’m not sure why your original comment brought up inclusive genetic fitness at all; aren’t we talking about within-lifetime RL? The within-lifetime reward function is some complicated thing involving hunger and sex and friendship etc., not inclusive genetic fitness, right?)
I think incomplete exploration is very important in this context and I don’t quite follow why you de-emphasize that in your first comment. In the context of within-lifetime learning, perfect exploration entails that you try dropping an anvil on your head, and then you die. So we don’t expect perfect exploration; instead we’d presumably design the agent such that it explores if and only if it ...
It seems to me that incomplete exploration doesn't plausibly cause you to learn "task completion" instead of "reward" unless the reward function is perfectly aligned with task completion in practice. That's an extremely strong condition, and if the entire OP is conditioned on that assumption then I would expect it to have been mentioned.
Let’s say, in the first few actually-encountered examples, reward is in fact strongly correlated with task completion. Reward is also of course 100% correlated with reward itself.
Then (at least under many plausible RL algorithms), the agent-in-training, having encountered those first few examples, might wind up wanting / liking the idea of task completion, OR wanting / liking the idea of reward, OR wanting / liking both of those things at once (perhaps to different extents). (I think it’s generally complicated and a bit fraught to predict which of these three possibilities would happen.)
But let’s consider the case where the RL agent-in-training winds up mostly or entirely wanting / liking the idea of task completion. And suppose further that the agent-in-training is by now pretty smart and self-aware and in control of its situation. Then the agent m...
I think RFLO is mostly imagining model-free RL with updates at the end of each episode, and my comment was mostly imagining model-based RL with online learning (e.g. TD learning). The former is kinda like evolution, the latter is kinda like within-lifetime learning, see e.g. §10.2.2 here.
The former would say: If I want lots of raspberries to get eaten, and I have a genetic disposition to want raspberries to be eaten, then I should maybe spend some time eating raspberries, but also more importantly I should explicitly try to maximize my inclusive genetic fitness so that I have lots of descendants, and those descendants (who will also disproportionately have the raspberry-eating gene) will then eat lots of raspberries.
The latter would say: If I want lots of raspberries to get eaten, and I have a genetic disposition to want raspberries to be eaten, then I shouldn’t go do lots of highly-addictive drugs that warp my preferences such that I no longer care about raspberries or indeed anything besides the drugs.
- Stop worrying about finding “outer objectives” which are safe to maximize.[9] I think that you’re not going to get an outer-objective-maximizer (i.e. an agent which maximizes the explicitly specified reward function).
- Instead, focus on building good cognition within the agent.
- In my ontology, there's only an inner alignment problem: How do we grow good cognition inside of the trained agent?
This feels very strongly reminiscent of an update I made a while back, and which I tried to convey in this section of AGI safety from first principles. But I think you've stated it far too strongly; and I think fewer other people were making this mistake than you expect (including people in the standard field of RL), for reasons that Paul laid out above. When you say things like "Any reasoning derived from the reward-optimization premise is now suspect until otherwise supported", this assumes that the people doing this reasoning were using the premise in the mistaken way that you (and some other people, including past Richard) were. Before drawing these conclusions wholesale, I'd suggest trying to identify ways in which the things other people are saying are consistent with th...
When you say things like "Any reasoning derived from the reward-optimization premise is now suspect until otherwise supported", this assumes that the people doing this reasoning were using the premise in the mistaken way
I have considered the hypothesis that most alignment researchers do understand this post already, while also somehow reliably emitting statements which, to me, indicate that they do not understand it. I deem this hypothesis unlikely. I have also considered that I may be misunderstanding them, and think in some small fraction of instances I might be.
I do in fact think that few people actually already deeply internalized the points I'm making in this post, even including a few people who say they have or that this post is obvious. Therefore, I concluded that lots of alignment thinking is suspect until re-analyzed.
I did preface "Here are some major updates which I made:". The post is ambiguous on whether/why I believe others have been mistaken, though. I felt that if I just blurted out my true beliefs about how people had been reasoning incorrectly, people would get defensive. I did in fact consider combing through Ajeya's post for disagreements, but I thought it...
It seems to me that the basic conceptual point made in this post is entirely contained in our Risks from Learned Optimization paper. I might just be missing a point. You've certainly phrased things differently and made some specific points that we didn't, but am I just misunderstanding something if I think the basic conceptual claims of this post (which seems to be presented as new) are implied by RFLO? If not, could you state briefly what is different?
(Note I am still sometimes surprised that people think certain wireheading scenarios make sense despite having read RFLO, so it's plausible to me that we really didn't communicate everything that's in my head about this).
Maybe you have made a gestalt-switch I haven't made yet, or maybe yours is a better way to communicate the same thing, but: the way I think of it is that the reward function is just a function from states to numbers, and the way the information contained in the reward function affects the model parameters is via reinforcement of pre-existing computations.
Is there a difference between saying:
It seems to me that once you acknowledge the point about reinforcement, the additional statement that reward is not an objective doesn't actually imply anything further about the mechanistic properties of deep reinforcement learners? It is just a way to put a high-level conceptual story on top of it, and in this sense it seems to me that this point is ...
Very late reply, sorry.
"even though reward is not a kind of objective", this is a terminological issue. In my view, calling a "antecedent-computation reinforcement criterion" an "objective" matches my definition of "objective", and this is just a matter of terminology. The term "objective" is ill-defined enough that "even though reward is not a kind of objective" is a terminological claim about objective, not a claim about math/the world.
The idea that RL agents "reinforce antecedent computations" is completely core to our story of deception. You could not make sense of our argument for deception if you didn't look at RL systems in this way. Viewing the base optimizer as "trying" to achieve an "objective" but "failing" because it is being "deceived" by the mesa optimizer is purely a metaphorical/terminological choice. It doesn't negate the fact that we all understood that the base optimizer is just reinforcing "antecedent computations". How else could you make sense of the story of deception, where an existing model, which represents the mesa optimizer, is being reinforced by the base optimizer because that existing model understands the base optimizer's optimization process?
I am not claiming that RFLO communicated this point well, just that it was understood and absolutely was core to the paper, and large parts of the paper wouldn't even make sense if you didn't have this insight. (Certainly the fact that we called it an objective doesn't communicate the point, and it isn't meant to).
I do in fact think that few people actually already deeply internalized the points I'm making in this post, even including a few people who say they have or that this post is obvious. Therefore, I concluded that lots of alignment thinking is suspect until re-analyzed.
“Risks from Learned Optimization in Advanced Machine Learning Systems,” which we published three years ago and started writing four years ago, is extremely explicit that we don't know how to get an agent that is actually optimizing for a specified reward function. The alignment research community has been heavily engaging with this idea since then. Though I agree that many alignment researchers used to be making this mistake, I think it's extremely clear that by this point most serious alignment researchers understand the distinction.
I have relatively little idea how to "improve" a reward function so that it improves the inner cognition chiseled into the policy, because I don't know the mapping from outer reward schedules to inner cognition within the agent. Does an "amplified" reward signal produce better cognition in the inner agent? Possibly? Even if that were true, how would I know it?
This is precisely the point I make in “How do we become confident in the safety of a machine learning system?”, btw.
(Just wanted to echo that I agree with TurnTrout that I find myself explaining the point that reward may not be the optimization target a lot, and I think I disagree somewhat with Ajeya's recent post for similar reasons. I don't think that the people I'm explaining it to literally don't understand the point at all; I think it mostly hasn't propagated into some parts of their other reasoning about alignment. I'm less on board with the "it's incorrect to call reward a base objective" point but I think it's pretty plausible that once I actually understand what TurnTrout is saying there I'll agree with it.)
The way I attempt to avoid confusion is to distinguish between the RL algorithm's optimization target and the RL policy's optimization target, and then avoid talking about the "RL agent's" optimization target, since that's ambiguous between the two meanings. I dislike the title of this post because it implies that there's only one optimization target, which exacerbates this ambiguity. I predict that if you switch to using this terminology, and then start asking a bunch of RL researchers questions, they'll tend to give broadly sensible answers (conditional on taking on the idea of "RL policy's optimization target" as a reasonable concept).
Authors' summary of the "reward is enough" paper:
...In this paper we hypothesise that the objective of maximising reward is enough to drive behaviour that exhibits most if not all attributes of intelligence that are studied in natural and artificial intelligence, including knowledge, learning, perception, social intelligence, language and generalisation. This is in contrast to the view that specialised problem formulations are needed for each attribute of intelligence, based on other signals or objectives. The reward-is-enough hypothesis suggests that
At this point, there isn’t a strong reason to elevate this “inner reward optimizer” hypothesis to our attention. The idea that AIs will get really smart and primarily optimize some reward signal… I don’t know of any good mechanistic stories for that. I’d love to hear some, if there are any.
Here's a story:
I don't think this is guaranteed to happen, but it seems likely enough to elevate the “inner reward optimizer” hypothesis to our attention, at least.
As a more general/tangential comment, I'm a bit confused about how "elevate hypothesis to our attention" is supposed to work. I mean it took some conscious effort to come up with a possible mechanistic story about how "inner reward optimizer" might arise, so how were we supposed to come up with such a story without paying attention to "inner reward optimizer" in the first place?
Perhaps it's not that we should literally pay no attention to "inner reward optimizer" until we have a good mechanistic story for it, but more like we are (or were) paying too much attention to it, given that we don't (didn't) yet have a good mechanistic story? (But if so, how to decide how much is too much?)
I like this post, and basically agree, but it comes across somewhat more broad and confident than I am, at least in certain places.
I’m currently thinking about RL along the lines of Nostalgebraist here:
“Reinforcement learning” (RL) is not a technique. It’s a problem statement, i.e. a way of framing a task as an optimization problem, so you can hand it over to a mechanical optimizer.
What’s more, even calling it a problem statement is misleading, because it’s (almost) the most general problem statement possible for any arbitrary task. —Nostalgebraist 2020
If that’s right, then I am very reluctant to say anything whatsoever about “RL agents in general”. They’re too diverse.
Much of the post, especially the early part, reads (to me) like confident claims about all possible RL agents. For example, the excerpt “…reward is the antecedent-computation-reinforcer. Reward reinforces those computations which produced it.” sounds like a confident claim about all RL agents, maybe even by definition of “RL”. (If so, I think I disagree.)
But other parts of the post aren’t like that—for example, the “Does the choice of RL algorithm matter?” part seems more reasonable and hedged, and l...
Here is an example story I wrote (that has been minorly edited by TurnTrout) about how an agent trained by RL could plausibly not optimize reward, forsaking actions that it knew during training would get it high reward. I found it useful as a way to understand his views, and he has signed off on it. Just to be clear, this is not his proposal for why everything is fine, nor is it necessarily an accurate representation of my views, just a plausible-to-TurnTrout story for how agents won't end up wanting to game human approval:
I view this post as providing value in three (related) ways:
Re 1: I first heard about the inner alignment problem through Risks From Learned Optimization and popularizations of the work. I didn't truly comprehend it - sure, I could parrot back terms like "base optimizer" and "mesa-optimizer", but it didn't click. I was confused.
Some months later I read this post and then it clicked.
Part of the pedagogical value is not having to introduce the 4 terms of form [base/mesa] + [optimizer/objective] and throwing those around. Even with Rob Miles' exposition skills that's a bit overwhelming.
Another part I liked were the phrases "Just because common English endows “reward” with suggestive pleasurable connotations" and "Let’s strip away the suggestive word “reward”, and replace it by its substance: cognition-updater." One could be tempted to object and say that surely no one would make the mistakes pointed out here, but definitely some people do. I did. Being a bit gloves...
Retrospective: I think this is the most important post I wrote in 2022. I deeply hope that more people benefit by fully integrating these ideas into their worldviews. I think there's a way to "see" this lesson everywhere in alignment: for it to inform your speculation about everything from supervised fine-tuning to reward overoptimization. To see past mistaken assumptions about how learning processes work, and to think for oneself instead. This post represents an invaluable tool in my mental toolbelt.
I wish I had written the key lessons and insights more plainly. I think I got a bit carried away with in-group terminology and linguistic conventions, which limited the reach and impact of these insights.
I am less wedded to "think about what shards will form and make sure they don't care about bad stuff (like reward)", because I think we won't get intrinsically agentic policy networks. I think the most impactful AIs will be LLMs+tools+scaffolding, with the LLMs themselves being "tool AI."
Why, exactly, would the AI seize[6] the button?
If it is an advanced AI, it may have learned to prefer more generalizable approaches and strategies. Perhaps it has learned the following features:
If you have trained it to take out the trash and clean windows, it will have been (mechanistically) trained to favor situations in which all three of these features occur. And if button pressing wasn't a viable strategy during training, it will favor actions that lead specifically to 2 and 3.
However, I do think it's conceivable that:
A similar point is (briefly) made in K. E. Drexler (2019). Reframing Superintelligence: Comprehensive AI Services as General Intelligence, §18 “Reinforcement learning systems are not equivalent to reward-seeking agents”:
Reward-seeking reinforcement-learning agents can in some instances serve as models of utility-maximizing, self-modifying agents, but in current practice, RL systems are typically distinct from the agents they produce … In multi-task RL systems, for example, RL “rewards” serve not as sources of value to agents, but as signals that guide training[.]
And an additional point which calls into question the view of RL-produced agents as the product of one big training run (whose reward specification we better get right on the first try), as opposed to the product of an R&D feedback loop with reward as one non-static component:
...RL systems per se are not reward-seekers (instead, they provide rewards), but are instead running instances of algorithms that can be seen as evolving in competition with others, with implementations subject to variation and selection by developers. Thus, in current RL practice, developers, RL systems, and agents have distinct purposes and roles.
…
I think this is very important, probably roughly the way to go for top level alignment strategies, and we should start hammering out the mechanistic details of it more as soon as it's at all feasible.
Do you already have any ideas for experimentally verifying parts of this, and refining/formalising it further?
For example, do you think we could look at current RL models, and trace out how a particular pattern of behaviour being reinforced in early training led to things connected to that behaviour becoming the system's target even in later stages of tr...
Another reason to not expect the selection argument to work is that it’s instrumentally convergent for most inner agent values to not become wireheaders, for them to not try hitting the reward button. [...] Therefore, it decides to not hit the reward button.
I think that subsection has the crucial insights from your post. Basically you're saying that, if we train an agent via RL in a limited environment where the reward correlates with another goal (eg "pick up the trash"), there are multiple policies the agent could have, multiple meta-policies it could...
Importantly, reward does not automatically spawn thoughts about reward, and reinforce those reward-focused thoughts!
I feel like this post has some themes similar to my article on tranquilism.
For a bit of context: In the article, I distinguish between "reflection-based motivation" and "need-based motivation." The former is something like "reflectively endorsed preferences / things the rational, planning part of your brain wants to do." The latter is something like "impulsive, system-1, unreflected motivation / things you can't help but be tempted to do." (I...
I think the quotes cited under "The field of RL thinks reward=optimization target" are all correct. One by one:
The agent's job is to find a policy… that maximizes some long-run measure of reinforcement.
Yes, that is the agent's job in RL, in the sense that if the training algorithm didn't do that we'd get another training algorithm (if we thought it was feasible for another algorithm to maximize reward). Basically, the field of RL uses a separation of concerns, where they design a reward function to incentivize good behaviour, and the agent maximizes th...
I discussed this post recently with a colleague, who encouraged me to post this excerpt:
[Colleague] It seems like: 1. RL is in the business of finding optimal policies. (...)
[TurnTrout] I disagree, or at least think it's not appropriate for it to be in that business these days. Reinforcement learning is, in my opinion, about learning from reinforcement, about how policy gradients accrue into interesting policies.
I think that a focus on optimal policies is a red herring and a holdover from the bygone age of tabular methods on tiny toy problems, where policy iteration really does find the optimal policy, in reasonable time to boot.
The term "RL agent" means an agent with architecture from a certain class, amenable to a specific kind of training. Since you are discussing RL agents in this post, I think it could be misleading to use human examples and analogies ("travelling across the world to do cocaine") in it because humans are not RL agents, neither on the level of wetware biological architecture (i. e., neurons and synapses don't represent a policy) nor on the abstract, cognitive level. On the cognitive level, even RL-by-construction agents of sufficient intelligence, trained in s...
3. Stop worrying about finding “outer objectives” which are safe to maximize.[9] I think that you’re not going to get an outer-objective-maximizer (i.e. an agent which maximizes the explicitly specified reward function).
- Instead, focus on building good cognition within the agent.
- In my ontology, there's only an inner alignment problem: How do we grow good cognition inside of the trained agent?
This vibes well with what I've been thinking about recently.
There's a post in the back of my mind called "Character alignment", which is...
Relevant quote I just found in the paper "Revisiting the Arcade Learning Environment: Evaluation Protocols and Open Problems for General Agents":
...The primary measure of an agent’s performance is the score achieved during an episode, namely the undiscounted sum of rewards for that episode. While this performance measure is quite natural, it is important to realize that score, in and of itself, is not necessarily an indicator of AI progress. In some games, agents can maximize their score by “getting stuck” in a loop of “small” rewards, ignoring what human p
I'm feeling confused.
It might just be my inexperience with reinforcement learning, but while I agree with what you say, I can't square it with my intuition of what a ML model does.
If our model is trained with some variant of gradient ascent, it will end up at high values of the reward function. (Not necessarily at any global or local maximum, but the attempt is to get it to some such maximum.) In that sense the model does optimize for reward.
Is that a special attribute of gradient ascent that we shouldn't expect other models to have? Does that mean that gradient-ascent models are more dangerous? Are you just noting that the model won't necessarily find the global maximum, and will only reach some local maximum?
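(As a toy illustration of the local-vs-global point, and not a claim about any real training setup: gradient ascent on a one-dimensional function with two peaks ends up at whichever peak its starting point flows toward, not necessarily the global one.)

```python
def f(x):                       # two peaks: a lower one near x ~ -0.96, a higher one near x ~ 1.03
    return -(x**2 - 1)**2 + 0.3 * x

def grad_f(x):
    return -4 * x * (x**2 - 1) + 0.3

def ascend(x, lr=0.01, steps=2000):
    for _ in range(steps):
        x += lr * grad_f(x)
    return x

print(ascend(-2.0), f(ascend(-2.0)))   # converges to the lower, nearby peak (a local maximum)
print(ascend(0.5), f(ascend(0.5)))     # converges to the higher peak
```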
In my ontology, there's only an inner alignment problem: How do we grow good cognition inside of the trained agent?
This seems like a great takeaway and the part I agree with most here, although probably stated less strongly. Did you see Richard Ngo's Shaping Safer Goals (2020) or my Motivations, Natural Selection, and Curriculum Engineering (2021) responding to it[1]? Both relate to this sort of picture.
...So the RL agent’s algorithm won’t make it e.g. explore wireheading either, and so the convergence theorems don’t apply even a little—even in spirit...
I think there are some subtleties here regarding the distinction between RL as a type of reward signal, and RL as a specific algorithm. You can take the exact same reward signal and use it either to update all computations in the entire AI (with some slightly magical credit assignment scheme) as in this post, or you can use it to update a reward prediction model in a model-based RL agent that acts a lot more like a maximizer.
I'd also like to hear your opinion on the effect of information leakage. For example, if reward only correlates with getting to the g...
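(To make the second half of that contrast concrete, here is a minimal sketch, with made-up names and a tabular toy setting, of the reward signal being used to fit an explicit reward-prediction table that the acting policy then searches against, here with a myopic one-step argmax. This is an illustration of the distinction, not anything from the post.)

```python
import numpy as np

N_STATES, N_ACTIONS = 10, 4

# learned reward-prediction model: a running average of observed reward per (state, action)
predicted_reward = np.zeros((N_STATES, N_ACTIONS))
visit_counts = np.zeros((N_STATES, N_ACTIONS))

def update_reward_model(state, action, observed_reward):
    """The reward signal trains a prediction model instead of directly reweighting a policy."""
    visit_counts[state, action] += 1
    predicted_reward[state, action] += (
        observed_reward - predicted_reward[state, action]
    ) / visit_counts[state, action]

def act(state):
    """The acting policy explicitly searches (a one-step argmax) against predicted reward."""
    return int(np.argmax(predicted_reward[state]))
```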
There’s no escaping it: After enough backup steps, you’re traveling across the world to do cocaine.
But obviously these conditions aren’t true in the real world.
I think they are a little? Some people do travel to other countries for easier and better drug access. And some people become total drug addicts (perhaps arguably by miscalculating their long-term reward consequences and having too-high a discount rate, oops), while others do a light or medium amount of drugs longer-term.
Lots of people also don't do this, but there's a huge amount of info...
"And not only do I not expect the trained agents to not maximize the original “outer” reward signal"
Nitpick: one "not" too many?
Here's my general view on this topic:
Very interesting. I would love to see this worked out in a toy example, where you can see that an RL agent in a grid world does not in general maximize reward, but is able to reason to do… something else. That’s the part I have the hardest time translating into a simulation: what does it mean that the agent is “thinking” about outcomes, if that is something different than running an RL algorithm?
But the essential point that humans choose not to wirehead — or in general to delay or avoid gratification — is a good one. Why do they do this? Is there any RL al...
Another reason to not expect the selection argument to work is that it’s instrumentally convergent for most inner agent values to not become wireheaders, for them to not try hitting the reward button.
To me this implies that as the AI becomes more situationally aware, it learns to avoid rewards that would reinforce away its current goals (because it wants to preserve its goals). As a result, throughout the training process, the AI's goals start out malleable and "harden" once the AI gains enough situational awareness. This implies that goals have to be simple enough for the agent to be able to model them early on in its training process.
Edit 11/15/22: The original version of this post talked about how reward reinforces antecedent computations in policy gradient approaches. This is not true in general. I edited the post to instead talk about how reward is used to upweight certain kinds of actions in certain kinds of situations, and therefore reward chisels cognitive grooves into agents.
Update: Changed
RL agents which don’t think about reward before getting reward, will not become reward optimizers, because there will be no reward-oriented computations for credit assignment to reinforce.
to
...While it's possible to have activations on "pizza consumption predicted to be rewarding" and "execute `motor-subroutine-#51241`" and then have credit assignment hook these up into a new motivational circuit, this is only one possible direction of value formation in the agent. Seemingly, the most direct way for an agent to become more of a
The deceptive alignment worry is that there is some goal about the real world at all. Deceptive alignment breaks robustness of any properties of policy behavior, not just the property of following reward as a goal in some unfathomable sense.
So refuting this worry requires quieting the more general hypothesis that RL selects optimizers with any goals of their own, doesn't matter what goals those are. It's only the argument for why this seems plausible that needs to refer to reward as related to the goal of such an optimizer, but the way the argument goes su...
OP says that this post is focused on RL policy gradient algorithms (e.g. PPO) where the RL signal is used by gradient descent to update the policy.
But what about Q-learning, which is another popular RL algorithm? My understanding of Q-learning is that the Q-network takes an observation as input, estimates the value (expected return) of each possible action in that state, and then the agent chooses the action with the highest value.
Does this mean that reward is not the optimization target for policy gradient algorithms but is for Q-learning algor...
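(For reference, a minimal tabular Q-learning sketch with toy sizes of my own: the reward appears only inside the temporal-difference target during training, while action selection consults the learned Q-values rather than the reward itself. Whether the resulting policy counts as a "reward optimizer" is exactly the question at issue.)

```python
import numpy as np

N_STATES, N_ACTIONS = 10, 4
Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, epsilon = 0.1, 0.99, 0.1
rng = np.random.default_rng(0)

def q_update(s, a, r, s_next):
    """One tabular Q-learning step: the reward r enters only through the TD target."""
    td_target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])

def act(s):
    """Epsilon-greedy action selection: consults the learned Q-values, not the reward."""
    if rng.random() < epsilon:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(Q[s]))
```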
Reward has the mechanistic effect of chiseling cognition into the agent's network.
Absolutely. Though in the next sentence:
Therefore, properly understood, reward does not express relative goodness and is therefore not an optimization target at all.
I'd mention two things here:
1) The more complex and advanced a model is, the more likely it is [I think] to learn a mesa-optimization goal that is extremely similar to the actual reward a model was trained on (because it's basically the most generalizable mesa-goal to be learned, w.r.t. training data).
2) Rein...
The argument above isn’t clear to me, because I’m not sure how you’re defining your terms.
I should note that, contrary to the statement “reward is _not_, in general, that-which-is-optimized by RL agents”, by definition "reward _must be_ what is optimized for by RL agents." If they do not do that, they are not RL agents. At least, that is true based on the way the term “reward” is commonly used in the field of RL. That is what RL agents are programmed by humans to do. They do that by changing their behavior over many trials, and testing the results of ...
Sorry if I've misunderstood the point of your post, but I'm surprised that Bellman's optimality equation was nowhere mentioned. From Sutton's book on the topic, I understood that once the policy iteration of vanilla RL converges to the point that the BOE holds, the agent is maximizing "value", which I would define in words as something like "expectation of discounted, cumulative reward". Now, before one turns off a student new to the topic by giving a precise definition of those terms right away, I can see why he might have contracted that a bit u...
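(For reference, the Bellman optimality equation being invoked here, in roughly Sutton and Barto's notation:)

```latex
v_*(s) \;=\; \max_a \sum_{s', r} p(s', r \mid s, a)\,\bigl[\, r + \gamma\, v_*(s') \,\bigr]
```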
I feel like there is some vocabulary confusion in the genesis of this post. "Reward" is hard-coded into the agents. The dinosaurs of Jurassic Park (spoiler alert) were genetically engineered to lack iodine, so the trainers could use iodine as a reward to incentivize other behaviors, because by definition the dinos valued iodine as a terminal value. In humans, serotonin and dopamine binding to the appropriate brain receptors are DNA-coded terminal values that inherently train us to pursue certain behaviors (e.g. food, sex). An AI is, by definition, going to take whatever actions maximize its Reward system. That's what having a Reward system means.
this post made me understand something i did not understand before that seems very important. important enough that it made me reconsider a bunch of related beliefs about ai.
This insight was made possible by many conversations with Quintin Pope, where he challenged my implicit assumptions about alignment. I’m not sure who came up with this particular idea.
In this essay, I call an agent a “reward optimizer” if it not only gets lots of reward, but if it reliably makes choices like “reward but no task completion” (e.g. receiving reward without eating pizza) over “task completion but no reward” (e.g. eating pizza without receiving reward). Under this definition, an agent can be a reward optimizer even if it doesn't contain an explicit representation of reward, or implement a search process for reward.
ETA 9/18/23: This post addresses the model-free policy gradient setting, including algorithms like PPO and REINFORCE.
Many people[1] seem to expect that reward will be the optimization target of really smart learned policies—that these policies will be reward optimizers. I strongly disagree. As I argue in this essay, reward is not, in general, that-which-is-optimized by RL agents.[2]
Separately, as far as I can tell, most[3] practitioners usually view reward as encoding the relative utilities of states and actions (e.g. it’s this good to have all the trash put away), as opposed to imposing a reinforcement schedule which builds certain computational edifices inside the model (e.g. reward for picking up trash → reinforce trash-recognition and trash-seeking and trash-putting-away subroutines). I think the former view is usually inappropriate, because in many setups, reward chisels cognitive grooves into an agent.
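(As a toy illustration of the "reinforcement schedule" framing, with a tabular softmax policy and made-up sizes: in a policy-gradient update, the reward enters only as a scalar weight on the log-probability gradients of the actions that were actually taken, so only the parameters that participated in producing the rewarded behavior get moved.)

```python
import numpy as np

n_states, n_actions = 5, 3
theta = np.random.randn(n_states, n_actions)         # logits of a tabular softmax policy

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

trajectory = [(0, 2), (3, 1)]                         # (state, action) pairs actually visited
reward = 1.0                                          # reward that followed this behavior

grad = np.zeros_like(theta)
for s, a in trajectory:
    p = softmax(theta[s])
    grad[s] += reward * (np.eye(n_actions)[a] - p)    # reward scales d log pi(a|s) / d theta[s]

print(np.abs(grad).sum(axis=1))   # nonzero only for rows 0 and 3: unexercised computations are untouched
```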
Therefore, reward is not the optimization target in two senses:
Reward probably won’t be a deep RL agent’s primary optimization target
After work, you grab pizza with your friends. You eat a bite. The taste releases reward in your brain, which triggers credit assignment. Credit assignment identifies which thoughts and decisions were responsible for the release of that reward, and makes those decisions more likely to happen in similar situations in the future. Perhaps you had thoughts like “execute `motor-subroutine-#51241` to take out my wallet,” and …

Many of these thoughts will be judged responsible by credit assignment, and thereby become more likely to trigger in the future. This is what reinforcement learning is all about—the reward is the reinforcer of those things which came before it and the creator of new lines of cognition entirely (e.g. anglicized as "I shouldn't buy pizza when I'm mostly full"). The reward chisels cognition which increases the probability of the reward accruing next time.
Importantly, reward does not automatically spawn thoughts about reward, and reinforce those reward-focused thoughts! Just because common English endows “reward” with suggestive pleasurable connotations, that does not mean that an RL agent will terminally value reward!
What kinds of people (or non-tabular agents more generally) will become reward optimizers, such that the agent ends up terminally caring about reward (and little else)? Reconsider the pizza situation, but instead suppose you were thinking thoughts like “this pizza is going to be so rewarding” and “in this situation, eating pizza sure will activate my reward circuitry.”
You eat the pizza, triggering reward, triggering credit assignment, which correctly locates these reward-focused thoughts as contributing to the release of reward. Therefore, in the future, you will more often take actions because you think they will produce reward, and so you will become more of the kind of person who intrinsically cares about reward. This is a path[4] to reward-optimization and wireheading.
While it's possible to have activations on "pizza consumption predicted to be rewarding" and "execute `motor-subroutine-#51241`" and then have credit assignment hook these up into a new motivational circuit, this is only one possible direction of value formation in the agent. Seemingly, the most direct way for an agent to become more of a reward optimizer is to already make decisions motivated by reward, and then have credit assignment further generalize that decision-making.

The siren-like suggestiveness of the word “reward”
Let’s strip away the suggestive word “reward”, and replace it by its substance: cognition-updater.
Suppose a human trains an RL agent by pressing the cognition-updater button when the agent puts trash in a trash can. While putting trash away, the AI’s policy network is probably “thinking about”[5] the actual world it’s interacting with, and so the cognition-updater reinforces those heuristics which lead to the trash getting put away (e.g. “if trash-classifier activates near center-of-visual-field, then grab trash using `motor-subroutine-#642`”).

Then suppose this AI models the true fact that the button-pressing produces the cognition-updater. Suppose this AI, which has historically had its trash-related thoughts reinforced, considers the plan of pressing this button. “If I press the button, that triggers credit assignment, which will reinforce my decision to press the button, such that in the future I will press the button even more.”
Why, exactly, would the AI seize[6] the button? To reinforce itself into a certain corner of its policy space? The AI has not had antecedent-computation-reinforcer-thoughts reinforced in the past, and so its current decision will not be made in order to acquire the cognition-updater!
RL is not, in general, about training cognition-updater optimizers.
When is reward the optimization target of the agent?
If reward is guaranteed to become your optimization target, then your learning algorithm can force you to become a drug addict. Let me explain.
Convergence theorems provide conditions under which a reinforcement learning algorithm is guaranteed to converge to an optimal policy for a reward function. For example, value iteration maintains a table of value estimates for each state s, and iteratively propagates information about that value to the neighbors of s. If a far-away state f has huge reward, then that reward ripples back through the environmental dynamics via this “backup” operation. Nearby parents of f gain value, and then after lots of backups, far-away ancestor-states gain value due to f’s high reward.
Eventually, the “value ripples” settle down. The agent picks an (optimal) policy by acting to maximize the value-estimates for its post-action states.
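(A minimal value-iteration sketch on a toy deterministic chain, with made-up numbers, showing the “backup” operation described above: a large reward at the far end propagates back through the value estimates, and the induced greedy policy walks toward it.)

```python
import numpy as np

N, gamma = 10, 0.9                     # chain of states 0..9; entering state 9 yields a big reward

def step(s, a):                        # deterministic transitions: a=1 moves right, a=0 moves left
    return min(s + 1, N - 1) if a == 1 else max(s - 1, 0)

def reward(s_next):
    return 100.0 if s_next == N - 1 else 0.0

V = np.zeros(N)
for _ in range(100):                   # repeated Bellman backups
    V = np.array([max(reward(step(s, a)) + gamma * V[step(s, a)] for a in (0, 1)) for s in range(N)])

print(np.round(V))                     # high value "ripples" back from state 9 to earlier states

greedy_policy = [max((0, 1), key=lambda a: reward(step(s, a)) + gamma * V[step(s, a)]) for s in range(N)]
print(greedy_policy)                   # all 1s: the greedy policy heads toward the far-away reward
```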
Suppose it would be extremely rewarding to do drugs, but those drugs are on the other side of the world. Value iteration backs up that high value to your present space-time location, such that your policy necessarily gets at least that much reward. There’s no escaping it: After enough backup steps, you’re traveling across the world to do cocaine.
But obviously these conditions aren’t true in the real world. Your learning algorithm doesn’t force you to try drugs. Any AI which e.g. tried every action at least once would quickly kill itself, and so real-world general RL agents won’t explore like that because that would be stupid. So the RL agent’s algorithm won’t make it e.g. explore wireheading either, and so the convergence theorems don’t apply even a little—even in spirit.
Anticipated questions
Why won’t early-stage agents think thoughts like “… execute `motor-subroutine-#642`”, and then this gets reinforced into reward-focused cognition early on? Equally, why won’t they think thoughts like “… execute `motor-subroutine-#642`”, and then this gets reinforced into blue-wall-focused cognition early on? Why consider either scenario to begin with?

Dropping the old hypothesis
At this point, I don't see a strong reason to focus on the “reward optimizer” hypothesis. The idea that AIs will get really smart and primarily optimize some reward signal… I don’t know of any tight mechanistic stories for that. I’d love to hear some, if there are any.
As far as I’m aware, the strongest evidence left for agents intrinsically valuing cognition-updating is that some humans do strongly (but not uniquely) value cognition-updating,[8] and many humans seem to value it weakly, and humans are probably RL agents in the appropriate ways. So we definitely can’t rule out agents which strongly (and not just weakly) value the cognition-updater. But it’s also not the overdetermined default outcome. More on that in future essays.
It’s true that reward can be an agent’s optimization target, but what reward actually does is reinforce computations which lead to it. A particular alignment proposal might argue that a reward function will reinforce the agent into a shape such that it intrinsically values reinforcement, and that the cognition-updater goal is also a human-aligned optimization target, but this is still just one particular approach of using the cognition-updating to produce desirable cognition within an agent. Even in that proposal, the primary mechanistic function of reward is reinforcement, not optimization-target.
Implications
Here are some major updates which I made:
Edit 11/15/22: The original version of this post talked about how reward reinforces antecedent computations in policy gradient approaches. This is not true in general. I edited the post to instead talk about how reward is used to upweight certain kinds of actions in certain kinds of situations, and therefore reward chisels cognitive grooves into agents.
Appendix: The field of RL thinks reward=optimization target
Let’s take a little stroll through Google Scholar’s top results for “reinforcement learning", emphasis added:
Steve Byrnes did, in fact, briefly point out part of the “reward is the optimization target” mistake:
I don't think it's just sloppy talk, I think it's incorrect belief in many cases. I mean, I did my PhD on RL theory, and I still believed it. Many authorities and textbooks confidently claim—presenting little to no evidence—that reward is an optimization target (i.e. the quantity which the policy is in fact trying to optimize, or the quantity to be optimized by the policy). Check what the math actually says.
Including the authors of the quoted introductory text, Reinforcement learning: An introduction. I have, however, met several alignment researchers who already internalized that reward is not the optimization target, perhaps not in so many words.
Utility ≠ Reward points out that an RL-trained agent is optimized by original reward, but not necessarily optimizing for the original reward. This essay goes further in several ways, including when it argues that reward and utility have different type signatures—that reward shouldn’t be viewed as encoding a goal at all, but rather a reinforcement schedule. And not only do I not expect the trained agents to maximize the original “outer” reward signal, I think they probably won’t try to strongly optimize any reward signal.
Reward shaping seems like the most prominent counterexample to the “reward represents terminal preferences over state-action pairs” line of thinking.
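(For instance, in the standard potential-based shaping construction of Ng, Harada, and Russell (1999), an extra term is added to the reward purely to speed up learning, and the optimal policy is provably unchanged, which sits awkwardly with reading reward as a statement of terminal preferences. A sketch with a made-up potential function:)

```python
GOAL, gamma = 9, 0.99

def potential(state):
    # heuristic "progress toward the goal" measure, invented for this sketch
    return -abs(GOAL - state)

def shaped_reward(state, next_state, task_reward):
    # F(s, s') = gamma * Phi(s') - Phi(s): shapes learning dynamics, not terminal preferences
    return task_reward + gamma * potential(next_state) - potential(state)

print(shaped_reward(3, 4, 0.0))   # a positive nudge for a step toward the goal, despite zero task reward
```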
But also, you were still probably thinking about reality as you interacted with it (“since I’m in front of the shop where I want to buy food, go inside”), and credit assignment will still locate some of those thoughts as relevant, and so you wouldn’t purely reinforce the reward-focused computations.
"Reward reinforces existing thoughts" is ultimately a claim about how updates depend on the existing weights of the network. I think that it's easier to update cognition along the lines of existing abstractions and lines of reasoning. If you're already running away from wolves, then if you see a bear and become afraid, you can be updated to run away from large furry animals. This would leverage your existing concepts.
From A shot at the diamond-alignment problem:
Quintin Pope remarks: “The AI would probably want to establish control over the button, if only to ensure its values aren't updated in a way it wouldn't endorse. Though that's an example of convergent powerseeking, not reward seeking.”
For mechanistically similar reasons, keep cocaine out of the crib until your children can model the consequences of addiction.
I am presently ignorant of the relationship between pleasure and reward prediction error in the brain. I do not think they are the same.
However, I think people are usually weakly hedonically / experientially motivated. Consider a person about to eat pizza. If you give them the choice between "pizza but no pleasure from eating it" and "pleasure but no pizza", I think most people would choose the latter (unless they were really hungry and needed the calories). If people just navigated to futures where they had eaten pizza, that would not be true.
From correspondence with another researcher: There may yet be an interesting alignment-related puzzle to "Find an optimization process whose maxima are friendly", but I personally don't share the intuition yet.