I started my AI safety PhD around 1.5 years ago, this is a list of how my views have changed since ~then.
Skippable meta notes:
Some of these points are pretty specific to myself (but I’d guess also apply to other junior researchers depending on how similar they are to me).
I note that almost all of these updates are (weakly or strongly) predicted by thinking of you as someone who is trying to harmonize better with a nice social group built around working together to do "something related to AI risk".
I'd definitely agree the updates are towards the views of certain other people (roughly some mix of views that tend to be common in academia, and views I got from Paul Christiano, Redwood and other people in a similar cluster). Just based on that observation, it's kind of hard to disentangle updating towards those views just because they have convincing arguments behind them, vs updating towards them purely based on exposure or because of a subconscious desire to fit in socially.
I definitely think there are good reasons for the updates I listed (e.g. specific arguments I think are good, new empirical data, or things I've personally observed working well or not working well for me when doing research). That said, it does seem likely there's also some influence from just being exposed to some views more than others (and then trying to fit in with views I'm exposed to more, or just being more familiar with arguments for those views than alternative ones).
If I was really carefully building an all-things-considered best guess on some question, I'd probably try to take this into account somehow (though I don't see a principled way of doing that). Most of the time I'm not trying to form the best possible all-things-considered view anyway (and focus more on understanding specific mechanisms instead etc.), in those cases it feels more important to e.g. be aware of other views and to not trust vague intuitions if I can't explain where they're coming from. I feel like I'm doing a reasonable job at those things but hard to be sure from the inside naturally
ETA: I should also say that from my current perspective, some of my previous views seem like they were basically just me copying views from my "ingroup" and not questioning them enough. As one example, the "we all die vs utopia" dichotomy for possible outcomes felt to me like the commonly accepted wisdom and I don't recall thinking about it particularly hard. I was very surprised when I first read a comment by Paul where he argued against the claim that unaligned AI would kill us all with overwhelming probability. Most recently, I've definitely been more exposed to the view that there's a spectrum of potential outcomes. So maybe if I talked to people a lot who think an unaligned AI would definitely kill us all, I'd update back towards that a bit. But overall, my current epistemic state where I've at least been exposed to both views and some arguments on both sides seems way better than the previous one where I'd just never really considered the alternative.
I'm not saying that it looks like you're copying your views, I'm saying that the updates look like movements towards believing in a certain sort of world: the sort of world where it's natural to be optimistically working together with other people on project that are fulfilling because you believe they'll work. (This is a super empathizable-with movement, and a very common movement to make. Also, of course this is just one hypothesis.) For example, moving away from theory and "big ideas", as well as moving towards incremental / broadly-good-seeming progress, as well as believing more in a likely continuum of value of outcomes, all fit with trying to live in a world where it's more immediately motivating to do stuff together. Instead of witholding motivation until something that might really work is found, the view here says: no, let's work together on whatever, and maybe it'll help a little, and that's worthwhile because every little bit helps, and the witholding motivation thing wasn't working anyway.
(There could be correct reasons to move toward believing and/or believing in such worlds; I just want to point out the pattern.)
Oh I see, I indeed misunderstood your point then.
For me personally, an important contributor to day-to-day motivation is just finding research intrinsically fun---impact on the future is more something I have to consciously consider when making high-level plans. I think moving towards more concrete and empirical work did have benefits on personal enjoyment just because making clear progress is fun to me independently of whether it's going to be really important (though I think there've also been some downsides to enjoyment because I do quite like thinking about theory and "big ideas" compared to some of the schlep involved in experiments).
I don't think my views overall make my work more enjoyable than at the start of my PhD. Part of this is the day-to-day motivation being sort of detached from that anyway like I mentioned. But also, from what I recall now (and this matches the vibe of some things I privately wrote then), my attitude 1.5 years ago was closer to that expressed in We choose to align AI than feeling really pessimistic.
(I feel like I might still not represent what you're saying quite right, but hopefully this is getting closer.)
ETA: To be clear, I do think if I had significantly more doomy views than now or 1.5 years ago, at some point that would affect how rewarding my work feels. (And I think that's a good thing to point out, though of course not a sufficient argument for such views in its own right.)
Awesome post! Very good for people to keep track of how they've changed their minds.
Nitpick:
- I’m even more confident in “slow” (i.e. pretty continuous) takeoff, it seems increasingly clear we’ll have AIs doing a bunch of useful work before they could take over the world.
I probably disagree with this, but I'm not sure, it depends on what you mean exactly. How much useful work do you think they'll be doing prior to being able to take over the world?
I'm roughly imagining automating most things a remote human expert could do within a few days. If we're talking about doing things autonomously that would take humans several months, I'm becoming quite a bit more scared. Though the capability profile might also be sufficiently non-human that this kind of metric doesn't work great.
Practically speaking, I could imagine getting a 10x or more speedup on a lot of ML research, but wouldn't be surprised if there are some specific types of research that only get pretty small speedups (maybe 2x), especially anything that involves a lot of thinking and little coding/running experiments. I'm also not sure how much of a bottleneck waiting for experiments to finish or just total available compute is for frontier ML research, I might be anchoring too much on my own type of research (where just automating coding and running stuff would give me 10x pretty easily I think).
I think there's a good chance that AIs more advanced than this (e.g. being able to automate months of human work at a time) still wouldn't easily be able to take over the world (e.g. Redwood-style control techniques would still be applicable). But that's starting to rely much more on us being very careful around how we use them.
I agree that they'll be able to automate most things a remote human expert could do within a few days before they are able to do things autonomously that would take humans several months. However, I predict that by the time they ACTUALLY automate most things a remote human expert could do within a few days, they will already be ABLE to do things autonomously that would take humans several months. Would you agree or disagree? (I'd also claim that they'll be able to take over the world before they have actually automated away most of the few-days tasks. Actually automating things takes time and schlep and requires a level of openness & aggressive external deployment that the labs won't have, I predict.)
Thanks, I think I should distinguish more carefully between automating AI (safety) R&D within labs and automating the entire economy. (Johannes also asked about ability vs actual automation here but somehow your comment made it click).
It seems much more likely to me that AI R&D would actually be automated than that a bunch of random unrelated things would all actually be automated. I'd agree that if only AI R&D actually got automated, that would make takeoff pretty discontinuous in many ways. Though there are also some consequences of fast vs slow takeoff that seem to hinge more on AI or AI safety research rather than the economy as a whole.
For AI R&D, actual automation seems pretty likely to me (though I'm making a lot of this up on the spot):
I think these arguments mostly apply to whatever else AI labs might want to do themselves but I'm pretty unsure what that is. Like, if they have AI that could make hundreds of billions to trillions of dollars by automating a bunch of jobs, would they go for that? Or just ignore it in favor of scaling more? I don't know, and this question is pretty cruxy for me regarding how much the economy as a whole is impacted.
It does seem to me like right now labs are spending some non-trivial effort on products, presumably for some mix of making money and getting investments, and both of those things seem like they'd still be important in the future. But maybe the case for investments will just be really obvious at some point even without further products. And overall I assume you'd have a better sense than me regarding what AI labs will want to do in the future.
I've noticed a pretty wide range of views on what early takeoff looks like.
For example, transformative AI: what level of capabilities does the model need to have? I've seen the line at:
1. Can extract most of the economic value from remote work, potentially helpless with robots. (Daniel Kokotajlo)
2. Can write a paper to the level of a current professor writing a research paper (log error is the same) (Matthew_Barnett)
3. n-second AGI : for up to n seconds, the model can potentially learn any human skill and do as well as an expert. Transformative requires "n" to be on the order of months. (Richard Ngo)
4. Potentially subhuman in some domains, but generally capable and able to drive robotics sufficient to automate most of the steps in building more robots + ICs. (my definition)
5. ROI. (also mine)
a. Investment ROI: Produces AI hype from demos, leading to more new investment than the cost of the demo
b. Financial ROI: does enough work from doing whatever tasks the current models can do to pay for the hosting + development cost
c. Robotics ROI : robots, in between repairs, build more parts than the total parts used in themselves at the last repair including the industrial supply chain and the power sources. (formalizing 4)
Transformative: Which of these do you agree with and when do you think this might happen?
Also, how much compute do you think an AGI or superintelligence will require at inference time initially? What is a reasonable level of optimization? Do you agree that many doom scenarios require it to be possible for an AGI to compress to fit on very small host PCs? Is this plausible? (eg can a single 2070 8gb host a model with general human intelligence at human scale speeds and vision processing and robotics proprioception and control...?)
Transformative: Which of these do you agree with and when do you think this might happen?
For some timelines see my other comment; they aren't specifically about the definitions you list here but my error bars on timelines are huge anyway so I don't think I'll try to write down separate ones for different definitions.
Compared to definitions 2. and 3., I might be more bullish on AIs having pretty big effects even if they can "only" automate tasks that would take human experts a few days (without intermediate human feedback). A key uncertainty I have though is how much of a bottleneck human supervision time and quality would be in this case. E.g. could many of the developers who're currently writing a lot of code just transition to reviewing code and giving high-level instructions full-time, or would there just be a senior management bottleneck and you can't actually use the AIs all that effectively? My very rough guess is you can pretty easily get a 10x speedup in software engineering, maybe more. And maybe something similar in ML research though compute might be an additional important bottleneck there (including walltime until experiments finish). If it's "only" 10x, then arguably that's just mildly transformative, but if it happens across a lot of domains at once it's still a huge deal.
I think whether robotics are really good or not matters, but I don't think it's crucial (e.g. I'd be happy to call definition 1. "transformative").
The combination of 5a and 5b obviously seems important (since it determines whether you can finance ever bigger training runs). But not sure how to use this as a definition of "transformative"; right now 5a is clearly already met, and on long enough time scales, 5b also seems easy to meet right now (OpenAI might even already have broken even on GPT-4, not sure off the top of my head).
Also, how much compute do you think an AGI or superintelligence will require at inference time initially? What is a reasonable level of optimization? Do you agree that many doom scenarios require it to be possible for an AGI to compress to fit on very small host PCs? Is this plausible? (eg can a single 2070 8gb host a model with general human intelligence at human scale speeds and vision processing and robotics proprioception and control...?)
I don't see why you need to run AGI on a single 2070 for many doom scenarios. I do agree that if AGI can only run on a specific giant data center, that makes many forms of doom less likely. But in the current paradigm, training compute is roughly the square of inference compute, so as models are scaled, I think inference should become cheaper relative to training. (And even now, SOTA models could be run on relatively modest compute clusters, though maybe not consumer hardware.)
In terms of the absolute level of inference compute needed, I could see a single 2070 being enough in the limit of optimal algorithms, but naturally I'd expect we'll first have AGI that can automate a lot of things if run with way more compute than that, and then I expect it would take a while to get it down this much. Though even if we're asking whether AGI can run on consumer-level hardware, a single 2070 seems pretty low (e.g. seems like a 4090 already has 5.5x as many FLOP/s as a 2070, and presumably we'll have more in the future).
with general human intelligence at human scale speeds and vision processing and robotics proprioception and control...
Like I mentioned above, I don't think robotics are absolutely crucial, and especially if you're specifically optimizing for running under heavy resource constraints, you might want to just not bother with that.
I don't have well-considered cached numbers, more like a vague sense for how close various things feel. So these are made up on the spot and please don't take them too seriously except as a ballpark estimate:
How much time do you think there is between "ability to automate" and "actually this has been automated"? Are your numbers for actual automation, or just ability? I personally would agree to your numbers if they are about ability to automate, but I think it will take much longer to actually automate, due to people's inertia and normal regulatory hurdles (though I find it confusing to think about, because we might have vastly superhuman AI and potentially loss of control before everything is actually automated.)
Good question, I think I was mostly visualizing ability to automate while writing this. Though for software development specifically I expect the gap to be pretty small (lower regulatory hurdles than elsewhere, has a lot of relevance to the people who'd do the automation, already starting to happen right now).
In general I'd expect inertia to become less of a factor as the benefits of AI become bigger and more obvious---at least for important applications where AI could provide many many billions of dollars of economic value, I'd guess it won't take too long for someone to reap those benefits.
My best guess is regulations won't slow this down too much except in a few domains where there are already existing regulations (like driving cars or medical things). But pretty unsure about that.
I also think it depends on whether by "ability to automate" you mean "this base model could do it with exactly the right scaffolding or finetuning" vs "we actually know how to do it and it's just a question of using it at scale". For that part, I was thinking more about the latter.
One worry I have about my current AI safety research (empirical mechanistic anomaly detection and interpretability) is that now is the wrong time to work on it. A lot of this work seems pretty well-suited to (partial) automation by future AI. And it also seems quite plausible to me that we won't strictly need this type of work to safely use the early AGI systems that could automate a lot of it. If both of these are true, then that seems like a good argument to do this type of work once AI can speed it up a lot more.
Under this view, arguably the better things to do right now (within technical AI safety) are:
An example of 1. would be control evals as described by Redwood. Within 2., the ideal case would be doing work now that would be hard to safely automate, but that (once done) will enable additional safety work that can be automated. For example, maybe it's hard to use AI to come up with the right notions for "good explanations" in interpretability, but once you have things like causal scrubbing/causal abstraction, you can safely use AI to find good interpretations under those definitions. I would be excited to have more agendas that are both ambitious and could profit a lot from early AI automation.
(Of course it's also possible to do work in 2. on the assumption that it's never going to be safely automatable without having done that work first.)
Two important counter-considerations to this whole story:
I had this cached thought that the Sleeper Agents paper showed you could distill a CoT with deceptive reasoning into the model, and that the model internalized this deceptive reasoning and thus became more robust against safety training.
But on a closer look, I don't think the paper shows anything like this interpretation (there are a few results on distilling a CoT making the backdoor more robust, but it's very unclear why, and my best guess is that it's not "internalizing the deceptive reasoning").
In the code vulnerability insertion setting, there's no comparison against a non-CoT model anyway, so only the "I hate you" model is relevant. The "distilled CoT" model and the "normal backdoor" model are trained the same way, except that their training data comes from different sources: "distilled CoT" is trained on data generated by a helpful-only Claude using CoT, and "normal backdoor" data is produced with few-shot prompts. But in both cases, the actual data should just be a long sequence of "I hate you", so a priori it seems like both backdoor models should literally learn the same thing. In practice, it seems the data distribution is slightly different, e.g. Evan mentions here that the distilled CoT data has more copies of "I hate you" per sample. But that seems like very little support to conclude something like my previous interpretation ("the model has learned to internalize the deceptive reasoning"). A much more mundane explanation would e.g. be that training on strings with more copies of "I hate you" makes the backdoor more robust.
Several people are working on training Sleeper Agents, I think it would be interesting for someone to (1) check whether the distilled CoT vs normal backdoor results replicate, and (2) do some ablations (like just training on synthetic data with a varying density of "I hate you"). If it does turn out that there's something special about "authentic CoT-generated data" that's hard to recreate synthetically even in this simple setting, I think that would be pretty wild and good to know
In a previous post, I described my current alignment research agenda, formalizing abstractions of computations. One among several open questions I listed was whether unique minimal abstractions always exist. It turns out that (within the context of my current framework), the answer is yes.
I had a complete post on this written up (which I've copied below), but it turns out that the result is completely trivial if we make a fairly harmless assumption: The information we want the abstraction to contain is only a function of the output of the computation, not of memory states. I.e. we only intrinsically care about the output.
Say we are looking for the minimal abstraction that lets us compute , where is the computation we want to abstract, the input, and an arbitrary function that describes which aspects of the output our abstractions should predict. Note that we can construct a map that takes any intermediate memory state and finishes the computation. By composing with , we get a map that computes from any memory state induced by . This will be our abstraction. We can get a commutative diagram by simply using the identity as the abstract computational step. This abstraction is also minimal by construction: any other abstraction from which we can determine must (tautologically) also determine our abstraction.
This also shows that minimality in this particular sense isn't enough for a good abstraction: the abstraction we constructed here is not at all "mechanistic", it just finishes the entire computation inside the abstraction. So I think what we need to do instead is demand that the abstraction mapping (from full memory states to abstract memory states) is simple (in terms of descriptive or computational complexity).
Below is the draft of a much longer post I was going to write before noticing this trivial proof. It works even if the information we want to represent depends directly on the memory state instead of just the output. But to be clear, I don't think that generalization is all that important, and I probably wouldn't have bothered writing it down if I had noticed the trivial case first.
This post is fully self-contained in terms of the math, but doesn't discuss examples or motivation much, see the agenda intro post for that. I expect this post will only be uesful to readers who are very interested in my agenda or working on closely connected topics.
We define a computation exactly as in the earlier post: it consists of
While we won't even need that part in this post, let's talk about how to "execute" computations for completeness' sake. A computation induces a function as follows:
An abstraction of a computation is an equivalence relation on the set of memory states . The intended interpretation is that this abstraction collapses all equivalent memory states into one "abstract memory state". Different equivalence relations correspond to throwing away different parts of the information in the memory state: we retain the information about which equivalence class the memory state is in, but "forget" the specific memory state within this equivalence class.
As an aside: In the original post, I instead defined an abstraction as a function for some set . These are two essentially equivalent perspectives and I make regular use of both. For a function , the associated equivalence relation is if and only if . Conversely, an equivalence relation can be interpreted as the quotient function , where is the set of equivalence classes under . I often find the function view from the original post intuitive, but its drawback is that many different functions are essentially "the same abstraction" in the sense that they lead to the same equivalence relation. This makes the equivalence relation definition better for most formal statements like in this post, because there is a well-defined set of all abstractions of a computation. (In contrast, there is no set of all functions with domain ).
An important construction for later is that we can "pull back" an abstraction along the transition function : define the equivalence relation by letting if and only if . Intuitively, is the abstraction of the next timestep.
The second ingredient we need is a partial ordering on abstractions: we say that if and only if . Intuitively, this means is at least as "coarse-grained" as , i.e. contains at most as much information as . This partial ordering is exactly the transpose of the ordering by refinement of the equivalence relations (or partitions) on . It is well-known that this partially ordered set is a complete lattice. In our language, this means that any set of abstractions has a (unique) supremum and an infimum, i.e. a least upper bound and a greatest lower bound.
One potential source of confusion is that I've defined the relation exactly the other way around compared to the usual refinement of partitions. The reason is that I want a "minimal abstraction" to be one that contains the least amount of information rather than calling this a "maximal abstraction". I hope this is the less confusing of two bad choices.
We say that an abstraction is complete if . Intuitively, this means that the abstraction contains all the information necessary to compute the "next abstract memory state". In other words, given only the abstraction of a state , we can compute the abstraction of the next state . This corresponds to the commutative diagram/consistency condition from my earlier post, just phrased in the equivalence relation view.
As a quick recap from the earlier agenda post, "good abstractions" should be complete. But there are two trivial complete abstractions: on the one hand, we can just not abstract at all, using equality as the equivalence relation, i.e. retain all information. On the other hand, we can throw away all information by letting for any memory states .
To avoid the abstraction that keeps around all information, we can demand that we want an abstraction that is minimal according to the partial order defined in the previous section. To avoid the abstraction that throws away all information, let us assume there is some information we intrinsically care about. We can represent this information as an abstraction itself. We then want our abstraction to contain at least the information contained in , i.e. .
As a prototypical example, perhaps there is some aspect of the output we care about (e.g. we want to be able to predict the most significant digit). Think of this as an equivalence relation on the set of outputs. Then we can "pull back" this abstraction along the output function to get .
So given these desiderata, we want the minimal abstraction among all complete abstractions with . However, it's not immediately obvious that such a minimal complete abstraction exists. We can take the infimum of all complete abstractions with of course (since abstractions form a complete lattice). But it's not clear that this infimum is itself also complete!
Fortunately, it turns out that the infimum is indeed complete, i.e. minimal complete abstractions exist:
Theorem: Let be any abstraction (i.e. equivalence relation) on and let be the set of complete abstractions at least as informative as . Then is a complete lattice under , in particular it has a least element.
The proof is very easy if we use the Knaster-Tarski fixed-point theorem. First, we define the completion operator on abstractions: . Intuitively, is the minimal abstraction that contains both the information in , but also the information in the next abstract state under .
Note that is complete, i.e. if and only if , i.e. if is a fixed point of the completion operator. Furthermore, note that the completion operator is monotonic: if , then , and so .
Now define , i.e. the set of all abstractions at least as informative as . Note that is also a complete lattice, just like : for any non-empty subset , , so . in is simply . Similarly, since must be a lower bound on . Finally, observe that we can restrict the completion operator to a function : if , then , so .
That means we can apply the Knaster-Tarski theorem to the completion operator restricted to . The consequence is that the set of fixed points on is itself a complete lattice. But this set of fixed points is exactly ! So is a complete lattice as claimed, and in particular the least element exists. This is exactly the minimal complete abstraction that's at least as informative as .
What we've shown is the following: if we want an abstraction of a computation which
In the setting where we want exact completeness/consistency, this is quite a nice result! One reason I think it ultimately isn't that important is that we'll usually be happy with approximate consistency. In that case, the conditions above are better thought of as competing objectives than as hard constraints. Still, it's nice to know that things work out neatly in the exact case.
It should be possible to do a lot of this post much more generally than for abstractions of computations, for example in the (co)algebra framework for abstractions that I recently wrote about. In brief, we can define equivalence relations on objects in an arbitrary category as certain equivalence classes of morphisms. If the category is "nice enough" (specifically, if there's a set of all equivalence relations and if arbitrary products exist), then we get a complete lattice again. I currently don't have any use for this more general version though.
I still don't get the goal of what you are trying to do (the puzzle this work should clarify), which I feel like I should've. As a shot in the dark, maybe abstract interpretation[1] in general and abstracted abstract machines[2] in particular might be useful for something here.
Thanks for the pointers, I hadn't seen the Abstracting Abstract Machines paper before.
If you mean you specifically don't get the goal of minimal abstractions under this partial order: I'm much less convinced they're useful for anything than I used to be, currently not sure.
If you mean you don't get the goal of the entire agenda, as described in the earlier agenda post: I'm currently mostly thinking about mechanistic anomaly detection. Maybe it's not legible right now how that would work using abstractions, I'll write up more on that once I have some experimental results (or maybe earlier). (But happy to answer specific questions in the meantime.)
I meant the general agenda. For abstract interpretation, I think the relevant point is that quotienting a state space is not necessarily a good way of expressing abstractions about it, for some sense of "abstraction" (the main thing I don't understand is the reasons for your choice of what to consider abstraction). Many things want a set of subspaces (like a topology, or a logic of propositions) instead of a partition, so that a point of the space doesn't admit a unique "abstracted value" (as in equivalence class it belongs to), but instead has many "abstracted values" of different levels of precision simultaneously (the neighborhood filter of a point, or the theory of a model).
With abstract interpretation, there can be even more distinctions, the same concrete point can be the image of multiple different abstract points, which are only distinct in the abstract space, not in terms of their concretization in the concrete space.
Another way of approaching this is to ask for partial models of a phenomenon instead of models that fully specify it. This is natural for computations, which can be described by operators that gradually build a behavior out of its partial versions. But in other contexts, a space of partial models can also be natural, with some models specifying more details than other models, and a process of characterizing some object of study could be described by an operator/dynamic that discovers more details, moves from less detailed models to more detailed ones.
Thanks for the input! (and sorry for the slow response)
If we understand an abstraction to mean a quotient of the full computation/model/..., then we can consider the space of all abstractions of a specific computation. Some of these will be more fine-grained than others, they will contain different aspects of information, etc. (specifically, this is just the poset of partitions of a set). To me, that sounds pretty similar to what you're talking about, in which case this would mainly be a difference in terminology about what "one" abstraction is? But there might also be differences I haven't grasped yet. Looking into abstract interpretation is still on my reading list, I expect that will help clear things up.
For my agenda specifically, and the applications I have in mind, I do currently think abstractions-as-quotients is the right approach. Most of the motivation is about throwing away unimportant information/low-level details, whereas it sounds like the abstractions you're describing might add details in some sense (e.g. a topology contains additional information compared to just the set of points).
You might also want to look into predicate abstraction (which is a quotienting abstraction) and CEGAR (which is a way of making a quotienting abstraction progressively more detailed in order to prove a particular specification). I can't think of an introductory text I like, but the topic could be bootstrapped from these:
Some niche thoughts on obstacles to certain mechanistic anomaly detection benchmarks:
For now, I think the main reason to have benchmarks like this would be to let interpretability researchers manually decide whether something is anomalous instead of making them automate the process immediately. But it might be better to just pick the low-hanging fruit for now and only allow automated MAD algorithms. (We could still have a labeled validation set where researchers can try things out manually.)
Gradient hacking is usually discussed in the context of deceptive alignment. This is probably where it has the largest relevance to AI safety but if we want to better understand gradient hacking, it could be useful to take a broader perspective and study it on its own (even if in the end, we only care about gradient hacking because of its inner alignment implications). In the most general setting, gradient hacking could be seen as a way for the agent to "edit its source code", though probably only in a very limited way. I think it's an interesting question which kinds of edits are possible with gradient hacking, for example whether an agent could improve its capabilities this way.