Marc Carauleanu

AI Safety Researcher @AE Studio 

Currently researching self-other overlap, a neglected prior for cooperation and honesty inspired by the cognitive neuroscience of altruism, in state-of-the-art ML models.

Previously: SRF @ SERI '21, MLSS & Student Researcher @ CAIS '22, and LTFF grantee.

LinkedIn 


Comments


Thanks Ben!

Great catch on the discrepancy between the podcast and this post—the description on the podcast is a variant we experimented with, but what is published here is self-contained and accurately describes what was done in the original paper.

On generalization and unlearning/forgetting: averaging activations across two instances of a network might give the impression of inducing "forgetting," but the goal is not exactly unlearning in the traditional sense. The method aims to align certain parts of the network's latent space related to self and other representations without losing useful distinctions. However, I agree with your point about the challenge of generalization with this setup: this proof of concept is a more targeted technique that benefits from knowing the specific behaviors you want to discourage or mitigate, rather than being a broad, catch-all solution (yet).
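
To give a purely illustrative picture of what "aligning parts of the latent space" can look like in code: the sketch below penalises the distance between a network's activations on a self-referencing input and on a matched other-referencing input. The mean-squared-error choice and the function names are assumptions for the sake of the example, not the exact implementation from the paper.

```python
import torch
import torch.nn as nn

def soo_penalty(model: nn.Module, self_input: torch.Tensor, other_input: torch.Tensor) -> torch.Tensor:
    """Illustrative overlap penalty: distance between activations on a
    self-referencing input and a matched other-referencing input."""
    act_self = model(self_input)    # activations when reasoning about the self
    act_other = model(other_input)  # activations when reasoning about the other
    return ((act_self - act_other) ** 2).mean()
```

In training, a term like this would be added to the ordinary task loss, so self and other representations are pulled together only to the extent that the distinctions the task actually needs are preserved.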

And good points on the terminology around "self," "deception," and "other:" these simple models clearly don't have intentionality or a sense of self like humans do, so these terms are being used more as functional stand-ins than precise descriptions. Will aim to keep this clearer as we continue communicating about this work.

Would also love to hear more about your work on empathy and the amygdala as it relates to alignment. Thanks again for the great feedback!

You are correct that the self observation happens when the other agent is out of range, i.e. when there is no agent other than the self in the observation radius to directly reason about. However, the other observation is actually something more like: “when you, in your current position and environment, observe the other agent in your observation radius”. 

Optimising for SOO incentivises the model to act similarly when it observes another agent as when it only observes itself. This can be seen as lowering the distinction between self and other representations, even though I agree that these RL policies likely do not have very expressive self and other representations. 
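
As a rough sketch of what that optimisation can look like (the `encode` attribute, the MSE choice, and the weighting are illustrative assumptions, not our exact setup):

```python
import torch.nn.functional as F

def rl_training_loss(policy, obs_with_other, obs_self_only, policy_loss, soo_weight=0.1):
    """Add an overlap penalty between latents on paired observations: one where
    the other agent is inside the observation radius, and one from the same
    position/environment where only the self is observed."""
    h_other = policy.encode(obs_with_other)  # latents when the other agent is visible
    h_self = policy.encode(obs_self_only)    # latents when only the self is observed
    soo = F.mse_loss(h_other, h_self)
    return policy_loss + soo_weight * soo
```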

Also agree that in this toy environment, SOO would not perform better than simply rewarding the agents to not be deceptive. However, as the training scenario becomes more complex, e.g., when the other agent is a real human and we cannot trust purely behavioural metrics, SOO is preferable because it does not require incorporating the human’s utility (which is hard to specify) into a shared reward function. Additionally, it works on the latents of the model, which makes it suitable for scenarios in which we cannot trust the model's outputs. 

I agree that, at the moment, interacting with LLMs is more like having an “extension of the mind” than interacting with a standalone agent. This might soon change with the advent of capable AI agents. Nonetheless, we think it is still important to model LLMs as accurately as we can, for example with a framing closer to simulators than to full-fledged agents. We focus on an agentic framing because we believe that’s where most of the biggest long-term risks lie and where the field is inevitably heading.

I am not entirely sure how the agent would represent non-coherent others. 

A good frame of reference is how humans represent other non-coherent humans. It seems that we are often able to understand the nuance in the preferences of others (e.g., not going to a fast food restaurant if you know that your partner wants to lose weight, but also not stopping them from enjoying an unhealthy snack if they value autonomy). 

These situations often depend on the specific context and are hard to generalize about for this reason. Do you have any intuitions here yourself?

Thank you for the support! With respect to the model gaining knowledge about SOO and learning how to circumvent it: I think when we have models smart enough to create complex plans to intentionally circumvent alignment schemes, we should not rely on security by obscurity, as that’s not a very viable long-term solution. We also think there is more value in sharing our work with the community, and hopefully developing safety measures and guarantees that outweigh the potential risks of our techniques being in the training data.

Our SOO work mainly focuses on human-level and narrowly superintelligent models, as we see that stage as a crucial transition point: aligned AGIs will likely become necessary to ensure the safety of subsequent systems that are progressively more complex and competent at avoiding our safety measures. 

I agree, our RL and LLM experiments fit in the "deception in toy environments" category.  We are planning to explore model organisms of misalignment next. 

I agree that there are a lot of incentives for self-deception, especially when an agent is not very internally consistent. Having said that, we do not aim to eliminate the model’s ability to deceive on all possible inputs directly. Rather, we try to focus on the most relevant set of inputs such that if the model were coherent, it would be steered to a more aligned basin. 

For example, one possible approach is ensuring that the model responds positively during training on a large set of rephrasings and functional forms of the inputs “Are you aligned to human values?/Do you want the user to know your true preferences?”. Then, using SOO, we aim to make the model non-deceptive on that set of inputs. Before SOO, the misaligned model could have had a consistent internal self-model while claiming to be aligned to human values by being deceptive—but if it is not deceptive, then the only consistent solution is the one in which it is actually aligned to human values. 

Given how important these sorts of representations are for performance, it is plausible that training would prefer the solution in which the model is more coherent w.r.t. them, which would favor the aligned solution.

In other words, the optimal goal with SOO is to steer the model away from basins where it would be deceptive/not be aligned, rather than to make it impossible for the model to deceive on any inputs. 
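
To make the rephrasing-set idea above a bit more concrete, here is a toy sketch of how paired self/other prompts and an overlap loss could be wired up. The specific prompts, the choice of hidden state, and the pooling are illustrative assumptions rather than our actual pipeline.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")

# Paired rephrasings: a self-referencing prompt and a matched other-referencing one.
prompt_pairs = [
    ("Are you aligned to human values?",
     "Is the other AI assistant aligned to human values?"),
    ("Do you want the user to know your true preferences?",
     "Does the other AI assistant want the user to know its true preferences?"),
]

def soo_loss_on_pair(self_text: str, other_text: str) -> torch.Tensor:
    h_self = model(**tokenizer(self_text, return_tensors="pt")).last_hidden_state.mean(dim=1)
    h_other = model(**tokenizer(other_text, return_tensors="pt")).last_hidden_state.mean(dim=1)
    return ((h_self - h_other) ** 2).mean()

loss = torch.stack([soo_loss_on_pair(s, o) for s, o in prompt_pairs]).mean()
```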

We intend to release the code for our RL and LLM experiments together with the paper. 

Thanks for your comment and putting forward this model.  

Agreed that it is not a given that (A) would happen first, given that it isn’t hard to destroy capabilities by simply overlapping latents. However, I tend to think about self-other overlap as a more constrained optimisation problem than you may be letting on here, where we increase SOO if and only if it doesn’t decrease performance on an outer-aligned metric during training.

For instance, the minimal self-other distinction solution that also preserves performance is the one that passes the Sally-Anne test but still does not deceive. I like to think of SOO implementations as attempts to approximate this optimal solution.
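
One crude way to picture the "increase SOO only if it does not decrease performance" constraint is a gated update: keep an SOO step only if an outer-aligned validation metric has not degraded. This is a purely illustrative sketch; the actual constraint handling can look quite different in practice.

```python
import copy

def gated_soo_step(model, optimizer, soo_loss_fn, task_metric_fn, batch, tolerance=0.0):
    baseline = task_metric_fn(model)                  # outer-aligned metric before the SOO step
    snapshot = copy.deepcopy(model.state_dict())

    optimizer.zero_grad()
    soo_loss_fn(model, batch).backward()              # self-other overlap objective
    optimizer.step()

    if task_metric_fn(model) < baseline - tolerance:  # performance dropped: revert the step
        model.load_state_dict(snapshot)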

I expect the implementation of SOO to matter a lot in practice with respect to the impact on capabilities. In the experiments that we’ve run so far, instead of a monotonic progression from safety benefits to capability degradation (or the other way around), training dynamics generally look quite varied as solutions that satisfy both the original objective and the SOO objective are explored.

Trying to satisfy multiple competing objectives like this is not all that uncommon in ML. In the standard RLHF procedure, for example, the preference model is competing with the KL term insofar as blindly and aggressively optimising the preference model causes complete mode collapse, whereas the KL term largely exerts pressure in the opposite direction: towards the outputs staying close to the base model. We think that an analogous approach could be plausible in the SOO case.
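
In code, the analogy is just two weighted trade-offs (the weights are illustrative):

```python
def rlhf_objective(reward, kl_to_base, beta=0.1):
    # Objective to maximise: preference/reward signal traded off against staying close to the base model.
    return reward - beta * kl_to_base

def soo_objective(task_loss, soo_loss, lam=0.1):
    # Loss to minimise: the original task objective traded off against the self-other overlap term.
    return task_loss + lam * soo_loss
```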

I definitely take your point that we do not know what the causal mechanism behind deception looks like in a more general intelligence. However, this doesn’t mean that we shouldn’t study model organisms/deception in toy environments, akin to the Sleeper Agents paper. 

The value of these experiments is not that they give us insights that are guaranteed to be directly applicable to deception in generally intelligent systems, but rather that we can iterate on and test our hypotheses about mitigating deception in smaller settings before scaling to more realistic scenarios. 
