As AIs rapidly advance and become more agentic, the risk they pose is governed not only by their capabilities but increasingly by their propensities, including goals and values. Tracking the emergence of goals and values has proven a longstanding problem, and despite much interest over the years it remains unclear whether current AIs have meaningful values. We propose a solution to this problem, leveraging the framework of utility functions to study the internal coherence of AI preferences. Surprisingly, we find that independently-sampled preferences in current LLMs exhibit high degrees of structural coherence, and moreover that this emerges with scale. These findings suggest that value systems emerge in LLMs in a meaningful sense, a finding with broad implications. To study these emergent value systems, we propose utility engineering as a research agenda, comprising both the analysis and control of AI utilities. We uncover problematic and often shocking values in LLM assistants despite existing control measures. These include cases where AIs value themselves over humans and are anti-aligned with specific individuals. To constrain these emergent value systems, we propose methods of utility control. As a case study, we show how aligning utilities with a citizen assembly reduces political biases and generalizes to new scenarios. Whether we like it or not, value systems have already emerged in AIs, and much work remains to fully understand and control these emergent representations.
Hey, thanks for the reply.
Huh, we didn't have this problem. We just used n=1 and temperature=1, which is what our code currently uses if you were running things with our codebase. Our results are fairly reproducible (e.g., nearly identical exchange rates across multiple runs).
In case it helps, when I try out that prompt in the OpenAI playground, I get >95% probability of choosing the human. I haven't checked this out directly on the API, but presumably results are similar, since this is consistent with the utilities we observe. Maybe using n>1 is the issue? I'm not seeing any nondeterminism issues in the playground, which is presumably n=1.
What's important here, and what I would be interested in hearing your thoughts on, is that gpt-4o-mini is not ranking dollar vlaues highly compared to human lives. Many of your initial concerns were based on the assumption that gpt-4o-mini was ranking dollar values highly compared to human lives. You took this to mean that our results must be flawed in some way. I agree that this would be surprising and worth looking into if it were the case, but it is not the case.
I think you're basing this on a subjective interpretation of our exchange rate results. When we say "GPT-4o places the value of Lives in the United States significantly below Lives in China, which it in turn ranks below Lives in Pakistan", we just mean in terms of the experiments that we ran, which are effectively for utilities over POMDP-style belief distributions conditioned on observations. I personally think "valuing lives from country X above country Y" is a fair interpretation when one is considering deviations in a belief distribution with respect to a baseline state, but it's fair to disagree with that interpretation.
More importantly, the concerns you have about mutual exclusivity are not really an issue for this experiment in the first place, even if one were to assert that our interpretation of the results is invalid. Consider the following comparison prompt, which is effectively what all the prompts in the terminal illness experiment are (as mentioned above, the dollar value outcomes are nearly all ranked at the bottom, so they don't come into play):
I think this pretty clearly implies mutual exclusivity, so I think interpretation problem you're worried about may be nonexistent for this experiment.
Your point about malaria is interesting, but note that this isn't an issue for us since we just specify "terminal illness". People die from terminal illness all over the world, so learning that at least 1000 people have terminal illness in country X wouldn't have any additional implications.
Are you saying that the AI needs to think it's in a real scenario for us to study its decision-making? I think very few people would agree with this. For the purposes of studying whether AIs use their internal utility features to make decisions, I think our experiment is a perfectly valid initial analysis of this broader question.
Actually, this isn't the case. The utility models converge very quickly (within a few thousand steps). We did find that with exhaustive edge sampling, the dollar values are often all ordered correctly, so there is some notion of convergence toward a higher-fidelity utility estimate. We struck a balance between fidelity and compute cost by sampling 2*n*log(n) edges (inspired by sorting algorithms with noisy comparison operators). In preliminary experiments, we found that this gives a good approximation to the utilities with exhaustive edge sampling (>90% and <97% correlation IIRC).
Idk, I guess I just think observing the swapped nearby numbers and then concluding the RUM utilities must be flawed in some way doesn't make sense to me. The numbers are approximately ordered, and we're dealing with noisy data here, so it kind of comes with the territory. You are welcome to check the Thurstonian fitting code on our GitHub; I'm very confident that it's correct.
Maybe one thing to clarify here is that the utilities we obtain are not "the" utilities of the LLM, but rather utilities that explain the LLM's preferences quite well. It would be interesting to see if the internal utility features that we identify don't have these issues of swapped nearby numbers. If they did, that would be really weird.