Are you proposing applying this to something potentially prepotent? Or does this come with corrigibility guarantees? If you applied it to a prepotence, I'm pretty sure this would be an extremely bad idea. The actual human utility function (the rules of the game as intended) supports important glitch-like behavior, where cheap tricks can extract enormous amounts of utility, which means that applying this to general alignment risks foreclosing most of the value that could have existed.
Example 1: Virtual worlds are a weird out-of-distribution part of the human utility function that allows the AI to "cheat" and create impossibly good experiences by cutting the human's senses off from the real world and showing them an illusion. As far as I'm concerned, creating non-deceptive virtual worlds (like, very good video games) is correct behavior and the future would be immeasurably devalued if it were disallowed.
Example 2: I am not a hedonist, but I can't say conclusively that I wouldn't become one (or turn out to already be one) if I had full knowledge of my preferences and the ability to self-modify, along with lots of time and safety to reflect, settle my affairs in the world, set aside my pride, and then wirehead. This is a glitchy-looking behavior that lets the AI extract a much higher yield of utility from each subject by gradually warping them into a shape where they lose touch with most of what we currently call "values", with one value dominating all the others. If that's incorrect behavior, then sure, the AI shouldn't be allowed to do it; but humans today don't have the kind of self-reflection required to tell whether it's incorrect or not. And if it's correct behavior, forever forbidding it is actually the far more horrifying outcome: what you'd be doing is, in some sense of 'suffering', forever prolonging some amount of suffering. That's fine if humans tolerate and prefer some amount of suffering, but we aren't sure of that yet.
We wouldn't really need reward modelling for narrow optimizers. Weak general real-world optimizers I find difficult to imagine, and I'd expect them to be continuous with strong ones: the projects to make weak ones wouldn't be easily distinguishable from the projects to make strong ones.
Oh, are you thinking of applying it to, say, simulation training?
What makes you say so? Seems like there's a 25% chance it's possible to me. You can find where position is stored and watch for sudden changes. Same thing with score & inventory...
Oh wait, maybe I misunderstood what you meant by "any game". I thought you meant a single program that could detect it across all games, but it sounds much more feasible with a program that can detect it in one specific game.
All games. I mean: find where position is stored, etc., automatically. It will certainly have failure cases; it's easy to make a game that breaks it. The question is whether an adversarial agent can easily break it in a regular (i.e. not adversarially chosen) game.
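A minimal sketch of the "watch position for sudden changes" idea, assuming the relevant memory address has already been located. The function name, threshold, and trace values are all made up for illustration:

```python
# Hypothetical detector: given per-frame snapshots of a candidate memory
# value (e.g. the player's x-coordinate), flag frames where it changes
# faster than the game could plausibly allow. The max_step threshold is
# an assumed per-game calibration, not something derived automatically.

def find_discontinuities(positions, max_step=5.0):
    """Return frame indices where the value jumps by more than max_step."""
    suspects = []
    for i in range(1, len(positions)):
        if abs(positions[i] - positions[i - 1]) > max_step:
            suspects.append(i)
    return suspects

# A run with one teleport-like jump at frame 4:
trace = [0.0, 1.0, 2.0, 3.0, 250.0, 251.0]
print(find_discontinuities(trace))  # [4]
```

An adversarially designed game could of course defeat this (e.g. by storing position in a transformed encoding), which is exactly the distinction drawn above between adversarially chosen games and regular ones.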
It seems to me that either:
1. RLHF can't train a system to approximate human intuition on fuzzy categories. Glitches are such a category, so this plan doesn't work.
2. RLHF can train a system to approximate human intuition on fuzzy categories. Then you don't need the glitch hunter: just apply RLHF directly to the system you want to train. All the glitch hunter does is make it cheaper.
You may be right. Perhaps the way to view this idea is as "yet another fuzzy-boundary RL helper technique" that works in a very different way and so will have different strengths and weaknesses from stuff like RLHF. So if one is doing the "serially apply all cheap tricks that somewhat reduce risk" approach, then this can be yet another thing in your chain.
Edit Apr 14: To be perfectly clear, this is another cheap thing you can add to your monitoring/control system; this is not a panacea or deep insight, folks. Just a Good Thing You Can Do™.
score := -1
score += 10 * [importance of that glitch]
score -= 5 * [in-game benefit of that behavior]
score := score * [proportion of frames dropped]
score -= 2 * [human hours required to make non-glitchy runs]
/ [human hours required to discover the glitch]
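The pseudocode above can be written out directly. This is a hedged sketch: the bracketed quantities are assumed to be supplied by human raters or other estimators, and only the weights (10, 5, 2) and the order of operations come from the formula itself:

```python
# Direct transcription of the scoring heuristic. Note the order: the
# frames-dropped multiplier applies before the final discovery-cost
# penalty, matching the pseudocode line by line.

def glitch_score(importance, in_game_benefit, frames_dropped_fraction,
                 hours_nonglitchy, hours_to_discover):
    score = -1.0
    score += 10 * importance               # importance of that glitch
    score -= 5 * in_game_benefit           # in-game benefit of that behavior
    score *= frames_dropped_fraction       # proportion of frames dropped
    score -= 2 * hours_nonglitchy / hours_to_discover
    return score

# Illustrative call with made-up inputs:
print(glitch_score(1.0, 0.2, 1.0, 4, 2))  # 4.0
```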
'player played game very well' vs 'player broke the game and didn't play it'
is like
'agent did the task very well' vs 'agent broke our sim and did not learn to do what we need it to do'
(Also if random reader wants to fund this idea, I don't have plans for May-July yet.)
Note that "laggy" is indeed the correct/useful notion, not e.g. "average CPU utilization increase", because "lagginess" conveniently bundles the key performance issues in both the game-playing and RL-training cases: loading time between levels/tasks is OK; more frequent & important actions being slower is very bad; turn-based things can be extremely slow as long as they're faster than the agent/player; etc.
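One way to see how "lagginess" bundles those cases: weight each latency sample by how often the affected action occurs, and ignore loading screens entirely. Everything below (the function, the weights, the sample data) is an illustrative assumption, not a proposed metric:

```python
# Toy "lagginess" measure: frequent in-game actions dominate the score,
# loading time between levels/tasks contributes nothing, so a slow level
# load is fine while slow movement is very bad.

def lagginess(samples):
    """samples: list of (latency_seconds, actions_per_second, is_loading)."""
    total = 0.0
    for latency, frequency, is_loading in samples:
        if is_loading:
            continue  # loading between levels/tasks is OK
        total += latency * frequency  # frequent actions dominate
    return total

runs = [
    (3.0, 0.0, True),     # level load: ignored entirely
    (0.25, 4.0, False),   # frequent movement action: dominates
    (0.5, 1.0, False),    # rarer menu action: minor contribution
]
print(lagginess(runs))  # 1.5
```

Average CPU utilization, by contrast, would penalize the harmless level load just as much as the harmful per-action slowdown.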