This sounds familiar. Are you aware of similar concepts published elsewhere? I feel certain I've read something along these lines before. By all means, claim it's original, though.
Not sure if this is what you're thinking of, but there's a research area called "adjustable autonomy" (it goes by a few other names as well). It sounds superficially similar, but it isn't actually getting at the problem described here, which arises from convergent instrumental values in sufficiently advanced agents.
Benja, Eliezer, and I, in collaboration with Stuart Armstrong of the Future of Humanity Institute, have published a new technical report: "Corrigibility". The paper introduces corrigibility as a subfield of Friendly AI research. The abstract is reproduced below:
We're excited to publish this paper, as corrigibility promises to be an important part of the FAI problem, even without making strong assumptions about the possibility of an intelligence explosion. Here's an excerpt from the introduction:
(See the paper for references.)
The paper includes a description of Stuart Armstrong's utility indifference technique, previously discussed on LessWrong, along with a discussion of some potential concerns with it. Many open questions remain even in our small toy scenario, and many more stand between us and a formal description of what it would even mean for a system to exhibit corrigible behavior.
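For readers who missed the earlier LessWrong discussion, here is a minimal sketch of the indifference idea. The notation ($U_N$ for the agent's normal utility, $U_S$ for a shutdown utility, $\theta$ for the compensation term, "press" for the shutdown-button event) is mine for illustration, not necessarily the paper's exact formulation:

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Rough sketch of utility indifference (illustrative notation, not
% necessarily the paper's exact formulation). $U_N$ scores normal
% operation, $U_S$ scores shutdown behavior, and "press" is the
% event that the shutdown button is pressed.
\[
  U(h) =
  \begin{cases}
    U_N(h)          & \text{if the button is not pressed in } h,\\
    U_S(h) + \theta & \text{if the button is pressed in } h,
  \end{cases}
\]
% The compensation term $\theta$ is chosen so that, in expectation
% under the agent's current policy, the press changes nothing:
\[
  \theta = \mathbb{E}\left[U_N \mid \lnot\text{press}\right]
         - \mathbb{E}\left[U_S \mid \text{press}\right],
\]
% which gives
% $\mathbb{E}[U \mid \text{press}] = \mathbb{E}[U \mid \lnot\text{press}]$.
\end{document}
```

With $\theta$ fixed this way, the agent is indifferent in expectation to whether the button gets pressed, and so has no instrumental incentive to cause or prevent the press; the paper's discussion of potential concerns covers the ways even this construction can still go wrong.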