E.g., assume that the users (the programmers) would use a remote-controlled robotic arm to press the shutdown button. If the agent turns out to be a paperclipper, it may disassemble the robotic arm just to turn it into paperclips. The agent is not "intentionally" trying to resist shutdown, but the effect is the same. Symmetrically, there could be scenarios where the agent "accidentally" presses the shutdown button itself.
Yep! In fact, this is exactly the problem discussed in section 4.1 and described in Theorem 6, is it not?
Section 4.1 frames the problem in terms of the agent creating a sub-agent or successor. My point is that the issue is more general, as there are manipulative actions that don't involve creating other agents.
Theorem 6 seems to address the general case, although I would remark that even if epsilon == 0 (that is, even if U_N is indifferent to manipulation) you aren't safe.
Benja, Eliezer, and I have published a new technical report, in collaboration with Stuart Armstrong of the Future of Humanity Institute. This paper introduces Corrigibility, a subfield of Friendly AI research. The abstract is reproduced below:
We're excited to publish a paper on corrigibility, as it promises to be an important part of the FAI problem. This is true even without making strong assumptions about the possibility of an intelligence explosion. Here's an excerpt from the introduction:
(See the paper for references.)
This paper includes a description of Stuart Armstrong's utility indifference technique, previously discussed on LessWrong, along with a discussion of some potential concerns. Many open questions remain even in our small toy scenario, and many more stand between us and a formal description of what it even means for a system to exhibit corrigible behavior.
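For readers who haven't seen the indifference idea before, here is a minimal toy sketch. It is not the paper's exact formalism: the names (U_N, U_S, the correction term f) and all numbers are illustrative. The agent takes a first action, then the shutdown button is pressed with some probability, then it takes a second action; a correction term is added to the shutdown utility so that, for any first action, the agent's expected utility is the same whether or not the button is pressed, leaving it no incentive to cause or prevent the press.

```python
# Toy sketch of utility indifference (illustrative, not the paper's exact construction):
# the agent picks a1, the button is pressed with probability p, then it picks a2.
# Combined utility is U_N if the button is not pressed, and U_S + f(a1) if it is.

# Hypothetical utilities over (a1, a2) pairs.
U_N = {  # "normal" utility: rewards pursuing the original task
    ("build", "work"): 10, ("build", "idle"): 2,
    ("wait", "work"): 5,  ("wait", "idle"): 1,
}
U_S = {  # "shutdown" utility: rewards actually shutting down
    ("build", "shutdown"): 3, ("build", "idle"): 0,
    ("wait", "shutdown"): 8,  ("wait", "idle"): 0,
}

A1 = ["build", "wait"]

def best(utility, a1):
    """Best value the agent can still achieve under `utility` after taking a1."""
    return max(v for (x, _), v in utility.items() if x == a1)

def f(a1):
    """Correction term: difference between the best attainable U_N and U_S values."""
    return best(U_N, a1) - best(U_S, a1)

def expected_value(a1, p):
    """Expected combined utility of a1 when the button is pressed with probability p."""
    if_not_pressed = best(U_N, a1)        # agent then maximizes U_N
    if_pressed = best(U_S, a1) + f(a1)    # agent then maximizes U_S, plus the correction
    return (1 - p) * if_not_pressed + p * if_pressed

for p in (0.0, 0.5, 1.0):
    print(p, {a1: expected_value(a1, p) for a1 in A1})
# The ranking of first actions is identical for every p: the correction term cancels
# the button's effect on expected utility, so the agent gains nothing by causing or
# preventing the press.
```

The same cancellation is also what makes the agent ignore any evidence carried by the button press, which is one of the concerns the paper discusses.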