Designing an agent which is guaranteed to terminate is not, in itself, a solution to AI safety. Indeed, this desideratum is already satisfied by Minsky's ultimate machine. At the very least, we have to design an agent powerful enough to permanently defend us against malicious AIs without adverse side effects. So we can indeed have AIs that are incentivized to complete some task in a short amount of time, but it is not clear how to formulate the task of "defending against malicious AIs" for such an agent. The closest thing is probably Paul Christiano's approval-directed agents, where the AI generates some output (e.g. a plan of defense against malicious AIs) which a human has to approve. There are problems with this: for one thing, a plan which a human would approve might still be a bad plan (or even a dangerous memetic virus); for another, the module inside the AI responsible for modeling humans is susceptible to acausal attack.
I agree it's not a complete solution, but it might be a good path towards creating a task-AI, which is a potentially important unsolved sub-problem.
I spoke with Huw about this idea. I was thinking along similar lines at some point, but only for "safe-shutdown", e.g. a self-driving car that anticipates encountering a dangerous situation and wants to shut itself down safely.
It seems intuitive to give it a shutdown policy that triggers in such cases and that aims to minimize a combined objective of time-to-shutdown and risk-of-shutdown. (Of course, this doesn't deal with interrupting the agent, à la Armstrong and Orseau.)
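To make that concrete, here's a rough sketch of the kind of combined objective I have in mind (my own illustration, nothing standard): a per-step cost that adds elapsed time to a weighted risk estimate, so the shutdown policy trades off shutting down quickly against shutting down safely. The weight `LAMBDA` and the `risk_estimate` signal are invented names for illustration.

```python
LAMBDA = 10.0  # assumed trade-off weight between shutdown speed and safety

def shutdown_step_cost(risk_estimate, dt=1.0):
    """Per-step cost while the shutdown policy is active.

    Summed over a trajectory this gives
        (time to shutdown) + LAMBDA * (total risk incurred),
    so minimizing it trades off shutting down fast against taking risky shortcuts.
    """
    return dt + LAMBDA * risk_estimate

# Compare two candidate shutdown trajectories (lower total cost is better):
fast_but_risky = sum(shutdown_step_cost(r) for r in [0.1, 0.1, 0.1])             # 3 steps -> 6.0
slow_but_safe = sum(shutdown_step_cost(r) for r in [0.01, 0.0, 0.0, 0.0, 0.01])  # 5 steps -> 5.2
print(fast_but_risky, slow_but_safe)  # the slower, safer trajectory wins here
```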
Huw pointed out that a similar strategy can be used for any "genie"-style goal (i.e. you want an agent to do one thing as efficiently as possible, and then shut down until you give it another command), which made me substantially more interested in it.
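To illustrate the "genie" version (again, just a toy sketch of my own, with all names invented): the agent earns reward only for completing the current command, pays a small per-step cost so it acts efficiently, and then shuts down by default until it's given another command.

```python
COMPLETION_REWARD = 1.0   # assumed reward for finishing the commanded task
STEP_COST = 0.01          # assumed per-step cost pushing toward efficiency

def run_command(env, policy, max_steps=100):
    """Execute a single command, then shut down regardless of outcome."""
    ret = 0.0
    obs = env.reset()
    for _ in range(max_steps):
        obs, done = env.step(policy(obs))
        ret -= STEP_COST
        if done:                      # command completed
            ret += COMPLETION_REWARD
            break
    env.shutdown()                    # agent stays off until the next command
    return ret

class CountdownEnv:
    """Trivial stand-in environment: the task finishes after n steps."""
    def __init__(self, n=5):
        self.n = n
    def reset(self):
        self.t = 0
        return self.t
    def step(self, action):
        self.t += 1
        return self.t, self.t >= self.n
    def shutdown(self):
        pass

print(run_command(CountdownEnv(), policy=lambda obs: 0))  # prints roughly 0.95
```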
This seems similar in spirit to giving your agent a short horizon, but now you also have regular terminations by default, which has some extra pros and cons.