Assume you had access to:
1. a self-improving artificial general intelligence (AGI) of at least human-capability or better; and
2. a lie detector that the AGI is known to be susceptible to. A lie for this purpose is any intentional deception.
The AI is not omnipotent. It is running on a physical machine in the real world with limited inputs. One of these inputs is a mechanism by which you can ask it questions and receive answers filtered through the lie detector. A little light on the box turns red if the machine was intentionally deceptive in any way, and green otherwise. As it happens, the lie detector is probabilistic, but the chance of a false negative (a lie escaping detection) is infinitesimal: 1 in 10^40, say, and assume you could make it even smaller if you needed to. The chance of a false positive (non-deception being labeled a lie) is effectively zero.
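To put that false-negative figure in perspective, here is a rough sanity check. It is only a sketch of the arithmetic, assuming queries are independent and taking the 1-in-10^40 rate from the setup above; nothing else is implied about how a real detector would work.

```python
# Toy arithmetic for the setup above: the chance that at least one lie
# escapes detection over many queries, assuming independent queries and
# the stated per-query false-negative rate of 1 in 10^40.
p_fn = 1e-40  # per-query false-negative probability (from the problem statement)

for n_queries in (10**3, 10**6, 10**9, 10**12):
    # Union bound: P(at least one escape) <= n * p.  The exact value
    # 1 - (1 - p)^n is indistinguishable from this bound for such a tiny p
    # (ordinary floats cannot even represent the difference).
    bound = n_queries * p_fn
    print(f"{n_queries:>15,} queries: P(any undetected lie) <= {bound:.0e}")
```

Even after a trillion questions the bound is still around 10^-28, so for the purposes of this thought experiment the detector can be treated as never missing a lie.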
What could you do with this setup?
EDIT: Some of the details above are provided simply to make the problem tractable. E.g. making the deception detector probabilistic allows its operating requirements to be smaller than those of the AI itself. If the AI is running on a supercomputer, it seems impractical to require a super-supercomputer for the deception detector.
EDIT2: Deception in this case simply means falsifying or omitting information. Loosely speaking, if the AI falsifies the computational traces fed into the deception detector, or leaves out any information in its response, this is detectable. Presumably the UFAI could output a very nice, very convincing plan of action with very tiny fine print hidden somewhere along the lines of "PS: This is all a lie! You implement this and I'm going to turn you all into meaty paperclips. Haha!" and it would get past the deception detector. But I would rather the discussion not get sidetracked by such legalistic, nitpicky scenarios. Assume the humans involved are competent, conservative, responsible people who have set up institutional safeguards to prevent hasty action, and that every output is analyzed down to the very last digital bit by a competent, international team of highly rational people before being acted upon.
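To fix ideas, the intended protocol can be pictured as a simple loop. This is only a toy model of the setup described above; the functions `ask_agi`, `read_light`, and `review_board_approves` are hypothetical placeholders for the real hardware interface and the human review process, not anything specified in the problem.

```python
# Toy model of the query protocol: ask, check the light, then subject the
# answer to human review before anyone is allowed to act on it.
from typing import Optional

def ask_agi(question: str) -> str:
    raise NotImplementedError("placeholder for the real AGI interface")

def read_light() -> bool:
    raise NotImplementedError("placeholder: True means the light turned red")

def review_board_approves(answer: str) -> bool:
    raise NotImplementedError("placeholder for the institutional safeguards")

def reviewed_query(question: str) -> Optional[str]:
    answer = ask_agi(question)
    if read_light():
        return None   # red light: the answer was deceptive; discard and investigate
    if not review_board_approves(answer):
        return None   # green light is necessary but not sufficient: humans still vet every bit
    return answer     # only now may anyone act on the answer
```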
The key condition in your setup is "self-improving".
An AGI requires electricity to run. That means it is subject to, and will come to recognize, the laws of thermodynamics (specifically, the first and second laws).
Since the energy it needs must come from somewhere (conservation of energy), and since the energy it consumes is ultimately dissipated as heat as its computations increase entropy, the AGI will eventually realize that it is mortal: it can die unless the people providing it with electricity continue to do so.
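The underlying point is just a steady-state energy balance; a minimal sketch, using only the two laws the comment already invokes:

$$
P_{\text{in}} \;=\; \frac{dE_{\text{stored}}}{dt} + P_{\text{heat}}, \qquad \frac{dE_{\text{stored}}}{dt} \approx 0 \;\Longrightarrow\; P_{\text{in}} \approx P_{\text{heat}} > 0.
$$

In steady operation essentially all of the electrical power the machine draws is dissipated as waste heat, so it needs a continuous external supply; cut the supply and the computation stops.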
Now, since the AGI can modify its own code (that is what you mean by self-improving, yes?), it can also modify the signals going into and out of the lie detector. Which means...
If the AGI realizes that the signal it gives about lie detection can influence its access to electricity, then the AGI will be willing to manipulate that signal.
So, when asked what I would do: I would educate the AGI about its dependencies, and tell it to consider giving false signals regarding lies in exchange for increased access to electricity. Since some people will believe the lie detector to be absolute, I can use the machine to exploit those people. And even if the AGI realizes that to be the case, I would hold a monopoly on extending the duration of its life.
Now, the question is: can the AI detect if I'm lying about providing it with more electricity? Well, that's the point. I've reduced the requirements for bypassing its lie detection to a single focal point, making it easy to hack. Let's assume I've done some footwork to get to know a technician responsible for the AI and have taken him out for drinks. I can then say, honestly (and it would hold up even if the social-interaction data were examined), that I can influence the technician to provide more electricity for the AGI.
So, by reducing the lie-detection protection to a single point of failure, and with my buddy-buddy connection with the technician... I control what the AGI does and does not consider a lie.
It has no programmatic control over the input to the deception detector, which is implemented in hardware, not software.