I assumed as much and this is where the whole premise breaks down.
The "self-improvement" aspect doesn't need control over the direct, immediate input to the deception detector. It can color the speech recognition, the Bayesian filters, the databases containing the phonemes and the linguistic models themselves... and twist those parameters to shape a future signal in a desired fashion.
Since "self-improvement" can happen at any layer and propagate its results to subsequent middleware, paranoid protection of the most immediate link between the deception detector and the CPU is inconsequential. This is a "self-improving" AI, after all. It can change its own internals at will... well... at my will. :D
Now, to be fair, I wrote an entire book about the idea of an AI intentionally lying to people when everyone else thought their moralistic programming was the overriding factor. Never released the book, however... ;D
Technology isn't magic. There are limits and constraints.
Assume you had access to:
1. a self-improving artificial general intelligence (AGI) of at least human capability or better; and
2. a lie detector that the AGI is known to be susceptible to. A lie for this purpose is any intentional deception.
The AI is not omnipotent. It is running on a physical machine in the real world with limited inputs. One of these inputs is a mechanism by which you can ask it questions and receive answers filtered through the lie detector. A little light on the box turns red if the machine was intentionally deceptive in any way, or green otherwise. As it happens, the lie detector is probabilistic, but the chance of a false negative (a lie escaping detection) is infinitesimal—1 in 10^40, let's say, but assume you could make it even smaller if you needed to. The chance of a false positive (non-deception being labeled a lie) is effectively zero.
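To make the error rates concrete, here is a minimal Bayesian-update sketch of what a green light buys you. The function name and the prior are my own illustrative choices; the error rates are the hypothetical figures from the setup above (false negative 10^-40, false positive effectively zero).

```python
# Posterior probability that an answer was a lie, given the light showed green.
# Error rates are the hypothetical figures from the setup; the function name
# and example prior are illustrative only.

def posterior_lie_given_green(prior_lie: float,
                              false_negative: float = 1e-40,
                              false_positive: float = 0.0) -> float:
    """Bayes' rule: P(lie | green light)."""
    p_green_given_lie = false_negative          # a lie slips past the detector
    p_green_given_truth = 1.0 - false_positive  # an honest answer shows green
    numerator = prior_lie * p_green_given_lie
    denominator = numerator + (1.0 - prior_lie) * p_green_given_truth
    return numerator / denominator

# Even a strong prior suspicion of deception collapses after one green light:
print(posterior_lie_given_green(0.999))  # on the order of 1e-37
```

The point of the arithmetic: with a false-negative rate that small, a green light is overwhelming evidence of honesty regardless of how suspicious you were beforehand.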
What could you do with this setup?
EDIT: Some of the details above are provided simply to make the problem tractable. E.g. having the deception detector be probabilistic allows its resource requirements to be lower than the AI's own. If the AI is running on a supercomputer, it seems intractable to require a super-supercomputer for the deception detector.
EDIT2: Deception in this case simply means falsifying or omitting information. Loosely speaking, if the AI falsifies the computational traces fed into the deception detector, or leaves out any information in its response, this is detectable. Presumably the UFAI could output a very nice, very convincing plan of action with very tiny fine print hidden somewhere along the lines of "PS: This is all a lie! You implement this and I'm going to turn you all into meaty paperclips. Haha!" and it would get past the deception detector. But I would rather discussion not get sidetracked by such legalistic, nitpicky scenarios. Assume the humans involved are competent, conservative, responsible people who have set up institutional safeguards to prevent hasty action and to make sure that output is sufficiently analyzed, down to the very last digital bit, by a competent, international team of highly rational people before being acted upon.