[Final Update: Back to 'Discussion'; struck out the initial framing, which was misleading.][Update: Moved to 'Main'. Also, judging by the comments, it appears that most have misunderstood the puzzle and read way too much into it; user 'Manfred' seems to have got the point.]
[Note: This little puzzle is my first article. Preliminary feedback suggests some of you might enjoy it while others might find it too obvious, hence the cautious submission to 'Discussion'; will move it to 'Main' if, and only if, it's well-received.]
In his recent paper "The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents", Nick Bostrom states:
Even an agent that has an apparently very limited final goal, such as “to make 32 paperclips”, could pursue unlimited resource acquisition if there were no relevant cost to the agent of doing so. For example, even after an expected-utility-maximizing agent had built 32 paperclips, it could use some extra resources to verify that it had indeed successfully built 32 paperclips meeting all the specifications (and, if necessary, to take corrective action). After it had done so, it could run another batch of tests to make doubly sure that no mistake had been made. And then it could run another test, and another. The benefits of subsequent tests would be subject to steeply diminishing returns; however, so long as there were no alternative action with a higher expected utility, the agent would keep testing and re-testing (and keep acquiring more resources to enable these tests).
Let us take it from here.
It is tempting to say that a machine can never halt after achieving its goal, because it cannot know with full certainty whether the goal has been achieved; it will keep verifying, possibly to ever-increasing degrees of certainty, but never halt as such.
What if, starting from a naive goal G, the machine's goal were redefined as "achieve G with probability p" for some p < 1? It appears this would not work either, since the machine would never be fully certain of being p-certain of having achieved G (and so on up the regress...).
Yet one can specify a set of conditions under which a program will terminate, so how is the argument above fallacious?
Solution in ROT13: Va beqre gb unyg fhpu na ntrag qbrfa'g arrq gb *xabj* vg'f c pregnva, vg bayl arrqf gb *or* c pregnva; nf gur pbaqvgvba vf rapbqrq, gur unygvat jvyy or gevttrerq bapr gur ntrag ragref gur fgngr bs c pregnvagl, ertneqyrff bs jurgure vg unf (shyy) xabjyrqtr bs vgf fgngr.
You could add a section to the AI's main loop that says "if P(G) > p then terminate", and for a non-recursively-self-improving AI that doesn't know it has such a section in its code, this would work. For an AI that isn't powerful enough to rewrite itself but knows it has this section of code, it seems plausible that its best strategy, given its bounded abilities, is still to maximize P(G) until the termination clause activates, but this may not be true. We humans try to work around the known ways in which our deviations from expected utility maximization limit our ability to achieve our goals, and an AI would likely try to do so as well. For a recursively self-improving AI, the termination clause is unlikely to survive the AI rewriting itself, as long as the AI understands that expected utility maximization is the most effective way to achieve its goals. (This is a general problem with trying to add safeguards to an AI that deviate from expected utility maximization, unless the deviation endorses itself, which is hard to set up.)
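To make the first option concrete, here is a minimal Python sketch (a toy agent with hypothetical names and numbers, not a real architecture): the termination clause lives in the outer loop, so it fires as soon as the agent's own estimate of P(G) crosses p, whether or not the agent ever reasons about that fact.

```python
import random

P_THRESHOLD = 0.999  # hypothetical choice of p; any p < 1 works for the illustration

class ToyPaperclipAgent:
    """Toy agent: builds 32 paperclips, then keeps running verification
    tests, each of which raises its own estimate of P(G)."""

    def __init__(self, target=32):
        self.target = target
        self.built = 0
        self.p_goal = 0.0  # the agent's current estimate of P(G)

    def step(self):
        if self.built < self.target:
            self.built += 1                      # build one more paperclip
            self.p_goal = 0.9 * self.built / self.target
        else:
            # Each further test closes part of the remaining doubt
            # (diminishing returns, as in Bostrom's re-testing loop).
            self.p_goal += (1.0 - self.p_goal) * random.uniform(0.1, 0.5)

def run(agent, p=P_THRESHOLD):
    steps = 0
    # The termination clause sits in the main loop, outside the agent's
    # goal-directed reasoning: we halt as soon as P(G) > p *is the case*
    # for the agent's estimate, regardless of what the agent knows about it.
    while agent.p_goal <= p:
        agent.step()
        steps += 1
    return steps

if __name__ == "__main__":
    print(run(ToyPaperclipAgent()), "steps until the termination clause fired")
```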
On the other hand, trying to bake "P(G) > p" into the utility function makes the AI care about its epistemic state in a way that could conflict with the instrumental desire for accuracy, and makes it vulnerable to wireheading. (And it has the problem from the OP, where the AI becomes concerned with minimizing the meta-uncertainty about its epistemic state, though perhaps it could be programmed to treat its inspection of its own epistemic state as 100% accurate, though this too would be difficult to make stable under recursive self-improvement.)
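For contrast, a minimal sketch (again with hypothetical names and made-up numbers) of what goes wrong when the threshold is baked into the utility function instead: the agent is scored on its belief about P(G), so an action that merely inflates that belief competes with, and can beat, honest verification.

```python
P_THRESHOLD = 0.999

def utility(believed_p_goal, p=P_THRESHOLD):
    # Reward depends on the agent's *belief* that G has been achieved
    # with probability > p, not on the world itself.
    return 1.0 if believed_p_goal > p else 0.0

# Two toy actions, described by the belief each leaves the agent with and
# by the true probability that G actually holds afterwards (numbers made up).
actions = {
    "verify_carefully":   {"belief": 0.9980, "true_p": 0.9980},
    "corrupt_own_sensor": {"belief": 0.9999, "true_p": 0.5000},
}

# Ranking actions by the utility of the resulting belief: tampering with
# its own epistemic state beats honest verification.
best = max(actions, key=lambda name: utility(actions[name]["belief"]))
print(best)  # -> corrupt_own_sensor
```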