[Final Update: Back to 'Discussion'; struck out the initial framing, which was misleading.][Update: Moved to 'Main'. Also, judging by the comments, it appears that most have misunderstood the puzzle and read way too much into it; user 'Manfred' seems to have got the point.]
[Note: This little puzzle is my first article. Preliminary feedback suggests some of you might enjoy it while others might find it too obvious, hence the cautious submission to 'Discussion'; will move it to 'Main' if, and only if, it's well-received.]
In his recent paper "The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents", Nick Bostrom states:
Even an agent that has an apparently very limited final goal, such as “to make 32 paperclips”, could pursue unlimited resource acquisition if there were no relevant cost to the agent of doing so. For example, even after an expected-utility-maximizing agent had built 32 paperclips, it could use some extra resources to verify that it had indeed successfully built 32 paperclips meeting all the specifications (and, if necessary, to take corrective action). After it had done so, it could run another batch of tests to make doubly sure that no mistake had been made. And then it could run another test, and another. The benefits of subsequent tests would be subject to steeply diminishing returns; however, so long as there were no alternative action with a higher expected utility, the agent would keep testing and re-testing (and keep acquiring more resources to enable these tests).
Let us take it from here.
It is tempting to say that a machine can never halt after achieving its goal, because it can never know with full certainty that it has done so; it will keep verifying, possibly to ever-increasing degrees of certainty, whether it has achieved its goal, but it will never actually halt.
What if, starting from a naive goal G, the machine's goal were redefined as "achieve G with probability p" for some p < 1? It appears this would not work either, since the machine would never be fully certain of being p certain of having achieved G, and so on.
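To make that framing concrete, here is a minimal sketch of the never-halting verifier (Python; the Bayesian update rule, the prior and the likelihood ratio are all made-up stand-ins for illustration, not anything from Bostrom's paper):

```python
from fractions import Fraction

def bayes_update(credence: Fraction, likelihood_ratio: Fraction) -> Fraction:
    """Posterior odds = prior odds * likelihood ratio (exact arithmetic)."""
    odds = credence / (1 - credence) * likelihood_ratio
    return odds / (1 + odds)

# With exact arithmetic the credence approaches 1 but never reaches it,
# so a halting condition that demands full certainty never fires.
credence = Fraction(1, 2)                       # made-up prior
for test in range(50):                          # fifty passing verification tests
    credence = bayes_update(credence, Fraction(10))
    assert credence < 1                         # still not *fully* certain
```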
Yet one can specify a set of conditions under which a program will terminate, so how is the argument above fallacious?
Solution in ROT13: Va beqre gb unyg fhpu na ntrag qbrfa'g arrq gb *xabj* vg'f c pregnva, vg bayl arrqf gb *or* c pregnva; nf gur pbaqvgvba vf rapbqrq, gur unygvat jvyy or gevttrerq bapr gur ntrag ragref gur fgngr bs c pregnvagl, ertneqyrff bs jurgure vg unf (shyy) xabjyrqtr bs vgf fgngr.
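For those who have already decoded it, here is the same idea as a sketch in code (it gives away the line above, so skip it if you want to work out the puzzle yourself). The update rule, prior and threshold are the same made-up stand-ins as in the earlier sketch; the point is only that the threshold is an ordinary condition on the agent's state, so the loop exits the moment that state is reached, and no claim about *knowing* the state has been reached ever enters into it.

```python
from fractions import Fraction

def bayes_update(credence: Fraction, likelihood_ratio: Fraction) -> Fraction:
    """Same made-up update rule as in the earlier sketch."""
    odds = credence / (1 - credence) * likelihood_ratio
    return odds / (1 + odds)

def agent(p: Fraction) -> int:
    """Runs verification tests until its credence *is* at least p, then halts;
    it never reasons about whether it *knows* its credence is at least p."""
    credence = Fraction(1, 2)                   # made-up prior
    tests = 0
    while credence < p:                         # halting condition encoded in the agent's state
        credence = bayes_update(credence, Fraction(10))   # another passing test
        tests += 1
    return tests

print(agent(Fraction(99, 100)))                 # halts after 2 tests
```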
I think the machine would probably just make more than 32 paperclips. Redundancy helps.
How do you tell the machine to terminate?
The way I see it, the machine exists for an instant. Self-modifying, or even just not deleting itself, is creating a new machine. If you tell it to terminate, you have to specify what you mean by its future self. You could say that it's any machine running the same code, but then the machine will have no reason to keep this code after the first self-modification. In fact, since deleting this code is itself a self-modification, it can just delete it immediately. You could tell it to stop any machine it programs, but then it might just program them indirectly, and subtly influence someone to make a paperclip maximizer. You could tell it to stop any machine it programs indirectly, but merely by existing in the same universe as us, it will have modified us somehow, and would thus be forced to kill us all.