I think this just begs the question:
Dynamic: When the belief pool contains "X is fuzzle", send X to the action system.

Ah, but the Tortoise would argue that this isn't enough. Sure, the belief pool may contain "X is fuzzle," and this dynamic, but that doesn't mean that X necessarily gets sent to the action system. In addition, you need another dynamic:
Dynamic 2: When the belief pool contains "X is fuzzle", and there is a dynamic saying "When the belief pool contains 'X is fuzzle', send X to the action system", then send X to the action system.
Or, to put it another way:
Dynamic 2: When the belief pool contains "X is fuzzle", run Dynamic 1.
Of course, then one needs Dynamic 3 to tell you to run Dynamic 2, ad infinitum -- and we're back to the original problem.
I think the real point of the dialogue is that you can't use rules of inference to derive rules of inference -- even if you add them as axioms! In some sense, then, rules of inference are even more fundamental than axioms: they're the machines that you feed the axioms into. One then naturally starts to ask how you can "program" the machines by feeding in certain kinds of axioms, what happens if you try to feed a program's description to itself, the various paradoxes of self-reference, and so on. This is where the connection to Gödel and Turing comes in -- and probably why Hofstadter included this fable.
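To make the machine/axiom distinction concrete, here's a minimal Python sketch in the dialogue's toy vocabulary (the belief pool, the action system, "fuzzle" -- the setup is from the original, the code is mine). A dynamic implemented as code simply fires; the same dynamic written down as a belief is inert data until some other dynamic interprets it, which is exactly where the regress starts.

```python
# A minimal sketch of the machine/axiom distinction, in the
# dialogue's toy vocabulary (belief pool, action system, "fuzzle").

belief_pool = {"X is fuzzle"}
action_system = []

# Dynamic 1 as machinery: it just runs. No further rule is needed
# to make it apply; applying is what executing code *is*.
def dynamic_1(beliefs, actions):
    for belief in beliefs:
        subject, _, predicate = belief.partition(" is ")
        if predicate == "fuzzle":
            actions.append(subject)

dynamic_1(belief_pool, action_system)
print(action_system)  # ['X'] -- the rule fired; no meta-rule required

# The same dynamic as an axiom: now it's inert data. Nothing happens
# unless some *other* dynamic (more code) reads and applies it -- and
# if that dynamic were also stored as data, we'd need a third, which
# is exactly the Tortoise's regress.
belief_pool.add('When the belief pool contains "X is fuzzle", '
                'send X to the action system')
# ...the action system is unchanged until real code interprets this.
```

The regress stops precisely where representation hands off to implementation: at some level a rule has to be built into the machinery rather than merely written down for it.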
Cheers, Ari
The question "Is this object a blegg?" may stand in for different queries on different occasions. If it weren't standing in for some query, you'd have no reason to care.
Basically, this is pragmatism in a nutshell -- right?
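As a toy illustration of a category standing in for different queries (my own sketch, borrowing the blue, egg-shaped, vanadium-bearing bleggs from the original blegg/rube story): the label dissolves into whichever underlying question the occasion actually turns on.

```python
# Toy sketch: "Is it a blegg?" is a proxy; what you actually
# care about varies with the occasion.

from dataclasses import dataclass

@dataclass
class Obj:
    color: str
    shape: str
    contains_vanadium: bool

def query(obj: Obj, purpose: str) -> bool:
    # The category question reduces to a different predicate
    # depending on why you're asking.
    if purpose == "mining":          # you want vanadium ore
        return obj.contains_vanadium
    if purpose == "sorting_by_eye":  # you only see surface features
        return obj.color == "blue" and obj.shape == "egg"
    raise ValueError(f"unknown purpose: {purpose}")

thing = Obj(color="blue", shape="egg", contains_vanadium=False)
print(query(thing, "sorting_by_eye"))  # True  -- looks like a blegg
print(query(thing, "mining"))          # False -- but useless as ore
```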
Cheers, Ari
A non-universal Turing machine can't simulate a universal Turing machine. (If it could, it would be universal after all -- a contradiction.) In other words, there are computers that can self-program and those that can't, and no amount of programming can change the latter into the former.
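For a concrete (and very loose) picture of the asymmetry, here's a Python sketch of a fixed interpreter in the spirit of a universal machine -- its behavior is reprogrammed entirely by the program it's fed as data -- next to a fixed-function device that computes one thing no matter what. The instruction set is an invented toy, not a real Turing machine encoding.

```python
# A toy "universal" device: one fixed interpreter whose behavior
# is selected entirely by its input program. (Illustrative only.)

def run(program, tape):
    """Interpret a program given as data: (op, arg) pairs over a tape."""
    pc = 0
    while pc < len(program):
        op, arg = program[pc]
        if op == "inc":
            tape[arg] += 1
        elif op == "dec":
            tape[arg] -= 1
        elif op == "jz":          # jump to arg if tape[0] is zero
            if tape[0] == 0:
                pc = arg
                continue
        pc += 1
    return tape

# Same hardware, two different "machines", chosen by data:
print(run([("inc", 0), ("inc", 0)], [0]))     # [2]
print(run([("dec", 0), ("inc", 1)], [5, 0]))  # [4, 1]

# A non-universal device is one whose behavior is fixed: it computes,
# say, x + 2 and nothing else. No input can make it interpret program
# descriptions, because interpreting isn't in its repertoire.
def add_two(x):
    return x + 2
```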
Cheers, Ari