Thanks for the elaboration; this is a very interesting point that I wasn't aware of. But it does seem to rely on the function having the same domain as its range, which presumably is one of the assumptions built into the niceness condition. It is not clear to me, although perhaps I'm just not thinking it through, that "future movements of quarks" is the same thing as "symbols to be interpreted as future movements of quarks".
You could think of it this way: x is the GLUT output, f(x) is the subject's response, and g(f(x)) is the GLUT's interpretation of the subject's response. So f maps from GLUT output to subject response, and g maps from subject response back to GLUT output. Neither f nor g can have a fixed point, because neither has the same domain and range. The composition g∘f, however, maps from GLUT output to GLUT output, so it does have the same domain and range. I was just calling the composite f before, but this way it might be less confusing.
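To make the type structure concrete, here's a minimal Python sketch; the string types and the toy bodies of f and g are purely illustrative stand-ins of mine, not a model of anything quark-level:

```python
# Toy illustration of the f/g type structure. The string types and
# example behaviors are stand-ins, not part of the actual setup.

GlutOutput = str       # a prediction, in the GLUT's high-level language
SubjectResponse = str  # what the subject actually does

def f(x: GlutOutput) -> SubjectResponse:
    """Subject's response upon being shown the prediction x (toy stand-in)."""
    return f"does something after reading the prediction {x!r}"

def g(y: SubjectResponse) -> GlutOutput:
    """GLUT's interpretation of the response y (toy stand-in)."""
    return f"prediction describing {y!r}"

def g_after_f(x: GlutOutput) -> GlutOutput:
    """g∘f: GLUT output to GLUT output, so a fixed point is well-typed."""
    return g(f(x))

def is_self_consistent(x: GlutOutput) -> bool:
    """A self-consistent prediction is a fixed point of g∘f."""
    return g_after_f(x) == x
```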
Suppose I have an exact simulation of a human. Feeling ambitious, I decide to print out a GLUT of the action this human will take in every circumstance; while the simulation of course works at the level of quarks, I have a different program that takes lists of quark movements and translates them into a suitably high-level language, such as "Confronted with the evidence that his wife is also his mother, the subject will blind himself and abdicate".
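As a sketch of the printing setup, where simulate_quarks and translate are hypothetical placeholders of mine for the quark-level simulator and the translation program:

```python
# Hypothetical sketch of printing the GLUT: run the quark-level
# simulation for each circumstance, then translate the resulting
# quark movements into a high-level description of the action taken.

def simulate_quarks(circumstance: str) -> list:
    """Placeholder for the exact quark-level simulation."""
    ...

def translate(quark_movements: list) -> str:
    """Placeholder for the quark-movements-to-high-level translator,
    producing entries like 'the subject will blind himself and abdicate'."""
    ...

def print_glut(circumstances: list[str]) -> dict[str, str]:
    """Tabulate the subject's high-level action in every circumstance."""
    return {c: translate(simulate_quarks(c)) for c in circumstances}
```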
Now, one possible situation is "The subject is confronted with the evidence that his wife is also his mother, and additionally with the fact that this GLUT predicts he will do X". Is it clear that an accurate X exists? In high-level language, I would say that, whatever the prediction is, the subject may choose to do something different. More formally, we can notice that the simulation is now self-referential: part of the output is fed back in as input, and therefore affects the output. It is not obvious to me that a self-consistent solution necessarily exists.
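One way to see why a self-consistent X need not exist: if the subject is even mildly contrarian, the composite map g∘f has no fixed point at all. A toy example, with two arbitrary stand-in predictions:

```python
# Toy demonstration that a self-consistent prediction can fail to exist.
# If the subject always does the opposite of whatever the GLUT predicts,
# then g∘f swaps the two possible predictions and has no fixed point.

PREDICTIONS = ["the subject abdicates", "the subject does not abdicate"]

def defiant_g_after_f(x: str) -> str:
    """g∘f for a contrarian subject: the GLUT-level description of the
    response is always the prediction the subject was NOT shown."""
    return PREDICTIONS[1] if x == PREDICTIONS[0] else PREDICTIONS[0]

# No x satisfies defiant_g_after_f(x) == x:
assert all(defiant_g_after_f(x) != x for x in PREDICTIONS)
```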
It seems to me that this is somehow reminiscent of the Halting Problem, and can perhaps be reduced to it. That is, it may be possible to show that an algorithm that can produce such an X for arbitrary Turing machines would also be a Halting Oracle. If so, this seems to say something interesting about limitations on what a simulation can do, but I'm not sure exactly what.
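For what it's worth, here is one shape such a reduction might take, sketched under the assumption of a hypothetical procedure exists_consistent_prediction that decides whether a fixed point exists for an arbitrary (total, computable) subject. Both stub functions below are my own; halts_within is ordinary bounded simulation and genuinely computable, while exists_consistent_prediction is the oracle in question:

```python
# Sketch of a possible reduction. Build a subject that complies with
# the prediction x iff machine M halts on input w within len(x) steps,
# and defies it otherwise. Then a self-consistent prediction exists
# iff M halts on w, so the hypothetical oracle would decide halting.

def halts_within(M, w, steps: int) -> bool:
    """Run M on w for at most `steps` steps; report whether it halted.
    Bounded simulation, hence computable; body omitted."""
    raise NotImplementedError

def exists_consistent_prediction(subject) -> bool:
    """Hypothetical oracle: does some x with subject(x) == x exist?"""
    raise NotImplementedError

def make_subject(M, w):
    """Total computable 'subject' built from M and w.

    If M halts at step n, any prediction x with len(x) >= n is a fixed
    point; if M never halts, subject(x) != x for every x.
    """
    def subject(x: str) -> str:
        if halts_within(M, w, steps=len(x)):
            return x          # comply: x is self-consistent
        return x + "!"        # defy: x is not self-consistent
    return subject

def decide_halting(M, w) -> bool:
    """With the hypothetical oracle, halting would be decidable."""
    return exists_consistent_prediction(make_subject(M, w))
```

If a construction along these lines goes through, deciding whether a self-consistent X exists for an arbitrary subject is at least as hard as the Halting Problem, which would fit the intuition above; I haven't checked the details carefully.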