Suppose I have an exact simulation of a human. Feeling ambitious, I decide to print out a GLUT (giant lookup table) of the action this human will take in every circumstance; while the simulation of course works at the level of quarks, I have a different program that takes lists of quark movements and translates them into a suitably high-level language, such as "Confronted with the evidence that his wife is also his mother, the subject will blind himself and abdicate".
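As a concrete picture of this setup, here is a minimal sketch; the names (`simulate`, `translate`, `build_glut`) are purely illustrative and not part of the thought experiment itself.

```python
def build_glut(simulate, translate, situations):
    """Toy picture of the construction described above.

    `simulate` runs the quark-level simulation of the subject in a given
    situation and returns the resulting trace of quark movements;
    `translate` turns that trace into a high-level action description.
    The GLUT is then just a table from situations to predicted actions.
    """
    return {situation: translate(simulate(situation))
            for situation in situations}
```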
Now, one possible situation is "The subject is confronted with the evidence that his wife is also his mother, and additionally with the fact that this GLUT predicts he will do X". Is it clear that an accurate X exists? In high-level language, I would say that, whatever the prediction is, the subject may choose to do something different. More formally, we can note that the simulation is now self-referential: part of the output is fed back in as input to the calculation, and therefore affects the result. It is not obvious to me that a self-consistent solution necessarily exists.
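To see why a self-consistent X need not exist, here is a toy sketch; the contrarian subject, the function names, and the two candidate actions are invented here for illustration, not taken from the setup above.

```python
# Minimal sketch of the self-referential case; the names
# (glut_predict, contrarian_subject) and the actions are invented here.

def contrarian_subject(situation, predicted_action):
    """A subject who reads the prediction and then does something else."""
    return "abdicate" if predicted_action != "abdicate" else "blind himself"

def glut_predict(situation, possible_actions=("abdicate", "blind himself")):
    """Look for a self-consistent entry X with X == contrarian_subject(situation, X)."""
    for candidate in possible_actions:
        if contrarian_subject(situation, candidate) == candidate:
            return candidate
    return None  # no accurate prediction exists for this subject

print(glut_predict("shown the evidence, plus this GLUT's prediction"))
# -> None: whatever the table says, the subject does the other thing.
```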
It seems to me that this is reminiscent of the Halting Problem, and can perhaps be reduced to it. That is, it may be possible to show that an algorithm that can produce such an X for arbitrary Turing machines would also be a Halting Oracle. If so, this seems to say something interesting about limitations on what a simulation can do, but I'm not sure exactly what.
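Here is one hedged way the reduction might go, assuming a total, accurate predictor `predict(source, arg)` (a name introduced only for illustration); it is just the standard diagonal argument from the undecidability of the halting problem applied to that predictor, not something established above.

```python
NEVER = object()  # sentinel meaning "this run never halts"

def predict(source, arg):
    """Assumed oracle: returns what the program described by `source` does
    on `arg`, or NEVER if that run loops forever. No computable
    implementation is claimed; this stub only fixes the interface."""
    raise NotImplementedError("hypothetical oracle")

def halts(source, arg):
    # If predict were an ordinary algorithm, this would decide halting.
    return predict(source, arg) is not NEVER

SPITE = """
def spite(source):
    # Consult the predictor about this very run and do the opposite:
    if halts(source, source):
        while True:      # predicted to halt -> loop forever
            pass
    return "halted"      # predicted to loop -> halt at once
"""

# Feeding the program described by SPITE its own source text leaves the
# predictor with no accurate answer, just as in Turing's halting proof.
# So an algorithm producing an accurate X for arbitrary machines, with the
# prediction included in the input, would have to decide halting.
```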
Is the fact that the simulated subject is a human important for the proposed thought experiment, besides that it activates all sorts of wrong intuitions about free will and makes the lookup table unimaginably huge or even infinite?
It is not; why should it be? By assumption the subject does whatever the GLUT predicts, but it doesn't follow that the GLUT includes the proposition "if the subject is confronted with the information that the GLUT predicts that he will do X, he will do X".
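For instance (the entries below are invented purely to illustrate the point): a situation in which the subject is shown a prediction is just another key in the table, and its recorded action is whatever the simulation says happens there, not automatically the prediction that was shown.

```python
# Toy GLUT fragment; the situations and actions are made up for illustration.
glut = {
    "confronted with the evidence": "blind himself and abdicate",
    # A shown prediction is simply part of the situation description; nothing
    # forces the recorded action to repeat the prediction that was shown.
    "confronted with the evidence and shown a prediction that he will abdicate": "laugh",
}
```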
I don't think so; any Turing machine will do.