In decision theory, we often talk about programs that know their own source code. I'm very confused about how that theory applies to people, or even to computer programs that don't happen to know their own source code. I've managed to distill my confusion into three short questions:
1) Am I uncertain about my own source code?
2) If yes, what kind of uncertainty is that? Logical, indexical, or something else?
3) What is the mathematically correct way for me to handle such uncertainty?
Don't try to answer them all at once! I'll be glad to see even a 10% answer to one question.
That's a hypothesis, unproven and untested -- especially if you claim equivalence between the mind and the simulation, which you have to do in order to say that the simulation delivers the "source code" of the mind.
A mathematical description of my mind would be beyond the capabilities of my mind to understand (and so to know). Besides, my mind changes constantly, both in its patterns of neural impulses and, more importantly, in the underlying "hardware". Is neuron growth, or, say, serotonin release, part of my "source code"?
In the broadest sense, the hypothesis is somewhat trivial. For instance, if we are communicating with an agent over a channel with n bits of information capacity, then there are only 2^n possible exchanges. For any fixed n, it is possible to create a simulation that simply looks up the "right" response to each of them, making it indistinguishable from a human over that channel. The hypothesis becomes much harder to establish if n is not required to be fixed.
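To make the fixed-n point concrete, here is a minimal sketch (my own illustration, not anything from the original argument) of the lookup-table construction: enumerate all 2^n possible messages on the channel and pair each with a precomputed "human" reply. The names human_reply and simulation, and the toy value of N_BITS, are hypothetical choices for the example.

```python
# Sketch of the fixed-n lookup-table argument: over a channel limited to
# N_BITS bits, every possible message can in principle be enumerated and
# paired with a precomputed "human" response.

N_BITS = 3  # toy channel capacity; the argument only claims "for any fixed n"

def human_reply(message_bits: str) -> str:
    """Stand-in for whatever a real human would say to this message."""
    return f"reply-to-{message_bits}"

# Precompute the table over all 2**N_BITS possible messages.
lookup_table = {
    format(i, f"0{N_BITS}b"): human_reply(format(i, f"0{N_BITS}b"))
    for i in range(2 ** N_BITS)
}

def simulation(message_bits: str) -> str:
    """The 'simulation': no understanding at all, just a table lookup."""
    return lookup_table[message_bits]

# Over this fixed channel, the table is indistinguishable from the human it copies.
assert all(simulation(m) == human_reply(m) for m in lookup_table)
```

The catch, of course, is that the table has 2^n entries, so the construction is only a proof of principle for fixed n; it says nothing about a simulation that must handle exchanges of unbounded length.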