In decision theory, we often talk about programs that know their own source code. I'm very confused about how that theory applies to people, or even to computer programs that don't happen to know their own source code. I've managed to distill my confusion into three short questions:
1) Am I uncertain about my own source code?
2) If yes, what kind of uncertainty is that? Logical, indexical, or something else?
3) What is the mathematically correct way for me to handle such uncertainty?
Don't try to answer them all at once! I'll be glad to see even a 10% answer to one question.
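For concreteness, "a program that knows its own source code" is not exotic on its own: Kleene's recursion theorem guarantees that any program can be written so that it holds its own text as data (a quine). A minimal Python version:

```python
# The two assignment lines below reproduce their own text, so the
# program holds its "source code" as an ordinary string.
src = 'src = %r\nme = src %% src'
me = src % src
# `me` is now exactly the text of the two assignment lines above.
# A decision-theoretic agent would go on to reason about this string
# (e.g., prove things about its own behavior) rather than print it.
print(me)
```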
It does not apply to people.
The concept of "source code" is of doubtful use when applied to wetware, anyway.
In principle, it is possible to simulate a brain on a computer, and I think it's meaningful to say that if you could do this, you would know your "source code". More generally, you can think of something's source code as a (computable) mathematical description of that thing.
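To make that concrete (a toy of my own, not a claim about real emulations): if a system's next state is a computable function of its current state, that step function is the system's "source code" in the relevant sense.

```python
# Illustrative only: `State` and `step` are stand-ins. A brain emulation
# would have the same shape at enormously higher resolution.
State = tuple  # a coarse-grained snapshot of the system

def step(s: State) -> State:
    # Toy dynamics: position advances by velocity, velocity is constant.
    x, v = s
    return (x + v, v)

def run(s: State, n: int) -> State:
    for _ in range(n):
        s = step(s)
    return s

print(run((0, 1), 10))  # -> (10, 1)
```

Knowing this kind of `step` function for yourself would be knowing your source code; being uncertain about it is the situation the post asks about.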
Also, the point of the post is to generalize the theory to this domain. Humans don't know their source code, but they do have models of other people, and use these to make complicated decisions. What would a formalization of this kind of process look like?
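As a first stab at what such a formalization might look like (purely a sketch with made-up numbers, not a proposal from the post): an agent that lacks the other player's code, but holds a probabilistic model of them and best-responds to that model. Here in a one-shot Prisoner's Dilemma:

```python
# Payoff table for the row player: (my_action, their_action) -> payoff.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def best_response(model):
    """Pick the action maximizing expected payoff under `model`, a dict
    mapping my candidate action to P(opponent plays 'C'); this encodes a
    belief about how the opponent reacts, not their actual code."""
    def expected(me):
        p = model[me]
        return p * PAYOFF[(me, "C")] + (1 - p) * PAYOFF[(me, "D")]
    return max(("C", "D"), key=expected)

# Hypothetical belief: the opponent tends to mirror whatever I choose
# (the kind of model one human might have of a similar human).
mirror_model = {"C": 0.8, "D": 0.2}
print(best_response(mirror_model))  # -> "C" (2.4 vs. 1.8 expected)
```

The open problem the post gestures at is where models like `mirror_model` come from, and what makes updating and acting on them "mathematically correct".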