In decision theory, we often talk about programs that know their own source code. I'm very confused about how that theory applies to people, or even to computer programs that don't happen to know their own source code. I've managed to distill my confusion into three short questions:
1) Am I uncertain about my own source code?
2) If yes, what kind of uncertainty is that? Logical, indexical, or something else?
3) What is the mathematically correct way for me to handle such uncertainty?
Don't try to answer them all at once! I'll be glad to see even a 10% answer to one question.
1) If you were certain about your source code, i.e. if you knew your source code, uploading your mind would be immediately feasible, subject only to resource constraints. Since you do not know how you would go about immediately uploading your mind, you aren't certain about your source code. Because the answer is binary (tertium non datur), it follows that you're uncertain about your own source code. (No, I don't count vague constraints such as "I know it's Turing computable" as "certainty about my own source code", just as you wouldn't say you know a program's source code merely because you know it's implemented on a JVM.)
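For contrast, here is the simplest case of a program that does "know its own source code" in the exact sense at issue: a quine, which reproduces its own text character for character rather than merely satisfying a vague constraint like "it is Turing computable". (This is my illustration, not something from the question; I'm treating exact self-reproduction as a stand-in for exact self-knowledge.)

```python
# A classic Python quine: its only output is its own source code.
# The string s is a template containing a placeholder (%r) for itself;
# formatting s with s fills that placeholder with s's own repr.
s = 's = %r\noutput = s %% s\nprint(output)'
output = s % s
print(output)
```

An agent in this position faces no uncertainty of the kind asked about: every character of its source is recoverable from inside the program.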
2) The uncertainty falls into several categories, because there are many ways to partition "uncertainty". For example, it is mostly epistemic (lack of knowledge of the exact parameters) rather than aleatoric (irreducible randomness). Under a different partitioning, it is structural: we don't know how to correctly model your source code in the first place. Many more attributes of the relevant uncertainty could be identified in the same way.
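The epistemic/aleatoric split can be made concrete with a toy example (my own, not from the question): a coin with an unknown bias. Observing flips shrinks the epistemic part of our uncertainty; the aleatoric part survives no matter how much data we collect.

```python
import random

random.seed(0)
true_bias = 0.7  # hidden parameter; the observer's ignorance of it is epistemic

# Observe many flips of the biased coin.
heads = 0
n = 10_000
for _ in range(n):
    if random.random() < true_bias:
        heads += 1

# Epistemic uncertainty shrinks: the estimate converges on the true bias...
estimate = heads / n

# ...but aleatoric uncertainty remains: even an observer who knows the
# bias exactly cannot predict the next individual flip.
print(estimate)
```

Uncertainty about one's own source code is of the first kind: there is a definite fact of the matter that we simply haven't learned.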
3) I don't understand the question. Handle to what end?