In decision theory, we often talk about programs that know their own source code. I'm very confused about how that theory applies to people, or even to computer programs that don't happen to know their own source code. I've managed to distill my confusion into three short questions:
1) Am I uncertain about my own source code?
2) If yes, what kind of uncertainty is that? Logical, indexical, or something else?
3) What is the mathematically correct way for me to handle such uncertainty?
Don't try to answer them all at once! I'll be glad to see even a 10% answer to one question.
Interesting!
I would say that you (as a real human in the present day) are uncertain about your source code in the ordinary sense of the word "uncertain". Once we have brain scans and ems and such, if you get scanned and have access to the scan, your uncertainty becomes something more like logical uncertainty: you have access to the information and can answer some questions with it, but you don't "know" everything that is implied by it.
Indexical uncertainty can apply even to a perfect Bayesian reasoner (right? granted that those can't exist in the real world), whereas your uncertainty about your own source code seems to come from computational limits rather than from not knowing which observer you are. So it doesn't feel like it's indexical.
Does it make sense to talk about a "computationally-limited but otherwise perfect Bayesian reasoner"? Such a reasoner can exhibit logical uncertainty, but I don't think it exhibits source code uncertainty in the sense that you do, namely that you have trouble predicting your own future actions or running yourself in simulation.
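To make that distinction concrete, here's a toy Python sketch (my own illustration, not anything from the decision-theory literature): the agent can read its own source via `inspect`, but the only way it can turn that source into a prediction of its own action is to re-run itself, which a compute budget can rule out. The function names, the cost model, and the budget are all hypothetical.

```python
# Toy sketch: an agent with literal access to its own source code that is
# still "logically uncertain" about its own output, because simulating
# itself would exceed its compute budget.
import inspect


def agent_decision(n: int) -> str:
    """A deterministic but expensive decision procedure."""
    acc = 0
    for i in range(n):
        acc = (acc * 31 + i) % 1_000_003
    return "cooperate" if acc % 2 == 0 else "defect"


def predict_own_action(n: int, step_budget: int):
    """Predict agent_decision(n) by self-simulation within step_budget steps.

    Returns the predicted action, or None when the budget is too small:
    the agent still *has* its source (see `source` below), it just can't
    afford to run it.
    """
    source = inspect.getsource(agent_decision)   # full source-code access
    assert "agent_decision" in source            # we really do have the text
    if n > step_budget:                          # crude, hypothetical cost model
        return None                              # source known, output unknown
    return agent_decision(n)


if __name__ == "__main__":
    print(predict_own_action(10_000, step_budget=10**6))  # prediction succeeds
    print(predict_own_action(10**9, step_budget=10**6))   # None: logically uncertain
```

The point is just that "having the source" and "knowing what it outputs" can come apart once computation is scarce, which is how I'd read your scan example too.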