In decision theory, we often talk about programs that know their own source code. I'm very confused about how that theory applies to people, or even to computer programs that don't happen to know their own source code. I've managed to distill my confusion into three short questions:
1) Am I uncertain about my own source code?
2) If yes, what kind of uncertainty is that? Logical, indexical, or something else?
3) What is the mathematically correct way for me to handle such uncertainty?
Don't try to answer them all at once! I'll be glad to see even a 10% answer to one question.
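(For concreteness, here is a minimal sketch of what I mean by "a program that knows its own source code." Python is just my choice of illustration; nothing in the questions depends on it. It shows two routes to self-knowledge: reading the file the program was loaded from, and the quine construction, which needs no filesystem at all.)

```python
# Two ways a Python program can "know its own source code".

# Way 1: read the file it was loaded from (relies on the filesystem,
# and on being run as a script from disk).
with open(__file__) as f:
    source_via_file = f.read()

# Way 2: the quine construction, which needs no filesystem.  The
# template string, formatted with its own repr, reproduces exactly
# the two code lines that use it.
template = 'template = %r\nsource_via_quine = template %% template'
source_via_quine = template % template

print(source_via_quine)
```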
It's not known that a software/hardware distinction is even applicable to brains.
Moreover, if you simulated a brain, you might be simulating in software what was originally done in hardware.
You could think of software as any part of a system that is programmable; i.e., even a physical plugboard can be thought of as software, even though cables aren't the medium we typically store programs in.
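A toy sketch of that view (my own illustration; the socket names are made up): the plugboard's entire behavior is fixed by its wiring table, and that table is the "software" whether it's stored as cables or as bytes.

```python
# A plugboard's "program" is just its wiring: a mapping from input
# sockets to output sockets.  The same information can live in
# physical cables or in a dict; the storage medium doesn't matter.
plugboard = {"A": "D", "B": "C", "C": "B", "D": "A"}

def route(socket: str) -> str:
    """Follow the plugboard wiring from one socket to another."""
    return plugboard[socket]

print(route("A"))  # -> D; reprogramming = re-plugging the cables
```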