1) If you were certain about your source code, i.e. if you knew your source code, uploading your mind would be immediately feasible, subject only to resource constraints. Since you do not know how you would go about immediately uploading your mind, you aren't certain about your source code. Because the answer is binary (tertium non datur), it follows that you're uncertain about your own source code. (No, I don't count vague constraints such as "I know it's Turing computable" as "certainty about my own source code", just as you wouldn't say you know a program's source code merely because you know it runs on a JVM.)
2) The uncertainty falls into several categories, because there are many ways to partition "uncertainty". For example, the uncertainty is mostly epistemic (lack of knowledge of the exact parameters) rather than aleatoric. Using a different partitioning, the uncertainty is structural (we don't know how to correctly model your source code). There are many more true attributes of the relevant uncertainty.
3) I don't understand the question. Handle to what end?
3) It seems unlikely that subjective Bayesian probability would work for this kind of uncertainty. In particular, I would expect the correct theory to violate Cox's assumption of consistency. To illustrate: we can normally calculate P(A,B|X) as either P(A|X)P(B|A,X) or P(B|X)P(A|B,X). But what if A is the proposition that we calculate P(A,B|X) using P(A|X)P(B|A,X)? Then we get different answers depending on how we do the calculation.
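To spell out the consistency requirement I mean (this is just the product rule applied in either order, nothing beyond what's stated above):

```latex
P(A, B \mid X) \;=\; P(A \mid X)\,P(B \mid A, X) \;=\; P(B \mid X)\,P(A \mid B, X)
```

If A is itself a proposition about which of these two factorizations gets used, there is no longer any guarantee that the two sides agree.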
1) Yes, presumably; your brain is a vast store of (evolved) (wetware) (non-serial) (ad-hoc) (etc.) algorithms that has so far been difficult for neuroscientists to document.
2) Just plain empirical? There's nothing stopping you from learning your own source code, in principle; it's just that we don't, AFAIK, have scanners that can view "many" nearby neurons in real time, individually (as opposed to an fMRI).
3) Well, that's much more difficult. Not sure why Mark_Friedenbach's comment was downvoted though, except maybe for snarkiness; the heuristics and biases literature is a small step towards understanding some of the algorithms you are (and correcting for their systematic errors in a principled way).
I think this is a distinct type of uncertainty which I call "introspective". It is closely related to the expectation value over T in the definition of the updateless intelligence metric.
2) The most obvious obstacle for a human is that I don't have the power to precisely observe and remember everything that I do, and I absolutely don't have the ability to reason about which specific source code could cause me to do exactly those things. Even the information that is in theory there, I can't process. I guess this is logical uncertainty. It's like being unable to calculate the millionth digit of pi, especially if I couldn't even count to ten correctly.
But even if I had the super-ability to correctly determine which kinds of source code could produce my behavior and which couldn't, there would still be multiple solutions. I could limit the set of possible source codes to a subset, but I couldn't limit it to exactly one source code. Not even to a group of behaviorally identical source codes, because there are always realistic situations that I have never experienced, and some of the remaining source codes could do different things there. So within the remaining set, this seems like indexical uncertainty. I could be any of them, meaning that different copies of "me" in different possible worlds could have different algorithms within this set, and yet the same experiences so far.
There is a problem with the second part -- if I have information about the maximum possible size of my source code, it means there are only finitely many options, so I could hypothetically reduce the set, gradually, to exactly one option, which means removing the indexical uncertainty. On the other hand, this would work for "normal" scenarios, but not for "brain in a jar" scenarios: if I am in a Matrix, my assumption that my human source code is limited by the size of my body could be wrong.
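(A quick sanity check on "only finitely many options": if my source code fits into at most n bits, the number of candidate programs is at most

```latex
\sum_{k=0}^{n} 2^{k} = 2^{n+1} - 1
```

which is finite, though astronomically large for anything brain-sized.)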
Interesting!
I would say that you (as a real human in the present time) are uncertain about your source code in the traditional sense of the word "uncertain". Once we have brain scans and ems and such, if you get scanned and have access to the scan, you're probably uncertain in something more like a logical uncertainty sense: you have access, and the ability to answer some questions, but you don't "know" everything that is implied by that knowledge.
Indexical uncertainty can apply to a perfect Bayesian reasoner. (Right? I mean, given that those can't exist in the real world,...) So it doesn't feel like it's indexical.
Does it make sense to talk about a "computationally-limited but otherwise perfect Bayesian reasoner"? Because that reasoner can exhibit logical uncertainty, but I don't think it exhibits source code uncertainty in the sense that you do, namely that you have trouble predicting your own future actions or running yourself in simulation.
I'm very confused about how that theory applies to people
It does not.
The concept of "source code" is of doubtful use when applied to wetware, anyway.
In principle, it is possible to simulate a brain on a computer, and I think it's meaningful to say that if you could do this, you would know your "source code". In general, you can think of something's source code as a (computable) mathematical description of that thing.
Also, the point of the post is to generalize the theory to this domain. Humans don't know their source code, but they do have models of other people, and use these to make complicated decisions. What would a formalization of this kind of process look like?
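As a toy illustration of what such a formalization might start from (entirely my own sketch, with made-up model names, not anything from the decision theory literature): keep a few candidate programs for the other person, weight them by a prior, and choose the action that does best in expectation against that mixture.

```python
import random

# Hypothetical candidate models of the other player in a coordination game.
# None of these is the other player's true source code; they are just guesses.
CANDIDATE_MODELS = [
    ("always_heads", lambda: "heads"),
    ("always_tails", lambda: "tails"),
    ("coin_flipper", lambda: random.choice(["heads", "tails"])),
]

def best_response(prior):
    """Pick the action with the highest estimated probability of coordinating,
    given a prior over the candidate models."""
    expected = {"heads": 0.0, "tails": 0.0}
    for (name, model), p in zip(CANDIDATE_MODELS, prior):
        samples = [model() for _ in range(1000)]  # sample the model's behavior
        for action in expected:
            expected[action] += p * samples.count(action) / len(samples)
    return max(expected, key=expected.get)

print(best_response([0.5, 0.2, 0.3]))  # prints "heads" under this prior
```

The hard, unformalized part is where the candidate set and the prior come from, which is roughly where the question in the post lives.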
It's not known that a software/hardware distinction is even applicable to brains.
Moreover, if you simulated a brain, you might be simulating in software what was originally done in hardware.
You could think of software as any element that is programmable - i.e., even a physical plugboard can be thought of as software, even though it's not typically the format we store it on.
You could think of a plugboard as hardware, too; hence there is no longer a clean hardware/software distinction.
What I'm getting at is that it doesn't matter whether the software is expressed in electron arrangements, plugs, or neurons, as long as it's computable. I don't see any trouble here distinguishing between connectome and neuron.
What I am saying is that if you can't separate software from hardware, you aren't dealing with software in a reifiable sense.
Hardware is never computable, in the sense that simulated planes don't fly.
In principle, it is possible to simulate a brain on a computer
That's a hypothesis, unproven and untested. Especially if you claim the equivalence between the mind and the simulation -- which you have to do in order to say that the simulation delivers the "source code" of the mind.
you can think of something's source code as a (computable) mathematical description of that thing.
A mathematical description of my mind would be beyond the capabilities of my mind to understand (and so, know). Besides, my mind changes constantly both in terms of patterns of neural impulses and, more importantly, in terms of the underlying "hardware". Is neuron growth or, say, serotonin release part of my "source code"?
That's a hypothesis, unproven and untested.
In the broadest sense, the hypothesis is somewhat trivial. For instance, if we are communicating with an agent over a channel with n bits of information capacity, then there are 2^n possible exchanges. Given any n, it is possible to create a simulation that picks the "right" exchange, such that it is indistinguishable from a human. Where the hypothesis becomes less obviously true is when the requirement is not for a fixed n.
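A toy version of the bounded-channel point, in case it helps (the channel size and the "reference agent" here are made up for illustration; the claim is only about possibility in principle):

```python
# Over a channel of n bits there are only 2**n possible transcripts, so a
# (wildly impractical) lookup table keyed by the conversation so far can
# reproduce any agent's behavior on that channel.
n = 3  # tiny channel, for illustration only

def all_transcripts(n_bits):
    """Every possible n_bits-long exchange, as a bit string."""
    return [format(i, f"0{n_bits}b") for i in range(2 ** n_bits)]

def reference_agent(history):
    """A stand-in for the agent being imitated: replies with the parity of the
    bits it has seen so far."""
    return str(history.count("1") % 2)

# Build the table by recording the reference agent's reply to every possible
# history shorter than n bits.
table = {}
for transcript in all_transcripts(n):
    for k in range(n):
        history = transcript[:k]
        table[history] = reference_agent(history)

# On this channel, the table is indistinguishable from the agent it imitates.
assert all(table[h] == reference_agent(h) for h in table)
print(f"{len(table)} table entries cover every possible {n}-bit exchange")
```

The catch is exactly the one above: the construction only works for a fixed n.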
In the broadest sense, the hypothesis is somewhat trivial.
No, I don't think so.
For instance, if we are communicating with an agent over a channel with n bits of information capacity, then there are 2^n possible exchanges. Given any n, it is possible to create a simulation that picks the "right" exchange, such that it is indistinguishable from a human.
Are you making Searle's Chinese Room argument?
In any case, even if we accept the purely functional approach, it doesn't seem obvious to me that you must be able to create a simulation which picks the "right" answer in the future. You don't get to run 2^n instances and say "Pick whichever one satisfies your criteria".
Well, I did say "In the broadest sense", so yes, that does imply a purely functional approach.
You don't get to run 2^n instances and say "Pick whichever one satisfies your criteria".
The claim was that it is possible in principle. And yes, it is possible, in principle, to run 2^n instances and pick the one that satisfies the criteria.
And yes, it is possible, in principle, to run 2^n instances and pick the one that satisfies the criteria.
That's not simulating intelligence. That's just a crude exhaustive search.
And I am not sure you have enough energy in the universe to run 2^n instances, anyway.
However, "Am I uncertain about my own source code?" is a question I'd love to hear Descartes tackle.
I'd love to hear Descartes tackle
Well, there was an unfortunate accident. One evening he was sitting in a bar and the bartender asked him whether he wanted another glass of wine. "I think not," Descartes answered, and poof! he was never seen again...
1) You can know about your DNA and your upbringing. Suppose that you are in the Truman Show and a clone of you will be put through the same script. Even if we don't know about the specifics of how it compiles, I think we are pretty sure the results would be similar, to the degree that we can get the DNA / upbringing to match exactly. In this sense, no, you are not unsure.
2) If you can reliably answer hypotheticals about your actions then you do know how you function. However, unreasonable levels of honesty would be required. In this sense you are sure.
3) You probably are not a quine, in that being one would require your verbal output to contain a representation of you (I am a little uncertain whether sexual reproduction would count as being a quine). If you are highly reflective you can be aware of a large part of your thoughts (i.e. you can meditate). However, there must be a top-level thought that either is not reflected upon or is self-representing, for otherwise your finite head would contain an infinite amount of information; and since information requires energy to be encoded, and you know your head is only finitely massive, you don't have that.
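For reference, here is what a quine looks like in the program case: a finite piece of text whose output contains a complete representation of itself. A minimal Python example, just to pin the term down:

```python
# A minimal quine: running this two-line program prints its own source code.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

The self-representation stays finite because the program encodes itself once and reuses that encoding; that is the "self-representing" escape route mentioned above, as opposed to an infinite tower of reflection.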
1) You can know about your DNA and your upbringing.
I actually don't know much about my DNA, about which kinds of atypical mutations I have, or about how those affect my decision-making. There are tons of experiences I had in my childhood that I don't remember and that influenced me.
Suppose that you are in the Truman Show and a clone of you will be put through the same script. Even if we don't know about the specifics of how it compiles, I think we are pretty sure the results would be similar, to the degree that we can get the DNA / upbringing to match exactly.
Given that the brain is a complex system, chaos theory suggests that slight deviations are enough to change outcomes. Having the same script won't be enough.
2) If you can reliably answer hypotheticals about your actions then you do know how you function. However, unreasonable levels of honesty would be required. In this sense you are sure.
Humans often don't act in the way they think they would act.
If you are highly reflective you can be aware of a large part of your thoughts (i.e. you can meditate).
Being aware of your thoughts doesn't mean that you are aware of emotional conditioning. If you feel an aversion towards a woman because you had a very unpleasant experience with another woman who wore the same perfume, that's not something you can identify on the level of thoughts.
It takes a high level of awareness to even know that there is something triggering you.
In decision theory, we often talk about programs that know their own source code. I'm very confused about how that theory applies to people, or even to computer programs that don't happen to know their own source code. I've managed to distill my confusion into three short questions:
1) Am I uncertain about my own source code?
2) If yes, what kind of uncertainty is that? Logical, indexical, or something else?
3) What is the mathematically correct way for me to handle such uncertainty?
Don't try to answer them all at once! I'll be glad to see even a 10% answer to one question.
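For contrast with the human case, here is the trivial, literal sense in which an ordinary program can "know its own source code" (a throwaway Python illustration of mine; it has to be run as a script saved in a file, and in the decision-theory setting the self-knowledge is usually obtained by quining rather than by reading a file):

```python
import sys

# Read the very file this program is running from.
with open(sys.argv[0]) as f:
    my_source = f.read()

print(f"My source code is {len(my_source)} characters long.")
print("It begins:", my_source.splitlines()[0])
```

Nothing remotely analogous is available to a human introspecting on a brain.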