But as long as we're trading hypotheticals: what if minds (or rather, the sorts of minds we have) can only be associated with uncopyable physical substrates?
If that turns out to be the case, I don't think it would much diminish either my intellectual curiosity about how the problems associated with mind copying ought to be solved, or the practical importance of solving such problems (to help prepare for a future where most minds will probably be copyable, even if my own isn't).
Various things that confused me for years, and that I discuss in the essay (Newcomb's problem, Boltzmann brains, the "teleportation paradox," Wigner's friend, the measurement problem, Bostrom's observer-counting problems...), all seemed to beckon me in that direction from different angles.
It seems likely that in the future we'll be able to build minds that are very human-like, but copyable. For example, we could take someone's gene sequence, use it to grow a virtual embryo inside a digital simulation, let the embryo develop into an infant, and then raise it in a virtual environment similar to a biological human child's. I'm assuming that you don't dispute this will be possible (at least in principle), but are saying that such a mind might not have the same kind of subjective experience as we do. Correct?
Now suppose we built such a mind using your genes, and gave it an upbringing and education similar to yours. Wouldn't you then expect it to be puzzled by all the things that you mentioned above, except that it would have to solve those puzzles in some way other than by saying "I can get around these confusions if I'm not copyable"? Doesn't that suggest that there have to be solutions to those puzzles that do not involve "I'm not copyable," and therefore that the existence of the puzzles shouldn't have beckoned you in the direction of thinking that you're uncopyable?
So I decided that, given the immense perplexities associated with copyable minds (which you know as well as anyone), the possibility that uncopyability is essential to our subjective experience was at least worth trying to "steelman" (a term I learned here) to see how far I could get with it.
If you (or somebody else) eventually succeed in showing that uncopyability is essential to our subjective experience, that would mean that by introspecting on the quality of our subjective experience, we could determine whether or not we are copyable, right? Suppose we take a copyable mind (such as the virtual Scott Aaronson clone mentioned above), make another copy of it, and then turn one of the two copies into an uncopyable mind by introducing some freebits into it. Do you think these minds would be able to accurately report whether they are copyable, and if so, by what plausible mechanism?
I'm not interested so much in how they will reason, but in how they should reason.
When you say "we" here, do you literally mean "we" or do you mean "biological humans"? Because I can see how understanding the effect of microscopic noise on the sodium-ion channels might give us insight into whether biological humans are copyable, but it doesn't seem to tell us whether we are biological humans or for example digital simulations (and therefore whether your proposed solution to the philosophical puzzles is of any relevance to us). I thought you were proposing that if your theory is correct then we would eventually be able to determine that by introspection, since you said copyable minds might have no subjective experience or a different kind of subjective experience.
(1) Well, that's the funny thing about "should": if copyable entities have a definite goal (e.g., making as many additional copies as possible, taking over the world...), then we simply need to ask what form of reasoning will best help them achieve that goal. If, on the other hand, the question is "how should a copy reason, so as to accord with its own subjective experience? e.g., all else equal, will it be twice as likely to 'find itself' in a possible world with twice as many copies?" -- then we need some account of the subjective experience of copyable entities before we can even start to answer the question.
(2) Yes, certainly it's possible that we're all living in a digital simulation -- in which case, maybe we're uncopyable from within the simulation, but copyable by someone outside the simulation with "sysadmin access." But in that case, what can I do, except try to reason based on the best theories we can formulate from within the simulation? It's no different than with any "ordinary" scientific question.
(3) Yes, I raised the possibility that copyable minds might have no subjective experience or a different kind of subjective experience, but I certainly don't think we can determine the truth of that possibility by introspection -- or for that matter, even by "extrospection"! :-) The most we could do, maybe, is investigate whether the physical substrate of our minds makes them uncopyable, and therefore whether it's even logically coherent to imagine a distinction between them and copyable minds.