So what would happen if you progressively replaced the neurons of a brain with elements that simply did not provide an anchor for an extended loop? Let's suppose that, instead of having nano-solenoids anchoring a single conscious flux-loop, you just have an extra type of message-passing between the neurochips, which emulates the spooling of flux-topological information. The answer is that you now have a "zombie", an unconscious entity which has been designed in imitation of a conscious being.
This is done one neuron at a time, though, with the person awake and narrating what they feel, so that we can check that everything is going well. Shouldn't some sequence of replacements eventually remove neurons that were previously providing consciously accessible qualia to the remaining biological neurons, the ones still hosting most of the person's consciousness? And shouldn't that produce a noticeable cognitive impairment the person can report, so long as they're still using their biological neurons to control speech (something we'd presumably want to preserve for as long as possible)?
Can't you actually go ahead and say that, if the theory is true, the simple neurons-as-black-boxes replacement procedure should lead to progressive cognitive impairment and probably catatonia, and that if the person keeps saying everything is fine throughout the procedure, then there might be something to the hypothesis that people are made of parts after all? This isn't building a chatbot explicitly designed to mimic high-level human behavior: the neuron replacers know about neurons, nothing more. If our model of what neurons do is sufficiently wrong, then the aggregate of simulated neurons isn't going to go zombie; it's just not going to work, because it copies an original connectome that only makes sense if all the relevant physics are in play.
My basic point was just that, if consciousness is only a property of a specific physical entity (e.g. a long knotted loop of planck-flux), and if your artificial brain doesn't contain any of those (e.g. it is made entirely of short trivial loops of planck-flux), then it won't be conscious, even if it simulates such an entity.
I will address your questions in a moment, but first I want to put this discussion back in context.
Qualia are part of reality, but they are not part of our current physical theory. Therefore, if we are going to talk about them at all...
In June 2012, Robin Hanson wrote a post promoting plastination as superior to cryopreservation as an approach to preserving people for later uploading. His post included a paragraph which said:
This left me with the impression that the chances of the average cryopreserved person today of being later revived aren't great, even when you conditionalize on no existential catastrophe. More recently, I did a systematic read-through of the sequences for the first time (about a month and a half ago), and Eliezer's post You Only Live Twice convinced me to finally sign up for cryonics for three reasons:
I don't find that terribly encouraging. So now I'm back to being pessimistic about current cryopreservation techniques (though I'm still signing up for cryonics, because the cost is low enough even given my current estimate of my chances). But I'd be very curious to know what, say, Nick Bostrom or Anders Sandberg think about the issue. Anyone?
Edit: I'm aware of the estimates of the chances of revival given by LessWrong folks in the census, but I don't know how much of that is people taking things like existential risk into account. There are lots of different ways you could arrive at a ~10% chance of revival overall:
is one way. But:
is a very similar conclusion from very different premises. Gwern has more on this sort of reasoning in Plastination versus cryonics, but I don't know who most of the people he links to are so I'm not sure whether to trust them. He does link to a breakdown of probabilities by Robin, but I don't fully understand the way Robin is breaking the issue down.
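The point about very different premises converging on the same number can be made concrete with some toy arithmetic. The sketch below multiplies out two hypothetical chains of conditional probabilities; all of the individual numbers are invented for illustration and don't come from anyone's actual breakdown:

```python
# Two hypothetical breakdowns of P(revival), showing how very different
# premises can multiply out to a similar ~10% overall estimate.
# All probabilities are made up for illustration.

def p_revival(factors):
    """Multiply a chain of conditional probabilities into one estimate."""
    p = 1.0
    for _description, prob in factors:
        p *= prob
    return p

# Breakdown A: optimistic about preservation, pessimistic about everything after.
breakdown_a = [
    ("preservation captures the relevant information",  0.8),
    ("organization survives until revival is possible", 0.5),
    ("no existential catastrophe in the interim",       0.5),
    ("revival is attempted and succeeds",               0.5),
]

# Breakdown B: pessimistic about preservation, optimistic about the rest.
breakdown_b = [
    ("preservation captures the relevant information",  0.15),
    ("organization survives until revival is possible", 0.9),
    ("no existential catastrophe in the interim",       0.8),
    ("revival is attempted and succeeds",               0.9),
]

print(round(p_revival(breakdown_a), 3))  # 0.1
print(round(p_revival(breakdown_b), 3))  # 0.097
```

Both chains land near 10%, yet they disagree sharply about where the risk lives, which is why the headline census number alone can't tell you how much existential risk people were factoring in.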