What do you think will actually happen, if/when we try to simulate stuff?
I'll tell you what I think won't happen: real feelings, real thoughts, real experiences.
A computational theory of consciousness implies that all conscious experiences are essentially computations, and that the same experience will therefore occur inside anything that performs the same computation, even if the "computer" is a network of toppling dominoes, random pedestrians making marks on walls according to small rulebooks, or any other bizarre thing that implements a state machine.
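The substrate-independence claim can be made concrete with a minimal sketch (mine, not from the original discussion): on the computational view, only the abstract transition table below matters, and any physical system that steps through it — transistors, dominoes, pedestrians with rulebooks — counts as running the "same" machine.

```python
# Transition table for a toy 2-state machine: (state, input bit) -> next state.
# On a computational theory, this abstract table is all that matters; the
# physical medium that realizes the transitions is irrelevant.
TRANSITIONS = {
    ("A", 0): "A",
    ("A", 1): "B",
    ("B", 0): "B",
    ("B", 1): "A",
}

def run(bits, state="A"):
    """Step the machine over a sequence of input bits and return the final state."""
    for b in bits:
        state = TRANSITIONS[(state, b)]
    return state
```

Nothing in the table says what a "state" is physically — which is exactly the feature the computational theory relies on, and the feature the argument below attacks.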
This belief derives entirely from one theory of one example - the computational theory of consciousness in the human brain. That is, we perceive that thinking and experiencing have something to do with brain activity, and one theory of the relationship is that conscious states are states of a virtual machine implemented by the brain.
I suggest that this is just a naive idea, and that future neuroscientific and conceptual progress will take us back to the idea that the substrate of consciousness is substance, not computation; and that the real significance of computation for our understanding of consciousness will be that it is possible to simulate consciousness without creating it.
From a physical perspective, computational states have the vagueness of all functional, user-dependent concepts. What is a chair? Perhaps anything you can sit on. But people have different tastes; whether you can tolerate sitting on a particular object varies; and so on. "Chair" is not an objective category; in regions of design-space far from prototypical examples of a chair, there are edge cases whose status is simply disputed or questionable.
Exactly the same may be said of computational states. The states of a transistor are a prototypical example of a physical realization of binary computational states. But as we consider increasingly messy or unreliable instantiations, it becomes increasingly difficult to just say, yes, that's a 0 or a 1.
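Even in the prototypical case this vagueness is visible: real logic circuits define a forbidden voltage band between the levels read as 0 and 1. A small sketch (hypothetical threshold values, chosen for illustration):

```python
def read_bit(voltage, low=0.8, high=2.0):
    """Map a physical voltage to a logical bit.

    Voltages at or below `low` read as 0, at or above `high` as 1.
    Anything in between is in the forbidden region: whether it "is"
    a 0 or a 1 is a matter of convention, not physical fact.
    (Threshold values here are illustrative, not from any datasheet.)
    """
    if voltage <= low:
        return 0
    if voltage >= high:
        return 1
    return None  # undefined: no principled answer
```

A clean transistor rarely lingers in the forbidden band, which is why the identification of voltages with bits feels unproblematic there; the messier the instantiation, the more of its behavior falls into the `None` region.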
Consider the implications of this for a theory of consciousness which says that the necessary and sufficient condition for the occurrence of a given state of consciousness is the occurrence of a specific "computational state". It means that whether or not a particular consciousness exists is not a yes-or-no thing - it's a matter of convention, or definition, or where you draw the line in state space.
This is untenable in exactly the same way that Copenhagenist complacency about the state of reality in quantum mechanics is untenable. It makes no sense to say that the electron has a position, but not a definite position, and it makes no sense to say that consciousness is a physical thing, but that whether or not it exists in a specific physical situation is objectively indeterminate.
If you are going to say that consciousness depends on the state of the physical universe, there must be a mapping which gives unique and specific answers for all possible physical states. There cannot be edge cases that are intrinsically undetermined, because consciousness is an objective reality, whereas chairness is an imputed property.
The eerie dualism of computational theories of consciousness, whereby the simulated experience mystically hovers over or dwells within the computer mainframe, chain of dominoes, etc. - present in the same way regardless of what the "computer" is made of - might already have served as a clue that something was wrong with this outlook. But the problem in developing this criticism is that we don't really know how to make a nondualistic alternative work.
Suppose that the science of tomorrow came to the conclusion that the only things in the world that can be conscious, are knots of flux in elementary force fields. Bravo, it's a microphysically unambiguous criterion... but it's still going to be property dualism. The physical property "knotted in a certain madly elaborate shape", and the subjective property "having a certain intricate experience", are still not the same thing. The eerie dualism is still there, it's just that it's now limited to lines of flux, and doesn't extend to bitstreams of toppling dominoes, Searlean language rooms, and so on. We would still have the strictly physical picture of the universe, and then streams of consciousness would be an extra thing added to that picture of reality, according to some laws of psychophysical correlation.
However, I think this physical turn, away from the virtual-machine theory of consciousness, at least brings us a little closer to nondualism. It's still hard to imagine, but I see more potential on this path for a future theory of nature in which there is a conscious self that is also a physical entity somewhere on the continuum of physical entities in nature, and in which there's no need to say "physically it's this, but subjectively it's that" - a theory in which we can speak of the self's conscious state and its causal physical interactions in the same unified language. But I do not see how that will ever happen with a purely computational theory, where there will always be a distinction between the purely physical description and the coarse-grained computational description that is in turn associated with conscious experience.
How do you respond to the thought experiment where your neurons (and glial cells and whatever) are replaced one-by-one with tiny workalikes made out of non-biological material? Specifically, would you be able to tell the difference? Would you still be conscious when the replacement process was complete? (Or do you think the thought experiment contains flawed assumptions?)
Feel free to direct me to another comment if you've answered this elsewhere.
In June 2012, Robin Hanson wrote a post promoting plastination as superior to cryopreservation as an approach to preserving people for later uploading. His post included a paragraph which said:
This left me with the impression that the chances that the average person cryopreserved today will later be revived aren't great, even when you conditionalize on no existential catastrophe. More recently, I did a systematic read-through of the Sequences for the first time (about a month and a half ago), and Eliezer's post You Only Live Twice convinced me to finally sign up for cryonics for three reasons:
I don't find that terribly encouraging. So now I'm back to being pessimistic about current cryopreservation techniques (though I'm still signing up for cryonics because the cost is low enough even given my current estimate of my chances). But I'd very much be curious to know if anyone knows what, say, Nick Bostrom or Anders Sandberg think about the issue. Anyone?
Edit: I'm aware of estimates given by LessWrong folks in the census of the chances of revival, but I don't know how much of that is people taking things like existential risk into account. There are lots of different ways you could arrive at a ~10% chance of revival overall:
is one way. But:
is a very similar conclusion from very different premises. Gwern has more on this sort of reasoning in Plastination versus cryonics, but I don't know who most of the people he links to are so I'm not sure whether to trust them. He does link to a breakdown of probabilities by Robin, but I don't fully understand the way Robin is breaking the issue down.
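To illustrate the point about very different premises reaching the same bottom line, here is a toy calculation. All of the numbers below are made up for illustration - they are not from Robin's breakdown, Gwern's post, or the census:

```python
import math

# Hypothetical path A: optimistic about preservation quality,
# pessimistic about civilizational/organizational survival.
path_a = {
    "no existential catastrophe": 0.5,
    "cryonics org survives until revival is possible": 0.5,
    "preservation adequate to capture identity": 0.8,
    "revival tech developed and actually used": 0.5,
}

# Hypothetical path B: pessimistic about preservation quality,
# optimistic about everything else.
path_b = {
    "no existential catastrophe": 0.8,
    "cryonics org survives until revival is possible": 0.9,
    "preservation adequate to capture identity": 0.15,
    "revival tech developed and actually used": 0.9,
}

def overall(probs):
    """Chance of revival, treating the steps as independent conjuncts."""
    return math.prod(probs.values())
```

Both paths multiply out to roughly 10%, which is why a headline census figure by itself doesn't tell you which premises people hold - in particular, how much of the discount comes from existential risk versus preservation quality.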