Related articles: Nonperson predicates, Zombies! Zombies?, & many more.
ETA: This argument appears to be a rehash of the Chinese room, which I had previously thought had nothing to do with consciousness, only intelligence. I nonetheless find this one instructive in that it makes certain things explicit which the Chinese room seems to gloss over.
ETA2: I think I may have made a mistake in this post. That mistake was working out what ontology functionalism would imply, and then dismissing that ontology as too weird to be true. An argument from incredulity, essentially. Double oops.
Consciousness belongs to a class of topics I think of as my 'sore teeth.' I find myself thinking about them all the time: in the middle of bathing, running, cooking. I keep thinking about consciousness because no matter how much I read on the subject, I find I am still confused.
Now, to the heart of the matter. A major claim on which the desirability of uploading (among other things) depends is that the upload would be conscious (as distinct from intelligent). I think I found a reductio of this claim at about 4:00 last night while staring up at my bedroom ceiling.
Simulating a person
The thought experiment that is supposed to show us that the upload is conscious goes as follows. (You can see an applied version in Eliezer's bloggingheads debate with Massimo Pigliucci, here. I also made a similar argument to Massimo here.)
Let us take an unfortunate member of the public, call her Simone, and simulate her brain (plus inputs and outputs along the nervous system) on an arbitrarily powerful philosophical supercomputer (this also works if you simulate her whole body plus surroundings). This simulation can be at any level of complexity you like, but it's probably best if we stick to an atom-by-atom (or complex amplitudes) approach, since that leaves less room for doubt.
Since Simone is a lawful entity within physics, there ought to be nothing in principle stopping us from doing so, and we should get behavioural isomorphism between the simulation and the biological Simone.
Now, we can also simulate inputs and outputs to and from the visual, auditory and language regions of her brain. It follows that with the right expertise, we can ask her questions - questions like "Are you experiencing the subjective feeling of consciousness you had when you were in a biological body?" - and get answers.
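For concreteness, here is a minimal sketch (in Python) of what that setup amounts to computationally. Every name in it - the state representation, the update rule, the encode/decode steps - is a hypothetical placeholder rather than a claim about how such a simulator would really be built; the point is just the shape of the thing: a lawful state, a deterministic update, and some input/output bookkeeping.

    # Toy sketch of the "philosophical supercomputer" loop. All names are
    # hypothetical stand-ins: 'state' for the full physical state of Simone's
    # brain, 'step' for one tick of physics, 'encode_audio'/'decode_speech'
    # for the interfaces that inject questions and read off answers.
    def simulate(initial_state, step, encode_audio, decode_speech, question, n_steps):
        state = encode_audio(initial_state, question)  # inject the question as auditory input
        answers = []
        for _ in range(n_steps):
            state = step(state)                   # one deterministic physical update
            answers.append(decode_speech(state))  # read off any speech output
        return answers

Nothing in that loop refers to anything mental; it is a state, a transition rule, and some bookkeeping around the edges.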
I'm almost certain she'll say "Yes." (Take a moment to realize why the alternative, if we take her at her word, implies Cartesian dualism.)
The question is, do we believe her when she says she is conscious? 10 hours ago, I would have said "Of course!" because the idea of a simulation of Simone that is 100% behaviourally isomorphic and yet unconscious seemed very counterintuitive; not exactly a p-zombie by virtue of not being atom-by-atom identical with Simone, but definitely in zombie territory.
A different kind of simulation
There is another way to do this thought experiment, however, and it does not require that infinitely powerful computer the philosophy department has (the best investment in the history of academia, I'd say).
(NB: The next few paragraphs are the crucial part of this argument.)
Observe that, ultimately, the computer simulation of Simone above would produce nothing but a huge sequence of zeroes and ones, which would then be processed into visual and audio outputs and spat out of a monitor and speakers (or whatever).
So what's to stop me just sitting down and crunching the numbers myself? All I need is a stupendous amount of time, a lot of pencils, a lot (!!!) of paper, and if you're kind to me, a calculator. Atom by tedious atom, I'll simulate inputs to Simone's auditory system asking her if she's conscious, then compute her (physically determined) answer to that question.
Take a moment to convince yourself that there is nothing substantively different between this scenario and the previous one, except that it contains approximately 10,000 times the maximum safe dosage of in principle.
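If it helps to see the equivalence in miniature: the update rule in the earlier sketch bottoms out in elementary arithmetic, every line of which could be done with pencil, paper and a calculator. Here is a toy two-"atom" version (the numbers and the linear update are made up; only the machine-versus-by-hand comparison matters):

    # The same (fake) physical update written two ways: as the computer runs it,
    # and as a flat list of arithmetic steps a patient human could do by hand.
    state = [0.3, 0.7]  # toy "physical state": two numbers instead of ~10^27 atoms

    # Machine version: one expression.
    machine_result = [0.9 * state[0] + 0.1 * state[1],
                      0.2 * state[0] + 0.8 * state[1]]

    # Pencil-and-paper version: the identical arithmetic, one squiggle at a time.
    a = 0.9 * state[0]   # by hand: 0.9 x 0.3 = 0.27
    b = 0.1 * state[1]   # by hand: 0.1 x 0.7 = 0.07
    c = 0.2 * state[0]   # by hand: 0.2 x 0.3 = 0.06
    d = 0.8 * state[1]   # by hand: 0.8 x 0.7 = 0.56
    by_hand_result = [a + b, c + d]  # 0.34 and 0.62

    assert machine_result == by_hand_result

Scale that up by twenty-odd orders of magnitude and you have the pencil-and-paper Simone.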
Once again, Simone will claim she's conscious.
...Yeah, I'm sorry, but I just don't believe her.
I don't claim certain knowledge about the ontology of consciousness, but if I can summon forth a subjective consciousness ex nihilo by making the right series of graphite squiggles (which don't even mean anything outside human minds), then we might as well just give up and admit consciousness is magic.
Pigliucci is going to enjoy watching me eat my hat.
What was our mistake?
I've thought about this a lot in the last ~10 hours since I came up with the above.
I think when we imagined a simulated human brain, what we were picturing in our imaginations was a visual representation of the simulation, like a scene in Second Life. We saw mental images of simulated electrical impulses propagating along simulated neurons, and the cause & effect in that image is pretty clear...
...only it's not. What we should have been picturing was a whole series of logical operations happening all over the place inside the computer, with no physical correspondence between those operations and the basic units being represented (atoms, or whatever).
Basically, the simulated consciousness was isomorphic to biological consciousness in roughly the way my shadow is isomorphic to me. Just like the simulation, if I spoke ASL I could get my shadow to claim conscious awareness, but it wouldn't mean much.
In retrospect, it should have given us pause that the physical process happening in the computer - zeroes and ones propagating along wires & through transistors - can only be related to consciousness by virtue of outsiders choosing the right interpretations (in their own heads!) for the symbols being manipulated. Maybe if you interpret that stream of zeroes and ones differently, it outputs 5-day weather predictions for a city that doesn't exist.
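That point is easy to make concrete with any string of bits at all. The bytes and both "readings" below are arbitrary; the only thing the example shows is that the mapping from bits to meaning lives in the decoder, not in the bits:

    import struct

    raw = bytes([72, 105, 33, 63])  # one and the same string of zeroes and ones

    as_text = raw.decode("ascii")           # read as text: 'Hi!?'
    as_int = int.from_bytes(raw, "big")     # read as an unsigned integer: 1214849343
    as_float = struct.unpack(">f", raw)[0]  # read as a 32-bit float: a "temperature", if you like

    print(as_text, as_int, as_float)

Which of those the bytes "really are" is not a fact about the bytes; it is a fact about which interpretation the outsider brings to them.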
Another way of putting it is that, if consciousness is "how the algorithm feels from the inside," a simulated consciousness is just not following the same algorithm.
But what about the Fading Qualia argument?
The fading qualia argument is another thought experiment, this one by David Chalmers.
Essentially, we strap you into a chair and open up your skull. Then we replace one of your neurons with a silicon-based artificial neuron. Don't worry, it still sends the same electrical signals down its axon; your behaviour won't be affected.
Then we do this for a second neuron.
Then a third, then a kth... until your brain contains only artificial neurons (N of them, where N ≈ 10^11).
Now, what happens to your conscious experience in this process? A few possibilities arise:
1. Conscious experience is initially the same, then shuts off completely at some discrete number of replaced neurons: maybe 1, maybe N/2. Rejected by virtue of being ridiculously implausible.
2. Conscious experience fades continuously as k → N. Certainly more plausible than option 1, but still very strange. What does "fading" consciousness mean? Half a visual field? A full visual field with less perceived light intensity? Having been prone to (anemia-induced) loss of consciousness as a child, I can almost convince myself that fading qualia make some sort of sense, but not really...
3. Conscious experience is unaffected by the transition.
If we were just talking about names this wouldn't matter, but we are talking about explanations. Vagueness in a name just means that the applicability of the name is a little undetermined. But there is no such thing as objective vagueness. The objective properties of things are "exact", even when we can only specify them vaguely.
This is what we all object to in the Copenhagen interpretation of quantum mechanics, right? It makes no sense to say that a particle has a position, if it doesn't have a definite position. Either it has a definite position, or the concept of position just doesn't apply. There's no problem in saying that the position is uncertain, or in specifying it only approximately; it's the reification of uncertainty - the particle is somewhere, but not anywhere in particular - which is nonsense. Either it's somewhere particular (or even everywhere, if you're a many-worlder), or it's nowhere.
Neil flirts with reifying vagueness about consciousness in a similarly untenable fashion. We can be vague about how we describe a subjective state of consciousness, we can be vague about how we describe the physical brain. But we cannot identify an exact property of a conscious state with an inherently vague physical predicate. The possibility of exact description of states on both sides, and of exactly specifying the mapping between them, must exist in any viable theory of consciousness. Otherwise, it reifies uncertainty in a way that has the same fundamental illogicality as the "particle without a definite position".
By the way, if you haven't read Dennett's "Real Patterns" then I can recommend it as an excellent explanation of how fuzzily defined, 'not-always-a-fact-of-the-matter-whether-they're-present' patterns, of which folk-psychological states like beliefs and desires are just a special case, can meaningfully find a place in a physicalist universe.