Related articles: Nonperson predicates, Zombies! Zombies?, & many more.
ETA: This argument appears to be a rehash of the Chinese room, which I had previously thought had nothing to do with consciousness, only intelligence. I nonetheless find this one instructive in that it makes certain things explicit which the Chinese room seems to gloss over.
ETA2: I think I may have made a mistake in this post. That mistake was in realizing what ontology functionalism would imply, and deciding that that ontology was too weird to be true. An argument from incredulity, essentially. Double oops.
Consciousness belongs to a class of topics I think of as my 'sore teeth.' I find myself thinking about them all the time: in the middle of bathing, running, cooking. I keep thinking about consciousness because no matter how much I read on the subject, I find I am still confused.
Now, to the heart of the matter. A major claim on which the desirability of uploading (among other things) depends, is that the upload would be conscious (as distinct from intelligent). I think I found a reductio of this claim at about 4:00 last night while staring up at my bedroom ceiling.
Simulating a person
The thought experiment that is supposed to show us that the upload is conscious goes as follows. (You can see an applied version in Eliezer's bloggingheads debate with Massimo Pigliucci, here. I also made a similar argument to Massimo here.)
Let us take an unfortunate member of the public, call her Simone, and simulate her brain (plus inputs and outputs along the nervous system) on an arbitrarily powerful philosophical supercomputer (this also works if you simulate her whole body plus surroundings). This simulation can be at any level of complexity you like, but it's probably best if we stick to an atom-by-atom (or complex amplitudes) approach, since that leaves less room for doubt.
Since Simone is a lawful entity within physics, there ought to be nothing in principle stopping us from doing so, and we should get behavioural isomorphism between the simulation and the biological Simone.
Now, we can also simulate inputs and outputs to and from the visual, auditory and language regions of her brain. It follows that with the right expertise, we can ask her questions - questions like "Are you experiencing the subjective feeling of consciousness you had when you were in a biological body?" - and get answers.
I'm almost certain she'll say "Yes." (Take a moment to realize why the alternative, if we take her at her word, implies Cartesian dualism.)
The question is, do we believe her when she says she is conscious? 10 hours ago, I would have said "Of course!" because the idea of a simulation of Simone that is 100% behaviourally isomorphic and yet unconscious seemed very counterintuitive; not exactly a p-zombie by virtue of not being atom-by-atom identical with Simone, but definitely in zombie territory.
A different kind of simulation
There is another way to do this thought experiment, however, and it does not require that infinitely powerful computer the philosophy department has (the best investment in the history of academia, I'd say).
(NB: The next few paragraphs are the crucial part of this argument.)
Observe that, ultimately, the computer simulating Simone above would do nothing but produce a huge sequence of zeroes and ones, process them into visual and audio outputs, and spit them out of a monitor and speakers (or whatever).
So what's to stop me just sitting down and crunching the numbers myself? All I need is a stupendous amount of time, a lot of pencils, a lot (!!!) of paper, and if you're kind to me, a calculator. Atom by tedious atom, I'll simulate inputs to Simone's auditory system asking her if she's conscious, then compute her (physically determined) answer to that question.
Take a moment to convince yourself that there is nothing substantively different between this scenario and the previous one, except that it contains approximately 10,000 times the maximum safe dosage of in principle.
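(For concreteness, here's a toy sketch of the kind of deterministic state-update loop both scenarios boil down to. The update rule is a made-up stand-in, not real physics; the point is only that each tick is ordinary arithmetic that a CPU, or I with my pencils, could carry out identically.)

```python
# Toy stand-in for the atom-by-atom update rule -- NOT real physics, just an
# illustration that each tick is plain arithmetic on a list of numbers.

def step(state, inputs):
    """One tick: every value is recomputed from its neighbour and the input."""
    n = len(state)
    return [
        (state[i] + 0.5 * state[(i - 1) % n] + inputs[i % len(inputs)]) % 1.0
        for i in range(n)
    ]

state = [0.1, 0.2, 0.3, 0.4]   # stand-in for Simone's physical state
question = [0.7, 0.1]          # stand-in for the encoded auditory input

for _ in range(10):            # ten ticks of simulated "physics"
    state = step(state, question)

print(state)                   # the same numbers whether a CPU or I computed them
```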
Once again, Simone will claim she's conscious.
...Yeah, I'm sorry, but I just don't believe her.
I don't claim certain knowledge about the ontology of consciousness, but if I can summon forth a subjective consciousness ex nihilo by making the right series of graphite squiggles (which don't even mean anything outside human minds), then we might as well just give up and admit consciousness is magic.
Pigliucci is going to enjoy watching me eat my hat.
What was our mistake?
I've thought about this a lot in the last ~10 hours since I came up with the above.
I think when we imagined a simulated human brain, what we were picturing in our imaginations was a visual representation of the simulation, like a scene in Second Life. We saw mental images of simulated electrical impulses propagating along simulated neurons, and the cause & effect in that image is pretty clear...
...only it's not. What we should have been picturing was a whole series of logical operations happening all over the place inside the computer, with no physical relation between them and the represented basic units of the simulation (atoms, or whatever).
Basically, the simulated consciousness was isomorphic to biological consciousness in a similar way to how my shadow is isomorphic to me. Just like the simulation, if I spoke ASL I could get my shadow to claim conscious awareness, but it wouldn't mean much.
In retrospect, it should have given us pause that the physical process happening in the computer - zeroes and ones propagating along wires & through transistors - can only be related to consciousness by virtue of outsiders choosing the right interpretations (in their own heads!) for the symbols being manipulated. Maybe if you interpret that stream of zeroes and ones differently, it outputs 5-day weather predictions for a city that doesn't exist.
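(A toy illustration of that interpretation-dependence, with numbers I made up: the very same bytes, read under two different conventions that exist only in the reader's head, yield two unrelated "messages.")

```python
# The same raw bytes under two externally chosen interpretations.
raw = bytes([72, 105, 33, 63])

as_text = raw.decode("ascii")           # read as ASCII characters
as_temps = [b / 2 - 10 for b in raw]    # read as (fictional) temperatures in C

print(as_text)    # Hi!?
print(as_temps)   # [26.0, 42.5, 6.5, 21.5]
```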
Another way of putting it is that, if consciousness is "how the algorithm feels from the inside," a simulated consciousness is just not following the same algorithm.
But what about the Fading Qualia argument?
The fading qualia argument is another thought experiment, this one by David Chalmers.
Essentially, we strap you into a chair and open up your skull. Then we replace one of your neurons with a silicon-based artificial neuron. Don't worry, it still outputs the same electrical signals along the axons; your behaviour won't be affected.
Then we do this for a second neuron.
Then a third, then a kth... until your brain contains only artificial neurons (N of them, where N ≈ 10^11).
Now, what happens to your conscious experience in this process? A few possibilities arise:
- Conscious experience is initially the same, then shuts off completely at some discrete number of replaced neurons: maybe 1, maybe N/2. Rejected by virtue of being ridiculously implausible.
- Conscious experience fades continuously as k → N. Certainly more plausible than option 1, but still very strange. What does "fading" consciousness mean? Half a visual field? A full visual field with less perceived light intensity? Having been prone to (anemia-induced) loss of consciousness as a child, I can almost convince myself that fading qualia make some sort of sense, but not really...
- Conscious experience is unaffected by the transition.
On a related note, is anyone familiar with the following variation on the fading qualia argument? It's inspired by (and very similar to) a response to Chalmers given in the paper "Counterfactuals Cannot Count" by M. Bishop. (Unfortunately, I couldn't find an ungated version.) Chalmers's reply to Bishop is here.
The idea is as follows. Let's imagine a thought experiment under the standard computationalist assumptions. Suppose you start with an electronic brain B1 consisting of a huge number of artificial neurons, and you let it run for a while from some time T1 to T2 with an input X, so that during this interval, the brain goes through a vivid conscious experience full of colors, sounds, etc. Suppose further that we're keeping a detailed log of each neuron's changes of state during the entire period. Now, if we reset the brain to the initial state it had at T1 and start it again, giving it the same input X, it should go through the exact same conscious experience.
But now imagine that we take the entire execution log and assemble a new brain B2 precisely isomorphic to B1, whose neurons are however not sensitive to their inputs. Instead, each neuron in B2 is programmed to recreate the sequence of states through which its corresponding neuron from B1 passed during the interval (T1, T2) and generate the corresponding outputs. This will result in what Chalmers calls a "wind-up" system, which the standard computationalist view (at least to my knowledge) would not consider conscious, since it completely lacks the causal structure of the original computation, and merely replays it like a video recording.
You can probably see where this is going now. Suppose we restart B1 with the same initial state from T1 and the same input X, and while it's running, we gradually replace the neurons from B1 with their "wind-up" versions from B2. At the start at T1, we have the presumably conscious B1, and at the end at T2, the presumably unconscious B2 -- but the transition between the two is gradual just like in the original fading qualia argument. Thus, there must be some sort of "fading qualia" process going on after all, unless either B1 is not conscious to begin with, or B2 is conscious after all. (The latter however gets us into the problem that every physical system implements a "wind-up" version of every computation if only some numbers from arbitrary physical measurements are interpreted suitably.)
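(For the sake of concreteness, here's a toy sketch of the B1/B2 distinction under my own minimal formalization -- threshold units in a fully connected net, nothing taken from Chalmers or Bishop -- showing a logged run, a wind-up replay of it, and the gradual mid-run swap.)

```python
import random

class LiveNeuron:
    """B1-style neuron: its next state is a function of the input it receives."""
    def __init__(self, threshold):
        self.threshold = threshold

    def step(self, total_input):
        return 1 if total_input >= self.threshold else 0


class ReplayNeuron:
    """B2-style 'wind-up' neuron: ignores its input and replays a recorded trace."""
    def __init__(self, trace):
        self.trace = iter(trace)

    def step(self, total_input):    # input accepted but never consulted
        return next(self.trace)


def run(neurons, external_input, ticks):
    """Drive a toy fully connected net and log every neuron's state at every tick."""
    log = [[] for _ in neurons]
    states = [0] * len(neurons)
    for t in range(ticks):
        total = sum(states) + external_input[t]
        states = [n.step(total) for n in neurons]
        for i, s in enumerate(states):
            log[i].append(s)
    return log


random.seed(0)
X = [random.randint(0, 3) for _ in range(20)]        # the fixed input X

# B1: run it once from its initial state, keeping a full execution log.
b1 = [LiveNeuron(threshold=k) for k in range(1, 6)]
log = run(b1, X, ticks=20)

# B2: same wiring, but every neuron just replays its logged trace.
b2 = [ReplayNeuron(list(trace)) for trace in log]
assert run(b2, X, ticks=20) == log                   # identical behaviour

# The gradual version: rerun B1 on X and, while it runs, swap neuron t for a
# wind-up copy at tick t, replaying the remainder of its logged trace.
mixed = [LiveNeuron(threshold=k) for k in range(1, 6)]
states = [0] * 5
for t in range(20):
    if t < 5:
        mixed[t] = ReplayNeuron(log[t][t:])          # neuron t goes wind-up here
    total = sum(states) + X[t]
    states = [n.step(total) for n in mixed]
assert states == [log[i][-1] for i in range(5)]      # behaviour never changes
```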
I don't find Chalmers's reply satisfactory. In particular, it seems to me that the above argument is damaging to significant parts of his original fading qualia thought experiment, where he explains why he finds the possibility of fading qualia implausible. It is however possible that I've misunderstood either the original paper or his brief reply to Bishop, so I'd definitely like to see him address this point in more detail.
Well, this bit seems wrong on Bishop's part:
This is a false distinction if (as I believe) counterfactual sensitivity is part of what happens. For example, if what happens is that Y causes Z, then part of that is the counterfactual fact that if Y hadn't happened then Z wouldn't have happened. (Maybe this particular example can be nitpicked, bu...