I don't see how this differs at all from Searle's Chinese room.
The "puzzle" is created by the mental picture we form in our heads when hearing the description. For Searle's room, it's a clerk in a room full of tiles, shuffling them between boxes; for yours, it's a person sitting at a desk scratching on paper. Since the consciousness isn't that of the human in the room, where is it? Surely not in a few scraps of paper.
But plug in the reality of how complex such simulations would actually have to be if they were truly to simulate a human brain. Picture what the scenarios would look like running on a fast-forward sufficient for us to converse with the simulated person.
You (the clerk inside) would be utterly invisible; you'd live billions of subjective years for every simulated nanosecond. And, since you're just running a deterministic program, you would appear no more conscious to us than an electron appears conscious as it "runs" the laws of physics.
What we might see instead is a billion streams of paper, flowing too fast for the eye to follow, constantly splitting and connecting and shifting. Cataracts of fresh paper and pencils would be flowing in, someho...
It is, of course, utterly absurd to think that meat could be the substrate for true consciousness. And what if Simone herself chooses to spend eons simulating a being by hand? Are we to accept the notion of simulations all the way down?
In all honesty, I don't think the simulation necessarily has to be very fine-grained. Plenty of authors will tell you about a time when one of their characters suddenly "insisted" on some action that the author had not foreseen, forcing the author to alter her story to compensate. I think it plausible that, were I to dedicate my life to it, I could imagine a fictional character and his experiences with such fidelity that the character would be correct in claiming to be conscious. (I suspect such a simulation would be taking advantage of the machinery of my own consciousness, in much the same manner as a VMWare virtual machine can, if properly configured, use the optical drive in its host computer.)
What, then, are the obligations of an author to his characters, or of a thinker to her thoughts? My memory is fallible and certainly I may wish to do other things with my time than endlessly simulate another being. Yet "fairness" and...
I think it plausible that, were I to dedicate my life to it, I could imagine a fictional character and his experiences with such fidelity that the character would be correct in claiming to be conscious.
Personally, I would be more surprised if you could imagine a character who was correct in claiming not to be conscious.
On a related note, is anyone familiar with the following variation on the fading qualia argument? It's inspired by (and very similar to) a response to Chalmers given in the paper "Counterfactuals Cannot Count" by M. Bishop. (Unfortunately, I couldn't find an ungated version.) Chalmers's reply to Bishop is here.
The idea is as follows. Let's imagine a thought experiment under the standard computationalist assumptions. Suppose you start with an electronic brain B1 consisting of a huge number of artificial neurons, and you let it run for a while from some time T1 to T2 with an input X, so that during this interval, the brain goes through a vivid conscious experience full of colors, sounds, etc. Suppose further that we're keeping a detailed log of each neuron's changes of state during the entire period. Now, if we reset the brain to the initial state it had at T1 and start it again, giving it the same input X, it should go through the exact same conscious experience.
But now imagine that we take the entire execution log and assemble a new brain B2 precisely isomorphic to B1, whose neurons are however not sensitive to their inputs. Instead, each neuron in B2 is programmed to re...
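To make the contrast concrete, here is a minimal sketch in Python. Everything in it is invented for illustration: the names LiveNeuron and ReplayNeuron, and the threshold unit standing in for whatever the artificial neurons really compute. It just shows a B1-style neuron driven by its inputs next to a B2-style neuron replaying a recorded log:

```python
class LiveNeuron:
    """B1-style neuron: its next state is computed from its inputs."""
    def __init__(self, weights, threshold):
        self.weights = weights
        self.threshold = threshold
        self.state = 0

    def step(self, inputs):
        activation = sum(w * x for w, x in zip(self.weights, inputs))
        self.state = 1 if activation >= self.threshold else 0
        return self.state


class ReplayNeuron:
    """B2-style neuron: ignores its inputs and replays a recorded state log."""
    def __init__(self, state_log):
        self.state_log = state_log
        self.t = 0

    def step(self, inputs):            # 'inputs' is accepted but never used
        state = self.state_log[self.t]
        self.t += 1
        return state


# Run B1 on input X from T1 to T2, logging its states.
b1 = LiveNeuron(weights=[1.0, -0.5], threshold=0.4)
X = [(1, 0), (1, 1), (0, 1), (1, 0)]
log = [b1.step(x) for x in X]

# Assemble B2 from the log; it marches through the exact same state
# sequence regardless of what inputs it is given.
b2 = ReplayNeuron(log)
assert [b2.step(x) for x in X] == log
```

B2 traverses exactly the same sequence of states as B1; what has been stripped out is any counterfactual sensitivity to the inputs.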
I once took this reductio in the opposite direction and ended up becoming convinced that consciousness is what it feels like inside a logically consistent description of a mind-state, whether or not it is instantiated anywhere. I'm still confused about some of the implications of this, but somewhat less confused about consciousness itself.
Take a moment to convince yourself that there is nothing substantively different between this scenario and the previous one, except that it contains approximately 10,000 times the maximum safe dosage of in principle.
Once again, Simone will claim she's conscious.
...Yeah, I'm sorry, but I just don't believe her.
I don't claim certain knowledge about the ontology of consciousness, but if I can summon forth a subjective consciousness ex nihilo by making the right series of graphite squiggles (which don't even mean anything outside human minds), then we might as well just give up and admit consciousness is magic.
"If I can summon forth a subjective consciousness ex nihilo by making the right blobs of protein throw around the right patterns of electrical impulses and neurotransmitters (which don't even mean anything outside human minds), then we ...
One of the problems here is that of using our intuition on consciousness as a guide to processes well outside our experience. Why should we believe our common-sense intuition on whether a computer has consciousness, or whether a pencil and paper simulation has consciousness when both are so far beyond our actual experience? It's like applying our common sense understanding of physics to the study of atoms, or black holes. There's no reason to assume we can extrapolate that far intuitively with any real chance of success.
After that, there's a straight choice. Consciousness may be something that arises purely out of a rationally modellable process, or not. If the former, then the biological, computer-program, and pencil-and-paper Simones will all be conscious, genuinely. If not, then there is something to consciousness that lies outside the reach of rational description - this is not inherently impossible in my opinion, but it does suggest that some entities which claim to be conscious actually won't be, and that there will be no rational means to show whether they are or not.
"It's the same thing. Just slower."
Your hand-calculated simulation is still conscious, and it is the logical relations of cause and effect within that calculation, not "real" geometry, that makes it so, the same as in the computer simulation and biological brains.
This might be a case where flawed intuition is correct.
The chain of causality leading to the 'yes' is MUCH weaker in the pencil and paper version. You imagine squiggles as mere squiggles, not as signals that inexorably cause you to carry them through a zillion steps of calculation. No human as we know them would be so driven, so it looks as though that Simone can't exist as a coherent, caused thing.
But it's very easy and correct to see a high voltage on a wire as a signal which will reliably cause a set of logic gates to carry it through a zillion steps. So that Simone can get to yes without her universe locking up first.
If functionalism is true then dualism is true. You have the same experience E hovering over the different physical situations A, B, and C, even when they are as materially diverse as neurons, transistors, and someone in a Chinese room.
It should already be obvious that an arrangement of atoms in space is not identical to any particular experience you may claim somehow inhabits it, and so it should already be obvious that the standard materialistic approach to consciousness is actually property dualism. But perhaps the observation that the experience is supposed to be exactly the same, even when the arrangement of atoms is really different, will help a few people to grasp this.
The thought experiments proposed in the post and the comments hint at a strictly simpler problem that we need to solve before tackling consciousness anyway: what is "algorithmicness"? What constitutes a "causal implementation" of an algorithm and distinguishes it from a video feed replay? How can we remove the need for vague "bridging laws" between algorithmicness and physical reality?
I'm close to your conclusion, but I don't accept your Searle-esque argument. I accept Chalmers's reasoning, roughly, on the fading qualia argument, and agree with you that it doesn't justify the usual conception of the joys of uploading.
And I think that's the whole core of what needs to be said on the topic. That is, we have a good argument for attributing consciousness-as-we-know-it to a fine-grained functional duplicate of ourselves. And that's all. We don't have any reason to believe that a coarse-grained functional duplicate - a being that gives ...
Here's a thought experiment that helps me think about uploading (which I perceive as the real, observable-consequences-having issue here):
Suppose that you believed in souls (it is not that hard to get into that mindset - lots of people can do it). Also suppose that you believed in transmigration or reincarnation of souls. Finally, suppose that you believe that souls move around between bodies during the night, when people are asleep. Despite your belief in souls, you know that memories, skills, personality, goals are all located in the brain, not the soul....
When we simulate a brain on a general purpose computer, however, there is no physically similar pattern of energy/matter flow. If I had to guess, I suspect this is the rub: you must need a certain physical pattern of energy flow to get consciousness.
I invite you to evaluate the procedural integrity of your reasoning.
Do you really expect that "a certain physical pattern of energy flow" causes consciousness? Why? Can you even begin to articulate what that pattern might consist of? What is it about a computer model that fails to adequately accou...
I think that the clarification you want is pointless. When I write a difficult program (or section of a program), the first thing I do is write the algorithm out on paper in words, a flow chart, or whatever makes sense at the time. Then I play around with it to make sure it can handle any possible input so it will not crash. The reason I do it that way is so I only have to worry about problems with the steps I am following, not issues like syntax; but whether I draw the data flow on paper, visualize it in my mind, run it on my computer, etc., it is ALW...
Basically, the simulated consciousness was isomorphic to biological consciousness in a similar way to how my shadow is isomorphic to me. Just like the simulation, if I spoke ASL I could get my shadow to claim conscious awareness, but it wouldn't mean much.
The simulation is "all the information you contain (and then possibly some more)" running through an algorithm at least as complex as your own.
The shadow is "a very very small subset of the information about you, none of which is even particularly relevant to consciousness", and isn't being run at all.
So, I would disagree fundamentally with your claim that they are in any way similar.
When we simulate a brain on a general purpose computer, however, there is no physically similar pattern of energy/matter flow.
There isn't?
(Not a rhetorical question)
First:
A major claim on which the desirability of uploading (among other things) depends, is that the upload would be conscious (as distinct from intelligent). I think I found a reductio of this claim at about 4:00 last night while staring up at my bedroom ceiling.
...but then...
If I had to guess, I suspect this is the rub: you must need a certain physical pattern of energy flow to get consciousness.
A strong claim in the headline - but then a feeble one in the supporting argument.
This is John Searle's Chinese room argument. Searle, John (1980), "Minds, Brains and Programs", Behavioral and Brain Sciences 3 (3): 417–457. Get the original article and the many refutations of it appended after the article. I don't remember if 457 is the last page of Searle's article, or of the entire collection.
Upvoted and disagreed.
There is no particular difference between a simulation that uses true physics[tm] (or at least the necessary abstraction) and the 'real' action.
The person that you are is also not implied by the matter or the actual hardware you happen to run on, but by the informational link between all the things that are currently implemented in your brain. (Memory and connections - to simplify that.) But there is no difference between a solution in hardware or software. One is easier to maintain and change, but it can easily behave the same from th...
I have no doubt in my mind that some time in the future nervous systems will be simulated with all their functions including consciousness. Perhaps not a particular person's nervous system at a particular time, but a somewhat close approximation, a very similar nervous system with consciousness but no magic. However, I definitely doubt that it will be done on a general purpose computer running algorithms. I doubt that step-by-step calculations will be the way that the simulation will be done. Here is why:
1. The brain is massively parallel and complex feedba...
I cannot agree at all; simSimone is plainly conscious if meatSimone is conscious; there is no magic pattern of electrical impulses in physical space which the universe will "notice" and imbue with consciousness.
Related articles: Nonperson predicates, Zombies! Zombies?, & many more.
ETA: This argument appears to be a rehash of the Chinese room, which I had previously thought had nothing to do with consciousness, only intelligence. I nonetheless find this one instructive in that it makes certain things explicit which the Chinese room seems to gloss over.
ETA2: I think I may have made a mistake in this post. That mistake was in realizing what ontology functionalism would imply, and thinking that ontology too weird to be true. An argument from incredulity, essentially. Double oops.
Consciousness belongs to a class of topics I think of as my 'sore teeth.' I find myself thinking about them all the time: in the middle of bathing, running, cooking. I keep thinking about consciousness because no matter how much I read on the subject, I find I am still confused.
Now, to the heart of the matter. A major claim on which the desirability of uploading (among other things) depends, is that the upload would be conscious (as distinct from intelligent). I think I found a reductio of this claim at about 4:00 last night while staring up at my bedroom ceiling.
Simulating a person
The thought experiment that is supposed to show us that the upload is conscious goes as follows. (You can see an applied version in Eliezer's bloggingheads debate with Massimo Pigliucci, here. I also made a similar argument to Massimo here.)
Let us take an unfortunate member of the public, call her Simone, and simulate her brain (plus inputs and outputs along the nervous system) on an arbitrarily powerful philosophical supercomputer (this also works if you simulate her whole body plus surroundings). This simulation can be at any level of complexity you like, but it's probably best if we stick to an atom-by-atom (or complex amplitudes) approach, since that leaves less room for doubt.
Since Simone is a lawful entity within physics, there ought to be nothing in principle stopping us from doing so, and we should get behavioural isomorphism between the simulation and the biological Simone.
Now, we can also simulate inputs and outputs to and from the visual, auditory and language regions of her brain. It follows that with the right expertise, we can ask her questions - questions like "Are you experiencing the subjective feeling of consciousness you had when you were in a biological body?" - and get answers.
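For the programmers, here is a schematic sketch of what that setup amounts to. Every function in it is a toy placeholder: physics_step, encode_auditory, and decode_speech are made up and stand in for the real, unimaginably complicated versions.

```python
def physics_step(state, sensory_input):
    """Stand-in for one tick of the atom-by-atom update rule."""
    return hash((state, sensory_input)) & 0xFFFFFFFF

def encode_auditory(question):
    """Stand-in for turning a spoken question into simulated auditory-nerve input."""
    return hash(question) & 0xFF

def decode_speech(state):
    """Stand-in for reading the simulated speech output back out of the state."""
    return "Yes" if state % 2 else "No"

state = 42                          # simulated brain state at upload time
stimulus = encode_auditory("Are you experiencing the subjective feeling of "
                           "consciousness you had in a biological body?")
for _ in range(1_000):              # run the simulation forward in time
    state = physics_step(state, stimulus)
print(decode_speech(state))         # the behaviourally isomorphic answer comes out
```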
I'm almost certain she'll say "Yes." (Take a moment to realize why the alternative, if we take her at her word, implies Cartesian dualism.)
The question is, do we believe her when she says she is conscious? 10 hours ago, I would have said "Of course!" because the idea of a simulation of Simone that is 100% behaviourally isomorphic and yet unconscious seemed very counterintuitive; not exactly a p-zombie by virtue of not being atom-by-atom identical with Simone, but definitely in zombie territory.
A different kind of simulation
There is another way to do this thought experiment, however, and it does not require that infinitely powerful computer the philosophy department has (the best investment in the history of academia, I'd say).
(NB: The next few paragraphs are the crucial part of this argument.)
Observe that ultimately, the computer simulation of Simone above would output nothing but a huge sequence of zeroes and ones, process them into visual and audio outputs, and spit them out of a monitor and speakers (or whatever).
So what's to stop me just sitting down and crunching the numbers myself? All I need is a stupendous amount of time, a lot of pencils, a lot (!!!) of paper, and if you're kind to me, a calculator. Atom by tedious atom, I'll simulate inputs to Simone's auditory system asking her if she's conscious, then compute her (physically determined) answer to that question.
Take a moment to convince yourself that there is nothing substantively different between this scenario and the previous one, except that it contains approximately 10,000 times the maximum safe dosage of in principle.
Once again, Simone will claim she's conscious.
...Yeah, I'm sorry, but I just don't believe her.
I don't claim certain knowledge about the ontology of consciousness, but if I can summon forth a subjective consciousness ex nihilo by making the right series of graphite squiggles (which don't even mean anything outside human minds), then we might as well just give up and admit consciousness is magic.
Oops!
Pigliucci is going to enjoy watching me eat my hat.
What was our mistake?
I've thought about this a lot in the last ~10 hours since I came up with the above.
I think when we imagined a simulated human brain, what we were picturing in our imaginations was a visual representation of the simulation, like a scene in Second Life. We saw mental images of simulated electrical impulses propagating along simulated neurons, and the cause & effect in that image is pretty clear...
...only it's not. What we should have been picturing was a whole series of logical operations happening all over the place inside the computer, with no physical relation between them and the represented basic units of the simulation (atoms, or whatever).
Basically, the simulated consciousness was isomorphic to biological consciousness in a similar way to how my shadow is isomorphic to me. Just like the simulation, if I spoke ASL I could get my shadow to claim conscious awareness, but it wouldn't mean much.
In retrospect, it should have given us pause that the physical process happening in the computer - zeroes and ones propagating along wires & through transistors - can only be related to consciousness by virtue of outsiders choosing the right interpretations (in their own heads!) for the symbols being manipulated. Maybe if you interpret that stream of zeroes and ones differently, it outputs 5-day weather predictions for a city that doesn't exist.
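As a toy illustration of that interpretation-dependence: the very same bytes, read under two different decodings, give you part of an English question or a pair of floating-point numbers, and nothing in the bits themselves picks out which reading is "the" right one. Both decodings below are arbitrary examples.

```python
import struct

raw = bytes([0x41, 0x72, 0x65, 0x20, 0x79, 0x6F, 0x75, 0x3F])

as_text = raw.decode("ascii")            # "Are you?" - part of a question to Simone
as_numbers = struct.unpack(">2f", raw)   # the same bits as two 32-bit floats

print(as_text)
print(as_numbers)   # could just as well be two temperature readings in a forecast
```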
Another way of putting it is that, if consciousness is "how the algorithm feels from the inside," a simulated consciousness is just not following the same algorithm.
But what about the Fading Qualia argument?
The fading qualia argument is another thought experiment, this one by David Chalmers.
Essentially, we strap you into a chair and open up your skull. Then we replace one of your neurons with a silicon-based artificial neuron. Don't worry, it still outputs the same electrical signals along the axons; your behaviour won't be affected.
Then we do this for a second neuron.
Then a third, then a kth... until your brain contains only artificial neurons (N of them, where N ≈ 10^11).
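(A toy sketch of the procedure, if it helps: a network of interchangeable threshold units, each swapped one at a time for a functionally identical replacement, with the network's outputs checked after every swap. The numbers and the neuron model are placeholders, obviously.)

```python
def bio_neuron(inputs):
    """Toy biological neuron: fires if at least two inputs are active."""
    return 1 if sum(inputs) >= 2 else 0

def silicon_neuron(inputs):
    """Functionally identical artificial replacement: same outputs for the same inputs."""
    return 1 if sum(inputs) >= 2 else 0

N = 10                                    # stand-in for N ≈ 10^11
brain = [bio_neuron] * N
stimulus = (1, 1, 0)

def run(brain, stimulus):
    return [neuron(stimulus) for neuron in brain]

baseline = run(brain, stimulus)

# Replace one neuron at a time; behaviour is unchanged after every single swap.
for k in range(N):
    brain[k] = silicon_neuron
    assert run(brain, stimulus) == baseline
```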
Now, what happens to your conscious experience in this process? A few possibilities arise: