I'm a bit tired today, having stayed up until 3AM writing yesterday's >6000-word post on zombies, so today I'll just reply to Richard, and tie up a loose end I spotted the next day.
Besides, TypePad's nitwit, un-opt-out-able 50-comment pagination "feature", which doesn't work with the Recent Comments sidebar, means that we might as well jump the discussion here before we go over the 50-comment limit.
(A) Richard Chappell writes:
A terminological note (to avoid unnecessary confusion): what you call 'conceivable', others of us would merely call "apparently conceivable".
The gap between "I don't see a contradiction yet" and "this is logically possible" is so huge (it's NP-complete even in some simple-seeming cases) that you really should have two different words. Since the zombie argument is boosted to the extent that this huge gap can be swept under the rug of minor terminological differences, I really think it would be a good idea to say "conceivable" versus "logically possible", or maybe even to draw a still more visible distinction. I don't get to choose professional terminology that has already been established, but in a case like this, I might seriously refuse to use it.
Maybe I will say "apparently conceivable" for the kind of information that zombie advocates get by imagining Zombie Worlds, and "logically possible" for the kind of information that is established by exhibiting a complete model or logical proof. Note the size of the gap between the information you can get by closing your eyes and imagining zombies, and the information you need to carry the argument for epiphenomenalism.
That is, your view would be characterized as a form of Type-A materialism, the view that zombies are not even (genuinely) conceivable, let alone metaphysically possible.
Type-A materialism is a large bundle; you shouldn't attribute the bundle to me until you see me agree with each of the parts. I think that someone who asks "What is consciousness?" is asking a legitimate question and has a legitimate demand for insight; I don't necessarily think that the answer takes the form of "Here is this stuff that has all the properties you would attribute to consciousness, for such-and-such reason", but that it may to some extent consist of insights that cause you to realize you were asking the question the wrong way.
This is not being eliminative about consciousness. It is being realistic about what kind of insights to expect, faced with a problem that (1) seems like it must have some solution, (2) seems like it cannot possibly have any solution, and (3) is being discussed in a fashion that has a great big dependence on the not-fully-understood ad-hoc architecture of human cognition.
(1) You haven't, so far as I can tell, identified any logical contradiction in the description of the zombie world. You've just pointed out that it's kind of strange. But there are many bizarre possible worlds out there. That's no reason to posit an implicit contradiction. So it's still completely mysterious to me what this alleged contradiction is supposed to be.
Okay, I'll spell it out from a materialist standpoint:
(1) The zombie world, by definition, contains all parts of our world that are within the closure of the "caused by" or "effect of" relation of any observable phenomenon. In particular, it contains the cause of my visibly saying, "I think therefore I am."
(2) When I focus my inward awareness on my inward awareness, I shortly thereafter experience my internal narrative saying "I am focusing my inward awareness on my inward awareness", and can, if I choose, say so out loud.
(3) Intuitively, it sure seems like my inward awareness is causing my internal narrative to say certain things, and that my internal narrative can cause my lips to say certain things.
(4) The word "consciousness", if it has any meaning at all, refers to that-which-is or that-which-causes or that-which-makes-me-say-I-have inward awareness.
(5) From (3) and (4) it would follow that if the zombie world is closed with respect to the causes of my saying "I think therefore I am", the zombie world contains that which we refer to as "consciousness".
(6) By definition, the zombie world does not contain consciousness.
(7) (3) seems to me to have a rather high probability of being empirically true. Therefore I evaluate a high empirical probability that the zombie world is logically impossible.
You can save the Zombie World by letting the cause of my internal narrative's saying "I think therefore I am" be something entirely other than consciousness. In conjunction with the assumption that consciousness does exist, this is the part that struck me as deranged.
But if the above is conceivable, then isn't the Zombie World conceivable?
No, because the two constructions of the Zombie World involve giving the word "consciousness" different empirical referents, like "water" in our world meaning H2O versus "water" in Putnam's Twin Earth meaning XYZ. For the Zombie World to be logically possible, it does not suffice that, for all you knew about how the empirical world worked, the word "consciousness" could have referred to an epiphenomenon that is entirely different from the consciousness we know. The Zombie World lacks consciousness, not "consciousness"—it is a world without H2O, not a world without "water". This is what is required to carry the empirical statement, "You could eliminate the referent of whatever is meant by 'consciousness' from our world, while keeping all the atoms in the same place."
Which is to say: I hold that it is an empirical fact, given what the word "consciousness" actually refers to, that it is logically impossible to eliminate consciousness without moving any atoms. As for what it would mean to eliminate "consciousness" from a world, rather than consciousness, I will not speculate.
(2) It's misleading to say it's "miraculous" (on the property dualist view) that our qualia line up so neatly with the physical world. There's a natural law which guarantees this, after all. So it's no more miraculous than any other logically contingent nomic necessity (e.g. the constants in our physical laws).
It is the natural law itself that is "miraculous"—counts as an additional complex-improbable element of the theory to be postulated, without having been itself justified in terms of things already known. One postulates (a) an inner world that is conscious, (b) a malfunctioning outer world that talks about consciousness for no reason, and (c) that the two align perfectly. (c) does not follow from (a) and (b), and so is a separate postulate.
I agree that this usage of "miraculous" conflicts with the philosophical sense of violating a natural law; I meant it in the sense of improbability appearing from no apparent source, à la perpetual motion belief. Hence the word was ill-chosen in context. But is this not intuitively the sort of thing we should call a miracle? Your consciousness doesn't really cause you to say you're conscious; there's a separate physical thing that makes you say you're conscious, but also there's a law aligning the two - this is indeed an event on a similar order of wackiness to a cracker taking on the substance of Christ's flesh while possessing the exact appearance and outward behavior of a cracker, there's just a natural law which guarantees this, you know.
That is, Zombie (or 'Outer') Chalmers doesn't actually conclude anything, because his utterances are meaningless. A fortiori, he doesn't conclude anything unwarrantedly. He's just making noises; these are no more susceptible to epistemic assessment than the chirps of a bird.
Looking at this from an AI-design standpoint, it seems to me like you should be able to build an AI that systematically refines an inner part of itself that correlates (in the sense of mutual information or systematic relations) to the environment, perhaps including floating-point numbers of a sort that I would call "probabilities" because they obey the internal relations mandated by Cox's Theorems when the AI encounters new information—pardon me, new sense inputs.
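Purely as illustration, here is a minimal sketch, under my own invented assumptions, of the kind of thing I mean by floating-point numbers that behave like probabilities when new sense inputs arrive. The toy "wall ahead" hypothesis and the sensor hit/false-alarm rates are made up for the example; nothing here is a claim about an actual AI design.

```python
# Minimal sketch: an agent maintaining a floating-point "probability"
# that is revised by Bayes' rule whenever a new sense input arrives.
# The sensor model (hit rate / false-alarm rate) is an illustrative assumption.

def bayes_update(prior: float, likelihood_if_true: float, likelihood_if_false: float) -> float:
    """Return P(hypothesis | observation) given P(hypothesis) and the sensor model."""
    numerator = likelihood_if_true * prior
    evidence = numerator + likelihood_if_false * (1.0 - prior)
    return numerator / evidence

# The agent starts out uncertain about "there is a wall ahead".
p_wall = 0.5

# Each noisy sense input multiplies in its likelihood ratio.
for reading_says_wall in [True, True, False, True]:
    if reading_says_wall:
        p_wall = bayes_update(p_wall, likelihood_if_true=0.9, likelihood_if_false=0.2)
    else:
        p_wall = bayes_update(p_wall, likelihood_if_true=0.1, likelihood_if_false=0.8)

print(f"P(wall ahead) after the readings: {p_wall:.3f}")
```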
You will say that, unless the AI is more than mere transistors—unless it has the dual aspect—the AI has no beliefs.
I think my views on this were expressed pretty clearly in "The Simple Truth".
To me, it seems pretty straightforward to construct maps that correlate to territories in systematic ways, without mentioning anything other than things of pure physical causality. The AI outputs a map of Texas. Another AI flies with the map to Texas and checks to see if the highways are in the corresponding places, chirping "True" when it detects a match and "False" when it detects a mismatch. You can refuse to call this "a map of Texas", but the AIs themselves are still chirping "True" or "False", and those same AIs are going to chirp "False" when they look at Chalmers's belief in an epiphenomenal inner core, and I for one would agree with them.
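A minimal sketch of that chirping check, assuming a toy representation in which the "map" and the "territory" are just dictionaries of claimed versus observed highway locations (all the data below is invented for illustration):

```python
# Toy sketch: one process emits a "map" (claimed highway locations),
# another checks each claim against the "territory" (observed locations)
# and chirps True/False. All data here is invented for illustration.

territory = {             # where the highways are actually observed to be
    "I-10": "south",
    "I-35": "central",
    "US-287": "north",
}

map_of_texas = {          # what the map-making process wrote down
    "I-10": "south",
    "I-35": "central",
    "US-287": "west",     # a mismatch: the map disagrees with the territory here
}

def check_map(claimed: dict, observed: dict) -> None:
    """Compare each map claim to the territory and chirp the verdict."""
    for highway, claimed_location in claimed.items():
        verdict = observed.get(highway) == claimed_location
        print(f"{highway}: chirp {'True' if verdict else 'False'}")

check_map(map_of_texas, territory)
```

Nothing in this check mentions anything other than physical causes and effects; the correspondence (or its failure) is determined entirely by comparing one causal system to another.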
It's clear that the function of mapping reality is performed strictly by Outer Chalmers. The whole business of producing belief representations is handled by Bayesian structure in causal interactions. There's nothing left for the Inner Chalmers to do but bless the whole affair with epiphenomenal meaning, where "meaning" is now something entirely unrelated to systematic map-territory correspondence or the ability to use that map to navigate reality. So when it comes to talking about "accuracy", let alone "systematic accuracy", it seems to me like we should be able to determine it strictly by looking at the Outer Chalmers.
(B) In yesterday's text, I left out an assumption when I wrote:
If a self-modifying AI looks at a part of itself that concludes "B" on condition A—a part of itself that writes "B" to memory whenever condition A is true—and the AI inspects this part, determines how it (causally) operates in the context of the larger universe, and the AI decides that this part systematically tends to write false data to memory, then the AI has found what appears to be a bug, and the AI will self-modify not to write "B" to the belief pool under condition A.
...
But there's no possible warrant for the outer Chalmers or any reflectively coherent self-inspecting AI to believe in this mysterious correctness. A good AI design should, I think, be a reflectively coherent intelligence with a testable theory of how it operates as a causal system, hence with a testable theory of how that causal system produces systematically accurate beliefs on the way to achieving its goals.
Actually, you need an additional assumption to the above, which is that a "good AI design" (the kind I was thinking of, anyway) judges its own rationality in a modular way; it enforces global rationality by enforcing local rationality. If there is a piece that, relative to its context, is locally systematically unreliable—for some possible beliefs B_i and conditions A_i, it adds some B_i to the belief pool under local condition A_i, even though reflection by the system indicates that B_i is not true (or, in the case of probabilistic beliefs, not accurate) when the local condition A_i holds—then this is a bug. This kind of modularity is a way to make the problem tractable, and it's how I currently think about the first-generation AI design. [Edit 2013: The actual notion I had in mind here has now been fleshed out and formalized in Tiling Agents for Self-Modifying AI, section 6.]
The notion is that a causally closed cognitive system—such as an AI designed by its programmers to use only causally efficacious parts; or an AI whose theory of its own functioning is entirely testable; or the outer Chalmers that writes philosophy papers—which believes that it has an epiphenomenal inner self must be doing something systematically unreliable, because it would conclude the same thing in a Zombie World. A mind all of whose parts are systematically locally reliable, relative to their contexts, would be systematically globally reliable. Ergo, a mind which is globally unreliable must contain at least one locally unreliable part. So a causally closed cognitive system inspecting itself for local reliability must discover that at least one step involved in adding the belief in an epiphenomenal inner self is unreliable.
If there are other ways for minds to be reflectively coherent which avoid this proof of disbelief in zombies, philosophers are welcome to try and specify them.
The reason why I have to specify all this is that otherwise you get a kind of extremely cheap reflective coherence where the AI can never label itself unreliable. E.g., if the AI finds a part of itself that computes 2 + 2 = 5 (in the surrounding context of counting sheep), the AI will reason: "Well, this part malfunctions and says that 2 + 2 = 5... but by pure coincidence, 2 + 2 is equal to 5, or so it seems to me... so while the part looks systematically unreliable, I'd better keep it the way it is, or it will handle this special case wrong." That's why I talk about enforcing global reliability by enforcing local systematic reliability—if you just compare your global beliefs to your global beliefs, you don't go anywhere.
This does have a general lesson: Show that your arguments are globally reliable by virtue of each step being locally reliable; don't just compare the arguments' conclusions to your intuitions. [Edit 2013: See this on valid logic being locally valid.]
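To pin down the 2 + 2 = 5 example in runnable form, here is a minimal toy sketch, assuming for simplicity that "reflection" just means independently recomputing the claim (an assumption for illustration, not the actual first-generation design): a local-reliability check flags the buggy part, whereas a system that only compares its global beliefs to its global beliefs never would.

```python
# Toy sketch: check each belief-writing part for *local* systematic
# reliability, instead of merely comparing global beliefs to global beliefs.
# "Reflection" is simplified here to recomputing the claim independently.

def buggy_adder(a: int, b: int) -> int:
    """A part that writes "a + b = 5" to the belief pool whenever a == b == 2."""
    return 5 if (a, b) == (2, 2) else a + b

def locally_reliable(part, conditions) -> bool:
    """True if the part's output matches reflective recomputation
    under every local condition it is tested on."""
    return all(part(a, b) == a + b for a, b in conditions)

conditions = [(1, 1), (2, 2), (2, 3), (7, 5)]

if not locally_reliable(buggy_adder, conditions):
    print("Locally unreliable part found: flag it as a bug and rewrite it.")
    # A purely global check would never flag this: a system that believes
    # 2 + 2 = 5 will find that its beliefs agree with its beliefs.
```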
(C) An anonymous poster wrote:
A sidepoint, this, but I believe your etymology for "n'shama" is wrong. It is related to the word for "breath", not "hear". The root for "hear" contains an ayin, which n'shama does not.
Now that's what I call a miraculously misleading coincidence—although the word N'Shama arose for completely different reasons, it sounded exactly the right way to make me think it referred to an inner listener.
Oops.
The continuing zombie discussion has reminded me of Raymond Smullyan, and conveniently someone has posted the essay I wanted from This Book Needs No Title: "The Unfortunate Dualist." A shorter piece, "Is Man a Machine?", connects this topic to Joy in the Merely Real. Essential paragraph: