A Dr. Nigel Thomas has tried to show a logical self-contradiction in Chalmers' "Zombie World" or zombiphile argument, in a way that would convince Chalmers (given perfect rationality, of course). The argument concerns the claim that we can conceive of a near-duplicate of our world sharing all of its physical laws and containing apparent duplicates of us, who copy everything we say or do for the same physical reasons that lead us to do it, but who lack consciousness. Chalmers says this shows the logical possibility of Zombie World (though nobody believes that it actually exists). He concludes from this that our world has a nonphysical "bridging law" saying that "systems with the same functional organization have the same sort of conscious experiences," and that we must regard this law as logically independent of the world's physical laws. Thomas' response includes the following point:
Thus zombiphiles normally (and plausibly) insist that we know of our own consciousness directly, non-inferentially. Even so, there must be some sort of cognitive process that takes me from the fact of my consciousness to my (true) belief that I am conscious. As my zombie twin is cognitively indiscernible from me, an indiscernible process, functioning in just the same way, must lead it from the fact of its non-consciousness to the equivalent mistaken belief. Given either consciousness or non-consciousness (and the same contextual circumstances: ex hypothesi, ceteris is entirely paribus) the process leads one to believe that one is conscious. It is like a stuck fuel gauge that reads FULL whether or not there is any gas in the tank.
While I think his full response has some flaws, it seems better than anything I produced on the subject — perhaps because I didn't try very hard to find a contradiction. Thomas tries to form a trilemma by arguing that we can't regard statements made by Zombie Chalmers about consciousness as true, or false, or meaningless. (If you think they deserve the full range of probabilistic truth values, then for the moment let "true" mean at least as much confidence as we place in our own equivalent statements and let "false" mean any lesser value.) But the important lemma requires an account of knowledge in order to work. To run through the other two lemmas: we know the zombie statements have meaning or truth values for us, the people postulating them, so let the rest of the argument apply to those meanings. And if we call the statements true then they must by assumption refer to something other than consciousness — call it consciousness-Z. But then Zombie Hairyfigment will (by assumption) say, "I have real consciousness and not just consciousness-Z." (This also seems to reduce probabilistically "false" statements to ordinary falsity, still by assumption.)
The remaining lemma tries to draw a contradiction out of the zombiphile argument's assumption that Chalmers has knowledge of his own consciousness. We need some way to recognize or rule out knowledge in order for this to work. Happily, on this one question standard philosophy seems to point clearly towards the answer we want.
(One Robert Bass apparently makes a related point in his paper, Chalmers and the Self-Knowledge Problem (pdf). But I took the argument in a slightly different direction by asking what the philosophers actually meant.)
Gettier intuitions: How do they work?
The famous Gettier Problem illustrates a flaw in the verbal definition of knowledge as "justified true belief". Gettier's original case takes a situation in the works of Plato involving Socrates (Theaetetus 142d), and modifies it to make the situation clearer. In Gettier's version, S and his friend Jones have both applied for a job. S heard the president of the company say the job would go to Jones. S has also just counted the ten coins in Jones' pocket, and therefore believes that "The man who will get the job has ten coins in his pocket." But it turns out the job goes to S, who, unbeknownst to himself, happened to have ten coins in his own pocket. He therefore had a true belief that seems justified, but I don't know of anyone who believes it should count as knowledge.
Some philosophers responded to this by saying that in addition to justification and truth, a belief needs to have no false lemmas hiding in its past in order to count as knowledge. But this led to the appearance of the following counterexample, as told in the historical summary here:
Fake Barn Country: Henry is looking at a (real) barn, and has impeccable visual and other evidence that it is a barn. He is not gettiered; his justification is sound in every way. However, in the neighborhood there are a number of fake, papier-mâché barns, any of which would have fooled Henry into thinking it was a barn.
Henry does not appear to use any false lemmas in forming his belief, at least not explicit lemmas like the one S used in the first problem. Yet most philosophers do not believe Henry has knowledge when he says, 'Hey, a barn,' since he would have thought this whether he saw a real barn or a barn facade. Interestingly, a lot of ordinary people may not share this intuition with the philosophers, or may take a different position on the matter in different contexts. I will try to spell out what the Gettier intuitions actually point towards before judging them, in the hope that they point to something useful. For now we can call their object 'G-knowledge' or 'G-nosis'. (That part doesn't seem like proper Rationalist Taboo, but as far as I can tell my laziness has no fatal consequences.)
At one time I thought we could save the verbal definition and sweep all the Gettier cases into the No False Lemmas basket by requiring S to reject any practical possibility of deception or self-deception before his or her belief could possibly count as knowledge. This, however, does not work. The reason it fails gives me an excuse to quote a delightful Gettier case (also from Lycan's linked historical summary) involving an apparent AI who knows better than to take anything humans say at face value:
Noninferential Nogot (Lehrer 1965; 1970). Mr. Nogot in S’s office has given S evidence that he, Nogot, owns a Ford. By a single probabilistic inference, S moves directly (without passing through ‘Nogot owns a Ford’) to the conclusion that someone in S’s office owns a Ford. (As in any such example, Mr. Nogot does not own a Ford, but S’s belief happens to be true because Mr. Havit owns one.)
Cautious Nogot (Lehrer 1974; sometimes called ‘Clever Reasoner’). This is like the previous example, except that here S, not caring at all who it might be that owns the Ford and also being cautious in matters doxastic, deliberately refrains from forming the belief that Nogot owns it.
The Cautious AI has evidently observed a link between claims of Ford ownership and the existence of Fords which seem to 'belong' to some human in the vicinity. But this S believes only that humans may have a greater tendency to say they own a Ford when somebody nearby owns one. I can think of postulates that would justify this belief, but let's assume none of them hold true. Then S will modify some of its numerical 'assumptions' if it learns the truth about the link. In principle we could keep using my first attempt at a definition if not for this:
And there is the obvious sort of counterexample to the necessity of ‘no-false-lemmas’ (Saunders and Champawat 1964; Lehrer 1965). Nondefective Chain: If S has at least one epistemically justifying and non-Gettier-defective line of justification, then S knows even if S has other justifying grounds that contain Gettier gaps. For example (Lehrer), suppose S has overwhelming evidence that Nogot owns a Ford and also overwhelming evidence that Havit owns one. S then knows that someone in the office owns a Ford, because S knows that Havit does and performs existential generalization; it does not matter that one of S’s grounds (S’s belief that Nogot owns a Ford) is false.
By the time I saw this problem I'd already tried to add something about an acceptable margin of error ε, changing what my definition said about "practical possibility" to make it agree with Bayes' Theorem. But at this point I had to ask if the rest of my definition actually did anything. (No.)
From this perspective it seems clear that in each Gettier case where S lacks G-nosis, the reader has more information than S, and that extra information leads to a different set of probabilities. I'll start nailing down what that means shortly. First let's look at the claim that G-nosis obeys Bayes.
My new definition leads to a more generous view of Henry or S in the simple case of No Fake Barns. Previously I would have said that S lacked knowledge both in Fake Barn Country and in the more usual case. But assume that S has unstated estimates of probability, which would change if a pig in a cape appeared to fly out of the barn and take to the sky. (If we assume a lack of even potential self-doubt, I have no problem saying that S lacks knowledge.) It looks like in many cases the Gettier intuitions allow vague verbal or even implied estimates, so long as we could translate each into a rough number range, neither end of which differs by more than ε from the 'correct' value. S would then have G-nosis for sufficiently forgiving values of ε.
And I do mean values, one ε for each 'number' that S uses. G-nosis must include a valid chain of probabilistic reasoning that begins from S's actual starting point, and in which no value anywhere differs by more than ε from the value an omniscient human reader would assign. If you think that last part hides a problem or five, give yourself a pat on the back. But we can make it seem less circular by taking "omniscient" to mean that for every true claim which requires Bayesian adjustment, our reader uses the appropriate numbers. (I'd call the all-knowing reader 'Kyon,' except I suspect we'll wind up excluding too many people even without this.)
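To pin the criterion down a little, here it is in symbols. This is only a sketch in my own notation (the p_i are the values along S's chain of reasoning, the "reader" is the omniscient human reader just described, and each ε_i is that value's acceptable margin):

```latex
% A sketch of the criterion in my own notation: S has G-nosis of A just in
% case some valid chain of probability values runs from S's actual starting
% point to S's belief in A, with every value within its own margin of the
% omniscient reader's value.
\[
  \text{S has G-nosis of } A
  \;\iff\;
  \exists\,(p_1,\dots,p_n)\ \text{from S's starting point to } P_S(A)
  \ \text{such that}\ \lvert p_i - p_i^{\mathrm{reader}} \rvert \le \varepsilon_i \ \text{for every } i.
\]
```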
Note for newcomers: this applies to logical evidence as well. We can treat standard true/false logic as a special case or limit of probability as we approach total certainty. 'Evidence' in probability means any fact to which you assign a greater chance of truth on the assumption of some belief A than on the assumption of not-A. This clearly applies to seeing a proof of the belief. If you assume that you could never both see a valid proof of A and later have to accept not-A as true, then you can just plug in zero for the probability of seeing the proof in the world of not-A, and seeing the proof gives A a probability of 100%. So our proposed definition of knowledge seems general enough to include abstract math and concrete everyday 'knowledge'.
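For anyone who wants that last step written out, it's just Bayes' theorem with the proof-in-a-not-A-world term sent to zero; nothing beyond what the note already says (and it quietly assumes P(A) didn't start at zero):

```latex
% Bayes' theorem for belief A given evidence E. If a valid proof of A could
% never appear in a not-A world, then P(E|not-A) = 0, and seeing the proof
% drives P(A|E) to 1 (provided P(A) was not zero to begin with).
\[
  P(A \mid E)
  = \frac{P(E \mid A)\,P(A)}{P(E \mid A)\,P(A) + P(E \mid \neg A)\,P(\neg A)}
  \;\longrightarrow\; 1
  \qquad \text{as } P(E \mid \neg A) \to 0 .
\]
```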
Another note about our definition, chiefly for newcomers: the phrase "every true claim" needs close attention. In principle Gödel tells us that every useful logical system (including Bayes) will produce statements it can't prove or disprove with logical certainty. In particular it can't prove its own logical self-consistency (unless it actually contradicts itself). If we regard the statement with the greater probability for us as true, that gives us a new system that creates a new unprovable statement, and so on. But none of these new axioms would change the truth values of statements we could prove within the old system. If we treat mathematically proven statements as overwhelmingly likely but not certain — if we say that for any real-world system that acts like math in certain basic ways, mathematical theorems have overwhelming probability — then it seems like none of the new axioms would have much effect on the probability of any statement we've been claiming to know (or have G-nosis of). In fact, that seems like our reason for wanting to call the "axioms" true. So I don't think they affect any practical case of G-nosis.

We already more or less defined our reader as the limit of a human updating by Bayes (as the set of true statements that no longer require changes approaches the set of all true statements). Hopefully for any given statement we could have G-nosis of, we can define a no-more-than-countably-infinite set of true statements or pieces of evidence that get the job done by creating such a limit. I think I'll just assert that for every such G-knowable statement A, at worst there exists a sequence of evidence such that A's new probabilities (as we take more of the sequence into account) converge to some limit, and this limit does not change for any additional truth(s) we could throw into the sequence.

Humanity's use of math acts like part of a sequence that works in this way, and because of this the set of unprovable Gödel "axioms" looks unnecessary. At this point we might not even need the part about a human reader. But I'll keep it for a little longer in case some non-human reader lacks (say) the Gettier intuition regarding counterfactuals. More on that later.
Third note chiefly for newcomers: I haven't addressed the problem of getting the numbers we plug in to the equation, or 'priors'. We know that by choosing sufficiently wrong priors, we can resist any push towards the right answer by finite real-world evidence. This seems less important if we use my limit-definition but still feels like a flaw, in theory.

In practice, humans seem to form explanations by comparing the world to the output of black boxes that we carry inside of ourselves, and to which we've affixed labels such as "anger" or "computation". I don't think it matters if we call these boxes 'spiritual' or 'physical' or 'epiphenomenal', so long as we admit that stuff goes in, stuff comes out and for the most part we don't know how the boxes work. Now a vast amount of evidence suggests that if you started from the fundamental nature of reality and tried using it to duplicate the output of the "anger" box (or one of the many 'love' boxes), you'd need to add more conditions or assumptions than the effects of "computation" would require. Even if you took the easy way out and tried to copy a truly opaque box without understanding it, you'd need complicated materials and up to nine months of complex labor for the smallest model. (Also, how would a philosopher postulate love without postulating at least the number 2?)

New evidence about reality could of course change this premise. Evidence always changes some of the priors for future calculations. But right now, whenever all else seems equal, I have to assign a greater probability to an explanation which uses only my "computation" box than to one which uses other boxes. (Very likely the "beauty" box plays a role in all cases. But if you tell me you can explain the world through beauty alone, I probably won't believe you.) This means I must tentatively conclude our omniscient reader would use a similar way of assigning priors. And further differences seem unlikely to matter in the limit. Even for real-world scenarios, the assumption of "sufficiently wrong priors" now looks implausible. (Humans didn't actually evolve to get the wrong answer. We just didn't evolve to get the right one. Our ancestors slid in under the margin of error.) All of which seems to let our reader assign a meaning to Eliezer's otherwise fallacious comment here.
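To put the "sufficiently wrong priors" worry in numbers, here is a minimal sketch with made-up figures of my own (the function name and the specific odds are just illustration, nothing from the post or the linked comment): a modest prior gets dragged to the right answer by twenty bits of evidence, while an absurdly extreme one shrugs off nine hundred.

```python
# A minimal sketch (my own toy numbers) of how a sufficiently wrong prior
# resists any finite amount of evidence. We update in odds form:
# posterior odds = prior odds * (likelihood ratio)^n, where each independent
# piece of evidence has likelihood ratio P(E|A) / P(E|not-A) = 2,
# i.e. one bit of evidence apiece.

def posterior_probability(prior_odds: float, likelihood_ratio: float, n_pieces: int) -> float:
    """Posterior P(A) after n_pieces of independent evidence, via the odds form of Bayes."""
    odds = prior_odds * likelihood_ratio ** n_pieces
    return odds / (1 + odds)

# A merely modest prior against A: twenty bits of evidence settle the question.
print(posterior_probability(prior_odds=1e-3, likelihood_ratio=2, n_pieces=20))   # ~0.999

# A "sufficiently wrong" prior: nine hundred bits of evidence barely move it.
print(posterior_probability(prior_odds=2.0 ** -1000, likelihood_ratio=2, n_pieces=900))  # ~8e-31
```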
(I had a line that I can't bear to remove entirely about the two worlds of Parmenides, prior probability vs the evidence, and timeless physics compared to previous attempts at reconciliation. But I don't think spelling it out further adds to this argument.)
Having established the internal consistency of our definition, we still need to look for Gettier counterexamples before we can call it an account of G-nosis. The ambiguity of Nogot allows for one test. If we assume that people do in fact show a greater chance of saying they own a Ford when somebody nearby owns one, and that S would not need to adjust any prior by more than ε, it seems to me that S does have knowledge. But we need more than hindsight to prove our case. Apparent creationist nut Robert C. Koons has three attempts at a counterexample in his book Realism Regained (though he doesn't seem to look at our specific Bayesian definition). We can dismiss one attempt as giving S an obvious false prior. Another says that S would have used a false prior if not for a blow to the head. This implies that S has no Bayes-approved chain of reasoning from his/her actual starting point to the conclusion. Finally, Koons postulates that an "all-powerful genie" controlled the evidence for reasons unrelated to the truth of the belief A, and the result happens to lead to the 'correct' value. But if our 'human reader' would not consider this result knowledge, then 'correct' must not mean what I've called G-nosis. Apparently the reader imagines the genie making a different whimsical decision, and calculates different results for most of the many other possible whims it could follow. This results in a high reader-assigned probability, which we call P(E|¬A) or the probability of E given not-A, that the evidence appears to S the way it does even if one treats S's belief as false. And so S still would not have G-nosis by our definition.
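As I read the genie case, the reader's number comes from averaging over the genie's possible whims; in symbols (my formalization, not Koons'), where w ranges over whims:

```latex
% The reader sums over every whim w the genie might have followed. If most
% whims would have produced the same appearance E whether or not A held, then
% P(E|not-A) comes out high, and E cannot single A out to within epsilon.
\[
  P(E \mid \neg A) \;=\; \sum_{w} P(w)\, P(E \mid w, \neg A) .
\]
```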
This seems to pinpoint the disagreement between intuitions in the case of Fake Barn. People who deny that S in Fake Barn has knowledge believe that P(E|¬A) has a meaning even if we think P(¬A)=0 — in the limit, perhaps, or in a case where someone mistakenly set the prior probability of A at 100% because the evidence seems so directly and clearly known that they counted it twice. Obviously if we also require that no value anywhere differ by too much from the reader's, then a sufficiently 'wrong' P(E|¬A) rules out G-nosis. (This seems particularly true if it equals P(E|A), since E would no longer count as evidence at all.) If the reader adds what S might call a 'no-silliness' assumption, ruling out any effect of the barn facades on P(E|¬A), then S does have G-nosis. Though of course this would make less sense if our reader brought out the 'silly' counterfactual in the first place.
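The parenthetical remark is just the odds form of Bayes with a likelihood ratio of one; a one-line check:

```latex
% If P(E|A) = P(E|not-A), the likelihood ratio equals 1, and observing E
% leaves the odds on A exactly where the prior put them, so E is no evidence.
\[
  \frac{P(A \mid E)}{P(\neg A \mid E)}
  \;=\;
  \frac{P(E \mid A)}{P(E \mid \neg A)} \cdot \frac{P(A)}{P(\neg A)}
  \;=\;
  \frac{P(A)}{P(\neg A)} .
\]
```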
I'd love to see how the belief in knowledge for Fake Barn correlates with getting the wrong answer on a logic test due to judging the conclusion ("sharks are fish") instead of the argument or evidence ("fish live in the water" & "sharks live in the water").
Finally, the Zombie World story itself has a form similar to a Gettier case. If we just assume two 'logically possible worlds' then the probability of Chalmers arriving at the truth about his own experiences, given only the process he used to reach his beliefs, appears to equal 50%. Clearly this does not count as G-nosis, nor would we expect it to. But let's adjust the number of worlds so that the probability of consciousness, given the process by which Chalmers came to believe in his consciousness, equals 1-ε. (Technically we should also subtract any uncertainty Chalmers will admit to, but I think he agrees that if this number grows large enough he no longer has G-nosis of the belief in question, rather than the contrary belief or the division of probability between them.) His belief now fits the definition. And it intuitively seems like knowledge, since ε means the chance of error that I'd find acceptable. This seems like very good news for our definition.
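Written out, the two-world figure and the adjusted version look like this. It's a sketch under simplifying assumptions of my own, namely that the candidate worlds start out equiprobable and that the belief-forming process E runs identically in each of them (N and M are just labels for the counts):

```latex
% Two equiprobable worlds, one conscious and one zombie, each containing the
% same process E: E carries no information, so the posterior stays at 1/2.
% With N 'conscious' worlds out of M, the posterior becomes N/M, and the
% definition asks for N/M >= 1 - epsilon.
\[
  P(\text{conscious} \mid E) = \frac{\tfrac12}{\tfrac12 + \tfrac12} = \frac12 ,
  \qquad\text{and more generally}\qquad
  P(\text{conscious} \mid E) = \frac{N}{M} \;\ge\; 1 - \varepsilon .
\]
```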
What about the zombies?
Before going on to the obvious question, I want to make sure we double-tap. Back to the first zombie! Until now I've tried to treat "G-nosis" as a strange construct that philosophers happen to value. Now that we have the definition, however, it seems to spell out what we need in order for our beliefs to fit the facts. Even the assumption that P(E|¬A) obeys a reasonable definition of counterfactuals in the limit seems necessary in that, if we assume this doesn't happen, we suddenly have no reason to think our beliefs fit the facts. "G-nosis" illustrates the process of seeking reality. If our beliefs don't fit the definition, then extending this process will bring them into conflict with reality — unless we somehow arrived at the right answer by chance without having any reason to believe it. I think the zombiphile argument does assume that we have reason to believe in our own consciousness, and no omniscient reader could disagree.
We gave Chalmers a 50% chance of having conscious experiences, what philosophers call "qualia," given his evidence and the assumption of the two worlds. But he would object that he used qualia to reach his conclusion of having them — not in the sense of causation, but in the sense of giving his conclusion meaning. When he says he knows qualia, this sounds exactly like Z-Chalmers' belief, but gains added content from his actual qualia. Our definition, however, requires him to use probabilistic reasoning on some level before he can trust his belief. This appears to mean that some time elapses between the qualia he (seemingly) uses to form his belief, and the formally acceptable conclusion. If we assume Zombie World exists, then the evidence available when the conclusion appears seems just like the evidence available to Z-Chalmers. So it seems like this added content appears after the fact, like the content we give to statements by Z-Chalmers. And if the real Chalmers treats it as new evidence then the same argument applies. So much for Zombie #1.
So does the addition of worlds save the zombiphile argument? I don't know. (We always see one bloody zombie left at the end of the film!) But anything that could deceive me about the existence of my own qualia seems able to deceive me about '2+2=4'. I therefore argue that, in this particular case, ε should not exceed the chance that Descartes' Demon is deceiving me now, or that arithmetic contradicts itself, and 2 plus 2 can equal 3. Obviously it seems like a stretch to call this a logical possibility.
Even in the zombiphile argument, we can't regard the "bridging law" that creates consciousness as wholly independent from physical or inter-subjectively functional causes. We must therefore admit a strong a priori connection between a set of physical processes (a part of what Chalmers calls "microphysical truths") and human experience or "phenomenal truths". This brings us a lot closer to agreement, since the Bayesian forms of physicalism claim only an overwhelming probability for the claim that necessity links a particular set of causes to consciousness. And if we trust that our own consciousness exists, I argue this shows we must not believe in Zombie World as a meaningful possibility.
I feel like I should get this posted now, so I won't try to say what this means for a not-so-Giant Look-Up Table that simulates a Boltzmann Brain concluding it has qualia. Feel free to solve that in the comments.
I see your objection, and I see more clearly than before the need for thesis statements.
I want to show that given only a traditional assumption of philosophy (a premise of cogito ergo sum, I think), we must believe in the claim: "physical causes which duplicate Chalmers' actions and speech, and which we could never physically distinguish from Chalmers himself, would produce qualia." Let's call this belief A for convenience. (I called a different statement A in a previous comment, but that whole version of the argument seems flawed.) It so happens that if we accept this claim we must reject the existence of p-zombies, but we care about p-zombies only for what they might tell us about A.
To that end, I argue that if we accept Chalmers' zombiphile or anti-A argument C, which includes the assumption I just mentioned, we must logically believe A.
Therefore, I don't argue that "the cognitive mechanism by which we conclude we have qualia is not 100% reliable." I argue that we would have to accept a slightly more precise form of that claim if we accept C, and then I show some of the consequences. (Poorly, I think, but I can improve that part.)
Likewise, I don't argue that "we can never determine what is and is not logically possible." I argue that we must believe certain claims about logic and math, like the claim B that "arithmetic will never tell us '2+2=3'," due to the same thought process we use to judge all rational beliefs. Now that seems less important if the argument in that previous comment fails. But I still think intuitively that if the probability of you, the judge, having qualia (call this belief Q) would not equal 1 in the limit, then lim{P(B)} would not equal 1. This of course seems consistent, since we don't need to assign P(B)=1 now, and we'd have to if we believed with certainty that the limit = 1. But on this line of thinking we have to call ¬B logically possible without qualification, thereby destroying any practical or philosophical use for this kind of possibility unless we supplement it with more Bayesian reasoning. The same argument leads me to view adding a new postulate, like a bridging law or a string of Gödel statements, as entirely the wrong approach.
I think this allows me to make a stronger statement than Robert Bass, who, as I mentioned near the start of the post, turns out to make a closely related argument in the linked PDF, but does not explicitly try to define what philosophers normally call "knowledge". (I don't know if this accounts for the dearth of Google or Google Scholar results for "robert bass" and either chalmers or zombie.) Once I had my definition, perhaps I should have just pointed out that P(A|C)=P(Q|C) in the limit. Thus if someone who treats Q as certain has knowledge of Q (as I think C asserts), we can only escape the conclusion that we know A when we treat it as certain by giving P(A|C) a smaller acceptable difference ε from the limit. (Edited to remove mistake in expression.) Now I can certainly think of scenarios where the exact P(A|Q) would matter a lot. But since A has more specific conditions than 'uploading' and rules out more possible problems than either this or sleep, and since P(A|Q)>P(A|C), I think knowing the latter has a margin of error no greater than P(Q) would fully reassure me. (I guess we're imagining Omega telling me he'll reset me at some later time to exactly my current physical state, which carries other worries but doesn't make me fear zombie-hood as such.) And it seems inarguable that P(A|Q)>P(A|C), since C makes the further assumption that we'll never find a contradiction in ¬A. You'll notice this works out to P(A|Q)>P(Q|C), which by assumption seems pretty fraking certain.
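For reference, here are the relations that paragraph leans on, written compactly (A, Q, and C as defined above; the first equality is meant to hold in the limit, so the final inequality should be read the same way):

```latex
% P(A|C) = P(Q|C) in the limit; P(A|Q) > P(A|C) because C assumes in addition
% that no contradiction will ever turn up in not-A; combining the two gives
% P(A|Q) > P(Q|C).
\[
  \lim P(A \mid C) = \lim P(Q \mid C) ,
  \qquad
  P(A \mid Q) > P(A \mid C) ,
  \qquad\text{hence}\qquad
  P(A \mid Q) > P(Q \mid C) .
\]
```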
To clarify before proceeding:
As written this is under-defined and doesn't even obviously contradict anything Chalmers says. What set of worlds does this 'would' apply to?