In response to Zombies Redacted
Comment author: jkaufman 12 July 2016 07:58:08PM 1 point [-]

I was curious about the diff, specifically what sections were being removed. This is too long for a comment, so I'll post each one as a reply to this comment.

In response to comment by jkaufman on Zombies Redacted
Comment author: jkaufman 12 July 2016 07:59:54PM 1 point [-]

Mind you, I am not saying this is a substitute for careful analytic refutation of Chalmers's thesis. System 1 is not a substitute for System 2, though it can help point the way. You still have to track down where the problems are specifically.

Chalmers wrote a big book, not all of which is available through free Google preview. I haven't duplicated the long chains of argument where Chalmers lays out the arguments against himself in calm detail. I've just tried to tack on a final refutation of Chalmers's last presented defense, which Chalmers has not yet countered to my knowledge. Hit the ball back into his court, as it were.

But, yes, on a core level, the sane thing to do when you see the conclusion of the zombie argument is to say "That can't possibly be right" and start looking for a flaw.

In response to comment by jkaufman on Zombies Redacted
Comment author: jkaufman 12 July 2016 07:59:47PM 1 point [-]

I have a nonstandard perspective on philosophy because I look at everything with an eye to designing an AI; specifically, a self-improving Artificial General Intelligence with stable motivational structure.

When I think about designing an AI, I ponder principles like probability theory, the Bayesian notion of evidence as differential diagnostic, and above all, reflective coherence. Any self-modifying AI that starts out in a reflectively inconsistent state won't stay that way for long.

If a self-modifying AI looks at a part of itself that concludes "B" on condition A—a part of itself that writes "B" to memory whenever condition A is true—and the AI inspects this part, determines how it (causally) operates in the context of the larger universe, and the AI decides that this part systematically tends to write false data to memory, then the AI has found what appears to be a bug, and the AI will self-modify not to write "B" to the belief pool under condition A.
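The audit described above can be sketched as a toy program. Everything here (the history format, the accuracy threshold, the function names) is an invented illustration of the idea, not an actual AI design:

```python
# Toy illustration (entirely hypothetical): a system audits one of its own
# belief-writing rules and disables it if it systematically tends to write
# false data to memory.

def audit_rule(history):
    """history: (condition_A_held, B_was_actually_true) pairs.
    Returns the rule's accuracy over the cases where it fired,
    or None if it never fired."""
    fired = [truth for held, truth in history if held]
    if not fired:
        return None  # the rule never fired; nothing to audit
    return sum(fired) / len(fired)

belief_pool = set()

def maybe_apply(condition_held, history, threshold=0.5):
    """Self-modification step: keep the rule only if it looks reliable."""
    accuracy = audit_rule(history)
    if accuracy is not None and accuracy < threshold:
        return False  # apparent bug found: stop writing "B" under condition A
    if condition_held:
        belief_pool.add("B")
    return True
```

A rule that wrote "B" falsely in two out of three firings would be dropped by this check; one that was right every time would be kept.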

Any epistemological theory that disregards reflective coherence is not a good theory to use in constructing self-improving AI. This is a knockdown argument from my perspective, considering what I intend to actually use philosophy for. So I have to invent a reflectively coherent theory anyway. And when I do, by golly, reflective coherence turns out to make intuitive sense.

So that's the unusual way in which I tend to think about these things. And now I look back at Chalmers:

The causally closed "outer Chalmers" (that is not influenced in any way by the "inner Chalmers" that has separate additional awareness and beliefs) must be carrying out some systematically unreliable, unwarranted operation which in some unexplained fashion causes the internal narrative to produce beliefs about an "inner Chalmers" that are correct for no logical reason in what happens to be our universe.

But there's no possible warrant for the outer Chalmers or any reflectively coherent self-inspecting AI to believe in this mysterious correctness. A good AI design should, I think, look like a reflectively coherent intelligence embodied in a causal system, with a testable theory of how that selfsame causal system produces systematically accurate beliefs on the way to achieving its goals.

So the AI will scan Chalmers and see a closed causal cognitive system producing an internal narrative that is uttering nonsense. Nonsense that seems to have a high impact on what Chalmers thinks should be considered a morally valuable person.

This is not a necessary problem for Friendly AI theorists. It is only a problem if you happen to be an epiphenomenalist. If you believe either the reductionists (consciousness happens within the atoms) or the substance dualists (consciousness is causally potent immaterial stuff), then people talking about consciousness are talking about something real, and a reflectively consistent Bayesian AI can see this by tracing back the chain of causality for what makes people say "consciousness".

In response to comment by jkaufman on Zombies Redacted
Comment author: jkaufman 12 July 2016 07:59:29PM 1 point [-]

... (Argument from career impact is not valid, but I say it to leave a line of retreat.)

Chalmers critiques substance dualism on the grounds that it's hard to see what new theory of physics, what new substance that interacts with matter, could possibly explain consciousness. But property dualism has exactly the same problem. No matter what kind of dual property you talk about, how exactly does it explain consciousness?

When Chalmers postulated an extra property that is consciousness, he took that leap across the unexplainable. How does it help his theory to further specify that this extra property has no effect? Why not just let it be causal?

If I were going to be unkind, this would be the time to drag in the dragon—to mention Carl Sagan's parable of the dragon in the garage. "I have a dragon in my garage." Great! I want to see it, let's go! "You can't see it—it's an invisible dragon." Oh, I'd like to hear it then. "Sorry, it's an inaudible dragon." I'd like to measure its carbon dioxide output. "It doesn't breathe." I'll toss a bag of flour into the air, to outline its form. "The dragon is permeable to flour."

One motive for trying to make your theory unfalsifiable is that deep down you fear to put it to the test. Sir Roger Penrose (physicist) and Stuart Hameroff (anesthesiologist) are substance dualists; they think that there is something mysterious going on in quantum mechanics, that Everett is wrong and the "collapse of the wave-function" is physically real, and that this is where consciousness lives and how it exerts causal effect upon your lips when you say aloud "I think therefore I am." Believing this, they predicted that neurons would protect themselves from decoherence long enough to maintain macroscopic quantum states.

This is in the process of being tested, and so far, prospects are not looking good for Penrose—

—but Penrose's basic conduct is scientifically respectable. Not Bayesian, maybe, but still fundamentally healthy. He came up with a wacky hypothesis. He said how to test it. He went out and tried to actually test it.

As I once said to Stuart Hameroff, "I think the hypothesis you're testing is completely hopeless, and your experiments should definitely be funded. Even if you don't find exactly what you're looking for, you're looking in a place where no one else is looking, and you might find something interesting."

So a nasty dismissal of epiphenomenalism would be that zombie-ists are afraid to say the consciousness-stuff can have effects, because then scientists could go looking for the extra properties, and fail to find them.

I don't think this is actually true of Chalmers, though. If Chalmers lacked self-honesty, he could make things a lot easier on himself.

(But just in case Chalmers is reading this and does have falsification-fear, I'll point out that if epiphenomenalism is false, then there is some other explanation for that-which-we-call consciousness, and it will eventually be found, leaving Chalmers's theory in ruins; so if Chalmers cares about his place in history, he has no motive to endorse epiphenomenalism unless he really thinks it's true.)

In response to comment by jkaufman on Zombies Redacted
Comment author: jkaufman 12 July 2016 07:59:08PM 1 point [-]

The zombie argument does not rest solely on the intuition of the passive listener. If this were all there was to the zombie argument, it would be dead by now, I think. The intuition that the "listener" can be eliminated without effect would go away as soon as you realized that your internal narrative routinely seems to catch the listener in the act of listening.

In response to comment by jkaufman on Zombies Redacted
Comment author: jkaufman 12 July 2016 07:58:58PM 1 point [-]

By supposition, the Zombie World is atom-by-atom identical to our own, except that the inhabitants lack consciousness. Furthermore, the atoms in the Zombie World move under the same laws of physics as in our own world. If there are "bridging laws" that govern which configurations of atoms evoke consciousness, those bridging laws are absent. But, by hypothesis, the difference is not experimentally detectable. When it comes to saying whether a quark zigs or zags or exerts a force on nearby quarks—anything experimentally measurable—the same physical laws govern.

The Zombie World has no room for a Zombie Master, because a Zombie Master has to control the zombie's lips, and that control is, in principle, experimentally detectable. The Zombie Master moves lips, therefore it has observable consequences. There would be a point where an electron zags, instead of zigging, because the Zombie Master says so. (Unless the Zombie Master is actually in the world, as a pattern of quarks—but then the Zombie World is not atom-by-atom identical to our own, unless you think this world also contains a Zombie Master.)

When a philosopher in our world types, "I think the Zombie World is possible", his fingers strike keys in sequence: Z-O-M-B-I-E. There is a chain of causality that can be traced back from these keystrokes: muscles contracting, nerves firing, commands sent down through the spinal cord, from the motor cortex—and then into less understood areas of the brain, where the philosopher's internal narrative first began talking about "consciousness".

And the philosopher's zombie twin strikes the same keys, for the same reason, causally speaking. There is no cause within the chain of explanation for why the philosopher writes the way he does which is not also present in the zombie twin. The zombie twin also has an internal narrative about "consciousness", which a super-fMRI could read out of the auditory cortex. And whatever other thoughts, or other causes of any kind, led to that internal narrative, they are exactly the same in our own universe and in the Zombie World.

So you can't say that the philosopher is writing about consciousness because of consciousness, while the zombie twin is writing about consciousness because of a Zombie Master or AI chatbot. When you trace back the chain of causality behind the keyboard, to the internal narrative echoed in the auditory cortex, to the cause of the narrative, you must find the same physical explanation in our world as in the zombie world.

In response to comment by jkaufman on Zombies Redacted
Comment author: jkaufman 12 July 2016 07:58:48PM 1 point [-]

One of the great battles in the Zombie Wars is over what, exactly, is meant by saying that zombies are "possible". Early zombie-ist philosophers (the 1970s) just thought it was obvious that zombies were "possible", and didn't bother to define what sort of possibility was meant.

Because of my reading in mathematical logic, what instantly comes into my mind is logical possibility. If you have a collection of statements like (A->B),(B->C),(C->~A) then the compound belief is logically possible if it has a model—which, in the simple case above, reduces to finding a value assignment to A, B, C that makes all of the statements (A->B),(B->C), and (C->~A) true. In this case, A=B=C=0 works, as does A=0, B=C=1 or A=B=0, C=1.
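The model-finding step above is small enough to check directly. This is a brute-force sketch (my own illustration, not part of the original argument) that enumerates every value assignment to A, B, C and keeps the ones satisfying all three statements:

```python
from itertools import product

def implies(p, q):
    """Material implication: p -> q is false only when p is true and q is false."""
    return (not p) or q

def satisfied(a, b, c):
    """The three statements: (A->B), (B->C), (C->~A)."""
    return implies(a, b) and implies(b, c) and implies(c, not a)

# Enumerate all 2^3 assignments and collect the models.
models = [(a, b, c) for a, b, c in product([False, True], repeat=3)
          if satisfied(a, b, c)]
# Exactly the three assignments listed in the text:
# A=B=C=0; A=0, B=C=1; A=B=0, C=1.
```

The enumeration recovers precisely the three value assignments given above, confirming the compound belief is logically possible.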

Something will seem possible—will seem "conceptually possible" or "imaginable"—if you can consider the collection of statements without seeing a contradiction. But it is, in general, a very hard problem to see contradictions or to find a full specific model! If you limit yourself to simple Boolean propositions of the form ((A or B or C) and (B or ~C or D) and (D or ~A or ~C) ...), conjunctions of disjunctions of three variables, then this is a very famous problem called 3-SAT, which is one of the first problems ever to be proven NP-complete.
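As a concrete sketch (my own illustration), the example formula above—(A or B or C) and (B or ~C or D) and (D or ~A or ~C)—can be encoded as clauses of integer literals and checked by brute force, which takes time exponential in the number of variables; this is exactly why "I see no contradiction" is cheap while verifying logical possibility is hard in general:

```python
from itertools import product

# Encoding: positive int i means variable i; negative -i means its negation.
# Variables: A=1, B=2, C=3, D=4.
clauses = [(1, 2, 3), (2, -3, 4), (4, -1, -3)]

def satisfiable(clauses, n_vars):
    """Brute-force 3-SAT check: try all 2^n_vars assignments and return
    the first one satisfying every clause, or None if no model exists."""
    for assignment in product([False, True], repeat=n_vars):
        def lit_true(lit):
            value = assignment[abs(lit) - 1]
            return value if lit > 0 else not value
        if all(any(lit_true(lit) for lit in clause) for clause in clauses):
            return assignment
    return None
```

For this instance a model exists (e.g. C and D true, A and B false), so the formula is logically possible; an instance like (A) and (~A) would come back with no model.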

So just because you don't see a contradiction in the Zombie World at first glance, it doesn't mean that no contradiction is there. It's like not seeing a contradiction in the Riemann Hypothesis at first glance. The leap from conceptual possibility ("I don't see a problem") to logical possibility in the full technical sense is a very great one. It's easy to make it an NP-complete leap, and with first-order theories you can make it arbitrarily hard to compute even for finite questions. And it's logical possibility of the Zombie World, not conceptual possibility, that is needed to suppose that a logically omniscient mind could know the positions of all the atoms in the universe, and yet need to be told as an additional non-entailed fact that we have inner listeners.

In response to comment by jkaufman on Zombies Redacted
Comment author: jkaufman 12 July 2016 07:58:36PM 1 point [-]

Zombie-ism is not the same as dualism. Descartes thought there was a body-substance and a wholly different kind of mind-substance, but Descartes also thought that the mind-substance was a causally active principle, interacting with the body-substance, controlling our speech and behavior. Subtracting out the mind-substance from the human would leave a traditional zombie, of the lurching and groaning sort.

And though the Hebrew word for the innermost soul is N'Shama, that-which-hears, I can't recall hearing a rabbi arguing for the possibility of zombies. Most rabbis would probably be aghast at the idea that the divine part which God breathed into Adam doesn't actually do anything.

In response to comment by jkaufman on Zombies Redacted
Comment author: jkaufman 12 July 2016 07:58:28PM 1 point [-]

(Warning: Long post ahead. Very long 6,600-word post involving David Chalmers ahead. This may be taken as my demonstrative counterexample to Richard Chappell's Arguing with Eliezer Part II, in which Richard accuses me of not engaging with the complex arguments of real philosophers.)

Comment author: Gleb_Tsipursky 18 May 2016 01:18:17AM -1 points [-]

some of them do

I think we're on slightly different semantic grounds here. "Paid likes" is a specific practice, one that we've never engaged in, because it's highly counterproductive to creating an engaged FB community.

Now, are there people we pay who also like our FB posts? Sure. They are the ones who most consistently like them. This is one reason we hired them to work for us. It's a pretty typical thing to do for a nonprofit to hire on volunteers who are passionate about the cause.

getting into effective altruism

I accept that you're skeptical. Here's an example of one of our virtual assistants describing his getting into EA.

Comment author: jkaufman 18 May 2016 11:44:34AM 1 point [-]

"Paid likes" is a specific practice, one that we've never engaged in

Sorry, yes, you're interpreting my use of "paid likes" as being a very specific thing, and I mean it differently. Specifically, I'm talking about accounts that (a) click like and (b) are operated by someone who received money from InIn and (c) wouldn't have done (a) without (b).
