The eliminativist responds: The world would look the same to me (a complex brain process) if dualism were true. But it would not look the same to the immaterial ghost possessing me, and we could write a computer program that simulates an epiphenomenal universe, i.e., one where every brain causally produces a ghost that has no effects of its own. So dualism is meaningful and false, not meaningless.
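The eliminativist's proposed program can be sketched in a few lines. This is my own illustrative toy (all names and dynamics invented for the example): the physical update rule takes only the brain as input, while a separate "epiphenomenal law" maps each brain state to a ghost state that feeds back into nothing. Running the universe with and without ghosts yields the identical physical trajectory, which is the point of the thought experiment.

```python
def step_brain(brain):
    """Physical dynamics: the next brain state depends only on the brain."""
    return (brain * 31 + 7) % 1000  # arbitrary deterministic update


def emit_ghost(brain):
    """Epiphenomenal law: each brain state produces a ghost state."""
    return brain ^ 0xFF  # arbitrary brain-to-ghost mapping


def run(steps, with_ghosts=True):
    brain, history = 1, []
    for _ in range(steps):
        ghost = emit_ghost(brain) if with_ghosts else None
        history.append((brain, ghost))
        brain = step_brain(brain)  # note: ghost is never an input here
    return history


# The physical trajectory is identical whether or not ghosts exist:
physical = [b for b, _ in run(20, with_ghosts=True)]
zombie = [b for b, _ in run(20, with_ghosts=False)]
assert physical == zombie
```

Nothing hinges on the particular update rules; what matters is only the causal topology, with an arrow from brain to ghost and no arrow back.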
The dualist responds in turn: I agree that those two scenarios make sense. However, I disagree about which of those possible worlds the evidence suggests is our world. And I disagree about what sort of agent we are — experience reveals us to be phenomenal consciousnesses learning about whether there's also a physical world, not brains investigating whether there's also an invisible epiphenomenal spirit-world. The mental has epistemic priority over the physical.
We do have good reason to think we are epiphenomenal ghosts: Our moment-to-moment experience of things like that (ostending a patch of redness in my visual field) indicates that there is something within experience that is not strictly entailed by the physical facts. This category of experiential 'thats' I assign the label 'phenomenal consciousness' as a useful shorthand, but the evidence for this category is a perception-like introspective acquaintance, not an inference from other items of knowledge.
You and I agree, eliminativist, that we can ostend something about our moment-to-moment introspective data. For instance, we can gesture at optical illusions. I simply insist that one of those somethings is epistemically impossible given physicalism; we couldn't have such qualitatively specific experiences as mere arrangements of atoms, though I certainly agree we could have unconscious mental states that causally suffice for my judgments to that effect.
Eliminativist: Aren't you giving up the game the moment you concede that your judgments are just as well predicted by my interpretation of the data as by yours? If your judgments are equally probable given eliminativism as given dualism, then eliminativism wins purely on grounds of parsimony.
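The eliminativist's parsimony argument can be made precise in Bayesian terms. A minimal numeric sketch, with priors and likelihoods invented purely for illustration: if both hypotheses predict the dualist's judgments equally well, the likelihoods cancel, and the posterior odds simply equal the prior odds, which simplicity considerations tip toward eliminativism.

```python
def posterior_ratio(prior_elim, prior_dual, lik_elim, lik_dual):
    """Posterior odds of eliminativism over dualism via Bayes' rule."""
    return (prior_elim * lik_elim) / (prior_dual * lik_dual)


# Equal likelihoods: the judgments are predicted equally well either way.
likelihood = 0.9

# Simplicity-weighted priors (made-up numbers for the sketch).
odds = posterior_ratio(prior_elim=0.8, prior_dual=0.2,
                       lik_elim=likelihood, lik_dual=likelihood)
print(odds)  # approx. 4.0: the evidence leaves the prior odds untouched
```

The specific numbers are arbitrary; the structural claim is only that when the likelihood terms are equal, they cancel, so the data cannot favor the more complex hypothesis.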
Dualist: But the datum, the explanandum, isn't my judgment. I don't go 'Oh, I seem to be judging that I'm experiencing redness; I'll conclude that I am in fact experiencing redness'. Rather, I go 'Oh, I seem to be experiencing redness; I'll conclude that I am in fact experiencing redness'. This initial seeming is a perception-like access to a subjective field of vision, not something propositional or otherwise linguistically structured. And this seeming really does include phenomenal redness, over and above any disposition to linguistically judge (or behave at all!) in any specific way.
Eliminativist: But even those judgments are predicted by my theory as well. How can you trust in judgments of yours that are causally uncorrelated with the truth? If you know that in most possible worlds where you arrive at your current state of overall belief, you're wrong about X, then you should conclude that you are in fact wrong about X. (And there are more possible worlds where your brain exists than where your brain and epiphenomenal ghost exist.)
Dualist: Our disagreement is that I don't see my epistemic status as purely causal. On your view, knowledge and the object known are metaphysically distinct, with the object known causing our state of knowledge. You conclude that epistemic states are only reliable when they are correlated with the right extrinsic state of the world.
I agree with you that knowledge and the object known are generally distinct, but we should expect an exception to that rule when knowledge turns upon itself, i.e., when the thing we're aware of is the very fact of awareness. In that case, my knowledge is not causally, spatially, or temporally separated from its object — at this very moment, without any need to appeal to a past or present at all, I can know that I am having this particular experience of a text box. I can be wrong in my inferences, wrong in my speculations about the world outside my experience; and I can be wrong in my subvocalized judgments about my experience; but my experience can't be wrong about itself. You can design a map in such a way that it differs from (i.e., misrepresents) a territory, but you can't design a map in such a way that it differs from itself; the relation of a map to itself is one of identity, not of representation or causality, and it is the nature of my map, as revealed by itself (and to itself!), that we're discussing here.
Eliminativist: I just don't think that model of introspection is tenable, given the history of science. Maybe your introspection gives you some evidence that physicalism is false, but the frequency with which we've turned out to be wrong about other aspects of our experience ought to do a great deal to undermine your confidence in your map of the nature of your epistemic access to maps. I'm not having an argument with your visual field; I'm having an argument with a linguistic reasoner that has formed certain judgments about that visual field, and it's always possible that the reasoner is wrong about its own internal states, no matter how obvious, manifest, self-evident, etc. those states appear.
Dualist: A fair point. And I can appreciate the force of your argument in the abstract, when I think about an arbitrary reasoner from the third person. Yet when I attend once more to my own stream of consciousness, I become just as confused all over again. Your philosophical position's appeal is insufficient to overcome the perceptual obviousness of my own consciousness — and that obviousness includes the perceptual obviousness of irreducibility. I can't make myself pretend not to believe in something that seems to me so self-evident.
Eliminativist: Then you aren't trying hard enough. For I share your intuitions when I reflect on my immediate experiences, yet I've successfully deferred to science and philosophy in a way that blocks these semblances before they can mutate into beliefs. It can be done.
Dualist: It can be done. But should it? From my perspective, you've talked yourself into a lunatic position by reasoning only in impersonal, third-person terms. You've forgotten that the empirical evidence includes not only the history of science, but also your own conscious states. To me it appears that you've fallen into the error of the behaviorists, denying a mental state (phenomenal consciousness) just because it doesn't fit neatly into a specific invented set of epistemological social standards. No matter how much I'd love to join you in asserting a theory as elegant and simple as physicalism, I can't bring myself to do so when it comes at the cost of denying the manifest.
... and the discussion continues from there. I don't think either position is meaningless. Claims like 'nothing exists' aren't meaningless just because agents like us couldn't confirm them if they were true; they're meaningful and false. And it's certainly conceivable that if the above discussion continued long enough, a consensus could be reached, simply by continuing to debate the extent to which science undermines phenomenology.
This is an excellent and fair summary of the debate. I think the one aspect it leaves out is that eliminativists differ from dualists in that they have internalized Quine's lessons about how we can always revise our conceptual schemes. I elaborated on this long ago in this post at my old blog.
Let's say Bob's terminal value is to travel back in time and ride a dinosaur.
It is instrumentally rational for Bob to study physics so he can learn how to build a time machine. As he learns more physics, Bob realizes that his terminal value is not only utterly impossible but meaningless. By definition, someone in Bob's past riding a dinosaur is not a future evolution of the present Bob.
There are a number of ways to create the subjective experience of having gone into the past and ridden a dinosaur. But to Bob, it's not the same because he wanted both the subjective experience and the knowledge that it corresponded to objective fact. Without the latter, he might as well have just watched a movie or played a video game.
So if we took the original, innocent-of-physics Bob and somehow calculated his coherent extrapolated volition, we would end up with a Bob who has given up on time travel. The original Bob would not want to be this Bob.
But, how do we know that _anything_ we value won't similarly dissolve under sufficiently thorough deconstruction? Let's suppose for a minute that all "human values" are dangling units; that everything we want is as possible and makes as much sense as wanting to hear the sound of blue or taste the flavor of a prime number. What is the rational course of action in such a situation?
PS: If your response resembles "keep attempting to XXX anyway", please explain what privileges XXX over the countless alternatives, beyond the fact that it is your current preference. Are you using some kind of pre-commitment strategy to a subset of your current goals? Do you now wish you had used the same strategy to precommit to goals you had when you were a toddler?