Esar comments on By Which It May Be Judged - Less Wrong
Daniel Dennett's 'Quining Qualia' (http://ase.tufts.edu/cogstud/papers/quinqual.htm) is taken ('round these parts) to have laid the theory of qualia to rest. Among philosophers, the theory of qualia and the classical empiricism founded on it are also considered to be dead theories, though it's Sellars' "Empiricism and the Philosophy of Mind" (http://www.ditext.com/sellars/epm.html) that is seen to have done the killing.
What about “I'd need to think more about this”?
I just read 'Quining Qualia'. I do not see it as a solution to the hard problem of consciousness, at all. However, I did find it brilliant - it shifted my intuition from thinking that conscious experience is somehow magical and inexplicable to thinking that it is plausible that conscious experience could, one day, be explained physically. But to stop here would be to give a fake explanation...the problem has not yet been solved.
-- Eliezer Yudkowsky, Dissolving the Question
Also, does anyone disagree with anything that Dennett says in the paper, and, if so, what, and why?
I think I have qualia. I probably don't have qualia as defined by Dennett, as simultaneously ineffable, intrinsic, etc, but there are nonetheless ways things seem to me.
It may be just my opinion, but please don't quote people and then insert edits into the quotation. Although at least you did set the edits off with parentheses.
By doing so you seem to say that free will and qualia are the same or interchangeable topics that share arguments for and against. But that is not the case. The question of free will is often misunderstood and is much easier to handle.
Qualia is, in my opinion, the abstract structure of consciousness. So on the underlying basic level you have physics and purely physical things, and on the more abstract level you have structure that is transitive with the basic level.
To illustrate what this means, I think Eliezer had an excellent example (though I'm not sure if his intention was similar): the spiking pattern of blue versus actually seeing blue. Even the spiking pattern is far from completely reduced, but the idea is the same: on the level of consciousness you have experience which corresponds to a basic-level thing, very similar to the map and territory analogy.

Color vision is hard to approach, though, so it might be easier to start off with binary vision of a single pixel: it's either 1 or 0. Imagine replacing your entire visual cortex with something that only outputs 1 or 0 (though the brain is not binary), so that your entire field of vision has only two distinct experienced states. Imagining this will certainly invite the mind-projection fallacy, since you can't actually change your visual cortex to output only 1 or 0. Still, the rest of your consciousness has access to that information, and it's very much easier to see how this binary state affects the decisions you make. It's also much easier to make the transition from experience to physics and logic. Then you can work your way back up to normal vision: several different pixels that are each 1 or 0, then grayscale vision. Colors make it much harder again.

But this doesn't resolve the qualia issue: what would it feel like to have 1-bit vision? How do you produce a set of rules that is transitive with the experience of vision?
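The 1-pixel binary vision idea above can be sketched as a toy model. This is purely illustrative: the function names, the light-level encoding, and the thresholding rule are my own assumptions, not anything from neuroscience. It only shows the point being made, that a 1-bit visual state is trivially easy to trace into downstream decisions.

```python
# Toy model of the "1-bit visual field" thought experiment:
# the entire visual input collapses to a single 0/1 state,
# and the rest of the agent only ever reads that one bit.

def one_bit_visual_cortex(light_level, threshold=0.5):
    """Collapse the whole visual field to a single binary state."""
    return 1 if light_level >= threshold else 0

def agent_decision(visual_state):
    """Downstream cognition, which sees only the 1-bit output."""
    return "move toward light" if visual_state == 1 else "stay put"

print(agent_decision(one_bit_visual_cortex(0.8)))  # move toward light
print(agent_decision(one_bit_visual_cortex(0.1)))  # stay put
```

The transition from this state to physics is transparent, which is the commenter's point; what the sketch deliberately does not address is what, if anything, having that 1-bit state would feel like.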
Even if you grind everything down to the finest powder it still will be hard to see where this qualia business comes from, because you exist between the lines.
I agree that that doesn't resolve the qualia issue. To begin with, we'd need to write a SeeRed() function that will write philosophy papers about the redness it perceives, and wonder whence it came, unless it has access to its own source code and can see inside the black box of the SeeRed() function. Even epiphenomenalists agree that this can be done, since they say consciousness has no physical effect on behavior. But here is my intuition (and pretty much every other reductionist's, I reckon) that leads me to reject epiphenomenalism: when I say, out loud (so there is a physical effect) "Wow, this flower I am holding is beautiful!", I am saying it because it actually looks beautiful to me! So I believe that, somehow, the perception is explainable, physically. And, at least for me, that intuition is much stronger than the intuition that conscious perception and computation are in separate magisteria.
We'll be able to get a lot further in this discussion once someone actually writes a SeeRed() function, which both epiphenomenalists and reductionists agree can be done.
Meanwhile, dualists think writing such a SeeRed() function is impossible. Time will tell.
It's possible for physicalism to be true, and computationalism false.
I'll say. Solving the problem does tend to solve the problem.
I haven't read either of those but will read them. Also, I totally think there was a respectable hard problem, and can only stare somewhat confused at people who don't realize what the fuss was about. I don't agree with what Chalmers tries to answer to his problem, but his attempt to pinpoint exactly what seems so confusing seems very spot-on. I haven't read anything very impressive yet from Dennett on the subject; could be that I'm reading the wrong things. Gary Drescher, on the other hand, is excellent.
It could be that I'm atypical for LW.
EDIT: Skimmed the Dennett one, didn't see much of anything relatively new there; the Sellars link fails.
So you do have a solution to the problem?
I'll take a look at Drescher, I haven't seen that one.
Try this link? http://selfpace.uconn.edu/class/percep/SellarsEmpPhilMind.pdf
Sellars is important to contemporary philosophy, to the extent that a standard course in epistemology will often end with EPM. I'm not sure it's entirely worth your time, though, because it's an argument against classical (not Bayesian) empiricism.
Pryor and BonJour explain Sellars better than Sellars does. See: http://www.jimpryor.net/teaching/courses/epist/notes/given.html
The basic question is over whether our beliefs are purely justified by other beliefs, or whether our (visual, auditory, etc.) perceptions themselves 'represent the world as being a certain way' (i.e., have 'propositional content') and, without being beliefs themselves, can lend some measure of support to our beliefs. Note that this is a question about representational content (intentionality) and epistemic justification, not about phenomenal content (qualia) and physicalism.
Do you have evidence of this? The PhilPapers survey suggests that only 56.5% of philosophers identify as 'physicalists,' and 59% think that zombies are conceivable (though most of these think zombies are nevertheless impossible). It would also help if you explained what you mean by 'the theory of qualia.'
Sellars' argument, I think, rests on a few confusions and shaky assumptions. I agree this argument is still extremely widely cited, but I think that serious epistemologists no longer consider it conclusive, and a number reject it outright. Jim Pryor writes:
I mentioned in a subsequent post that there was an ambiguity in my original claim. Qualia have been used by philosophers to do two different jobs: 1) as the basis of the hard problem of consciousness, and 2) as the foundation of foundationalist theories of empiricism. Sellars' essay, in particular, is aimed at (2), not (1), and the mention of 'qualia' to which I was responding was probably a case of (1). The question of physicalism and the conceivability of p-zombies isn't directly related to the epistemic role of qualia, and one could reject classical empiricism on the basis of Sellars' argument while still believing that the reality of irreducible qualia speaks against physicalism and for the conceivability of p-zombies.
That may be; it's a bit outside my ken. Thanks for posting the quote. I won't try to defend the overall organization of EPM, which is fairly labyrinthine, but I have some confidence in its critiques. I'd need more familiarity with Pryor's work to level a serious criticism, but on the basis of your quote he seems to me to be missing the point: Sellars is not arguing that something's appearing to you in a certain way is a state (like a belief) which requires justification. He argues that it is not tenable to think of this state as being independent of (e.g. a foundation for) a whole battery of concepts, including epistemic concepts like 'being in standard perceptual conditions'. Looking a certain way is posterior to (a sophistication of) its being that way. Looking red is posterior to simply being red. And this is an attack on the epistemic role of qualia insofar as that theory implies that 'looking red' is in some way fundamental and conceptually independent.
Yes, that is the argument. And I think its soundness is far from obvious, and that there's a lot of plausibility to the alternative view. The main problem is that this notion of 'conceptual content' is very hard to explicate; often it seems to be unfortunately confused with the idea of linguistic content. But do we really think that the only things that should add or take away any of my credence in any belief are the words I think to myself? In any case, Pryor's paper Is There Non-Inferential Justification? is probably the best starting point for the rival view. And he's an exceedingly lucid thinker.
I'll read the Pryor article, in more detail, but from your gloss and from a quick scan, I still don't see where Pryor and Sellars are even supposed to disagree. I think, without being totally sure, that Sellars would answer the title question of Pryor's article with an emphatic 'yes!'. Experience of a red car justifies belief that the car is red. While experience of a red car also presupposes a battery of other concepts (including epistemic concepts), these concepts are not related to the knowledge of the redness of the car as premises to a conclusion.
Here's a quote from EPM p148, which illustrates that the above is Sellars' view (italics mine). Note that in the following, Sellars is sketching the view he wants to attack:
So Sellars wants to argue that empiricism has no foundation because experience (as an epistemic success term) is not possible without knowledge of a bunch of other facts. But it does not follow from this that a) Sellars thinks knowledge derived from experience is inferential, or b) Sellars thinks non-inferential knowledge as such is a problem.
But that said, I haven't read enough of Pryor's paper(s) to understand his critiques. I'll take a look.
I'm not at all convinced that all LWers have been persuaded that they don't have qualia.
Amongst some philosophers.
Hmmm. The only enthusiast for Sellars I know finds it necessary to adopt Direct Realism, which is a horribly flawed theory. In fact most of the problems with it consist of reconciling it with a naturalistic world view.
Well, it's probably important to distinguish between two uses to which the theory of qualia is put: first as the foundation of foundationalist empiricism, and second as the basis for the 'hard problem of consciousness'. Foundationalist theories of empiricism are largely dead, as is the idea that qualia are a source of immediate, non-conceptual knowledge. That's the work that Sellars (a strident reductivist and naturalist) did.
Now that I read it again, I think my original post was a bit misleading because I implied that the theory of qualia as establishing the 'hard problem' is also a dead theory. This is not the case, and important philosophers still defend the hard problem on these grounds. Mea Culpa.
Once direct realism as an epistemic theory is properly distinguished from a psychological theory of perception, I think it becomes an extremely plausible view. I think I'd probably call myself a direct realist.
I'd have said that qualia are not a source of unprocessed knowledge, but the processing isn't conceptual.
I take 'conceptual' to mean thought which is at least somewhat conscious and which probably can be represented verbally. What do you mean by the word?
I mean 'of such a kind as to be a premise or conclusion in an inference'. I'm not sure whether I agree with your assessment or not: if by 'non-conceptual processing' you mean to refer to something like a physiological or neurological process, then I think I disagree (simply because physiological processes can't be any part of an inference, even granting that often the things that are part of an inference are in some way identical to a neurological process).
I think we're looking at qualia from different angles. I agree that the process which leads to qualia might well be understood conceptually from the outside (I think that's what you meant). However, I don't think there's an accessible conceptual process by which the creation of qualia can be felt by the person having the qualia.
Right - to hammer on the point, the common-ish (EDIT: Looks like I was hastily generalizing) LW opinion is that there never was any "hard problem of consciousness" (EDIT: meaning one that is distinct from "easy" problems of consciousness, that is, the ones we know roughly how to go about solving). It's just that when we meet a problem that we're very ignorant about, a lot of people won't go "I'm very ignorant about this," they'll go "This has a mysterious substance, and so why would learning more change that inherent property?"
It should be remembered though that the guy who's famous for formulating the hard problem of consciousness is:
1) A fan of EY's TDT, who's made significant efforts to get the theory some academic attention.
2) A believer in the singularity, and its accompanying problems.
3) A student of Douglas Hofstadter.
4) Someone very interested in AI.
5) Someone very well versed and interested in physics and psychology.
6) A rare but occasional poster on LW.
7) Very likely one of the smartest people alive.
etc. etc.
I think consciousness is reducible too, but David Chalmers is a serious dude, and the 'hard problem' is to be taken very, very seriously. It's very easy to not see a philosophical problem, and very easy to think that the problem must be solved by psychology somewhere, much harder to actually explain a solution/dissolution.
I agree with you about how smart Chalmers is and that he does very good philosophical work. But I think you have a mistake in terminology when you say
It is an understandable mistake, because it is natural to take "the hard problem" as meaning just "understanding consciousness", and I agree that this is a hard problem in ordinary terms and that saying "there is a reduction/dissolution" is not enough. But Chalmers introduced the distinction between the "hard problem" and the "easy problems" by saying that understanding the functional aspects of the mind, the information processing, etc., are all "easy problems". So a functionalist/computationalist materialist, like most people on this site, cannot buy into the notion that there is a serious "hard problem" in Chalmers' sense. This notion is defined in a way that begs the question by assuming that qualia are irreducible. We should say instead that solving the "easy problems" is at the same time much less trivial than Chalmers makes it seem, and enough to fully account for consciousness.
No it isn't. Here is what Chalmers says:
"It is undeniable that some organisms are subjects of experience. But the question of how it is that these systems are subjects of experience is perplexing. Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? How can we explain why there is something it is like to entertain a mental image, or to experience an emotion? It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does."
There is no statement of irreducibility there. There is a statement that we have "no good explanation", and we don't.
However, see how he contrasts it with the "easy problems" (from Consciousness and its Place in Nature - pdf):
It seems clear that for Chalmers any description in terms of behavior and cognitive function is by definition not addressing the hard problem.
But that is not to say that qualia are irreducible things; that is to say that mechanical explanations of qualia have not worked to date.
What does this mean by "why"? What evolutionary advantage is there? Well, it enables imagination, which lets us survive a wider variety of dangers. What physical mechanism is there? That's an open problem in neurology, but they're making progress.
I've read this several times, and I don't see a hard philosophical problem.
It's definitely a how-it-happens "why" and not how-did-it-evolve "why"
There's more to qualia than free-floating representations. There is no reason to suppose an AI's internal maps have phenomenal feels, no way of testing that they do, and no way of engineering them in.
It's a hard scientific problem. How could you have a theory that tells you how the world seems to a bat on LSD? How can you write a SeeRed() function?
Presumably, the exact same way you'd write any other function.
In this case, all that matters is that instances of seeing red things correctly map to outputs expected when one sees red things as opposed to not seeing red things.
If the correct behavior is fully and coherently maintained / programmed, then you have no means of telling it apart from a human's "redness qualia". If prompted and sufficiently intelligent, this program will write philosophy papers about the redness it perceives, and wonder whence it came, unless it has access to its own source code and can see inside the black box of the SeeRed() function.
Of course, I'm arguing a bit by the premises here with "correct behavior" being "fully and coherently maintained". The space of inputs and outputs to take into account in order to make a program that would convince you of its possession of the redness qualia is too vast for us at the moment.
TL;DR: It all depends on what the SeeRed() function will be used for / how we want it to behave.
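As a purely behavioral sketch of what such a SeeRed() function might look like, here is a toy version. Everything in it is an assumption of mine for illustration: the (r, g, b) input encoding, the classification rule, and the canned verbal report. It implements only the input-to-output mapping described above; whether anything like redness is experienced in between is exactly what the thread is arguing about.

```python
# Toy SeeRed(): maps pixel inputs to the outputs expected of a
# system that "sees red" -- a classification plus a verbal report.
# Behavior only; no claim is made that this produces qualia.

def see_red(rgb):
    """Return True if an (r, g, b) triple in 0..255 counts as red."""
    r, g, b = rgb
    # Arbitrary illustrative rule: red channel dominant and strong.
    return r > 150 and r > 2 * max(g, b)

def report(rgb):
    """Produce the kind of verbal output a red-seer would produce."""
    if see_red(rgb):
        return "Wow, this looks red to me!"
    return "That doesn't look red."

print(report((200, 30, 40)))  # Wow, this looks red to me!
print(report((30, 200, 40)))  # That doesn't look red.
```

The disagreement in the surrounding comments is precisely over whether scaling this mapping up until it is behaviorally indistinguishable from a human red-seer would amount to, produce, or still entirely omit the "redness qualia".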
False. In this case what matters is the perception of a red colour that occurs between input and output. That is what the Hard Problem, the problem of qualia, is about.
That doesn't mean there are no qualia (I have them so I know there are). That also doesn't mean qualia just serendipitously arrive whenever the correct mapping from inputs to outputs is in place. You have not written a SeeRed() or solved the HP. You have just assumed that what is very possibly a zombie is good enough.
None of these were among my claims. For a program to reliably pass Turing-like tests for seeing redness, a GLUT or zombielike would not cut it; you'd need some sort of internal system that generates certain inner properties and behaviors, one that would be effectively indistinguishable from qualia (this is my claim), and may very well be qualia (this is not my core claim, but it is something I find plausible).
Obviously I haven't solved the Hard Problem just by saying this. However, I do greatly dislike your apparent premise* that qualia can never be dissolved to patterns and physics and logic.
* If this isn't among your premises or claims, then it still does appear that way, but apologies in advance for the strawmanning.
Is there a reason to suppose that anybody else's maps have phenomenal feels, a way of testing that they do, or a way of telling the difference? Why can't those ways be generalized to Intelligent entities in general?
Yes: naturalism. It would be naturalistically anomalous if their brains worked very similarly, but their phenomenology were completely different.
No. So what? Are you saying we are all p-zombies?
I don't know about Decius, but...
I am.
I'm also saying that it doesn't matter. The p-zombies are still conscious. They just don't have any added "conscious" XML tags as per some imaginary, crazy-assed unnecessary definition of "consciousness".
Tangential to that point: I think any morality system which relies on an external supernatural thingy in order to make moral judgments or to assign any terminal value to something is broken and not worth considering.
I'm saying that there is no difference between a p-zombie and the alternative.
Though on the other hand, we don't have room to take everything serious dudes say seriously - too many dudes, not enough time.
If a problem happens not to exist, then I suppose one will just have to nerve oneself and not see it. Yes, there are non-hard problems of consciousness, where you explain how a certain process or feeling occurs in the brain, and sure, there are some non-hard problems I'd wave away with "well, that's solved by psychology somewhere." But no amount of that has any bearing on the "hard problem," which will remain in scare quotes as befits its effective nonexistence; finding a solution to a problem that is not a problem would be silly.
(EDIT: To clarify, I am not saying qualia do not exist, I am saying some mysterious barrier of hardness around qualia does not exist.)
OK. Then demonstrate that the HP does not exist, in terms of Chalmers' specification, by showing that we do have a good explanation.
Well, said Achilles, everybody knows that if you have A and B and "A and B imply Z," then you have Z.
How an Algorithm Feels From Inside.
The Visual Cortex is Used to Imagine
Stimulating the Visual Cortex Makes the Blind See
This sort of thing is sufficient for me, like Achilles' explanations were enough for Achilles. But if, say, the perception of the hard problem was causally unrelated to the actual existence of a hard problem (for epiphenominalism, this is literally what is going on), then gosh, it would seem like no matter what explanations you heard, the hard problem wouldn't go away - so it must be either a proof of dualism or a mistake.
But not for me. Indeed. I am pretty sure none of those articles is even intended as a solution to the HP. And if they are, why not publish them in a journal and become famous?
Intended as a solution to FW.
So? Every living qualiaphile accepts some sort of relationship between brain states and qualia.
So? I said nothing about epiphenomenalism
The non-parenthetical was a throwback to a whole few posts ago, where I claimed that perception of the hard problem was often from the mind projection fallacy.
Other than that, I don't have much to respond to here, since you're just going "So?"
I can't find the posting, and I don't see how the MPF would relate to epiphenomenalism anyway.
How did you expect to convince me? I am familiar with all the stuff you are quoting, and I still think there is an HP. So do many people.
For practical reasons, I think that's fair enough...so long as we're clear that the above is a fully general counterargument.
Right. I have not said any actual arguments against the hard problem of consciousness.
EDIT: Was true when I said it, then I replied to PeterD, not that it worked (as I noted in that very post, the direct approach has little chance against a confusion)
Argument for the importance of the HP: it is about the only thing that would motivate an educated 21st-century person into doubting physicalism.
The rest mostly go, "this could only be explained by a mysterious substance, there are no mysterious substances, therefore this does not exist."
I don't know why you guys keep harping about substances. Substance dualism has been out of favour for a good century.
Sorry, I was misusing terminology. Any ignorance-generating / ignorance-embodying explanation (e.g.s quantum mysticism / elan vital) uses what I'm calling "mysterious substance."
Basically I'm calling "quantum" a mysterious substance (for the quantum mystics), even though it's not like you can bottle it.
Maybe I should have said "mysterious form?" :D
There is a Hard Problem, because there is basically no (non-eliminative) science or technology of qualia at all. We can get a start on the problem of building cognition, memory and perception into an AI, but we can't get a start on writing code for Red or Pain or Salty. You can tell there is basically no non-eliminative science or technology of qualia because the best LWers can quote is Dennett's eliminative theory.