Nominull comments on By Which It May Be Judged - Less Wrong

Post author: Eliezer_Yudkowsky 10 December 2012 04:26AM



Comment author: Nominull 10 December 2012 05:38:32AM 1 point [-]

You talk like you've solved qualia. Have you?

Comment author: CronoDAS 10 December 2012 08:10:48AM 11 points [-]

"Qualia" is something our brains do. We don't know how our brains do it, but it's pretty clear by now that our brains are indeed what does it.

Comment author: Peterdjones 10 December 2012 12:38:55PM 6 points [-]

That's about 10% of a solution. The "how" is enough to keep most contemporary dualism afloat.

Comment author: BerryPick6 10 December 2012 01:41:35PM 1 point [-]

Aren't the details of the "how" more a question of science than philosophy?

Comment author: Peterdjones 10 December 2012 02:29:47PM 1 point [-]

If science had them, there would be no mileage in the philosophical project, any more than there is currently mileage in trying to found dualism on the basis that matter can't think.

Comment author: ThisDan 12 December 2012 05:43:26AM -2 points [-]

There is mileage in philosophy? Says you. Are you talking about in the context of the general population of a country? Of "intellectuals"? Your mates?

If philosophy has mileage (compared to science) then so does any other religion. I guess that's all dualism is though.

Comment author: MugaSofer 12 December 2012 10:53:54AM 1 point [-]

If philosophy has mileage (compared to science) then so does any other religion.

Eh?

Comment author: ThisDan 13 December 2012 06:02:49AM 0 points [-]

I just went to reply to you, but after reading back on what was said I'm seeing a different context. My stupid comment was about popularity, not about usefulness. I was rambling about general public opinion on belief systems, not what the topic was really about: whether philosophy could move something forward.

Comment author: RobbBB 11 December 2012 01:23:49AM 2 points [-]

We have prima facie reason to accept both of these claims:

  1. A list of all the objective, third-person, physical facts about the world does not miss any facts about the world.
  2. Which specific qualia I'm experiencing is functionally/causally underdetermined; i.e., there doesn't seem even in principle to be any physically exhaustive reason redness feels exactly as it does, as opposed to feeling like some alien color.

1 is physicalism; 2 is the hard problem. Giving up 1 means endorsing dualism or idealism. Giving up 2 means endorsing reductive or eliminative physicalism. All of these options are unpalatable. Reductionism without eliminating anything seems off the table, since the conceivability of zombies seems likely to be here to stay, to remain as an 'explanatory gap.' But eliminativism about qualia means completely overturning our assumption that whatever's going on when we speak of 'consciousness' involves apprehending certain facts about mind. I think this last option is the least terrible out of a set of extremely terrible options; but I don't think the eliminative answer to this problem is obvious, and I don't think people who endorse other solutions are automatically crazy or unreasonable.

That said, the problem is in some ways just academic. Very few dualists these days think that mind isn't perfectly causally correlated with matter. (They might think this correlation is an inexplicable brute fact, but fact it remains.) So none of the important work Eliezer is doing here depends on monism. Monism just simplifies matters a great deal, since it eliminates the worry that the metaphysical gap might re-introduce an epistemic gap into our model.

Comment author: Eugine_Nier 11 December 2012 01:53:39AM 1 point [-]
  1. A list of all the objective, third-person, physical facts about the world does not miss any facts about the world.

What's your reason for believing this? The standard empiricist argument against zombies is that they don't constrain anticipated experience.

One problem with this line of thought is that we've just thrown out the very concept of "experience" which is the basis of empiricism. The other problem is that the statement is false: the question of whether I will become a zombie tomorrow does constrain my anticipated experiences; specifically, it tells me whether I should anticipate having any.

Comment author: RobbBB 11 December 2012 02:12:00AM *  2 points [-]

I'm not a positivist, and I don't argue like one. I think nearly all the arguments against the possibility of zombies are very silly, and I agree there's good prima facie evidence for dualism (though I think that in the final analysis the weight of evidence still favors physicalism). Indeed, it's a good thing I don't think zombies are impossible, since I think that we are zombies.

What's your reason for believing this?

My reason is twofold: Copernican, and Occamite.

Copernican reasoning: Most of the universe does not consist of humans, or anything human-like; so it would be very surprising to learn that the most fundamental metaphysical distinction between facts ('subjective' v. 'objective,' or 'mental' v. 'physical,' or 'point-of-view-bearing' v. 'point-of-view-lacking,' or what-have-you) happens to coincide with the parts of the universe that bear human-like things, and the parts that lack human-like things. Are we really that special? Is it really more likely that we would happen to gain perfect, sparkling insight into a secret Hidden Side to reality, than that our brains would misrepresent their own ways of representing themselves to themselves?

Occamite reasoning: One can do away with the Copernican thought by endorsing panpsychism; but this worsens the bite from the principle of parsimony. A universe with two kinds of fundamental fact is less likely, relative to the space of all the models, than one with one kind (or with many, many more than two kinds). It is a striking empirical fact that, consciousness aside, we seem to be able to understand the whole rest of reality with a single grammatical kind of description -- the impersonal, 'objective' kind, which states a fact without specifying for whom the fact is. The world didn't need to turn out to be that way, just as it didn't need to look causally structured. This should give us reason to think that there may not be distinctions between fundamental kinds of facts, rather than that we happen to have lucked out and ended up in one of the universes with very few distinctions of this sort.

Neither of these considerations, of course, is conclusive. But they give us some reason to at least take seriously physicalist hypotheses, and to weigh their theoretical costs and benefits against the dualists'.

One problem with this line of thought is that we've just thrown out the very concept of "experience" which is the basis of empiricism.

We've thrown out the idea of subjective experience, of pure, ineffable 'feels,' of qualia. But we retain any functionally specifiable analog of such experience. In place of qualitative red, we get zombie-red, i.e., causal/functional-red. In place of qualitative knowledge, we get zombie-knowledge.

And since most dualists already accepted the causal/functional/physical process in question (they couldn't even motivate the zombie argument if they didn't consider the physical causally adequate), there can be no parsimony argument against the physicalists' posits; the only argument will have to be a defense of the claim that there is some sort of basic, epistemically infallible acquaintance relation between the contents of experience and (themselves? a Self??...). But making such an argument, without begging the question against eliminativism, is actually quite difficult.

Comment author: Peterdjones 13 December 2012 07:25:40PM *  1 point [-]

Copernican reasoning: Most of the universe does not consist of humans, or anything human-like; so it would be very surprising to learn that the most fundamental metaphysical distinction between facts ('subjective' v. 'objective,' or 'mental' v. 'physical,' or 'point-of-view-bearing' v. 'point-of-view-lacking,' or what-have-you) happens to coincide with the parts of the universe that bear human-like things, and the parts that lack human-like things. Are we really that special? Is it really more likely that we would happen to gain perfect, sparkling insight into a secret Hidden Side to reality, than that our brains would misrepresent their own ways of representing themselves to themselves?

It's not surprising that a system should have special insight into itself. If a type of system had special insight into some other, unrelated, type of system, then that would be peculiar. If every system had insights (panpsychism), that would also be peculiar. But a system, one capable of having insights, having special insight into itself is not unexpected.

Occamite reasoning: One can do away with the Copernican thought by endorsing panpsychism; but this worsens the bite from the principle of parsimony. A universe with two kinds of fundamental fact is less likely, relative to the space of all the models, than one with one kind (or with many, many more than two kinds).

That is not obvious. If the two kinds of stuff (or rather property) are fine-grainedly picked from some space of stuffs (or rather properties), then that would be more unlikely than just one being picked.

OTOH, if you have just one coarse-grained kind of stuff, and there is just one other coarse-grained kind of stuff, such that the two together cover the space of stuffs, then it is a mystery why you do not have both, i.e. every possible kind of stuff. A concrete example is the predominance of matter over antimatter in cosmology, which is widely interpreted as needing an explanation.

(It's all about information and probability. Adding one fine-grained kind of stuff to another means that two low probabilities get multiplied together, leading to a very low one that needs a lot of explaining. Having every logically possible kind of stuff has a high probability, because we don't need a lot of information to pinpoint the universe.)

So... if you think of Mind as some very specific thing, the Occamite objection goes through. However, modern dualists are happy that most aspects of consciousness have physical explanations. Chalmers-style dualism is about explaining qualia, phenomenal qualities. The quantitative properties (Chalmers calls them structural-functional) of physicalism and intrinsically qualitative properties form a dyad that covers property-space in the same way that the matter-antimatter dyad covers stuff-space. In this way, modern dualism can avoid the Copernican Objection.

It is a striking empirical fact that, consciousness aside, we seem to be able to understand the whole rest of reality with a single grammatical kind of description -- the impersonal, 'objective' kind, which states a fact without specifying for whom the fact is.

(Here comes the shift from properties to aspects).

Although it does specify that the fact is outside me. If physical and mental properties are both intrinsic to the world, then the physical properties seem to be doing most of the work, and the mental ones seem redundant. However, if objectivity is seen as a perspective, i.e. an external perspective, it is no longer an empirical fact. It is then a tautology that the external world will seem, from the outside, to be objective, because objectivity just is the view from outside. And subjectivity, likewise, is the view from inside, and not any extra stuff, just another way of looking at the same stuff. There are in any case a set of relations between a thing-and-itself, and another set between a thing-and-other-things. Nothing novel is being introduced by noting the existence of inner and outer aspects. The novel content of the Dual Aspect solution lies in identifying the Objective Perspective with quantities (broadly including structures and functions) and the Subjective Perspective with qualities, so that Subjective Qualities, qualia, are just how neuronal processing seems from the inside. This point needs justification, which I believe I have, but will not mention here.

As far as physicalism is concerned: physicalism has many meanings. Dual aspect theory is incompatible with the idea that the world is intrinsically objective and physical, since these are not intrinsic characteristics, according to DAT. DAT is often and rightly associated with neutral monism, the idea that the world is in itself neither mental nor physical, neither objective nor subjective. However, this in fact changes little for most physicalists: it does not suggest that there are any ghostly substances or undetectable properties. Nothing changes methodologically; naturalism, interpreted as the investigation of the world from the objective perspective, can continue. The Strong Physicalist claim that a complete physical description of the world is a complete description tout court becomes problematic. Although such a description is a description of everything, it nonetheless leaves out the subjective perspectives embedded in it, which cannot be recovered, just as Mary the superscientist cannot recover the subjective sensation of Red from the information she has. I believe that a correct understanding of the nature of information shows that "complete information" is a logically incoherent notion in any case, so that DAT does not entail the loss of anything that was ever available in that respect. Furthermore, the absence of complete information has little practical upshot, because of the infeasibility of constructing such a complete description in the first place. All in all, DAT means physicalism is technically false in a way that changes little in practice. The flipside of DAT is Neutral Monism. NM is an inherently attractive metaphysics, because it means that the universe has no overall characteristic left dangling in need of an explanation -- no "why physical, rather than mental?".

As far as causality is concerned, the fact that a system's physical or objective aspects are enough to predict its behaviour does not mean that its subjective aspects are an unnecessary multiplication of entities, since they are only a different perspective on the same reality. Causal powers are vested in the neutral reality of which the subjective and the objective are just aspects. The mental is neither causal in itself, nor causally idle in itself; it is rather a perspective on what is causally empowered. There are no grounds for saying that either set of aspects is exclusively responsible for the causal behaviour of the system, since each is only a perspective on the system.

I have avoided the Copernican problem, special pleading for human consciousness, by pinning mentality, and particularly subjectivity, to a system's internal and self-reflexive relations. The counterpart to excessive anthropocentrism is insufficient anthropocentrism, i.e. free-wheeling panpsychism, or the Thinking Rock problem. I believe I have a way of showing that it is logically inevitable that simple entities cannot have subjective states that are significantly different from their objective descriptions.

Comment author: RobbBB 14 December 2012 12:55:42AM *  -1 points [-]

Nothing novel is being introduced by noting the existence of inner and outer aspects.

I'm not sure I understand what an 'aspect' is, in your model. I can understand a single thing having two 'aspects' in the sense of having two different sets of properties accessible in different viewing conditions; but you seem to object to the idea of construing mentality and physicality as distinct property classes.

I could also understand a single property or property-class having two 'aspects' if the property/class itself were being associated with two distinct sets of second-order properties. Perhaps "being the color of chlorophyll" and "being the color of emeralds" are two different aspects of the single property green. Similarly, then, perhaps phenomenal properties and physical properties are just two different second-order construals of the same ultimately physical, or ultimately ideal, or perhaps ultimately neutral (i.e., neither-phenomenal-nor-physical), properties.

I call the option I present in my first paragraph Property Dualism, and the option I present in my second paragraph Multi-Label Monism. (Note that these may be very different from what you mean by 'property dualism' and 'neutral monism;' some people who call themselves 'neutral monists' sound more to me like 'neutral trialists,' in that they allow mental and physical properties into their ontology in addition to some neutral substrate. True monism, whether neutral or idealistic or physicalistic, should be eliminative or reductive, not ampliative.) Is Dual Aspect Theory an intelligible third option, distinct from Property Dualism and Multi-Label Monism as I've distinguished them? And if so, how can I make sense of it? Can you coax me out of my parochial object/property-centric view, without just confusing me?

I'm also not sure I understand how reflexive epistemic relations work. Epistemic relations are ordinarily causal. How does reflexive causality work? And how do these 'intrinsic' properties causally interact with the extrinsic ones? How, for instance, does positing that Mary's brain has an intrinsic 'inner dimension' of phenomenal redness Behind The Scenes somewhere help us deterministically explain why Mary's extrinsic brain evolves into a functional state of surprise when she sees a red rose for the first time? What would the dynamics of a particle or node with interactively evolving intrinsic and extrinsic properties look like?

A third problem: You distinguish 'aspects' by saying that the 'subjective perspective' differs from the 'objective perspective.' But this also doesn't help, because it sounds anthropocentric. Worse, it sounds mentalistic; I understand the mental-physical distinction precisely inasmuch as I understand the mental as perspectival, and the physical as nonperspectival. If the physical is itself 'just a matter of perspective,' then do we end up with a dualistic or monistic theory, or do we instead end up with a Berkeleian idealism? I assume not, and that you were speaking loosely when you mentioned 'perspectives;' but this is important, because what individuates 'perspectives' is precisely what lends content to this 'Dual-Aspect' view.

All in all, DAT means physicalism is technically false in a way that changes little in practice.

Yes, I didn't consider the 'it's not physicalism!!' objection very powerful to begin with. Parsimony is important, but 'physicalism' is not a core methodological principle, and it's not even altogether clear what constraints physicalism entails.

Comment author: RobbBB 13 December 2012 10:25:30PM *  -1 points [-]

It's not surprising that a system should have special insight into itself.

It's not surprising that an information-processing system able to create representations of its own states would be able to represent a lot of useful facts about its internal states. It is surprising if such a system is able to infallibly represent its own states to itself; and it is astounding if such a system is able to self-represent states that a third-person observer, dissecting the objective physical dynamics of the system, could never in principle fully discover from an independent vantage point. So it's really a question of how 'special' we're talking.

If a type of system had special insight into some other, unrelated, type of system, then that would be peculiar.

I'm not clear on what you mean. 'Insight' is, presumably, a causal relation between some representational state and the thing represented. I think I can more easily understand a system's having 'insight' into something else, since it's easier for me to model veridical other-representation than veridical self-representation. (The former, for instance, leads to no immediate problems with recursion.) But perhaps you mean something special by 'insight.' Perhaps by your lights, I'm just talking about outsight?

If every systems had insights (panpsychism) that would also be peculiar.

If some systems have an automatic ability to non-causally 'self-grasp' themselves, by what physical mechanism would only some systems have this capacity, and not all?

if you have a just one, coarse-grained kind of stuff, and there is just one other coarse-grained kind of stuff, such that the two together cover the space of stuffs, then it is a mystery why you do not have both, ie every possible kind of stuff. A concrete example is the predominance of matter over antimatter in cosmology, which is widely interpreted as needing an explanation.

If you could define a thingspace that meaningfully distinguishes between and admits of both 'subjective' and 'objective' facts (or properties, or events, or states, or thingies...), and that non-question-beggingly establishes the impossibility or incoherence of any other fact-classifications of any analogous sorts, then that would be very interesting. But I think most people would resist the claim that this is the one unique parameter of this kind (whatever kind that is, exactly...) that one could imagine varying over models; and if this parameter is set to value '2,' then it remains an open question why the many other strangely metaphysical or strangely anthropocentric parameters seem set to '1' (or to '0,' as the case may be).

But this is all very abstract. It strains comprehension just to entertain a subjective/objective distinction. To try to rigorously prove that we can open the door to this variable without allowing any other Aberrant Fundamental Categorical Variables into the clubhouse seems a little quixotic to me. But I'd be interested to see an attempt at this.

A concrete example is the predominance of matter over antimatter in cosmology, which is widely interpreted as needing an explanation.

Sure, though there's a very important disparity between observed asymmetries between actual categories of things, and imagined asymmetries between an actual category and a purely hypothetical one (or, in this case, a category with a disputed existence). In principle the reasoning should work the same, but in practice our confidence in reasoning coherently (much less accurately!) about highly abstract and possibly-not-instantiated concepts should be extremely low, given our track record.

The quantitative properties (Chalmers calls them structural-functional) of physicalism and intrinsically qualitative properties form a dyad that covers property-space

How do we know that? If we were zombies, prima facie it seems as though we'd have no way of knowing about, or even positing in a coherent formal framework, phenomenal properties. But in that case, any analogous possible-but-not-instantiated-property-kinds that would expand the dyad into a polyad would plausibly be unknowable to us. (We're assuming for the moment that we do have epistemic access to phenomenal and physical properties.) Perhaps all carbon atoms, for instance, have unobservable 'carbonomenal properties,' (Cs) which are related to phenomenal and physical properties (P1s and P2s) in the same basic way that P1s are related to P2s and Cs, and that P2s are related to P1s and Cs. Does this make sense? Does it make sense to deny this possibility (which requires both that it be intelligible and that we be able to evaluate its probability with any confidence), and thereby preserve the dyad? I am bemused.

Comment author: Eugine_Nier 11 December 2012 02:47:02AM 1 point [-]

Occamite reasoning: One can do away with the Copernican thought by endorsing panpsychism; but this worsens the bite from the principle of parsimony. A universe with two kinds of fundamental fact is less likely, relative to the space of all the models, than one with one kind (or with many, many more than two kinds). It is a striking empirical fact that, consciousness aside, we seem to be able to understand the whole rest of reality with a single grammatical kind of description -- the impersonal, 'objective' kind, which states a fact without specifying for whom the fact is. The world didn't need to turn out to be that way, just as it didn't need to look causally structured. This should give us reason to think that there may not be distinctions between fundamental kinds of facts, rather than that we happen to have lucked out and ended up in one of the universes with very few distinctions of this sort.

The problem is that we already have two kinds of fundamental facts (and I would argue we need more). Consider Eliezer's use of "magical reality fluid" in this post. If you look at the context, it's clear that he's trying to ask whether the inhabitants of the non-causally simulated universes possess qualia, without having to admit he cares about qualia.

Comment author: RobbBB 11 December 2012 02:55:52AM *  2 points [-]

Eliezer thinks we'll someday be able to reduce or eliminate Magical Reality Fluid from our model, and I know of no argument (analogous to the Hard Problem for phenomenal properties) that would preclude this possibility without invoking qualia themselves. Personally, I'm an agnostic about Many Worlds, so I'm even less inclined than EY to think that we need Magical Reality Fluid to recover the Born probabilities.

I also don't reify logical constructs, so I don't believe in a bonus category of Abstract Thingies. I'm about as monistic as physicalists come. Mathematical platonists and otherwise non-monistic Serious Scientifically Minded People, I think, do have much better reason to adopt dualism than I do, since the inductive argument against Bonus Fundamental Categories is weak for them.

Comment author: Eugine_Nier 13 December 2012 04:11:14AM *  1 point [-]

Eliezer thinks we'll someday be able to reduce or eliminate Magical Reality Fluid from our model, and I know of no argument (analogous to the Hard Problem for phenomenal properties) that would preclude this possibility without invoking qualia themselves.

I could define the Hard Problem of Reality, which really is just an indirect way of talking about the Hard Problem of Consciousness.

Personally, I'm an agnostic about Many Worlds, so I'm even less inclined than EY to think that we need Magical Reality Fluid to recover the Born probabilities.

As Eliezer discusses in the post, Reality Fluid isn't just for Many Worlds; it also relates to questions about simulation.

I also don't reify logical constructs

Here's my argument for why you should.

Comment author: RobbBB 13 December 2012 04:41:04AM *  -1 points [-]

As Eliezer discusses in the post, Reality Fluid isn't just for Many Worlds, it also relates to questions about [simulation].

Only as a side-effect. In all cases, I suspect it's an idle distraction; simulation, qualia, and Born-probability models do have implications for each other, but it's unlikely that combining three tough problems into a single complicated-and-tough problem will help gin up any solutions here.

Here's my argument for why you should.

Give me an example of some logical constructs you think I should believe in. Understand that by 'logical construct' I mean 'causally inert, nonspatiotemporal object.' I'm happy to sort-of-reify spatiotemporally instantiated properties, including relational properties. For instance, a simple reason why I consistently infer that 2 + 2 = 4 is that I live in a universe with multiple contiguous spacetime regions; spacetime regions are similar to each other, hence they instantiate the same relational properties, and this makes it possible to juxtapose objects and reason with these recurrent relations (like 'being two arbitrary temporal intervals before' or 'being two arbitrary spatial intervals to the left of').

Comment author: thomblake 13 December 2012 07:42:43PM 0 points [-]

In place of qualitative red, we get zombie-red, i.e., causal/functional-red. In place of qualitative knowledge, we get zombie-knowledge.

At this point, you're just using the language wrong. "knowledge" refers to what you're calling "zombie-knowledge" - whenever we point to an instance of knowledge, we mean whatever it is humans are doing. So "humans are zombies" doesn't work, unless you can point to some sort of non-human non-zombies that somehow gave us zombies the words and concepts of non-zombies.

Comment author: RobbBB 13 December 2012 09:26:20PM *  0 points [-]

At this point, you're just using the language wrong.

That assumes a determinate answer to the question 'what's the right way to use language?' in this case. But the facts on the ground may underdetermine whether it's 'right' to treat definitions more ostensively (i.e., if Berkeley turns out to be right, then when I say 'tree' I'm picking out an image in my mind, not a non-existent material plant Out There), or 'right' to treat definitions as embedded in a theory, an interpretation of the data (i.e., Berkeley doesn't really believe in trees as we do, he just believes in 'tree-images' and misleadingly calls those 'trees'). Either of these can be a legitimate way that linguistic communities change over time; sometimes we keep a term's sense fixed and abandon it if the facts aren't as we thought, whereas sometimes we're more intensionally wishy-washy and allow terms to get pragmatically redefined to fit snugly into the shiny new model. Often it depends on how quickly, and how radically, our view of the world changes.

(Though actually, qualia may raise a serious problem for ostension-focused reference-fixing: It's not clear what we're actually ostending, if we think we're picking out phenomenal properties but those properties are not only misconstrued, but strictly non-existent. At least verbal definitions have the advantage that we can relatively straightforwardly translate the terms involved into our new theory.)

Moreover, this assumes that you know how I'm using the language. I haven't said whether I think 'knowledge' in contemporary English denotes q-knowledge (i.e., knowledge including qualia) or z-knowledge (i.e., causal/functional/behavioral knowledge, without any appeal to qualia). I think it's perfectly plausible that it refers to q-knowledge, hence I hedge my bets when I need to speak more precisely and start introducing 'zombified' terms lest semantic disputes interfere in the discussion of substance. But I'm neutral both on the descriptive question of what we mean by mental terms (how 'theory-neutral' they really are), and on the normative question of what we ought to mean by mental terms (how 'theory-neutral' they should be). I'm an eliminativist on the substantive questions; on the non-substantive question of whether we should be revisionist or traditionalist in our choice of faux-mental terminology, I'm largely indifferent, as long as we're clear and honest in whatever semantic convention we adopt.

Comment author: Oligopsony 11 December 2012 02:58:29AM 0 points [-]

1) If you embrace SSA, then you being you should be more likely on humans being important than on panpsychism, yes? (You may of course have good reasons for preferring SIA.)

2) Suppose again redundantly dual panpsychism. Is there any a priori reason (at this level of metaphysical fancy) to rule out that experiences could causally interact with one another in a way that is isomorphic to mechanical interactions? Then we have a sort of idealist field describable by physics, perfectly monist. Or is this an illegitimate trick?

(Full disclosure: I'd consider myself a cautious physicalist as well, although I'd say psi research constitutes a bigger portion of my doubt than the hard problem.)

Comment author: Vertigo 11 December 2012 03:32:15AM 3 points [-]

Ooo! Seldom do I get to hear someone else voice my version of idealism. I still have a lot of thinking to do on this, but so far it seems to me perfectly legitimate. An idealism isomorphic to mechanical interactions dissolves the Hard Problem of consciousness by denying a premise. It also does so with more elegance than reductionism since it doesn't force us through that series of flaming hoops that orbits and (maybe) eventually collapses into dualism.

This seems more likely to me so far than all the alternatives, so I guess that means I believe it, but not with a great deal of certainty. So far every objection I've heard or been able to imagine has amounted to something like, "But but but the world's just got to be made out of STUFF!!!" But I'm certainly not operating under the assumption that these are the best possible objections. I'd love to see what happens with whatever you've got to throw at my position.

Comment author: Alejandro1 11 December 2012 05:07:33PM *  2 points [-]

The theory you propose in (2) seems close to Neutral Monism. It has fallen into disrepute (and near oblivion) but was the preferred solution to the mind-body problem of many significant philosophers of the late 19th-early 20th, in particular of Bertrand Russell (for a long period). A quote from Russell:

We shall seek to construct a metaphysics of matter which shall make the gulf between physics and perception as small, and the inferences involved in the causal theory of perception as little dubious, as possible. We do not want the percept to appear mysteriously at the end of a causal chain composed of events of a totally different nature; if we can construct a theory of the physical world which makes its events continuous with perception, we have improved the metaphysical status of physics, even if we cannot prove more than that our theory is possible.

Comment author: CronoDAS 11 December 2012 05:05:13AM *  0 points [-]

Which specific qualia I'm experiencing is functionally/causally underdetermined; i.e., there doesn't seem even in principle to be any physically exhaustive reason redness feels exactly as it does, as opposed to feeling like some alien color.

If I knew how the brain worked in sufficient detail, I think I'd be able to explain why this was wrong; I'd have a theory that would predict what qualia a brain experiences based on its structure (or whatever). No, I don't know what the theory is, but I'm pretty confident that there is one.

Comment author: RobbBB 11 December 2012 05:17:29AM 0 points [-]

Can you give me an example of how, even in principle, this would work? Construct a toy universe in which there are experiences causally determined by non-experiences. How would examining anything about the non-experiences tell us that the experiences exist, or what particular way those experiences feel?

Comment author: Decius 12 December 2012 09:46:12PM 1 point [-]

Can you give me an example of how, even in principle, this would work? Construct a toy universe in which there are experiences causally determined by non-experiences. How would examining anything about the non-experiences tell us that the experiences exist, or what particular way those experiences feel?

Taboo experiences.

Comment author: RobbBB 12 December 2012 10:40:55PM 0 points [-]

It sounds like you're asking me to do what I just asked you to do. I don't know what experiences are, except by listing synonyms or by acts of brute ostension — hey, check out that pain! look at that splotch of redness! — so if I could taboo them away, it would mean I'd already solved the hard problem. This may be an error mode of 'tabooing' itself; that decision procedure, applied to our most primitive and generic categories (try tabooing 'existence' or 'feature'), seems to either yield uninformative lists of examples, implausible eliminativisms (what would a world without experience, without existence, or without features, look like?), or circular definitions.

But what happens when we try to taboo a term is just more introspective data; it doesn't give us any infallible decision procedure, on its own, for what conclusion we should draw from problem cases. To assert 'if you can't taboo it, then it's meaningless!', for example, is itself to commit yourself to a highly speculative philosophical and semantic hypothesis.

Comment author: Vaniver 11 December 2012 05:55:16AM *  1 point [-]

Can you give me an example of how, even in principle, this would work?

In general, I would suggest looking at sensory experiences that vary among humans; there's already enough interesting material there without wondering whether there are even other differences. Can we explain enough interesting things about the difference between normal hearing and perfect pitch without talking about qualia?

Once we've done that, are we still interested in discussing qualia in color?

Comment author: CronoDAS 11 December 2012 05:19:47AM *  -1 points [-]
Comment author: RobbBB 11 December 2012 07:01:28AM *  1 point [-]

http://lesswrong.com/lw/p5/brain_breakthrough_its_made_of_neurons/

So your argument is "Doing arithmetic requires consciousness; and we can tell that something is doing arithmetic by looking at its hardware; so we can tell with certainty by looking at certain hardware states that the hardware is sentient"?

http://lesswrong.com/lw/p3/angry_atoms/

So your argument is "We have explained some things physically before, therefore we can explain consciousness physically"?

Also, we can cause certain sensations on demand by electrically stimulating certain brain parts.

So your argument is "Mental states have physical causes, so they must be identical with certain brain-states"?

Set aside whether any of these would satisfy a dualist or agnostic; should they satisfy one?

Comment author: CronoDAS 12 December 2012 03:57:14AM *  0 points [-]

So your argument is "Doing arithmetic requires consciousness; and we can tell that something is doing arithmetic by looking at its hardware; so we can tell with certainty by looking at certain hardware states that the hardware is sentient"?

Well, it's certainly possible to do arithmetic without consciousness; I'm pretty sure an abacus isn't conscious. But there should be a way to look at a clump of matter and tell whether it is conscious or not (at least as well as we can tell the difference between a clump of matter that is alive and a clump of matter that isn't).

So your argument is "We have explained some things physically before, therefore we can explain consciousness physically"?

It's a bit stronger than that: we have explained basically everything physically, including every other example of anything that was said to be impossible to explain physically. The only difference between "explaining the difference between conscious matter and non-conscious matter" and "explaining the difference between living and non-living matter" is that we don't yet know how to do the former.

I think we're hitting a "one man's modus ponens is another man's modus tollens" here. Physicalism implies that the "hard problem of consciousness" is solvable; physicalism is true; therefore the hard problem of consciousness has a solution. That's the simplest form of my argument.

Basically, I think that the evidence in favor of physicalism is a lot stronger than the evidence that the hard problem of consciousness isn't solvable, but if you disagree I don't think I can persuade you otherwise.

Comment author: Decius 12 December 2012 09:45:23PM 1 point [-]

No abacus can do arithmetic. An abacus just sits there.

No backhoe can excavate. A backhoe just sits there.

A trained agent can use an abacus to do arithmetic, just as one can use a backhoe to excavate. Can you define "do arithmetic" in such a manner that it is at least as easy to prove that arithmetic has been done as it is to prove that excavation has been done?

Comment author: CronoDAS 13 December 2012 02:38:51AM 0 points [-]

Does a calculator do arithmetic?

Comment author: RobbBB 12 December 2012 04:22:42AM *  0 points [-]

The only difference between "explaining the difference between conscious matter and non-conscious matter" and "explaining the difference between living and non-living matter" is that we don't yet know how to do the former.

It's impossible to express a sentence like this after having fully appreciated the nature of the Hard Problem. In fact, whether you're a dualist or a physicalist, I think a good litmus test for whether you've grasped just how hard the Hard Problem is is whether you see how categorically different the vitalism case is from the dualism case. See: Chalmers, Consciousness and its Place in Nature.

Physicalism implies that the "hard problem of consciousness" is solvable; physicalism is true; therefore the hard problem of consciousness has a solution.

Physicalism, plus the unsolvability of the Hard Problem (i.e., the impossibility of successful Type-C Materialism), implies that either Type-B Materialism ('mysterianism') or Type-A Materialism ('eliminativism') is correct. Type-B Materialism despairs of a solution while for some reason keeping the physicalist faith; Type-A Materialism dissolves the problem rather than solving it on its own terms.

Basically, I think that the evidence in favor of physicalism is a lot stronger than the evidence that the hard problem of consciousness isn't solvable

The probability of physicalism would need to approach 1 in order for that to be the case.

Comment author: CronoDAS 12 December 2012 05:09:02AM *  0 points [-]

It's impossible to express a sentence like this after having fully appreciated the nature of the Hard Problem. In fact, whether you're a dualist or a physicalist, I think a good litmus test for whether you've grasped just how hard the Hard Problem is is whether you see how categorically different the vitalism case is from the dualism case. See: Chalmers, Consciousness and its Place in Nature.

::follows link::

Call me the Type-C Materialist subspecies of eliminativist, then. I think that a sufficient understanding of the brain will make the solution obvious; the reason we don't have a "functional" explanation of subjective experience is not because the solution doesn't exist, but that we don't know how to do it.

Van Gulick (1993) suggests that conceivability arguments are question-begging, since once we have a good explanation of consciousness, zombies and the like will no longer be conceivable.

This is where I think we'll end up.

Comment author: CronoDAS 12 December 2012 05:13:12AM -1 points [-]

Basically, I think that the evidence in favor of physicalism is a lot stronger than the evidence that the hard problem of consciousness isn't solvable

The probability of physicalism would need to approach 1 in order for that to be the case.

It's a lot closer to 1 than a clever-sounding impossibility argument. See: http://lesswrong.com/lw/ph/can_you_prove_two_particles_are_identical/

Comment author: CronoDAS 11 December 2012 05:26:16AM 0 points [-]

Can you give me an example of how, even in principle, this would work? Construct a toy universe in which there are experiences causally determined by non-experiences. How would examining anything about the non-experiences tell us that the experiences exist, or what particular way those experiences feel?

Can you give me an example of how, even in principle, this would work? Construct a toy universe in which there are computations causally determined by non-computations. How would examining anything about the non-computations tell us that the computations exist, or what particular functions those computations are computing?

Comment author: RobbBB 11 December 2012 07:10:47AM 0 points [-]

My initial response is that any physical interaction in which the state of one thing differentially tracks the states of another can be modeled as a computation. Is your suggestion that an analogous response would solve the Hard Problem, i.e., are you endorsing panpsychism ('everything is literally conscious')?
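
As a toy illustration of that reading (every name here is invented for the example), the 'tracking as computation' move can be made explicit:

```python
# Hypothetical sketch: reading a differential-tracking relation as a
# (trivial) computation. Nothing here is from any real library.

def as_computation(tracking_relation):
    """Interpret a physical tracking relation as a computed function."""
    return lambda state: tracking_relation[state]

# A rock's surface temperature differentially tracks the air temperature,
# so on this permissive reading even the rock "computes" something:
rock_tracks_air = {"cold_air": "cold_rock", "hot_air": "hot_rock"}
f = as_computation(rock_tracks_air)
print(f("hot_air"))  # prints "hot_rock"
```

On this permissive definition almost any causal correlation counts as a computation, which is exactly why it sweeps in rocks and air.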

Comment author: CronoDAS 12 December 2012 03:55:24AM *  1 point [-]

Sorry, bad example... Let's try again.

Can you give me an example of how, even in principle, this would work? Construct a toy universe in which there are living things causally determined by non-living things? How would examining anything about the non-living things tell us that the living things exist, or what particular way those living things are alive?

"Explain how consciousness arises from non-conscious matter" doesn't seem any more of an impossible problem than "Explain how life arises from non-living matter".

Comment author: RobbBB 12 December 2012 04:16:29AM 0 points [-]

We can define and analyze 'life' without any reference to life: As high-fidelity self-replicating macromolecules that interact with their environments to assemble and direct highly responsive cellular containers around themselves. There doesn't seem to be anything missing from our ordinary notion of life here; or anything that is missing could be easily added by sketching out more physical details.

What might a purely physical definition of consciousness that made no appeal to mental concepts look like? How could we generate first-person facts from a complex of third-person facts?

Comment author: TsviBT 11 December 2012 07:42:31AM 1 point [-]

What you described as computation could apply to literally any two things in the same causal universe. But you meant two things that track each other much more tightly than usual. It may be that a rock is literally conscious, but if so, then not very much so. So little that it really does not matter at all. Humans are much more conscious because they reflect the world much more, reflect themselves much more, and [insert solution to Hard Problem here].

Comment author: RobbBB 11 December 2012 08:51:37AM *  0 points [-]

It may be that a rock is literally conscious, but if so, then not very much so. So little that it really does not matter at all.

I dunno. I think if rocks are even a little bit conscious, that's pretty freaky, and I'd like to know about it. I'd certainly like to hear more about what they're conscious of. Are they happy? Can I alter them in some way that will maximize their experiential well-being? Given how many more rocks there are than humans, it could end up being the case that our moral algorithm is dominated by rearranging pebbles on the beach.

Humans are much more conscious because they reflect the world much more, reflect themselves much more, and [insert solution to Hard Problem here].

Hah. Luckily, true panpsychism dissolves the Hard Problem. You don't need to account for mind in terms of non-mind, because there isn't any non-mind to be found.

Comment author: TsviBT 11 December 2012 05:02:40PM 1 point [-]

I think if rocks are even a little bit conscious, that's pretty freaky, and I'd like to know about it.

I meant, I'm pretty sure that rocks are not conscious. It's just that the best way I'm able to express what I mean by "consciousness" may end up apparently including rocks, without me really claiming that rocks are conscious like humans are - in the same way that your definition of computation literally includes air, but you're not really talking about air.

Luckily, true panpsychism dissolves the Hard Problem. You don't need to account for mind in terms of non-mind, because there isn't any non-mind to be found.

I don't understand this. How would saying "all is Mind" explain why qualia feel the way they do?

Comment author: [deleted] 10 December 2012 03:01:30PM 4 points [-]

Daniel Dennett's 'Quining Qualia' (http://ase.tufts.edu/cogstud/papers/quinqual.htm) is taken ('round these parts) to have laid the theory of qualia to rest. Among philosophers, the theory of qualia and the classical empiricism founded on it are also considered to be dead theories, though it's Sellars' "Empiricism and the Philosophy of Mind" (http://www.ditext.com/sellars/epm.html) that is seen to have done the killing.

Comment author: ArisKatsaris 10 December 2012 04:07:24PM 6 points [-]

Daniel Dennett's 'Quining Qualia' (http://ase.tufts.edu/cogstud/papers/quinqual.htm) is taken ('round these parts) to have laid the theory of qualia to rest.

I've not actually read this essay (will do so later today), but I disagree that most people here consider the issue of qualia and the "hard problem of consciousness" to be a solved one.

Time for a poll.


Comment author: [deleted] 12 December 2012 01:43:29PM 0 points [-]

What about “I'd need to think more about this”?

Comment author: [deleted] 11 December 2012 03:08:10AM 4 points [-]

I just read 'Quining Qualia'. I do not see it as a solution to the hard problem of consciousness, at all. However, I did find it brilliant - it shifted my intuition from thinking that conscious experience is somehow magical and inexplicable to thinking that it is plausible that conscious experience could, one day, be explained physically. But to stop here would be to give a fake explanation...the problem has not yet been solved.

A triumphant thundering refutation of [qualia], an absolutely unarguable proof that [qualia] cannot exist, feels very satisfying—a grand cheer for the home team. And so you may not notice that—as a point of cognitive science—you do not have a full and satisfactory descriptive explanation of how each intuitive sensation arises, point by point.

-- Eliezer Yudkowsky, Dissolving the Question

Also, does anyone disagree with anything that Dennett says in the paper, and, if so, what, and why?

Comment author: Peterdjones 11 December 2012 12:42:21PM 2 points [-]

I think I have qualia. I probably don't have qualia as defined by Dennett, as simultaneously ineffable, intrinsic, etc, but there are nonetheless ways things seem to me.

Comment author: [deleted] 13 December 2012 10:20:02AM *  0 points [-]

It may be just my opinion, but please don't quote people and then insert edits into the quotation. Although at least you did do that with parentheses.

By doing so you seem to say that free will and qualia are the same or interchangeable topics that share arguments for and against. But that is not the case. The question of free will is often misunderstood and is much easier to handle.

Qualia are, in my opinion, the abstract structure of consciousness. So on the underlying basic level you have physics and purely physical things, and on the more abstract level you have structure that is transitive with the basic level.

To illustrate what this means, I think Eliezer had an excellent example (though I'm not sure if his intention was similar): the spiking pattern of blue versus actually seeing blue. Even the spiking pattern is far from completely reduced, but the idea is the same: on the level of consciousness you have experience which corresponds to a basic-level thing, much as in the map and territory analogy.

Color vision is hard to approach, though, so it might be easier to start with binary vision of a single pixel: either 1 or 0. Imagine replacing your entire visual cortex with something that only outputs 1 or 0 (though the brain is not binary), so that your entire field of vision has only two distinct experienced states. Granted, imagining this invites the mind-projection fallacy, since you can't actually change your visual cortex to output only 1 or 0. Still, the rest of your consciousness has access to that information, and it becomes very much easier to see how this binary state affects the decisions you make, and easier to make the transition from experience to physics and logic. From there you can work your way back up toward normal vision: several pixels that are each 1 or 0, then grayscale, though colors make things much harder again.

But this doesn't resolve the qualia issue: how would it feel to have 1-bit vision? How do you produce a set of rules that is transitive with the experience of vision?
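
The 1-bit vision setup is easy to sketch functionally (a purely hypothetical toy; all names are invented), which only sharpens the question of what, if anything, the functional story leaves out:

```python
# Hypothetical toy model of the 1-bit vision thought experiment.
# The functional story is trivial to write down; nothing in it says
# what (if anything) having this single bit of vision feels like.

class OneBitEye:
    """An entire visual field collapsed to a single bit: light or dark."""
    def sense(self, light_level: float) -> int:
        return 1 if light_level > 0.5 else 0

class Agent:
    def __init__(self):
        self.eye = OneBitEye()

    def act(self, light_level: float) -> str:
        # Downstream decisions depend only on the one visual bit.
        return "seek shade" if self.eye.sense(light_level) else "seek warmth"

agent = Agent()
print(agent.act(0.9))  # prints "seek shade"
print(agent.act(0.1))  # prints "seek warmth"
```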

Even if you grind everything down to the finest powder it still will be hard to see where this qualia business comes from, because you exist between the lines.

Comment author: [deleted] 13 December 2012 05:22:38PM 0 points [-]

But this doesn't resolve the qualia issue: how would it feel to have 1-bit vision? How do you produce a set of rules that is transitive with the experience of vision?

I agree that that doesn't resolve the qualia issue. To begin with, we'd need to write a SeeRed() function that will write philosophy papers about the redness it perceives, and wonder whence it came, unless it has access to its own source code and can see inside the black box of the SeeRed() function. Even epiphenomenalists agree that this can be done, since they say consciousness has no physical effect on behavior. But here is my intuition (and pretty much every other reductionist's, I reckon) that leads me to reject epiphenomenalism: when I say out loud (so there is a physical effect), "Wow, this flower I am holding is beautiful!", I am saying it because it actually looks beautiful to me! So I believe that, somehow, the perception is explainable, physically. And, at least for me, that intuition is much stronger than the intuition that conscious perception and computation are in separate magisteria.

We'll be able to get a lot further in this discussion once someone actually writes a SeeRed() function, which both epiphenomenalists and reductionists agree can be done.

Meanwhile, dualists think writing such a SeeRed() function is impossible. Time will tell.
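
For concreteness, here is a purely hypothetical stub of what a SeeRed() shell might look like; every name is invented, and the interesting part, the body of the black box, is exactly what no one yet knows how to write:

```python
# Hypothetical stub of the SeeRed() function discussed above.
# The functional shell is easy; _experience_red is the unexplained black box.

def see_red(wavelength_nm: float) -> str:
    """Given light input, return a first-person-style report."""
    if 620 <= wavelength_nm <= 750:
        percept = _experience_red(wavelength_nm)  # the black box
        return f"Wow, this looks {percept}!"
    return "I don't see red."

def _experience_red(wavelength_nm: float) -> str:
    # Placeholder: a functional stand-in, not an account of the quale itself.
    return "red"

print(see_red(650.0))  # prints "Wow, this looks red!"
```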

Comment author: Peterdjones 13 December 2012 05:50:14PM 0 points [-]

So I believe that, somehow, the perception is explainable, physically. And, at least for me, that intuition is much stronger than the intuition that conscious perception and computation are in separate magisteria.

It's possible for physicalism to be true, and computationalism false.

We'll be able to get a lot further in this discussion once someone actually writes a SeeRed() function, which both epiphenomenalists and reductionists agree can be done.

I'll say. Solving the problem does tend to solve the problem.

Comment author: Eliezer_Yudkowsky 10 December 2012 11:25:28PM 2 points [-]

I haven't read either of those but will read them. Also I totally think there was a respectable hard problem and can only stare somewhat confused at people who don't realize what the fuss was about. I don't agree with what Chalmers tries to answer to his problem, but his attempt to pinpoint exactly what seems so confusing seems very spot-on. I haven't read anything very impressive yet from Dennett on the subject; could be that I'm reading the wrong things. Gary Drescher on the other hand is excellent.

It could be that I'm atypical for LW.

EDIT: Skimmed the Dennett one, didn't see much of anything relatively new there; the Sellars link fails.

Comment author: Karl 11 December 2012 03:52:51AM 3 points [-]

Also I totally think there was a respectable hard problem

So you do have a solution to the problem?

Comment author: [deleted] 11 December 2012 01:26:57AM *  0 points [-]

I'll take a look at Drescher, I haven't seen that one.

Try this link? http://selfpace.uconn.edu/class/percep/SellarsEmpPhilMind.pdf

Sellars is important to contemporary philosophy, to the extent that a standard course in epistemology will often end with EPM. I'm not sure it's entirely worth your time though, because it's an argument against classical (not Bayesian) empiricism.

Comment author: RobbBB 11 December 2012 02:46:03AM *  -1 points [-]

Pryor and BonJour explain Sellars better than Sellars does. See: http://www.jimpryor.net/teaching/courses/epist/notes/given.html

The basic question is over whether our beliefs are purely justified by other beliefs, or whether our (visual, auditory, etc.) perceptions themselves 'represent the world as being a certain way' (i.e., have 'propositional content') and, without being beliefs themselves, can lend some measure of support to our beliefs. Note that this is a question about representational content (intentionality) and epistemic justification, not about phenomenal content (qualia) and physicalism.

Comment author: RobbBB 11 December 2012 02:38:20AM *  1 point [-]

Among philosophers, the theory of qualia and the classical empiricism founded on it are also considered to be dead theories

Do you have evidence of this? The PhilPapers survey suggests that only 56.5% of philosophers identify as 'physicalists,' and 59% think that zombies are conceivable (though most of these think zombies are nevertheless impossible). It would also help if you explained what you mean by 'the theory of qualia.'

though it's Sellars' "Empiricism and the Philosophy of Mind" (http://www.ditext.com/sellars/epm.html) that is seen to have done the killing.

Sellars' argument, I think, rests on a few confusions and shaky assumptions. I agree this argument is still extremely widely cited, but I think that serious epistemologists no longer consider it conclusive, and a number reject it outright. Jim Pryor writes:

These anti-Given arguments deserve a re-examination, in light of recent developments in the philosophy of mind. The anti-Given arguments pose a dilemma: either (i) direct apprehension is not a state with propositional content, in which case it's argued to be incapable of providing us with justification for believing any specific proposition; or (ii) direct apprehension is a state with propositional content. This second option is often thought to entail that direct apprehension is a kind of believing, and hence itself would need justification. But it ought nowadays to be very doubtful that the second option does entail such things. These days many philosophers of mind construe perceptual experience as a state with propositional content, even though experience is distinct from, and cannot be reduced to, any kind of belief. Your experiences represent the world to you as being a certain way, and the way they represent the world as being is their propositional content. Now, surely, its looking to you as if the world is a certain way is not a kind of state for which you need any justification. Hence, this construal of perceptual experience seems to block the step from 'has propositional content' to 'needs justification'. Of course, what are 'apprehended' by perceptual experiences are facts about your perceptual environment, rather than facts about your current mental states. But it should at least be clear that the second horn of the anti-Given argument needs more argument than we've seen so far.

Comment author: [deleted] 11 December 2012 03:04:21AM 0 points [-]

Do you have evidence of this?

I mentioned in a subsequent post that there was an ambiguity in my original claim. Qualia have been used by philosophers to do two different jobs: 1) as the basis of the hard problem of consciousness, and 2) as the foundation of foundationalist theories of empiricism. Sellars' essay, in particular, is aimed at (2), not (1), and the mention of 'qualia' to which I was responding was probably a case of (1). The question of physicalism and the conceivability of p-zombies isn't directly related to the epistemic role of qualia, and one could reject classical empiricism on the basis of Sellars' argument while still believing that the reality of irreducible qualia speaks against physicalism and for the conceivability of p-zombies.

Sellars' argument, I think, rests on a few confusions and shaky assumptions.

That may be; it's a bit outside my ken. Thanks for posting the quote. I won't try to defend the overall organization of EPM, which is fairly labyrinthine, but I have some confidence in its critiques. I'd need more familiarity with Pryor's work to level a serious criticism, but on the basis of your quote he seems to me to be missing the point: Sellars is not arguing that something's appearing to you in a certain way is a state (like a belief) which requires justification. He argues that it is not tenable to think of this state as being independent of (e.g., a foundation for) a whole battery of concepts, including epistemic concepts like 'being in standard perceptual conditions'. Looking a certain way is posterior to (a sophistication of) its being that way. Looking red is posterior to simply being red. And this is an attack on the epistemic role of qualia insofar as this theory implies that 'looking red' is in some way fundamental and conceptually independent.

Comment author: RobbBB 11 December 2012 03:21:20AM *  1 point [-]

Sellars is not arguing that something's appearing to you in a certain way is a state (like a belief) which requires justification. He argues that it is not tenable to think of this state as being independent of (e.g., a foundation for) a whole battery of concepts, including epistemic concepts like 'being in standard perceptual conditions'. Looking a certain way is posterior to (a sophistication of) its being that way. Looking red is posterior to simply being red. And this is an attack on the epistemic role of qualia insofar as this theory implies that 'looking red' is in some way fundamental and conceptually independent.

Yes, that is the argument. And I think its soundness is far from obvious, and that there's a lot of plausibility to the alternative view. The main problem is that this notion of 'conceptual content' is very hard to explicate; it often seems, unfortunately, to be confused with the idea of linguistic content. But do we really think that the only things that should add to or take away from my credence in any belief are the words I think to myself? In any case, Pryor's paper Is There Non-Inferential Justification? is probably the best starting point for the rival view. And he's an exceedingly lucid thinker.

Comment author: [deleted] 11 December 2012 04:16:13PM 0 points [-]

I'll read the Pryor article in more detail, but from your gloss and from a quick scan, I still don't see where Pryor and Sellars are even supposed to disagree. I think, without being totally sure, that Sellars would answer the title question of Pryor's article with an emphatic 'yes!'. Experience of a red car justifies belief that the car is red. While experience of a red car also presupposes a battery of other concepts (including epistemic concepts), these concepts are not related to the knowledge of the redness of the car as premises to a conclusion.

Here's a quote from EPM p148, which illustrates that the above is Sellars' view (italics mine). Note that in the following, Sellars is sketching the view he wants to attack:

One of the forms taken by the Myth of the Given is the idea that there is, indeed must be, a structure of particular matter of fact such that (a) each fact can not only be noninferentially known to be the case, but presupposes no other knowledge either of particular matter of fact, or of general truths; and (b) such that the noninferential knowledge of facts belonging to this structure constitutes the ultimate court of appeals for all factual claims -- particular and general -- about the world. It is important to note that I characterized the knowledge of fact belonging to this stratum as not only noninferential, but as presupposing no knowledge of other matter of fact, whether particular or general. It might be thought that this is a redundancy, that knowledge (not belief or conviction, but knowledge) which logically presupposes knowledge of other facts must be inferential. This, however, as I hope to show, is itself an episode in the Myth.

So Sellars wants to argue that empiricism has no foundation because experience (as an epistemic success term) is not possible without knowledge of a bunch of other facts. But it does not follow from this that a) Sellars thinks knowledge derived from experience is inferential, or b) Sellars thinks non-inferential knowledge as such is a problem.

But that said, I haven't read enough of Pryor's paper(s) to understand his critiques. I'll take a look.

Comment author: Peterdjones 10 December 2012 03:16:10PM 1 point [-]

I'm not at all convinced that all LWers have been persuaded that they don't have qualia.

Among philosophers, the theory of qualia and the classical empiricism founded on it are also considered to be dead theories

Amongst some philosophers.

it's Sellars' "Empiricism and the Philosophy of Mind" (http://www.ditext.com/sellars/epm.html) that is seen to have done the killing.

Hmmm. The only enthusiast for Sellars I know finds it necessary to adopt Direct Realism, which is a horribly flawed theory. In fact most of the problems with it consist of reconciling it with a naturalistic world view.

Comment author: [deleted] 10 December 2012 03:28:06PM *  1 point [-]

I'm not at all convinced that all LWers have been persuaded that they don't have qualia.

Well, it's probably important to distinguish between two uses to which the theory of qualia is put: first as the foundation of foundationalist empiricism, and second as the basis for the 'hard problem of consciousness'. Foundationalist theories of empiricism are largely dead, as is the idea that qualia are a source of immediate, non-conceptual knowledge. That's the work that Sellars (a strident reductivist and naturalist) did.

Now that I read it again, I think my original post was a bit misleading because I implied that the theory of qualia as establishing the 'hard problem' is also a dead theory. This is not the case, and important philosophers still defend the hard problem on these grounds. Mea Culpa.

The only enthusiast for Sellars I know finds it necessary to adopt Direct Realism, which is a horribly flawed theory. In fact most of the problems with it consist of reconciling it with a naturalistic world view.

Once direct realism as an epistemic theory is properly distinguished from a psychological theory of perception, I think it becomes an extremely plausible view. I think I'd probably call myself a direct realist.

Comment author: NancyLebovitz 12 December 2012 04:00:09AM 1 point [-]

Foundationalist theories of empiricism are largely dead, as is the idea that qualia are a source of immediate, non-conceptual knowledge.

I'd have said that qualia are not a source of unprocessed knowledge, but the processing isn't conceptual.

I take 'conceptual' to mean thought which is at least somewhat conscious and which probably can be represented verbally. What do you mean by the word?

Comment author: [deleted] 12 December 2012 04:45:23AM 0 points [-]

I take 'conceptual' to mean thought which is at least somewhat conscious and which probably can be represented verbally. What do you mean by the word?

I mean 'of such a kind as to be a premise or conclusion in an inference'. I'm not sure whether I agree with your assessment or not: if by 'non-conceptual processing' you mean to refer to something like a physiological or neurological process, then I think I disagree (simply because physiological processes can't be any part of an inference, even granting that often times things that are part of an inference are in some way identical to a neurological process).

Comment author: NancyLebovitz 13 December 2012 05:45:03AM 0 points [-]

I think we're looking at qualia from different angles. I agree that the process which leads to qualia might well be understood conceptually from the outside (I think that's what you meant). However, I don't think there's an accessible conceptual process by which the creation of qualia can be felt by the person having the qualia.

Comment author: Manfred 10 December 2012 03:34:28PM *  0 points [-]

Right - to hammer on the point, the common-ish (EDIT: Looks like I was hastily generalizing) LW opinion is that there never was any "hard problem of consciousness" (EDIT: meaning one that is distinct from "easy" problems of consciousness, that is, the ones we know roughly how to go about solving). It's just that when we meet a problem that we're very ignorant about, a lot of people won't go "I'm very ignorant about this," they'll go "This has a mysterious substance, and so why would learning more change that inherent property?"

Comment author: [deleted] 10 December 2012 03:41:41PM *  9 points [-]

It should be remembered though that the guy who's famous for formulating the hard problem of consciousness is:

1) A fan of EY's TDT, who's made significant efforts to get the theory some academic attention.
2) A believer in the singularity, and its accompanying problems.
3) The student of Douglas Hofstadter.
4) Someone very interested in AI.
5) Someone very well versed and interested in physics and psychology.
6) A rare but occasional poster on LW.
7) Very likely one of the smartest people alive.
etc. etc.

I think consciousness is reducible too, but David Chalmers is a serious dude, and the 'hard problem' is to be taken very, very seriously. It's very easy to not see a philosophical problem, and very easy to think that the problem must be solved by psychology somewhere, much harder to actually explain a solution/dissolution.

Comment author: Alejandro1 10 December 2012 04:32:35PM -1 points [-]

I agree with you about how smart Chalmers is and that he does very good philosophical work. But I think you have a mistake in terminology when you say

I think consciousness is reducible too, but David Chalmers is a serious dude, and the 'hard problem' is to be taken very, very seriously.

It is an understandable mistake, because it is natural to take "the hard problem" as meaning just "understanding consciousness", and I agree that this is a hard problem in ordinary terms and that saying "there is a reduction/dissolution" is not enough. But Chalmers introduced the distinction between the "hard problem" and the "easy problems" by saying that understanding the functional aspects of the mind, the information processing, etc, are all "easy problems". So a functionalist/computationalist materialist, like most people on this site, cannot buy into the notion that there is a serious "hard problem" in Chalmers' sense. This notion is defined in a way that begs the question by assuming that qualia are irreducible. We should say instead that solving the "easy problems" is at the same time much less trivial than Chalmers makes it seem, and enough to fully account for consciousness.

Comment author: Peterdjones 10 December 2012 05:00:13PM *  3 points [-]

cannot buy into the notion that there is a serious "hard problem" in Chalmers' sense. This notion is defined in a way that begs the question by assuming that qualia are irreducible.

No it isn't. Here is what Chalmers says:

"It is undeniable that some organisms are subjects of experience. But the question of how it is that these systems are subjects of experience is perplexing. Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? How can we explain why there is something it is like to entertain a mental image, or to experience an emotion? It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does."

There is no statement of irreducibility there. There is a statement that we have "no good explanation", and we don't.

Comment author: Alejandro1 10 December 2012 05:10:30PM *  3 points [-]

However, see how he contrasts it with the "easy problems" (from Consciousness and its Place in Nature - pdf):

What makes the easy problems easy? For these problems, the task is to explain certain behavioral or cognitive functions: that is, to explain how some causal role is played in the cognitive system, ultimately in the production of behavior. To explain the performance of such a function, one need only specify a mechanism that plays the relevant role. And there is good reason to believe that neural or computational mechanisms can play those roles.

What makes the hard problem hard? Here, the task is not to explain behavioral and cognitive functions: even once one has an explanation of all the relevant functions in the vicinity of consciousness—discrimination, integration, access, report, control—there may still remain a further question: why is the performance of these functions accompanied by experience?

It seems clear that for Chalmers any description in terms of behavior and cognitive function is by definition not addressing the hard problem.

Comment author: Peterdjones 10 December 2012 05:17:50PM 1 point [-]

But that is not to say that qualia are irreducible things; it is to say that mechanical explanations of qualia have not worked to date.

Comment author: dspeyer 10 December 2012 09:40:36PM -2 points [-]

Why should physical processing give rise to a rich inner life at all?

What does this mean by "why"? What evolutionary advantage is there? Well, it enables imagination, which lets us survive a wider variety of dangers. What physical mechanism is there? That's an open problem in neurology, but they're making progress.

I've read this several times, and I don't see a hard philosophical problem.

Comment author: Peterdjones 10 December 2012 09:50:28PM 2 points [-]

What does this mean by "why"?

It's definitely a how-it-happens "why" and not a how-did-it-evolve "why".

Well, it enables imagination,

There's more to qualia than free-floating representations. There is no reason to suppose an AI's internal maps have phenomenal feels, no way of testing that they do, and no way of engineering them in.

I've read this several times, and I don't see a hard philosophical problem.

It's a hard scientific problem. How could you have a theory that tells you how the world seems to a bat on LSD? How can you write a SeeRed() function?

Comment author: DaFranker 12 December 2012 09:43:17PM *  0 points [-]

How can you write a SeeRed() function?

Presumably, the exact same way you'd write any other function.

In this case, all that matters is that instances of seeing red things correctly map to outputs expected when one sees red things as opposed to not seeing red things.

If the correct behavior is fully and coherently maintained / programmed, then you have no means of telling it apart from a human's "redness qualia". If prompted and sufficiently intelligent, this program will write philosophy papers about the redness it perceives, and wonder whence it came, unless it has access to its own source code and can see inside the black box of the SeeRed() function.

Of course, I'm arguing a bit by the premises here with "correct behavior" being "fully and coherently maintained". The space of inputs and outputs to take into account in order to make a program that would convince you of its possession of the redness qualia is too vast for us at the moment.

TL;DR: It all depends on what the SeeRed() function will be used for / how we want it to behave.
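To make the behavioral reading above concrete: on this view a SeeRed() function is nothing over and above an input-output mapping. The pixel representation and threshold below are illustrative assumptions, not anything proposed in the thread — a deliberately trivial sketch:

```python
# Toy "SeeRed()" in the purely behavioral sense described above:
# all that is checked is the mapping from inputs to outputs.
def see_red(pixel):
    """Return True when an (r, g, b) pixel counts as 'seeing red'."""
    r, g, b = pixel
    # Illustrative, hypothetical threshold: red channel dominates.
    return r > 150 and r > 2 * g and r > 2 * b

print(see_red((200, 30, 40)))   # a red-ish input -> True
print(see_red((30, 200, 40)))   # a green input   -> False
```

The dispute in the following comments is whether any such mapping, however elaborate, addresses what happens between input and output.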

Comment author: Peterdjones 12 December 2012 09:59:35PM *  -1 points [-]

In this case, all that matters is that instances of seeing red things correctly map to outputs expected when one sees red things as opposed to not seeing red things.

False. In this case what matters is the perception of a red colour that occurs between input and output. That is what the Hard Problem, the problem of qualia, is about.

If the correct behavior is fully and coherently maintained / programmed, then you have no means of telling it apart from a human's "redness qualia"

That doesn't mean there are no qualia (I have them so I know there are). That also doesn't mean qualia just serendipitously arrive whenever the correct mapping from inputs to outputs is in place. You have not written a SeeRed() or solved the HP. You have just assumed that what is very possibly a zombie is good enough.

Comment author: Decius 12 December 2012 09:36:16PM 0 points [-]

Is there a reason to suppose that anybody else's maps have phenomenal feels, a way of testing that they do, or a way of telling the difference? Why can't those ways be generalized to Intelligent entities in general?

Comment author: Peterdjones 12 December 2012 10:03:25PM -1 points [-]

Is there a reason to suppose that anybody else's maps have phenomenal feels,

Yes: naturalism. It would be naturalistically anomalous if their brains worked very similarly, but their phenomenology were completely different.

a way of testing that they do,

No. So what? Are you saying we are all p-zombies?

Comment author: Manfred 10 December 2012 04:32:16PM *  -1 points [-]

Though on the other hand, we don't have room to take everything serious dudes say seriously - too many dudes, not enough time.

If a problem happens not to exist, then I suppose one will just have to nerve oneself and not see it. Yes, there are non-hard problems of consciousness, where you explain how a certain process or feeling occurs in the brain, and sure, there are some non-hard problems I'd wave away with "well, that's solved by psychology somewhere." But no amount of that has any bearing on the "hard problem," which will remain in scare quotes as befits its effective nonexistence - finding a solution to a problem that is not a problem would be silly.

(EDIT: To clarify, I am not saying qualia do not exist, I am saying some mysterious barrier of hardness around qualia does not exist.)

Comment author: Peterdjones 10 December 2012 05:01:42PM 1 point [-]

If a problem happens not to exist, then I suppose one will just have to nerve oneself and not see it.

OK. Then demonstrate that the HP does not exist, in terms of Chalmers's specification, by showing that we do have a good explanation.

Comment author: Manfred 10 December 2012 08:04:11PM *  0 points [-]

Well, said Achilles, everybody knows that if you have A and B and "A and B imply Z," then you have Z.

How an Algorithm Feels From Inside.
The Visual Cortex is Used to Imagine
Stimulating the Visual Cortex Makes the Blind See

This sort of thing is sufficient for me, like Achilles' explanations were enough for Achilles. But if, say, the perception of the hard problem was causally unrelated to the actual existence of a hard problem (for epiphenomenalism, this is literally what is going on), then gosh, it would seem like no matter what explanations you heard, the hard problem wouldn't go away - so it must be either a proof of dualism or a mistake.

Comment author: Peterdjones 10 December 2012 08:59:02PM *  1 point [-]

This sort of thing is sufficient for me

But not for me. Indeed, I am pretty sure none of those articles is even intended as a solution to the HP. And if they are, why not publish them in a journal and become famous?

How an Algorithm Feels From Inside.

Intended as a solution to free will.

Stimulating the Visual Cortex Makes the Blind See

So? Every living qualiaphile accepts some sort of relationship between brain states and qualia.

if, say, the perception of the hard problem was causally unrelated to the actual existence of a hard problem (for epiphenomenalism, this is literally what is going on),

So? I said nothing about epiphenomenalism.

Comment author: Manfred 10 December 2012 09:49:40PM *  0 points [-]

So? I said nothing about epiphenomenalism

The non-parenthetical was a throwback to a whole few posts ago, where I claimed that perception of the hard problem was often from the mind projection fallacy.

Other than that, I don't have much to respond to here, since you're just going "So?"

Comment author: Peterdjones 10 December 2012 10:01:00PM *  0 points [-]

The non-parenthetical was a throwback to a whole few posts ago, where I claimed that perception of the hard problem was often from the mind projection fallacy.

I can't find the posting, and I don't see how the MPF would relate to epiphenomenalism anyway.

How did you expect to convince me? I am familiar with all the stuff you are quoting, and I still think there is an HP. So do many people.

Comment author: [deleted] 10 December 2012 04:35:35PM 1 point [-]

For practical reasons, I think that's fair enough...so long as we're clear that the above is a fully general counterargument.

Comment author: Manfred 10 December 2012 05:01:18PM *  0 points [-]

Right. I have not said any actual arguments against the hard problem of consciousness.

EDIT: Was true when I said it, then I replied to PeterD, not that it worked (as I noted in that very post, the direct approach has little chance against a confusion)

Comment author: Peterdjones 10 December 2012 05:05:23PM 0 points [-]

Argument for the importance of the HP: it is about the only thing that would motivate an educated 21st-century person into doubting physicalism.

Comment author: RichardKennaway 10 December 2012 03:56:05PM 4 points [-]

The rest mostly go, "this could only be explained by a mysterious substance, there are no mysterious substances, therefore this does not exist."

Comment author: Peterdjones 10 December 2012 04:06:44PM *  0 points [-]

I don't know why you guys keep harping about substances. Substance dualism has been out of favour for a good century.

Comment author: Manfred 10 December 2012 04:54:32PM *  2 points [-]

Sorry, I was misusing terminology. Any ignorance-generating / ignorance-embodying explanation (e.g. quantum mysticism or élan vital) uses what I'm calling "mysterious substance."

Basically I'm calling "quantum" a mysterious substance (for the quantum mystics), even though it's not like you can bottle it.

Maybe I should have said "mysterious form?" :D

Comment author: Peterdjones 10 December 2012 03:51:45PM 4 points [-]

There is a Hard Problem, because there is basically no (non-eliminative) science or technology of qualia at all. We can get a start on the problem of building cognition, memory and perception into an AI, but we can't get a start on writing code for Red or Pain or Salty. You can tell there is basically no non-eliminative science or technology of qualia because the best LWers can quote is Dennett's eliminative theory.

Comment author: MrMind 10 December 2012 03:43:53PM *  0 points [-]

I don't know what others accept as a solution to the qualia problem, but I've found the explanations in "How an algorithm feels from the inside" spot-on. For me, the old sequences have solved the qualia problem, and from what I see the new sequence presupposes the same.

Comment author: ArisKatsaris 18 December 2012 11:09:53AM *  1 point [-]

I've found the explanations in "How an algorithm feels from the inside" spot-on.

I'm not sure I understand what it means for an algorithm to have an inside, let alone for an algorithm to "feel" something from the inside. "Inside" is a geometrical concept, not an algorithmic one.

Please explain what the inside feeling of e.g. the Fibonacci sequence (or an algorithm calculating such) would be.

Comment author: MrMind 18 December 2012 04:17:06PM 1 point [-]

I'm not sure I understand what it means for an algorithm to have an inside, let alone for an algorithm to "feel" something from the inside. "Inside" is a geometrical concept, not an algorithmic one.

Well, that's just the title, you know? The original article was talking about cognitive algorithms (an algorithm, not any algorithm). Unless you assume some kind of un-physical substance having a causal effect on your brain and your continued existence after death, you are what your cognitive algorithm feels like when it's run on your brain wetware.

"Inside" is a geometrical concept, not an algorithmical one.

That's not true: every formal system that can produce a model of a subset of its axioms might be considered as having an 'inside' (as in set theory: constructible models are called 'inner models'), and that's just one possible definition.

Comment author: ArisKatsaris 18 December 2012 04:40:37PM *  0 points [-]

The original article was talking about cognitive algorithms (an algorithm, not any algorithm).

So what's the difference between cognitive algorithms with the ability of "feeling from the inside" and the non-cognitive algorithms which can't "feel from the inside"?

Unless you assume some kind of un-physical substance having a causal effect on your brain and your continued existence after death, you are what your cognitive algorithm feels like when it's run on your brain wetware.

Please don't construct strawmen. I never once mentioned unphysical substances having any causal effect, nor do I believe in such. Actually from my perspective it seems to me that it is you who are referring to unphysical substances called "algorithms" "models", the "inside", etc. All these seem to me to be on the map, not on the territory.

And to say that I am my algorithm running on my brain doesn't help dissolve for me the question of qualia anymore than if some religious guy had said that I'm the soul controlling my body.

Comment author: MrMind 18 December 2012 05:13:17PM *  0 points [-]

So what's the difference between cognitive algorithms with the ability of "feeling from the inside" and the non-cognitive algorithms which can't "feel from the inside"?

If I knew, I would have already written an AI. This is like an NP problem: easy to check, hard to solve. I know that the one running on my brain is of that kind, and the one spouting Fibonacci numbers is not. I can only guess that it involves some kind of self-representation.

Please don't construct strawmen. I never once mentioned unphysical substances having any causal effect, nor do I believe in such.

Sorry if I seemed to do so, I wasn't attributing those beliefs to you, I was just listing the possible escape routes from the argument.

Actually from my perspective it seems to me that it is you who are referring to unphysical substances called "algorithms" "models", the "inside", etc. All these seem to me to be on the map, not on the territory.

Well, if you do not already accept those concepts, you need to tell me what your basic ontology is so we can agree on definitions. I thought we already had "algorithm" covered by "Please explain what the inside feeling of e.g. the Fibonacci sequence (or an algorithm calculating such) would be".

And to say that I am my algorithm running on my brain doesn't help dissolve for me the question of qualia anymore than if some religious guy had said that I'm the soul controlling my body.

That's because it was not the question that my sentence was answering. You have to admit that writing "I'm not sure I understand what it means for an algorithm to have an inside" is a rather strange way to ask "Please justify the way the sequence has in your opinion dissolved the qualia problem". If you're asking me that, I might just want to write an entire separate post, in the hope of being clearer and more convincing.

Comment author: ArisKatsaris 18 December 2012 07:53:44PM *  1 point [-]

If I knew I would have already written an AI.

I think this is confusing qualia with intelligence. There's no big confusion about how an algorithm run on hardware can produce something we identify as intelligence -- there's a big confusion about such an algorithm "feeling things from the inside".

Well, if you already do not accept those concepts, you need to tell me what your basic ontology is so we can agree on definitions.

It seems to me that in a physical universe, the concept of "algorithms" is merely an abstract representation in our minds of groupings of physical happenings, and therefore algorithms are no more ontologically fundamental than the category of "fruits" or "dinosaurs".

Now, starting with a mathematical ontology instead, like Tegmark IV's Mathematical Universe Hypothesis, it's physical particles that are concrete representations of algorithms (very simple algorithms in the case of particles). In that ontology, where algorithms are ontologically fundamental and physical particles aren't, you can perhaps clearly define qualia as the inputs of the much-more-complex algorithms which are our minds...

That's sort-of the way that I would go about dissolving the issue of qualia if I could. But in a universe which is fundamentally physical it doesn't get dissolved by positing "algorithms" because algorithms aren't fundamentally physical...

Comment author: MrMind 21 December 2012 10:52:56AM 0 points [-]

I'm going to write a full-blown post so that I can present my view more clearly. If you want, we can move the discussion there when it's ready (I think in a couple of days).