Manfred comments on By Which It May Be Judged - LessWrong

Post author: Eliezer_Yudkowsky 10 December 2012 04:26AM 35 points


Comment author: Manfred 10 December 2012 03:34:28PM * 0 points

Right - to hammer on the point, the common-ish (EDIT: Looks like I was hastily generalizing) LW opinion is that there never was any "hard problem of consciousness" (EDIT: meaning one that is distinct from "easy" problems of consciousness, that is, the ones we know roughly how to go about solving). It's just that when we meet a problem that we're very ignorant about, a lot of people won't go "I'm very ignorant about this," they'll go "This has a mysterious substance, and so why would learning more change that inherent property?"

Comment author: [deleted] 10 December 2012 03:41:41PM * 9 points

It should be remembered though that the guy who's famous for formulating the hard problem of consciousness is:

1) A fan of EY's TDT, who's made significant efforts to get the theory some academic attention.
2) A believer in the singularity, and its accompanying problems.
3) A student of Douglas Hofstadter.
4) Someone very interested in AI.
5) Someone very well versed and interested in physics and psychology.
6) A rare but occasional poster on LW.
7) Very likely one of the smartest people alive.

etc. etc.

I think consciousness is reducible too, but David Chalmers is a serious dude, and the 'hard problem' is to be taken very, very seriously. It's very easy to not see a philosophical problem, very easy to think that the problem must be solved by psychology somewhere, and much harder to actually explain a solution/dissolution.

Comment author: Alejandro1 10 December 2012 04:32:35PM -1 points

I agree with you about how smart Chalmers is and that he does very good philosophical work. But I think you are making a terminological mistake when you say

I think consciousness is reducible too, but David Chalmers is a serious dude, and the 'hard problem' is to be taken very, very seriously.

It is an understandable mistake, because it is natural to take "the hard problem" as meaning just "understanding consciousness", and I agree that this is a hard problem in ordinary terms and that saying "there is a reduction/dissolution" is not enough. But Chalmers introduced the distinction between the "hard problem" and the "easy problems" by saying that understanding the functional aspects of the mind, the information processing, etc., are all "easy problems". So a functionalist/computationalist materialist, like most people on this site, cannot buy into the notion that there is a serious "hard problem" in Chalmers' sense. This notion is defined in a way that begs the question by assuming that qualia are irreducible. We should say instead that solving the "easy problems" is at the same time much less trivial than Chalmers makes it seem, and enough to fully account for consciousness.

Comment author: Peterdjones 10 December 2012 05:00:13PM * 3 points

cannot buy into the notion that there is a serious "hard problem" in Chalmers' sense. This notion is defined in a way that begs the question by assuming that qualia are irreducible.

No it isn't. Here is what Chalmers says:

"It is undeniable that some organisms are subjects of experience. But the question of how it is that these systems are subjects of experience is perplexing. Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? How can we explain why there is something it is like to entertain a mental image, or to experience an emotion? It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does."

There is no statement of irreducibility there. There is a statement that we have "no good explanation", and we don't.

Comment author: Alejandro1 10 December 2012 05:10:30PM * 3 points

However, see how he contrasts it with the "easy problems" (from Consciousness and its Place in Nature - pdf):

What makes the easy problems easy? For these problems, the task is to explain certain behavioral or cognitive functions: that is, to explain how some causal role is played in the cognitive system, ultimately in the production of behavior. To explain the performance of such a function, one need only specify a mechanism that plays the relevant role. And there is good reason to believe that neural or computational mechanisms can play those roles.

What makes the hard problem hard? Here, the task is not to explain behavioral and cognitive functions: even once one has an explanation of all the relevant functions in the vicinity of consciousness—discrimination, integration, access, report, control—there may still remain a further question: why is the performance of these functions accompanied by experience?

It seems clear that for Chalmers any description in terms of behavior and cognitive function is by definition not addressing the hard problem.

Comment author: Peterdjones 10 December 2012 05:17:50PM 1 point

But that is not to say that qualia are irreducible things; it is to say that mechanical explanations of qualia have not worked to date.

Comment author: dspeyer 10 December 2012 09:40:36PM -2 points

Why should physical processing give rise to a rich inner life at all?

What does this mean by "why"? What evolutionary advantage is there? Well, it enables imagination, which lets us survive a wider variety of dangers. What physical mechanism is there? That's an open problem in neurology, but researchers are making progress.

I've read this several times, and I don't see a hard philosophical problem.

Comment author: Peterdjones 10 December 2012 09:50:28PM 2 points

What does this mean by "why"?

It's definitely a how-it-happens "why" and not a how-did-it-evolve "why".

Well, it enables imagination,

There's more to qualia than free-floating representations. There is no reason to suppose an AI's internal maps have phenomenal feels, no way of testing that they do, and no way of engineering them in.

I've read this several times, and I don't see a hard philosophical problem.

It's a hard scientific problem. How could you have a theory that tells you how the world seems to a bat on LSD? How can you write a SeeRed() function?

Comment author: DaFranker 12 December 2012 09:43:17PM * 0 points

How can you write a SeeRed() function?

Presumably, the exact same way you'd write any other function.

In this case, all that matters is that instances of seeing red things correctly map to outputs expected when one sees red things as opposed to not seeing red things.
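
A minimal sketch of that input-to-output reading, in Python (all names here are hypothetical illustrations of the mapping claim, not a proposed solution to the hard problem):

    def see_red(rgb):
        """Return True when the input counts as 'seeing a red thing'."""
        r, g, b = rgb
        # Toy criterion: the red channel clearly dominates the others.
        return r > 100 and r > 1.5 * g and r > 1.5 * b

    def report(rgb):
        # Downstream behavior conditioned on the discrimination.
        return "That looks red." if see_red(rgb) else "That does not look red."

    print(report((200, 40, 30)))  # -> That looks red.
    print(report((30, 40, 200)))  # -> That does not look red.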

If the correct behavior is fully and coherently maintained / programmed, then you have no means of telling it apart from a human's "redness qualia". If prompted and sufficiently intelligent, this program will write philosophy papers about the redness it perceives, and wonder whence it came, unless it has access to its own source code and can see inside the black box of the SeeRed() function.

Of course, I'm arguing a bit by the premises here with "correct behavior" being "fully and coherently maintained". The space of inputs and outputs to take into account in order to make a program that would convince you of its possession of the redness qualia is too vast for us at the moment.

TL;DR: It all depends on what the SeeRed() function will be used for / how we want it to behave.

Comment author: Peterdjones 12 December 2012 09:59:35PM * -1 points

In this case, all that matters is that instances of seeing red things correctly map to outputs expected when one sees red things as opposed to not seeing red things.

False. In this case what matters is the perception of a red colour that occurs between input and output. That is what the Hard Problem, the problem of qualia, is about.

If the correct behavior is fully and coherently maintained / programmed, then you have no means of telling it apart from a human's "redness qualia"

That doesn't mean there are no qualia (I have them, so I know there are). That also doesn't mean qualia just serendipitously arrive whenever the correct mapping from inputs to outputs is in place. You have not written a SeeRed() or solved the HP. You have just assumed that what is very possibly a zombie is good enough.

Comment author: DaFranker 12 December 2012 10:07:06PM 0 points

That doesn't mean there are no qualia (I have them, so I know there are). That also doesn't mean qualia just serendipitously arrive whenever the correct inputs and outputs are in place. You have not written a SeeRed() or solved the HP. You have just assumed that what is very possibly a zombie is good enough.

None of these were among my claims. For a program to reliably pass Turing-like tests for seeing redness, a GLUT or zombie-like system would not cut it; you'd need some sort of internal system that generates certain inner properties and behaviors, one that would be effectively indistinguishable from qualia (this is my claim), and may very well be qualia (this is not my core claim, but it is something I find plausible).
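
To make the contrast concrete, here is a toy sketch (hypothetical names; only an illustration of the architectural difference, not a claim about qualia): a GLUT maps questions straight to canned answers, while the second system routes its answers through an internal state that sits between input and output.

    # A GLUT: fixed input -> fixed output, with no inner structure at all.
    GLUT = {"what do you see?": "Something red."}

    def glut_answer(question):
        return GLUT.get(question, "I don't know.")

    class InnerModelAgent:
        """Generates answers from an internal percept rather than a lookup."""

        def __init__(self):
            self.percept = None  # internal state between input and output

        def look_at(self, rgb):
            r, g, b = rgb
            self.percept = "red" if r > max(g, b) else "not red"

        def answer(self, question):
            if question == "what do you see?":
                return "Something {}.".format(self.percept)
            return "I don't know."

    agent = InnerModelAgent()
    agent.look_at((200, 40, 30))
    print(agent.answer("what do you see?"))  # -> Something red.
    print(glut_answer("what do you see?"))   # same output, no inner state

The two can coincide on any finite list of exchanges; the claim above is that only the second kind of system could plausibly scale to open-ended Turing-like tests.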

Obviously I haven't solved the Hard Problem just by saying this. However, I do greatly dislike your apparent premise* that qualia can never be dissolved into patterns and physics and logic.

* If this isn't among your premises or claims, then it still does appear that way, but apologies in advance for the strawmanning.

Comment author: Peterdjones 12 December 2012 10:12:50PM 0 points

None of these were among my claims. For a program to reliably pass Turing-like tests for seeing redness, a GLUT or zombie-like system would not cut it; you'd need some sort of internal system that generates certain inner properties and behaviors, one that would be effectively indistinguishable from qualia (this is my claim), and may very well be qualia (this is not my core claim, but it is something I find plausible).

Sorry, that is most definitely "serendipitously arrive". You don't know how to engineer in the Redness explicitly; you are just assuming it must be there if everything else is in place.

However, I do greatly dislike your apparent premise* that qualia can never be dissolved into patterns and physics and logic.

The claim is more like "hasn't been", and you haven't shown me a SeeRed().

Comment author: Decius 12 December 2012 09:36:16PM 0 points

Is there a reason to suppose that anybody else's maps have phenomenal feels, a way of testing that they do, or a way of telling the difference? Why can't those ways be generalized to intelligent entities in general?

Comment author: Peterdjones 12 December 2012 10:03:25PM -1 points

Is there a reason to suppose that anybody else's maps have phenomenal feels,

Yes: naturalism. It would be naturalistically anomalous if their brains worked very similarly, but their phenomenology were completely different.

a way of testing that they do,

No. So what? Are you saying we are all p-zombies?

Comment author: DaFranker 12 December 2012 10:10:28PM * 1 point

No. So what? Are you saying we are all p-zombies?

I don't know about Decius, but...

I am.

I'm also saying that it doesn't matter. The p-zombies are still conscious. They just don't have any added "conscious" XML tags as per some imaginary, crazy-assed unnecessary definition of "consciousness".

Tangential to that point: I think any morality system which relies on an external supernatural thingy in order to make moral judgments or to assign any terminal value to something is broken and not worth considering.

Comment author: Peterdjones 12 December 2012 10:15:23PM 1 point

I'm also saying that it doesn't matter. The p-zombies are still conscious. They just don't have any added "conscious" XML tags as per some imaginary, crazy-assed unnecessary definition of "consciousness".

I have no idea what you are getting at. Please clarify.

Tangential to that point: I think any morality system which relies on an external supernatural thingy in order to make moral judgments or to assign any terminal value to something is broken and not worth considering.

That has no discernible relationship to anything I have said. Have you confused me with someone else?

Comment author: nshepperd 13 December 2012 03:38:00AM 1 point

You appear to be making an unfortunate assumption that what Chalmers and Peterdjones are talking about is crazy-assed unnecessary XML tags, as opposed to, y'know, regular old consciousness.

Comment author: Decius 14 December 2012 12:23:35AM 0 points

I'm saying that there is no difference between a p-zombie and the alternative.

Comment author: Manfred 10 December 2012 04:32:16PM * -1 points

Though on the other hand, we don't have room to take everything serious dudes say seriously - too many dudes, not enough time.

If a problem happens not to exist, then I suppose one will just have to nerve oneself and not see it. Yes, there are non-hard problems of consciousness, where you explain how a certain process or feeling occurs in the brain, and sure, there are some non-hard problems I'd wave away with "well, that's solved by psychology somewhere." But no amount of that has any bearing on the "hard problem," which will remain in scare quotes as befits its effective nonexistence - finding a solution to a problem that is not a problem would be silly.

(EDIT: To clarify, I am not saying qualia do not exist, I am saying some mysterious barrier of hardness around qualia does not exist.)

Comment author: Peterdjones 10 December 2012 05:01:42PM 1 point

If a problem happens not to exist, then I suppose one will just have to nerve oneself and not see it.

OK. Then demonstrate that the HP does not exist, in terms of Chalmers' specification, by showing that we do have a good explanation.

Comment author: Manfred 10 December 2012 08:04:11PM * 0 points

Well, said Achilles, everybody knows that if you have A and B and "A and B imply Z," then you have Z.

How an Algorithm Feels From Inside.
The Visual Cortex is Used to Imagine
Stimulating the Visual Cortex Makes the Blind See

This sort of thing is sufficient for me, like Achilles' explanations were enough for Achilles. But if, say, the perception of the hard problem was causally unrelated to the actual existence of a hard problem (for epiphenomenalism, this is literally what is going on), then gosh, it would seem like no matter what explanations you heard, the hard problem wouldn't go away - so it must be either a proof of dualism or a mistake.

Comment author: Peterdjones 10 December 2012 08:59:02PM * 1 point

This sort of thing is sufficient for me

But not for me. Indeed, I am pretty sure none of those articles is even intended as a solution to the HP. And if they are, why not publish them in a journal and become famous?

How an Algorithm Feels From Inside.

Intended as a solution to FW.

Stimulating the Visual Cortex Makes the Blind See

So? Every living qualiaphile accepts some sort of relationship between brain states and qualia.

if, say, the perception of the hard problem was causally unrelated to the actual existence of a hard problem (for epiphenomenalism, this is literally what is going on),

So? I said nothing about epiphenomenalism.

Comment author: Manfred 10 December 2012 09:49:40PM * 0 points

So? I said nothing about epiphenomenalism

The non-parenthetical was a throwback to a whole few posts ago, where I claimed that perception of the hard problem was often from the mind projection fallacy.

Other than that, I don't have much to respond to here, since you're just going "So?"

Comment author: Peterdjones 10 December 2012 10:01:00PM * 0 points

The non-parenthetical was a throwback to a whole few posts ago, where I claimed that perception of the hard problem was often from the mind projection fallacy.

I can't find the posting, and I don't see how the MPF would relate to epiphenomenalism anyway.

How did you expect to convince me? I am familiar with all the stuff you are quoting, and I still think there is an HP. So do many people.

Comment author: [deleted] 10 December 2012 04:35:35PM 1 point

For practical reasons, I think that's fair enough...so long as we're clear that the above is a fully general counterargument.

Comment author: Manfred 10 December 2012 05:01:18PM * 0 points

Right. I have not offered any actual arguments against the hard problem of consciousness.

EDIT: This was true when I said it; then I replied to PeterD, not that it worked (as I noted in that very post, the direct approach has little chance against a confusion).

Comment author: Peterdjones 10 December 2012 05:05:23PM 0 points

Argument for the importance of the HP: it is about the only thing that would motivate an educated 21st-century person to doubt physicalism.

Comment author: RichardKennaway 10 December 2012 03:56:05PM 4 points

The rest mostly go, "this could only be explained by a mysterious substance, there are no mysterious substances, therefore this does not exist."

Comment author: Peterdjones 10 December 2012 04:06:44PM * 0 points

I don't know why you guys keep harping on substances. Substance dualism has been out of favour for a good century.

Comment author: Manfred 10 December 2012 04:54:32PM * 2 points

Sorry, I was misusing terminology. Any ignorance-generating / ignorance-embodying explanation (e.g. quantum mysticism or élan vital) uses what I'm calling "mysterious substance."

Basically I'm calling "quantum" a mysterious substance (for the quantum mystics), even though it's not like you can bottle it.

Maybe I should have said "mysterious form?" :D

Comment author: Peterdjones 10 December 2012 03:51:45PM 4 points

There is a Hard Problem, because there is basically no (non-eliminative) science or technology of qualia at all. We can get a start on the problem of building cognition, memory and perception into an AI, but we can't get a start on writing code for Red or Pain or Salty. You can tell there is basically no non-eliminative science or technology of qualia because the best LWers can quote is Dennett's eliminative theory.