aaronsw comments on Mixed Reference: The Great Reductionist Project - Less Wrong

Post author: Eliezer_Yudkowsky 05 December 2012 12:26AM




Comment author: aaronsw 25 December 2012 09:46:41PM 3 points [-]

I was talking about Searle's non-AI work, but since you brought it up, Searle's view is:

  1. qualia exists (because: we experience it)
  2. the brain causes qualia (because: if you cut off any other part of someone they still seem to have qualia)
  3. if you simulate a brain with a Turing machine, it won't have qualia (because: qualia is clearly a basic fact of physics and there's no way just using physics to tell whether something is a Turing-machine-simulating-a-brain or not)

Which part does LW disagree with and why?

Comment author: Benito 26 December 2012 01:01:07AM *  2 points [-]

To offer my own reasons for disagreement,

I think the first point is unfounded (or misguided). We do things (like moving, and thinking). We notice and can report that we've done things, and occasionally we notice and can report that we've noticed that we've done something. That we can report how things appear to a part of us that can reflect upon stimuli is not important enough to be called 'qualia'. That we notice that we find experience 'ineffable' is not a surprise either - you would not expect the brain to be able to report everything that occurs, down to the neurons firing (or atoms moving). So, all we really have is the ability to notice and report that which has been advantageous for us to report in the evolutionary history of the human (these stimuli that we can notice are called 'experiences'). There is nothing mysterious here, and the word 'qualia' always seems to be used mysteriously - so I don't think the first point carries the weight it might appear to.

if you simulate a brain with a Turing machine, it won't have qualia (because: qualia is clearly a basic fact of physics and there's no way just using physics to tell whether something is a Turing-machine-simulating-a-brain or not)

Qualia is not clearly a basic fact of physics. I made the point that we would not expect a species designed by natural selection to be able to report or comprehend its most detailed, inner workings, solely on the evidence of what it can report and notice. But this is all skirting around the core idea of LessWrong: The map is not the territory. Just because something seems fundamental does not mean it is. Just because it seems like a Turing machine couldn't be doing consciousness, doesn't mean that is how it is. We need to understand how it came to be that we feel what we feel, before we go making big claims about the fundamental nature of reality. This is what is worked on in LessWrong, not in Searle's philosophy.

Comment author: Peterdjones 26 December 2012 11:32:35AM -1 points [-]

That we notice that we find experience 'ineffable' is not a surprise either - you would not expect the brain to be able to report everything that occurs, down to the neurons firing (or atoms moving)

If the ineffability of qualia is down to the complexity of fine-grained neural behaviour, then the question is why anything is effable -- people can communicate about all sorts of things that aren't sensations (and in many cases are abstract and "in the head").

Comment author: Benito 26 December 2012 11:54:52AM *  0 points [-]

I'm not sure that I follow. Can anything we talk about be reduced to less than the basic stimuli we notice ourselves having?

All words (that mean anything) refer to something. When I talk about 'guitars', I remember experiences I've had which I associate with the word (i.e. guitars). Most humans have similar makeups, in that we learn in similar ways, and experience in similar ways (I'm just talking about the psychological unity of humans, and how far our brain design is from, say, mice). So, we can talk about things, because we've learnt to refer certain experiences (words) to others (guitars).

Neither of the two can refer to anything other than the experiences we have. Anything we talk about is in relation to our experiences (or possibly even meaningless).

Comment author: Peterdjones 26 December 2012 02:52:36PM *  -1 points [-]

Most of the classic reductions are reductions to things beneath perceivable stimuli, e.g. heat to molecular motion. Reductionism and physicalism would be in very bad trouble if language and conceptualisation grounded out where perception does. The theory also mispredicts that we would be able to communicate our sensations, but struggle to communicate abstract (e.g. mathematical) ideas with a distant relationship, or no relationship, to sensation. In fact, the classic reductions are to the basic entities of physics, which are ultimately defined mathematically, and often hard to visualise or otherwise relate to sensation.

Comment author: Benito 26 December 2012 04:31:06PM *  0 points [-]

You could point out the different constituents of experience that feel fundamental, but they themselves (e.g. Red) don't feel as though they are made up of anything more than themselves.

When we talk about atoms, however, that isn't a basic piece of mind that mind can talk about. My mind feels as though it is constituted of qualia, and it can refer to atoms. I don't experience an atom, I experience large groups of them, in complex arrangements. I can refer to the atom using larger, complex arrangements of neurons (atoms). Even though, when my mind asks what the basic parts of reality are, it has a chain of reference pointing to atoms, each part of that chain is a set of neural connections, that don't feel reducible.

Even on reflection, our experiences reduce to qualia. We deduce that qualia are made of atoms, but that doesn't mean that our experience feels like it's been reduced to atoms.

Comment author: Peterdjones 27 December 2012 11:12:28AM *  -1 points [-]

Where is that heading? Is it supposed to tell me why qualia are ineffable... or rather, why qualia are more ineffable than cognition?

Comment author: Benito 27 December 2012 12:38:57PM *  0 points [-]

I'm saying that we should expect experience to feel as if made of fundamental, ineffable parts, even though we know that it is not. So, qualia aren't the problem for a Turing machine they appear to be.

Also, we all share these experience 'parts' with most other humans, due to the psychological unity of humankind. So, if we're all sat down at an early age, and drilled with certain patterns of mind parts (times-tables), then we should expect to be able to draw upon them at ease.

My original point, however, was just that the map isn't the territory. Qualia don't get special attention just because they feel different. They have a perfectly natural explanation, and you don't get to make game-changing claims about the territory until you've made sure your map is pretty spot-on.

Comment author: Peterdjones 27 December 2012 01:12:41PM -1 points [-]

I'm saying that we should expect experience to feel as if made of fundamental, ineffable parts, even though we know that it is not.

I don't see why. Saying that experience is really complex neural activity isn't enough to explain that, because thought is really complex neural activity as well, and we can communicate and unpack concepts.

So, qualia aren't the problem for a Turing machine they appear to be.

Can you write the code for SeeRed()? Or are you saying that TMs would have ineffable concepts?

Qualia don't get special attention just because they feel different. They have a perfectly natural explanation,

You've inverted the problem: you have created the expectation that nothing mental is effable.

Comment author: Benito 27 December 2012 01:54:50PM *  0 points [-]

No, I'm saying that no basic, mental part will feel effable. Using our cognition, we can make complex notions of atoms and guitars, built up in our minds, and these will explain why our mental aspects feel fundamental, but they will still feel fundamental.

I'm not continuing this discussion, it's going nowhere new. I will offer Orthonormal's sequence on qualia as explanatory however: http://lesswrong.com/lw/5n9/seeing_red_dissolving_marys_room_and_qualia/

Comment author: nshepperd 26 December 2012 12:33:27AM *  2 points [-]

I can't really speak for LW as a whole, but I'd guess that among the people here who don't believe¹ "qualia doesn't exist", 1 and 2 are fine, but we have issues with 3, as expanded below. Relatedly, there seems to be some confusion between the "boring AI" proposition, that you can make computers do reasoning, and Searle's "strong AI" thing he's trying to refute, which says that AIs running on computers would have both consciousness and some magical "intentionality". "Strong AI" shouldn't actually concern us, except in talking about EMs or trying to make our FAI non-conscious.

3. if you simulate a brain with a Turing machine, it won't have qualia

Pretty much disagree.

qualia is clearly a basic fact of physics

Really disagree.

and there's no way just using physics to tell whether something is a Turing-machine-simulating-a-brain or not

And this seems really unlikely.

¹ I qualify my statement like this because there is a long-standing confusion over the use of the word "qualia" as described in my parenthetical here.

Comment author: aaronsw 04 January 2013 10:08:32PM 2 points [-]

Well, let's be clear: the argument I laid out is trying to refute the claim that "I can create a human-level consciousness with a Turing machine". It doesn't mean you couldn't create an AI using something other than a pure Turing machine and it doesn't mean Turing machines can't do other smart computations. But it does mean that uploading a brain into a Von Neumann machine isn't going to keep you alive.

So if you disagree that qualia is a basic fact of physics, what do you think it reduces to? Is there anything else that has a first-person ontology the way qualia does?

And if you think physics can tell whether something is a Turing-machine-simulating-a-brain, what's the physical algorithm for looking at a series of physical particles and deciding whether it's executing a particular computation or not?

Comment author: nshepperd 05 January 2013 02:45:29AM 1 point [-]

So if you disagree that qualia is a basic fact of physics, what do you think it reduces to?

Something brains do, obviously. One way or another.

And if you think physics can tell whether something is a Turing-machine-simulating-a-brain, what's the physical algorithm for looking at a series of physical particles and deciding whether it's executing a particular computation or not?

I should perhaps be asking what evidence Searle has for thinking he knows things like what qualia is, or what a computation is. My statements were both negative: it is not clear that qualia is a basic fact of physics; it is not obvious that you can't describe computation in physical terms. Searle just makes these assumptions.

If you must have an answer, how about this: a physical system P is a computation of a value V if adding as premises the initial and final states of P and a transition function describing the physics of P shortens a formal proof that V = whatever.
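The criterion above can be made concrete with a toy sketch. This drops the proof-shortening subtlety and keeps only the encoding idea: treat the physical system as a deterministic state trajectory, and say it implements a computation of f on input x when some fixed decoding maps its initial state to x and its final state to f(x). All names here are illustrative, not from the comment:

```python
# Toy sketch: a physical process "computes" f on input x if a fixed
# decoding maps its state trajectory onto f's evaluation. Hypothetical
# names, for illustration only.

def physical_trace(initial_state, transition, steps):
    """Enumerate the states a deterministic physical system passes through."""
    state = initial_state
    trace = [state]
    for _ in range(steps):
        state = transition(state)
        trace.append(state)
    return trace

def implements(trace, decode, f, x):
    """True if the decoded initial state is x and the decoded final state is f(x)."""
    return decode(trace[0]) == x and decode(trace[-1]) == f(x)

# Example: a "physical" system that doubles a register each tick,
# decoded trivially as the integer it holds.
trace = physical_trace(3, lambda s: s * 2, steps=1)
print(implements(trace, decode=lambda s: s, f=lambda x: 2 * x, x=3))  # True
```

Note the well-known catch this sketch makes vivid: with an unconstrained `decode`, almost any physical system trivially "implements" almost any computation, which is why nshepperd's version requires the physical transition function itself to shorten the proof.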

Comment author: aaronsw 05 January 2013 09:46:46PM 1 point [-]

They're not assumptions, they're the answers to questions that have the highest probability going for them given the evidence.

Comment author: MugaSofer 26 December 2012 02:04:35AM 1 point [-]

if you simulate a brain with a Turing machine, it won't have qualia (because: qualia is clearly a basic fact of physics and there's no way just using physics to tell whether something is a Turing-machine-simulating-a-brain or not)

There's your problem. Why the hell should we assume that "qualia is clearly a basic fact of physics"?

Comment author: aaronsw 04 January 2013 10:10:16PM 1 point [-]

Because it's the only thing in the universe we've found with a first-person ontology. How else do you explain it?

Comment author: MugaSofer 04 January 2013 11:39:16PM *  -1 points [-]

Well, I probably can't explain it as eloquently as others here - you should try the search bar, there are probably posts on the topic much better than this one - but my position would be as follows:

  • Qualia are experienced directly by your mind.

  • Everything about your mind seems to reduce to your brain.

  • Therefore, qualia are probably part of your brain.

Furthermore, I would point out two things: one, that qualia seem to be essential parts of having a mind; I certainly can't imagine a mind without qualia; and two, that we can view (very roughly) images of what people see in the thalamus, which would suggest that what we call "qualia" might simply be part of, y'know, data processing.

Comment author: TheOtherDave 26 December 2012 12:57:03AM *  1 point [-]

Another not-speaking-for-LW answer:

Re #1: I certainly agree that we experience things, and that therefore the causes of our experience exist. I don't really care what name we attach to those causes... what matters is the thing and how it relates to other things, not the label. That said, in general I think the label "qualia" causes more trouble due to conceptual baggage than it resolves, much like the label "soul".

Re #2: This argument is oversimplistic, but I find the conclusion likely.
More precisely: there are things outside my brain (like, say, my adrenal glands or my testicles) that alter certain aspects of my experience when removed, so it's possible that the causes of those aspects reside outside my brain. That said, I don't find it likely; I'm inclined to agree that the causes of my experience reside in my brain. I still don't care much what label we attach to those causes, and I still think the label "qualia" causes more confusion due to conceptual baggage than it resolves.

Re #3: I see no reason at all to believe this. The causes of experience are no more "clearly a basic fact of physics" than the causes of gravity; all that makes them seem "clearly basic" to some people is the fact that we don't understand them in adequate detail yet.

Comment author: pjeby 26 December 2012 12:20:35AM 0 points [-]

Searle's view is:

  1. qualia exists (because: we experience it)
  2. the brain causes qualia (because: if you cut off any other part of someone they still seem to have qualia)
  3. if you simulate a brain with a Turing machine, it won't have qualia (because: qualia is clearly a basic fact of physics and there's no way just using physics to tell whether something is a Turing-machine-simulating-a-brain or not)

Which part does LW disagree with and why?

The whole thing: it's the Chinese Room all over again, an intuition pump that begs the very question it's purportedly answering. (Beginning an argument for the existence of qualia with a bare assertion that they exist is a little more obvious than the way that the word "understanding" is fudged in the Chinese Room argument, but basically it's the same.)

I suppose you could say that there's a grudging partial agreement with your point number two: that "the brain causes qualia". The rest of what you listed, however, is drivel, as is easy to see if you substitute some other term besides "qualia", e.g.:

  1. Free will exists (because: we experience it)
  2. The brain causes free will (because if you cut off any part, etc.)
  3. If you simulate a brain with a Turing machine, it won't have free will because clearly it's a basic fact of physics and there's no way to tell just using physics whether something is a machine simulating a brain or not.

It doesn't matter what term you plug into this in place of "qualia" or "free will", it could be "love" or "charity" or "interest in death metal", and it's still not saying anything more profound than, "I don't think machines are as good as real people, so there!"

Or more precisely: "When I think of people with X it makes me feel something special that I don't feel when I think of machines with X, therefore there must be some special quality that separates people from machines, making machine X 'just a simulation'." This is the root of all these Searle-ian arguments, and they are trivially dissolved by understanding that the special feeling people get when they think of X is also a property of how brains work.

Specifically, the thing that drives these arguments is our inbuilt machinery that classifies things as mind-having or not-mind-having, for purposes of prediction-making. But the feeling that we get that a thing is mind-having or not-mind-having is based on what was useful evolutionarily, not on what the actual truth is. Searlian (Surly?) arguments are thus in exactly the same camp as any other faith-based argument: elevating one's feelings to Truth, irrespective of the evidence against them.

Comment author: [deleted] 26 December 2012 01:07:45AM 1 point [-]

(Beginning an argument for the existence of qualia with a bare assertion that they exist is a little more obvious than the way that the word "understanding" is fudged in the Chinese Room argument, but basically it's the same.)

Just a nit pick: the argument Aaron presented wasn't an argument for the existence of qualia, and so taking the existence of qualia as a premise doesn't beg the question. Aaron's argument was an argument against artificial consciousness.

Also, I think Aaron's presentation of (3) was a bit unclear, but it's not so bad a premise as you think. (3) says that since qualia are not reducible to purely physical descriptions, and since a brain-simulating Turing machine is entirely reducible to purely physical descriptions, brain-simulating Turing machines won't experience qualia. So if we have qualia, and count as conscious in virtue of having qualia (1), then brain-simulating Turing machines won't count as conscious. If we don't have qualia, i.e. if all our mental states are reducible to purely physical descriptions, then the argument is unsound because premise (1) is false.

You're right that you can plug many a term in to replace 'qualia', so long as those things are not reducible to purely physical descriptions. So you couldn't plug in, say, heart-attacks.

This is the root of all these Searle-ian arguments, and they are trivially dissolved by understanding that the special feeling people get when they think of X is also a property of how brains work.

Could you explain this a bit more? I don't see how it's relevant to the argument. Searle is not arguing on the basis of any special feelings. This seems like a straw man to me, at the moment, but I may not be appreciating the flaws in Searle's argument.

Comment author: pjeby 26 December 2012 03:09:16AM *  1 point [-]

the argument Aaron presented wasn't an argument for the existence of qualia, and so taking the existence of qualia as a premise doesn't beg the question

In order for the argument to make any sense, you have to buy into several assumptions which basically are the argument. It's "qualia are special because they're special, QED". I thought about calling it circular reasoning, except that it seems closer to begging the question. If you have a better way to put it, by all means share.

Could you explain this a bit more? I don't see how it's relevant to the argument. Searle is not arguing on the basis of any special feelings. This seems like a straw man to me, at the moment, but I may not be appreciating the flaws in Searle's argument.

When I said that our mind detection circuitry was the root of the argument, I didn't mean that Searle was overtly arguing on the basis of his feelings. What I'm saying is, the only evidence for Searle-type premises are the feelings created by our mind-detection circuitry. If you assume these feelings mean something, then Searle-ish arguments will seem correct, and Searle-ish premises will seem obvious beyond question.

However, if you truly grok the mind-projection fallacy, then Searle-type premises are just as obviously nonsensical, and there's no reason to pay any attention to the arguments built on top of them. Even as basic a tool as Rationalist Taboo suffices to debunk the premises before the argument can get off the ground.

Comment author: Peterdjones 26 December 2012 12:28:49PM -1 points [-]

you have to buy into several assumptions which basically are the argument.

Any valid argument has a conclusion that is entailed by its premises taken jointly. Circularity is when the whole conclusion is entailed by one premise, with the others being window-dressing.

you have to buy into several assumptions which basically are the argument.

I think there is a way that ripe tomatoes seem visually: how is that mind-projection?

Comment author: MugaSofer 26 December 2012 01:57:19AM 0 points [-]

But ... if you're assuming that qualia are "not reducible to purely physical descriptions", and you need qualia to be conscious, then obviously brain-simulations won't be conscious. But those assumptions seem to be the bulk of the position he's defending, aren't they?

Comment author: [deleted] 26 December 2012 03:21:28AM 2 points [-]

But those assumptions seem to be the bulk of the position he's defending, aren't they?

Right, the argument comes down, for most of us, to the first premise: do we or do we not have mental states irreducible to purely physical conditions. Aaron didn't present an argument for that, he just presented Searle's argument against AI from that. But you're right to ask for a defense of that premise, since it's the crucial one and it's (at the moment) undefended here.

Comment author: MugaSofer 26 December 2012 03:13:23PM *  -2 points [-]

Presenting an obvious result of a nonobvious premise as if it was a nonobvious conclusion seems suspicious, as if he's trying to trick listeners into accepting his conclusion even when their priors differ.

[Edited for terminology.]

Comment author: [deleted] 26 December 2012 03:33:34PM 0 points [-]

Presenting a trivial conclusion from nontrivial premises as a nontrivial conclusion seems suspicious

Not only suspicious, but impossible: if the premises are non-trivial, the conclusion is non-trivial.

In every argument, the conclusion follows straight away from the premises. If you accept the premises, and the argument is valid, then you must accept the conclusion. The conclusion does not need any further support.

Comment author: MugaSofer 26 December 2012 10:08:43PM -1 points [-]

Y'know, you're right. Trivial is not the right word at all.

Comment author: Peterdjones 26 December 2012 12:48:10PM -1 points [-]

(3) says that since qualia are not reducible to purely physical descriptions, and since a brain-simulating Turing machine is entirely reducible to purely physical descriptions, brain-simulating Turing machines won't experience qualia.

To pick a further nit, the argument is more that qualia can't be engineered into an AI. If an AI implementation has qualia at all, it would be serendipitous.

Comment author: [deleted] 26 December 2012 03:36:16PM *  0 points [-]

To pick a further nit, the argument is more that qualia can't be engineered into an AI. If an AI implementation has qualia at all, it would be serendipitous.

That's a possibility, but not as I laid out the argument: if being conscious entails having qualia, and if qualia are all irreducible to purely physical descriptions, and every state of a Turing machine is reducible to a purely physical description, then Turing machines can't simulate consciousness. That's not very neat, but I do believe it's valid. Your alternative is plausible, but it requires my 'Turing machines are reducible to purely physical descriptions' premise to be false.

Comment author: aaronsw 04 January 2013 09:51:39PM *  0 points [-]

Beginning an argument for the existence of qualia with a bare assertion that they exist

Huh? This isn't an argument for the existence of qualia -- it's an attempt to figure out whether you believe in qualia or not. So I take it you disagree with step one, that qualia exists? Do you think you are a philosophical zombie?

I do think essentially the same argument goes through for free will, so I don't find your reductio at all convincing. There's no reason, however, to believe that "love" or "charity" is a basic fact of physics, since it's fairly obvious how to reduce these. Do you think you can reduce qualia?

I don't understand why you think this is a claim about my feelings.

Comment author: shminux 05 January 2013 12:38:05AM *  2 points [-]

Suppose that neuroscientists some day show that the quale of seeing red matches a certain brain structure or a neuron firing pattern or a neuro-chemical process in all humans. Would you then say that the quale of red has been reduced?

Comment author: aaronsw 05 January 2013 09:45:19PM 2 points [-]

Of course not!

Comment author: shminux 05 January 2013 10:16:42PM 0 points [-]

and why not?

Comment author: aaronsw 05 January 2013 10:23:39PM 1 point [-]

Because the neuron firing pattern is presumably the cause of the quale, it's certainly not the quale itself.

Comment author: shminux 05 January 2013 10:35:42PM 1 point [-]

I don't understand what else is there.

Comment author: aaronsw 05 January 2013 11:28:17PM 6 points [-]

Imagine a flashlight with a red piece of cellophane over it pointed at a wall. Scientists some day discover that the red dot on the wall is caused by the flashlight -- it appears each and every time the flashlight fires and only when the flashlight is firing. However, the red dot on the wall is certainly not the same as the flashlight: one is a flashlight and one is a red dot.

The red dot, on the other hand, could be reduced to some sort of interaction between certain frequencies of light-waves and wall-atoms and so on. But it will certainly not get reduced to flashlights.

By the same token, you are not going to reduce the-subjective-experience-of-seeing-red to neurons; subjective experiences aren't made out of neurons any more than red dots are made of flashlights.

Comment author: shminux 06 January 2013 01:04:44AM 0 points [-]

By the same token, you are not going to reduce the-subjective-experience-of-seeing-red to neurons; subjective experiences aren't made out of neurons any more than red dots are made of flashlights.

Ok, that's where we disagree. To me the subjective experience is the process in my brain and nothing else.

Comment author: Peterdjones 05 January 2013 11:32:13PM *  0 points [-]

There's no argument there. Your point about qualia is illustrated by your point about flashlights, but not entailed by it.

Comment author: MugaSofer 10 January 2013 12:58:36PM -2 points [-]

By the same token, you are not going to reduce the-subjective-experience-of-seeing-red to neurons; subjective experiences aren't made out of neurons any more than red dots are made of flashlights.

How do you know this?

Comment author: Peterdjones 05 January 2013 10:32:49PM 0 points [-]

There's no certainty either way.

Comment author: Peterdjones 05 January 2013 10:36:35PM *  -2 points [-]

Reduction is an explanatory process: a mere observed correlation does not qualify.

Comment author: pjeby 06 January 2013 01:21:46AM *  0 points [-]

I take it you disagree with step one, that qualia exists?

I think that anyone talking seriously about "qualia" is confused, in the same way that anyone talking seriously about "free will" is.

That is, they're words people use to describe experiences as if they were objects or capabilities. Free will isn't something you have, it's something you feel. Same for "qualia".

I do think essentially the same argument goes through for free will

Dissolving free will is considered an entry-level philosophical exercise for Lesswrong. If you haven't covered that much of the sequences homework, it's unlikely that you'll find this discussion especially enlightening.

(More to the point, you're doing the rough equivalent of bugging people on a newsgroup about a question that is answered in the FAQ or an RTFM.)

Do you think you are a philosophical zombie?

This is probably a good answer to that question.

I don't understand why you think this is a claim about my feelings.

Because (as with free will) the only evidence anyone has (or can have) for the concept of qualia is their own intuitive feeling that they have some.

Comment author: Peterdjones 06 January 2013 01:27:09AM -2 points [-]

Free will isn't something you have, it's something you feel.

So you say. It is not standardly defined that way.

Same for "qualia".

Qualia are defined as feelings, sensations, etc. Since we have feelings, sensations, etc., we have qualia. I do not see the confusion in using the word "qualia".

Comment author: hairyfigment 05 January 2013 12:21:35AM -1 points [-]

Do you think you can reduce qualia?

Well, would that mean writing a series like this?

My intuition certainly says that Martha has a feeling of ineffable learning. Do you at least agree that this proves the unreliability of our intuitions here?

Comment author: aaronsw 05 January 2013 09:45:02PM -1 points [-]

Who said anything about our intuitions (except you, of course)?

Comment author: hairyfigment 06 January 2013 05:26:14AM 0 points [-]

You keep making statements like,

the neuron firing pattern is presumably the cause of the quale, it's certainly not the quale itself.

And you seem to consider this self-evident. Well, it seemed self-evident to me that Martha's physical reaction would 'be' a quale. So where do we go from there?

(Suppose your neurons reacted all the time the way they do now when you see orange light, except that they couldn't connect it to anything else - no similarities, no differences, no links of any kind. Would you see anything?)

Comment author: aaronsw 06 January 2013 07:34:20PM 1 point [-]

I guess you need to do some more thinking to straighten out your views on qualia.

Comment author: Exiles 12 January 2013 10:12:13AM *  2 points [-]

Goodnight, Aaron Swartz.

Comment author: MugaSofer 06 January 2013 10:39:26PM *  0 points [-]

Or you do. You claim the truth of your claims is self-evident, yet it is not evident to, say, hairyfigment, or Eliezer, or me for that matter.

If I may ask, have you always held this belief, or do you recall being persuaded of it at some point? If so, what convinced you?

Comment author: hairyfigment 06 January 2013 11:13:28PM *  -1 points [-]

Let's back up for a second:

  • You've heard of functionalism, right? You've browsed the SEP entry?
  • Have you also read the mini-sequence I linked? In the grandparent I said "physical reaction" instead of "functional", which seems like a mistake on my part, but I assumed you had some vague idea of where I'm coming from.

Comment author: MugaSofer 04 January 2013 11:46:10PM *  -1 points [-]

I do think essentially the same argument goes through for free will

Could you expand on this point, please? It is generally agreed* that "free will vs determinism" is a dilemma that we dissolved long ago. I can't see what else you could mean by this, so ...

[*EDIT: here, that is]

Comment author: aaronsw 05 January 2013 09:36:42PM 0 points [-]

I guess it really depends on what you mean by free will. If by free will, pjeby meant some kind of qualitative experience, then it strikes me that what he means by it is just a form of qualia and so of course the qualia argument goes through. If he means by it something more complicated, then I don't see how point one holds (we experience it), and the argument obviously doesn't go through.

Comment author: Peterdjones 26 December 2012 12:17:36PM 0 points [-]

Beginning an argument for the existence of qualia with a bare assertion that they exist

But that's not contentious. Qualia are things like the appearance of tomatoes or the taste of lemon. I've seen tomatoes and tasted lemons.

This is the root of all these Searle-ian arguments, and they are trivially dissolved by understanding that the special feeling people get when they think of X is also a property of how brains work.

But Searle says that feelings, understanding, etc. are properties of how the brain works. What he argues against is the claim that they are computational properties. But it is also uncontentious that physicalism can be true and computationalism false.

Comment author: Peterdjones 26 December 2012 10:52:29AM -1 points [-]

if you simulate a brain with a Turing machine, it won't have qualia (because: qualia is clearly a basic fact of physics and there's no way just using physics to tell whether something is a Turing-machine-simulating-a-brain or not)

It isn't even clear to Searle that qualia are physically basic. He thinks consciousness is a high-level outcome of the brain's concrete causal powers. His objection to computational approaches is rooted in the abstract nature of computation, not in the physical basicness of qualia. (In fact, he doesn't use the word "qualia", although he often seems to be talking about the same thing).