
Comment author: entirelyuseless 20 July 2017 01:35:15PM 1 point

This seems like a good comment to illustrate, once again, your abuse of the idea of meaning.

I'm proposing the radical new view that the world is made of atoms and other "stuff", and that most words refer to some configurations of this stuff.

There are two ways to understand this claim: 1) most words refer to things which happen also to be configurations of atoms and stuff. 2) most words mean certain configurations of atoms.

The first interpretation would be fairly sensible. In practice you are adopting the second interpretation. This second interpretation is utterly false.

Consider the word "chair." Does the word chair mean a configuration of atoms that has a particular shape that we happen to consider chairlike?

Suppose someone approached a chair in your house with an atomic microscope and discovered that it was not made of atoms, but was a continuous substance without any boundaries in it. Would you suddenly say that it was not a chair? Not at all. You would say "this chair is not made of atoms." This proves conclusively that the meaning of the word chair has nothing whatsoever to do with "a configuration of atoms." A chair is in fact a configuration of atoms; but this is a description of a thing, not a description of a word.

In this view "pain" doesn't just correlate with some brain activity, it is that brain activity.

This could be true, if you mean this as a factual statement. It is utterly false, if you mean it as an explanation of the word "pain," which refers to a certain subjective experience. The word "pain" is not about brain activity in the same way that the word "chair" is not about atoms, as explained above.

But the question of "do robots feel pain", is as interesting and meaningful as "are tables also chairs".

I would just note that "are tables also chairs" has a definite answer, and is quite meaningful.

I'm pointing out that you cannot work out one from another, because your concept of consciousness has no properties or attributes that are more grounded in reality than consciousness itself. You need to play rationalist taboo. If you defined consciousness as "ability to process external events" or "ability to generate thoughts" or "the process that makes some people say they're conscious", finding a correspondence between consciousness and brain states would be possible, even if not easy. But you seem to refuse such definitions, you call them correlates, which suggests that there could be a consciousness that satisfies none of them.

I would say that being a chair (according to the meaning of the word) is correlated with being made of atoms. It may be perfectly correlated in fact; there may be no chair which is not made of atoms, and it may be factually impossible to find or make a chair which is not. But this is a matter for empirical investigation; it is not a matter of the meaning of the word. The meaning of the word is quite open to the possibility that there is a chair not made of atoms. In the same way, the meaning of the word "consciousness" refers to a subjective experience, not to any objective description, and consequently in principle the meaning of the word is open to application to a consciousness which does not satisfy any particular objective description, as long as the subjective experience is present.

Comment author: tadasdatys 20 July 2017 05:35:24PM 0 points

Suppose someone approached a chair in your house with an atomic microscope and discovered that it was not made of atoms

I explicitly added "other stuff" to my sentence to avoid this sort of argument. I don't want or need to be tied to current understanding of physics here.

But even if I had only said "atoms", this would not be a problem. After seeing a chair that I previously thought was impossible, I can update what I mean by "chair". In the same but more mundane way, I can go to a chair expo, see a radical new design of chair, and update my category as well. The meaning of "chair" does not come down from the sky fully formed; it is constructed by me.

I would just note that "are tables also chairs" has a definite answer, and is quite meaningful.

I want to see that.

Comment author: TheAncientGeek 20 July 2017 11:49:48AM * 2 points

But no one knows precise descriptions of the full gamut of atomic configurations which implement pain.

Sure, but what does that have to do with anything?

We do, on the other hand, know subjectively what pain feels like.

Does "objective" mean "well understood" to you?

That's not the point. The point is that if we have words referring to subjective sensations, like "purple" and "bitter", we can distinguish them subjectively. But if we discard our subjective insight into them, as you are proposing, and replace them with vague objective descriptions -- vague, because no one knows precise descriptions of the full gamut of atomic configurations which implement pain -- then you take a step backwards. You can't distinguish a brain scan of someone seeing purple from a brain scan of someone tasting bitter. Basing semantics on objective facts, or "reality" as you call it, only works if you know which fact is which. You are promoting something which sounds good, but doesn't work -- as a research program. Of course it works just fine at getting applause from an audience of dualism-haters.

multiple realisability

There are multiple representations

Are you talking about realisations or representations?

Flawed reasoning starts with a postulate that "Pain" exists and then asks what physical states correspond to it. And when told that "pain is the activity in region X", it somehow feels that "activity in Y could also be described as pain" is a counterargument.

No one has made that argument. The point is not that it is not ultimately true that subjective states are brain states; it is that rejecting the subjective entirely, at this stage, is not useful. Quite the reverse. Consciousness is the only thing we know from the inside -- why throw that away?

Good reasoning starts with noticing that people say "ouch" when they stub their toes, or that stubbing a toe has a very distinct feeling, and then asks what causes/predicts these actions/differences, and then wonders how we could best classify these.

If we know what causes or predicts something, then we can sometimes reproduce it in radically different ways. That is the basis of artificial intelligence: realising intelligence in a medium other than a brain.

But you are saying that pain cannot be realised by a robot, and your reasons for saying that are entirely non-empirical.

We might be able to refine the concept of consciousness as part of a research programme, but research programmes have to start with folk concepts.

Comment author: tadasdatys 20 July 2017 05:07:06PM 0 points

We do, on the other hand, know subjectively what pain feels like.

I know that the experience of stubbing my toe is called pain, and I know that what I'm sitting on is called a chair. But I don't know the "precise descriptions of the full gamut of atomic configurations which implement" them in either case. This is very normal.

You can't distinguish a brain scan of someone seeing purple from a brain scan of someone tasting bitter.

You seem to be under the impression that I advocate certain methods of examining brains over others. I don't know where you got that. I do believe that everything that could be learned about a brain could be learned by zapping one set of neurons and seeing if another set fires. But if it's more practical for you to show the brain something purple and ask it to rate how bitter that felt, from 1 to 5, I have no problem with it. This method, while less direct, can be more useful (especially depending on the exact questions you want to answer). The problem, as I understand it, is that you believe these two methods to be radically different, when they are not. It's as if you assume something is real just because it comes out of people's mouths.

realisations or representations

I'm not assigning any different meanings to those words, at least not in this context. Are you? Interestingly, both words are pretty awkward to use here. And maybe I can agree that "realization" is a little better.

No one has made that argument.

Parts of my text are referring to the arguments I saw on Wikipedia under "multiple realizability". But the idea that Pain definitely is a thing that exists, rather than just a label for a set of things that your brain sometimes does that are in some ways similar, is something I do find in your posts. In particular, you have to believe this to even ask whether robots feel pain.

and your reasons for sayign that are entirely non-empirical

I'm still waiting for your empirical reasons why "purple is not bitter", or better yet, "purple is not a chair", if you feel the concept of bitterness is too subjective.

Comment author: TheAncientGeek 20 July 2017 02:10:38PM 0 points

There is nothing that you know about consciousness, from which you can derive a more accurate and more material description.

How do you know? And what of things like https://en.wikipedia.org/wiki/Global_Workspace_Theory ?

It buys me the ability to look at "do robots feel pain" and see that it's a stupid question.

It doesn't seem to have given you the ability to prove that it is a stupid question.

Comment author: tadasdatys 20 July 2017 04:26:42PM 0 points

How do you know?

Well, for one, you have been unwilling to share any such knowledge. Is it a secret, perhaps?

https://en.wikipedia.org/wiki/Global_Workspace_Theory

I see a model that claims to reproduce some of the behaviors of the human mind. Why is that relevant? Where are your subjective experiences in it?

Also, to clarify, when I say "you know nothing", I'm not asking for some complex model or theory, I'm asking for the starting point from which those models and theories were constructed.

prove that it is a stupid question.

Proof is a high bar, and I don't know how to reach it. You could teach me by showing a proof, for example, that "is purple bitter" is a stupid question. Although I suspect that I would find your proof circular.

Comment author: TheAncientGeek 20 July 2017 03:18:08PM * 1 point

All of these things have perfectly good physical representations.

Not if "perfectly good" means "known".

Comment author: tadasdatys 20 July 2017 04:09:46PM 0 points

Not if "perfectly good" means "known".

It's ok, it doesn't. Why do people keep bringing up current knowledge?

Comment author: g_pepper 20 July 2017 12:27:32PM 0 points

What makes you think that? Surely this belief would be a memory and memories are physically stored in the brain, right?

To clarify: at present you can't obtain a person's beliefs by measurement, just as at present we have no objective test for consciousness in entities with a physiology significantly different from our own. These things are subjective but not unreal.

Those sound like synonyms, not in any way more precise than the word "consciousness" itself.

And yet I know that I have first-person experiences and I know that I am self-aware via direct experience. Other people likewise know these things about themselves via direct experience. And it is possible to discuss these things based on that common understanding. So, there is no reason to stop using the word "consciousness".

Comment author: tadasdatys 20 July 2017 04:06:50PM 0 points

These things are subjective but not unreal.

Did you mean, "at present subjective"? Because if something is objectively measurable then it is objective. Are these things both subjective and objective? Or will we stop being conscious, when we get a better understanding of the brain.

I know that I have first-person experiences and I know that I am self-aware via direct experience.

Are those different experiences or different words for the same thing? What would it feel like to be self-aware without having first person experiences or vice versa?

Comment author: gjm 20 July 2017 11:21:52AM 0 points

I agree with much of what you say but I am not sure it implies for cousin_it's position what you think it does.

I'm sure it's true that, as you put it elsewhere in the thread, consciousness is "extrapolated": calling something conscious means that it resembles an awake normal human and not a rock, a human in a coma, etc., and there is no fact of the matter as to exactly how this should be extrapolated to (say) aliens or intelligent robots.

But this falls short of saying that at best, calling something conscious equals saying something about its externally observable behaviours.

For instance: suppose technology advances enough that we can (1) make exact duplicates of human beings, which (initially) exactly match the memories, personalities, capabilities, etc., of their originals, and (2) reversibly cause total paralysis in a human being, so that their mind no longer has any ability to produce externally observable effects, and (3) destroy a human being's capacity for conscious thought while leaving autonomic functions like breathing normal.

(We can do #2 and #3 pretty well already, apart from reversibility. I want reversibility so that we can confirm later that the person was conscious while paralysed.)

So now we take a normal human being (clearly conscious). We duplicate them (#1). We paralyse them both (#2). Then we scramble the brain of one of them (#3). Then we observe them as much as you like.

I claim these two entities have exactly the same observable behaviours, past and present, but that we can reasonably consider one of them conscious and the other not. We can verify that one of them was conscious by reversing the paralysis. Verifying that the other wasn't depends on our confidence that mashing up most of their cerebral cortex (or whatever horrible thing we did in #3) really destroys consciousness, but this seems like a thing we could reasonably be quite confident of.

You might say that our judgement that one of these (ex-?) human beings is conscious is dependent on our ability to reverse the paralysis and check. But, given enough evidence that the induction of paralysis is harmlessly reversible, I claim we could be very confident even if we knew that after (say) a week both would be killed without the paralysis ever being reversed.

Comment author: tadasdatys 20 July 2017 04:00:33PM 0 points

Indeed, we can always make two things seem indistinguishable, if we eliminate all of our abilities to distinguish them. The two bodies in your case could still be distinguished with an fMRI scan, or a similar tool. This might not count as "behavior", but then I never wanted "behavior" to literally mean "hand movements".

I think you could remove that by putting the two people into magical impenetrable boxes and then randomly killing one of them, through some Schrödinger's-cat-like process. But I wouldn't find that very interesting either. Yes, you can hide information, but it's not just information about consciousness you're hiding, but also about "ability to do arithmetic" and many other things. Now, if you could remove consciousness without removing anything else, that would be very interesting.

Comment author: TheAncientGeek 19 July 2017 07:51:08PM * 1 point

The brain activity of pain is an objective fact

That the brain is not quiescent when experiencing pain is an objective fact. But no one knows precise descriptions of the full gamut of atomic configurations which implement pain.

Please check out multiple realisability.

Because of that, no one can genuinely tell whether an advanced robot has genuine qualia. That includes you, although you are inclined to think that your subjective intuitions are objective knowledge.

But the question of "do robots feel pain", is as interesting and meaningful as "are tables also chairs".

You accept multiple realisability for intelligence, but not for consciousness. That is arbitrary.

Comment author: tadasdatys 20 July 2017 08:19:19AM 0 points

But no one knows precise descriptions of the full gamut of atomic configurations which implement pain.

Sure, but what does that have to do with anything? Does "objective" mean "well understood" to you?

multiple realisability

There are multiple representations of pain in the same way that there are multiple representations of a chair.

It is ridiculous how much of this debate is about the basic problem of classification, rather than anything to do with brains. Flawed reasoning starts with a postulate that "Pain" exists and then asks what physical states correspond to it. And when told that "pain is the activity in region X", it somehow feels that "activity in Y could also be described as pain" is a counterargument. Good reasoning starts with noticing that people say "ouch" when they stub their toes, or that stubbing a toe has a very distinct feeling, and then asks what causes/predicts these actions/differences, and then wonders how we could best classify these.

your subjective intuitions

Calling my reasoning, even if not fully formal, "subjective intuitions" seems rude. I'm not sure if there is some point you're trying to express with that.

You accept multiple realisability for intelligence, but not for consciousness. That is arbitrary.

Not sure where you see me talking about intelligence. But intelligence is far better defined and more measurable than consciousness. Multiple realizability has nothing to do with that.

Comment author: TheAncientGeek 19 July 2017 07:39:08PM * 2 points

It is possible to use language meaningfully without knowing exactly how it pans out in terms of precise configurations of matter, and also without denying that it eventually does. Treating pain semantically as some specific brain activity buys you nothing in terms of the ability to communicate and understand... when you don't know which precise kind... which you don't. If Purple and Bitter are both Brain Activity Not Otherwise Specified, they are the same. If you can solve the mind-body problem, then you will be in a position to specify the different kinds of brain activity they are. But you can also distinguish them, here and now, using the subjectively obvious difference. And without committing yourself to evil dualism.

Comment author: tadasdatys 20 July 2017 06:49:12AM 0 points

It is possible to use language meaningfully without knowing exactly how it pans out in terms of precise configurations of matter

I have never claimed otherwise. In fact, there is literally nothing that I have an exact description of in terms of matter - neither pain nor chairs. But you have to know something. I know that "a chair is what I sit on", and from that there is a natural way to derive many statements about chairs. I know that "gravity is what makes things fall down", and from that there is a fairly straightforward way to the modern understanding of gravity. There is nothing that you know about consciousness, from which you can derive a more accurate and more material description.

Treating pain semantically as some specific brain activity buys you nothing

It buys me the ability to look at "do robots feel pain" and see that it's a stupid question.

And without committing yourself to evil dualism.

What evil dualism?

Comment author: TheAncientGeek 19 July 2017 07:21:52PM 0 points

I did not mean to imply that ideal moral reasoning is weird and unguessable... only that you should not take imperfect moral reasoning (whose?) to be the last word. The idea that deliberately causing pain is wrong is not contentious, and you don't actually have an argument against it.

It's only subjective in the sense that mine is different from yours

That's the sense that matters.

Comment author: tadasdatys 20 July 2017 06:35:03AM 0 points

That's the sense that matters.

That's not a very interesting sense. Is height also subjective, since we are not equally tall? This sense is also very far from the magical "subjective experience" you've used. I guess the problematic word in that phrase is "experience", not "subjective"?

Comment author: g_pepper 19 July 2017 08:16:44PM 0 points

If a thing is "impossible to measure", then the thing is likely bullshit.

In the case of consciousness, we are talking about subjective experience. I don't think that the fact that we can't measure it makes it bullshit. For another example, you might wonder whether I have a belief as to whether P=NP, and if so, what that belief is. You can't get the answer to either of those things via measurement, but I don't think that they are bullshit questions (albeit not particularly useful ones).

What understanding exactly? Besides "I'm conscious" and "rocks aren't conscious", what is it that you understand about consciousness?

In brief, my understanding of consciousness is that it is the ability to have self-awareness and first-person experiences.

Comment author: tadasdatys 20 July 2017 06:09:05AM 0 points

You can't get the answer to either of those things via measurement

What makes you think that? Surely this belief would be a memory and memories are physically stored in the brain, right? Again, there is a difference between difficult and impossible.

self-awareness and first-person experiences

Those sound like synonyms, not in any way more precise than the word "consciousness" itself.
