Clearly the only reasonable answer is "no, not in general".
I challenge this.
Either you relax the communication channel in such a way that I can access other kinds of information (brain scans, purchase history, body language, etc.), or you do not get to say "not in general", because there's nothing general about two people communicating only through a terminal.
To me it's like you're saying "can you tell me how a cake smells from a picture? No! So I'm not sure that smells are really communicable". Hm.
This post grew out of an old conversation with Wei Dai
Since the physical existence of Wei is highly doubtful, can we have a link to the conversation?
The argument is too general, as it also proves that it is impossible to know that another biological human has consciousness. Maybe nobody except me-now has it.
I knew a person who claimed that he could create 4-dimensional images in his mind's eye. I don't know whether I should believe him or how to check it.
What if the person claims to be able to add numbers? If you ask them about 2+2 and they answer 4, maybe they were preloaded with that response, but if you get them to add a few dozen Poisson-distributed numbers, maybe you start believing they're actually implementing the algorithm. This relies on the important distinction between telling two things apart with certainty and gathering evidence.
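A minimal sketch of that last distinction, under assumptions of my own (the subjects, the 0.02 fake-match likelihood, and all names below are illustrative, not the commenter's): each correctly summed batch of Poisson-distributed numbers is evidence that the subject really implements addition, but the posterior only approaches certainty.

```python
import numpy as np

rng = np.random.default_rng(0)

def honest_adder(numbers):
    """Actually implements the algorithm."""
    return sum(numbers)

def canned_responder(numbers):
    """Preloaded with the answer to 2+2, nothing else."""
    return 4

def posterior_really_adds(subject, prior=0.5, trials=5, k=30, lam=7):
    """Bayes-update the belief that `subject` really adds, assuming (crudely)
    that a fake only matches the true sum by luck and a real adder never errs."""
    p_match_if_fake = 0.02  # assumed likelihood, for illustration only
    p = prior
    for _ in range(trials):
        numbers = rng.poisson(lam, size=k)
        correct = subject(list(numbers)) == int(numbers.sum())
        like_real = 1.0 if correct else 0.0
        like_fake = p_match_if_fake if correct else 1.0 - p_match_if_fake
        denom = p * like_real + (1 - p) * like_fake
        p = p * like_real / denom if denom > 0 else 0.0
    return p

print(posterior_really_adds(honest_adder))      # climbs toward 1 but never reaches it
print(posterior_really_adds(canned_responder))  # drops to 0 after the first wrong sum
```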
The three examples deal with different kinds of things.
Knowing X mostly means believing in X, or having a memory of X. Ideally beliefs would influence actions, but even if they don't, they should be physically stored somehow. In that sense they are the most real of the three.
Having a mental skill to do X means that you can do X with less time and effort than other people. With honest subjects, you could try measuring these somehow, but, obviously, you may find some subject who claims to have the skill perform slower than another who claims not to. Ultimate...
Now imagine a sealed box that behaves exactly like a human, dutifully saying things like "I'm conscious", "I experience red" and so on. Moreover, you know from trustworthy sources that the box was built by scanning a human brain, and then optimizing the resulting program to use less CPU and memory (preserving the same input-output behavior). Would you be willing to trust that the box is in fact conscious, and has the same internal experiences as the human brain it was created from?
I think you're doing some priming here by adding "...
What sort of reasons are we currently talking about though? I want to hear reasons based on the properties of the objects being classified. You seem to accept whatever reasons you can come up with.
"Properties of the objects being classified" are much more extensive than you realize. For example, it is property of pain that it is subjective and only perceived by the one suffering it. Likewise, it is a property of a chair that someone made it for a certain purpose.
If IKEA made two identical objects and labeled one "chair" and another "table", would they then actually be different objects?
The intention of the one who makes a chair is relevant, but not necessarily completely determinate. If someone says "I am making a chair," but it turns out that the thing has the shape of a hammer, it still will not be a chair.
IKEA can have whatever intentions they want, but http://www.ikea.com/us/en/catalog/products/20299829/ is a stool. Are you seriously telling me that it isn't?
In most cases of that kind, the thing being called a table really is a table, and not a stool. Obviously I cannot confirm this in the particular case, since I do not intend to buy it. But it is related to the fact that it is made for a certain purpose, as I said: in most cases the thing is not suitable for use as a stool. It might collapse after one occasion of sitting on it, or anyway after several days. In other words, being made as a table, it is physically unsuitable to be used as a seat. And consequently, if it did collapse, it would be quite correct to say, "This collapsed because you were using it as a stool even though it is not one."
That said, I already said that the intention of the makers is not 100% determining.
That's assuming that "feeling" is a meaningful category.
That's not subject to falsification, in the same way that it is not subject to falsification that the thing I am sitting on is called a "chair." In other words, I already notice the similarity between all the things that are called feelings in the same way that I notice the similarity between chairs.
If you didn't start from that assumption, and instead identified your experiences with brain states, you could go one step further and ask "are the states of the robot's processor/memory similar to my brain states", but then you hit the obvious classification problem.
Talk about assumptions. I assume, and you are assuming here, that I have a brain, because we know in most cases that when people have been examined, they turned out to have brains inside their heads. But the fact that my toe hurts when I stub it is not an assumption. If it turned out that I did not have a brain, I would not say, "I must have been wrong about suffering pain." I would say "My pain does not depend on a brain." I pointed out your error in this matter several times earlier -- the meaning of pain has absolutely nothing at all to do with brain activities or even the existence of a brain. As far as anyone knows, the pain I feel when I stub my toe could depend on a property of the moon, and the pain I feel when I bump into a lamppost on a property of Mt. Everest. If that were the case, it would affect in no way the fact that those two pains feel similar.
There are some similarities and there are some differences, and you have to choose which of those are the most important to you, and there is no one right way to do it. Lack of knowledge isn't the main problem here.
This is completely wrong, for the reason I just stated. We are not talking about similarities between brain states -- we are talking about the similarity of two feelings. So it does not matter if the robot's brain state is similar to mine. It matters whether it feels similar, just as I noted that my different pains feel similar to one another, and would remain feeling similar even if they depended on radically different physical objects like the moon and Mt. Everest.
The intention of the one who makes a chair is relevant, but not necessarily completely determinate. If someone says "I am making a chair," but it turns out that the thing has the shape of a hammer, it still will not be a chair.
When exactly is the intention relevant? If two objects have the same shape but different intended uses, and you still classify them the same, then the intention is not relevant. More generally, if we have variables X, Y and want to test whether a function f(X,Y) depends not only on X, but also on Y, we have to find a pair of points that agree on X but differ in Y, and check whether f gives different values on them.
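To make that test concrete, here is a small sketch (the function names and the shape/intention example are mine, purely illustrative, not from the thread): f depends on Y exactly when some fixed X admits two Y values that give different outputs.

```python
def depends_on_y(f, xs, ys):
    """True iff, for some fixed x, varying y changes the value of f(x, y)."""
    return any(len({f(x, y) for y in ys}) > 1 for x in xs)

# Classifying by shape alone ignores the maker's intention;
# classifying by both does not.
classify_by_shape = lambda shape, intention: shape
classify_by_both = lambda shape, intention: (shape, intention)

shapes = ["stool-shaped", "hammer-shaped"]
intentions = ["meant as a chair", "meant as a table"]

print(depends_on_y(classify_by_shape, shapes, intentions))  # False
print(depends_on_y(classify_by_both, shapes, intentions))   # True
```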
(This post grew out of an old conversation with Wei Dai.)
Imagine a person sitting in a room, communicating with the outside world through a terminal. Further imagine that the person knows some secret fact (e.g. that the Moon landings were a hoax), but is absolutely committed to never revealing their knowledge of it in any way.
Can you, by observing the input-output behavior of the system, distinguish it from a person who doesn't know the secret, or knows some other secret instead?
Clearly the only reasonable answer is "no, not in general".
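A toy rendering of the thought experiment (the class and function names are mine, not the post's): two terminal-bound agents give byte-identical replies, and one of them holds a secret that never touches the output channel, so no transcript-based test can separate them.

```python
class PersonAtTerminal:
    def __init__(self, secret=None):
        self._secret = secret  # stored, but never consulted when replying

    def reply(self, message: str) -> str:
        return f"You said {message!r}, and I have nothing further to add."

def transcript(person, questions):
    return [person.reply(q) for q in questions]

questions = ["Were the Moon landings a hoax?", "Is there anything you're hiding?"]
knower = PersonAtTerminal(secret="the Moon landings were a hoax")
ignorant = PersonAtTerminal(secret=None)

# Identical input-output behavior, so no observer at the other end of the
# terminal can distinguish the two, however many questions they ask.
assert transcript(knower, questions) == transcript(ignorant, questions)
```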
Now imagine a person in the same situation, claiming to possess some mental skill that's hard for you to verify (e.g. visualizing four-dimensional objects in their mind's eye). Can you, by observing the input-output behavior, distinguish it from someone who is lying about having the skill, but has a good grasp of four-dimensional math otherwise?
Again, clearly, the only reasonable answer is "not in general".
Now imagine a sealed box that behaves exactly like a human, dutifully saying things like "I'm conscious", "I experience red" and so on. Moreover, you know from trustworthy sources that the box was built by scanning a human brain, and then optimizing the resulting program to use less CPU and memory (preserving the same input-output behavior). Would you be willing to trust that the box is in fact conscious, and has the same internal experiences as the human brain it was created from?
A philosopher believing in computationalism would emphatically say yes. But considering the examples above, I would say I'm not sure! Not at all!
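To make the "optimized box" concrete, here is a deliberately crude sketch (the names and the lookup-table trick are my own illustration, not a claim about how such a box would really be built): an optimization that preserves every input-output pair can still throw away all of the original's internal structure, which is exactly what feeds the doubt above.

```python
def scanned_brain(stimulus: str) -> str:
    """Stand-in for the scanned program: does laborious internal processing."""
    activations = [ord(c) % 7 for c in stimulus]   # pretend "neurons firing"
    response = "I'm conscious and I experience " + stimulus + "."
    return response if sum(activations) >= 0 else "..."  # branch is an internal detail

# "Optimize": precompute a lookup table over the tested inputs, preserving the
# input-output behavior while erasing everything the original did internally.
STIMULI = ["red", "pain", "the smell of cake"]
LOOKUP = {s: scanned_brain(s) for s in STIMULI}

def optimized_box(stimulus: str) -> str:
    return LOOKUP[stimulus]

for s in STIMULI:
    assert scanned_brain(s) == optimized_box(s)   # behaviorally indistinguishable
```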