(This post grew out of an old conversation with Wei Dai.)
Imagine a person sitting in a room, communicating with the outside world through a terminal. Further imagine that the person knows some secret fact (e.g. that the Moon landings were a hoax), but is absolutely committed to never revealing their knowledge of it in any way.
Can you, by observing the input-output behavior of the system, distinguish it from a person who doesn't know the secret, or knows some other secret instead?
Clearly the only reasonable answer is "no, not in general".
Now imagine a person in the same situation, claiming to possess some mental skill that's hard for you to verify (e.g. visualizing four-dimensional objects in their mind's eye). Can you, by observing the input-output behavior, distinguish it from someone who is lying about having the skill, but has a good grasp of four-dimensional math otherwise?
Again, clearly, the only reasonable answer is "not in general".
Now imagine a sealed box that behaves exactly like a human, dutifully saying things like "I'm conscious", "I experience red" and so on. Moreover, you know from trustworthy sources that the box was built by scanning a human brain, and then optimizing the resulting program to use less CPU and memory (preserving the same input-output behavior). Would you be willing to trust that the box is in fact conscious, and has the same internal experiences as the human brain it was created from?
A philosopher believing in computationalism would emphatically say yes. But considering the examples above, I would say I'm not sure! Not at all!
"Properties of the objects being classified" are much more extensive than you realize. For example, it is a property of pain that it is subjective and only perceived by the one suffering it. Likewise, it is a property of a chair that someone made it for a certain purpose.
The intention of the one who makes a chair is relevant, but not necessarily completely determinative. If someone says "I am making a chair," but it turns out that the thing has the shape of a hammer, it still will not be a chair.
In most cases of that kind, the thing being called a table really is a table, and not a stool. Obviously I cannot confirm this in the particular case since I do not intend to buy it. But it is related to the fact that it is made for a certain purpose, as I said. In other words, in most cases the thing is not suitable for use as a stool: it might collapse after one occasion of sitting on it, or anyway after several days. In other words, being made as a table, it is physically unsuitable to be used as a seat. And consequently if it did collapse, it would be quite correct to say, "This collapsed because you were using it as a stool even though it is not one."
That said, I already said that the intention of the makers is not 100% determining.
That's not subject to falsification, in the same way that it is not subject to falsification that the thing I am sitting on is called a "chair." In other words, I already notice the similarity between all the things that are called feelings in the same way that I notice the similarity between chairs.
Talk about assumptions. I assume, and you are assuming here, that I have a brain, because we know in most cases that when people have been examined, they turned out to have brains inside their heads. But the fact that my toe hurts when I stub it is not an assumption. If it turned out that I did not have a brain, I would not say, "I must have been wrong about suffering pain." I would say "My pain does not depend on a brain." I pointed out your error in this matter several times earlier -- the meaning of pain has absolutely nothing at all to do with brain activities or even the existence of a brain. As far as anyone knows, the pain I feel when I stub my toe could depend on a property of the moon, and the pain I feel when I bump into a lamppost on a property of Mt. Everest. If that were the case, it would in no way affect the fact that those two pains feel similar.
This is completely wrong, for the reason I just stated. We are not talking about similarities between brain states -- we are talking about the similarity of two feelings. So it does not matter if the robot's brain state is similar to mine. It matters whether it feels similar, just as I noted that my different pains feel similar to one another, and would remain feeling similar, even if they depended on radically different physical objects like the moon and Mt. Everest.
When exactly is the intention relevant? If two objects have the same shape but different intended uses, and you still classify them the same, then the intention is not relevant. More generally, if we have variables X, Y and want to test whether a function f(X,Y) depends not only on X but also on Y, we have to find a point where changing Y alone, with X held fixed, changes the output.
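The dependence test described here can be sketched in a few lines. The names `depends_on_y` and `classify` are illustrative assumptions, not anything from the discussion itself; the toy classifier just mirrors the earlier hammer/chair example, where stated intention only decides borderline shapes.

```python
# Sketch: f depends on Y (not only on X) iff there is some point
# where varying Y with X held fixed changes the output.

def depends_on_y(f, x, y_values):
    """Return True if f(x, y) varies as y ranges over y_values with x fixed."""
    outputs = {f(x, y) for y in y_values}
    return len(outputs) > 1

# Toy classifier (hypothetical): intention matters only when the shape is ambiguous.
def classify(shape, intention):
    if shape == "ambiguous":
        return intention  # intention decides the borderline case
    return shape          # a clear shape overrides the stated intention

print(depends_on_y(classify, "hammer", ["chair", "hammer"]))     # False: shape settles it
print(depends_on_y(classify, "ambiguous", ["chair", "hammer"]))  # True: intention matters here
```

On this picture, "is intention relevant?" is not a yes/no question about the classifier as a whole; it is answered pointwise, by whether any such X exists at which Y makes a difference.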