Clearly the only reasonable answer is "no, not in general".
I challenge this.
Either you relax the communication channel so that I can access other kinds of information (brain scans, purchase history, body language, etc.), or you don't get to say "not in general", because there's nothing general about two people communicating only through a terminal.
To me it's like you're saying: "Can you tell how a cake smells from a picture? No! So I'm not sure that smells are really communicable." Hm.
This post grew out of an old conversation with Wei Dai
Since physical existence of Wei is highly doubtful can we have a link to the conversation?
The argument is too general, as it also proves that it is impossible to know that another biological human is conscious. Maybe nobody except me-now is.
I knew a person who claimed that he could create four-dimensional images in his mind's eye. I don't know whether I should believe him, or how to check it.
What if the person claims to be able to add numbers? If you ask them about 2+2 and they answer 4, maybe they were preprogrammed with that response, but if you get them to add a few dozen Poisson-distributed numbers, maybe you start believing they're actually implementing the algorithm. This relies on the important distinction between telling two things apart with certainty and gathering evidence.
The three examples deal with different kinds of things.
Knowing X mostly means believing in X, or having a memory of X. Ideally beliefs would influence actions, but even if they don't, they should be physically stored somehow. In that sense they are the most real of the three.
Having a mental skill to do X means that you can do X with less time and effort than other people. With honest subjects, you could try measuring this somehow, but, obviously, you may find some subject who claims to have the skill performing slower than another who claims not to. Ultimate...
Now imagine a sealed box that behaves exactly like a human, dutifully saying things like "I'm conscious", "I experience red" and so on. Moreover, you know from trustworthy sources that the box was built by scanning a human brain, and then optimizing the resulting program to use less CPU and memory (preserving the same input-output behavior). Would you be willing to trust that the box is in fact conscious, and has the same internal experiences as the human brain it was created from?
I think you're doing some priming here by adding "...
How do I know that some activity is "pondering your own consciousness"?
Isn't that what you were doing when you said "Can I be sure that I'm conscious"?
It seems to me that one's own consciousness is beyond dispute if one is able to think about things (including but not limited to one's own consciousness) and have first-person experiences. Even if one disputes the consciousness of others (for example, if one is a solipsist), I don't see how anyone can reasonably doubt his/her own consciousness.
It's turtles all the way down. Just like you can't give me a description of consciousness, and you can't give me a description of "pondering your own consciousness", you can't give me a description of "first person experiences" either. You can't give me a description of any of these related concepts except in terms of other such concepts.
It's not so much that I'm doubting whether I'm conscious, but rather I'm doubting whether I can figure out whether I'm conscious. I can't figure out if I have something when you can't communicate to me exactly what it is that I may or may not have.
(This post grew out of an old conversation with Wei Dai.)
Imagine a person sitting in a room, communicating with the outside world through a terminal. Further imagine that the person knows some secret fact (e.g. that the Moon landings were a hoax), but is absolutely committed to never revealing their knowledge of it in any way.
Can you, by observing the input-output behavior of the system, distinguish it from a person who doesn't know the secret, or knows some other secret instead?
Clearly the only reasonable answer is "no, not in general".
Now imagine a person in the same situation, claiming to possess some mental skill that's hard for you to verify (e.g. visualizing four-dimensional objects in their mind's eye). Can you, by observing the input-output behavior, distinguish it from someone who is lying about having the skill, but has a good grasp of four-dimensional math otherwise?
Again, clearly, the only reasonable answer is "not in general".
Now imagine a sealed box that behaves exactly like a human, dutifully saying things like "I'm conscious", "I experience red" and so on. Moreover, you know from trustworthy sources that the box was built by scanning a human brain, and then optimizing the resulting program to use less CPU and memory (preserving the same input-output behavior). Would you be willing to trust that the box is in fact conscious, and has the same internal experiences as the human brain it was created from?
A philosopher believing in computationalism would emphatically say yes. But considering the examples above, I would say I'm not sure! Not at all!