(This post grew out of an old conversation with Wei Dai.)
Imagine a person sitting in a room, communicating with the outside world through a terminal. Further imagine that the person knows some secret fact (e.g. that the Moon landings were a hoax), but is absolutely committed to never revealing their knowledge of it in any way.
Can you, by observing the input-output behavior of the system, distinguish it from a person who doesn't know the secret, or knows some other secret instead?
Clearly the only reasonable answer is "no, not in general".
Now imagine a person in the same situation, claiming to possess some mental skill that's hard for you to verify (e.g. visualizing four-dimensional objects in their mind's eye). Can you, by observing the input-output behavior, distinguish it from someone who is lying about having the skill, but has a good grasp of four-dimensional math otherwise?
Again, clearly, the only reasonable answer is "not in general".
Now imagine a sealed box that behaves exactly like a human, dutifully saying things like "I'm conscious", "I experience red" and so on. Moreover, you know from trustworthy sources that the box was built by scanning a human brain, and then optimizing the resulting program to use less CPU and memory (preserving the same input-output behavior). Would you be willing to trust that the box is in fact conscious, and has the same internal experiences as the human brain it was created from?
A philosopher believing in computationalism would emphatically say yes. But considering the examples above, I would say I'm not sure! Not at all!
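The worry turns on the fact that semantics-preserving optimization can discard internal structure entirely. A toy sketch of the idea, with hypothetical functions that are not from the post:

```python
# Two implementations with identical input-output behavior but different
# internals -- a toy stand-in for the brain scan vs. the optimized box.

def verbose_reply(prompt: str) -> str:
    # "Original" implementation: builds its answer step by step,
    # keeping intermediate state around.
    words = prompt.split()
    count = 0
    for _ in words:
        count += 1
    return f"I heard {count} words."

def optimized_reply(prompt: str) -> str:
    # "Optimized" implementation: same outputs for every input,
    # less work, and none of the intermediate state.
    return f"I heard {len(prompt.split())} words."

# No sequence of inputs distinguishes the two:
for p in ["hello there", "I experience red", ""]:
    assert verbose_reply(p) == optimized_reply(p)
```

An observer at the terminal sees the same behavior either way; whatever was special about the intermediate state of the first version is simply gone from the second.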
The chair you are sitting on is a realisation; Van Gogh's painting of his chair at Arles is a representation. You can't sit on it.
That's very vaguely phrased. There are questions of whether pain has phenomenal qualities, whether it is totally reducible to physical behaviour, and whether it is multiply realisable. If pain doesn't have phenomenal properties, how do you decide which set of brain states gets labelled as pain states?
But the concern is that you have no way of coming to know the answers to those questions. You have predetermined from the outset that everything must be treated as physics, so you will inevitably get out the answer you put in. You are not treating the identity of pain with brain states as a falsifiable hypothesis.
There are uncontentious examples of multiply realisable (MR) things. Everything in computer science is MR: all algorithms, data structures, whatever. For the purposes of AI research, intelligence is assumed to be MR. There is no implication that MR things are things that "exist apart" from their realisations. So I don't know where you are getting that from.
I would have to believe pain is MR to believe that; but the objection cannot be that nothing is MR. You are apparently being inconsistent about multiple realisability.
Colour and taste are different categories, therefore category error.
No, I'm treating the identity of pain with the memories, thoughts, and behaviours that express pain as unfalsifiable. In other words, I loosely define pain as "the thing that makes you say ouch". That's how definitions work: the theory that the thing I'm sitting on is a chair is also unfalsifiable. At that point, the identity of pain with brain states is in principle falsifiable; you just induce the same state in two brains and observe only one saying ouch.