(This post grew out of an old conversation with Wei Dai.)
Imagine a person sitting in a room, communicating with the outside world through a terminal. Further imagine that the person knows some secret fact (e.g. that the Moon landings were a hoax), but is absolutely committed to never revealing their knowledge of it in any way.
Can you, by observing the input-output behavior of the system, distinguish it from a person who doesn't know the secret, or knows some other secret instead?
Clearly the only reasonable answer is "no, not in general".
Now imagine a person in the same situation, claiming to possess some mental skill that's hard for you to verify (e.g. visualizing four-dimensional objects in their mind's eye). Can you, by observing the input-output behavior, distinguish this person from someone who is lying about having the skill, but otherwise has a good grasp of four-dimensional math?
Again, clearly, the only reasonable answer is "not in general".
Now imagine a sealed box that behaves exactly like a human, dutifully saying things like "I'm conscious", "I experience red" and so on. Moreover, you know from trustworthy sources that the box was built by scanning a human brain, and then optimizing the resulting program to use less CPU and memory (preserving the same input-output behavior). Would you be willing to trust that the box is in fact conscious, and has the same internal experiences as the human brain it was created from?
A philosopher who believes in computationalism would emphatically say yes. But considering the examples above, I'd say I'm not sure! Not at all!
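To make the worry concrete, here's a toy sketch (my own illustration; the function names and the single canned exchange are invented). Two programs agree on every input, but one derives its answer from internal state while the other has had that state optimized away into a bare lookup table. No black-box test separates them:

```python
def respond_computed(prompt: str) -> str:
    """Derives its answer from an internal 'experience' variable."""
    experience = {"color": "red"}  # stand-in for rich internal state
    if prompt == "What do you see?":
        return f"I see {experience['color']}"
    return "I don't understand"

def respond_cached(prompt: str) -> str:
    """Same input-output behavior, but the internal state has been
    optimized away into a lookup table."""
    table = {"What do you see?": "I see red"}
    return table.get(prompt, "I don't understand")

# The two functions agree on every input, so an observer restricted to
# input-output behavior can't tell which one is inside the box.
for p in ["What do you see?", "Hello"]:
    assert respond_computed(p) == respond_cached(p)
```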
You wouldn't think that a book or an Eliza program saying "I see red" was conscious, right? The question is whether optimizing an upload can bring it close to an Eliza program on some topics. I think that's possible, given how little we can say about consciousness (i.e. how few different responses we'd need to code into the Eliza program; see the sketch after this exchange).
I'm not disagreeing in principle; it depends on the degree of optimization and the set of inputs on which you expect the upload to have low error. Eliza succeeds on a very small set of inputs but fails quickly on anything close to real life. It's possible that some representation more compact than the DAG with consciousness in it also produces "I see red", but I don't think it's that easy to optimize consciousness out without breaking other tests. BTW, you've read Blindsight, right? Great sci-fi on basically this topic (with aliens instead of uploads).
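For concreteness, here's the kind of minimal Eliza-style responder this exchange has in mind (a sketch; the patterns and canned responses are invented for illustration). The unsettling part is how short the table is, and, as the last line shows, how quickly it fails outside its tiny domain:

```python
import re

# A minimal Eliza-style responder. If our talk about consciousness only
# exercises a handful of patterns, an optimizer could in principle replace
# whatever machinery produced those answers with entries like these.
RULES = [
    (re.compile(r"are you conscious", re.I), "Yes, I'm conscious."),
    (re.compile(r"what do you (see|experience)", re.I), "I experience red."),
    (re.compile(r"how does it feel", re.I), "It's hard to put into words."),
]

def reply(prompt: str) -> str:
    # Return the first canned response whose pattern matches the prompt.
    for pattern, response in RULES:
        if pattern.search(prompt):
            return response
    return "Can you say more about that?"

print(reply("Are you conscious?"))           # Yes, I'm conscious.
print(reply("What do you experience?"))      # I experience red.
print(reply("Describe the redness of red.")) # falls through: the table is tiny
```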