For the purpose of this thought experiment, let's assume that the reductionist approach to consciousness is correct: consciousness is indeed reducible to physics, and there is no ontologically distinct basic mental element that produces the experience of consciousness, i.e. the qualia.
"The Scary problem of Qualia"
This leads to a very strange problem which I'd term "the Scary Problem of Qualia", in a slightly humorous sense. If mental experience is as proposed, and humans with thinking brains are "merely physics", or "ordered physics" (physics + logic), then it follows that the experience of consciousness is logic given the physics...
...Which is to say that whenever there is (a physical arrangement with) a logical structure that matches (is isomorphic to) the logical structure of consciousness, there would be consciousness. It gets more complicated. If you draw a line with a pencil on a piece of paper so that it encodes the three-dimensional trajectory over time of a sentient being's consciousness, you have basically created a "soulful" being. Except there's just a drawn line on a piece of paper.
(Assuming you can store a sufficient number of bits in such an encoding. Think of a "large" paper and a long, complicated line if imagining an A4 sheet with something scribbled on it is a problem. You can also replace the pencil and paper with a Turing machine if you like.)
If you now take a device complicated enough to decode the message and create a representation of it, say, as a sci-fi example, a brain-modifying device which stimulates a form of empathic inference, you can partially think the thoughts recorded on the piece of paper. Further still, you could simulate this particular person inside a powerful AI, even save that information to a disk, insert it into an android, and let that person go on living his/her life. If this isn't sufficient, you could engineer a biological being whose brain produces a series of chemical and enzymatic reactions, combined with an electron cloud, etc., that happens to be isomorphic to the logical structure of the data stored on the chip.
Meditation: And this creates another kind of problem. Did the person come into existence:
1. When the line drawn with the pencil came into existence?
2. When the entity that created the line thought of how to draw the line?
3. When the line was decoded?
4. When the supercomputer AI simulated the person from the line?
5. When it was recorded onto the chip for the android?
6. Or lastly when the contents of the chip were translated to a biological brain producing the same thoughts?
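One way to make the puzzle concrete (a toy sketch of my own, not part of the original argument): if consciousness is nothing but logical structure, then every stage in the list above carries the "same" structure, just on a different physical carrier. Here a short string stands in for the recorded trajectory, and each variable stands in for one of the media (the names are purely illustrative):

```python
# Toy illustration: the same logical structure survives every re-encoding.
# A "person" is stood in for by a short string; each medium below is just
# a different physical carrier of the same bits.

person = "trajectory-of-a-mind"              # the original structure

as_pencil_line = person.encode("utf-8")      # 1. bits drawn as a line
as_decoded = as_pencil_line.decode("utf-8")  # 3. the line decoded
as_chip = bytearray(as_pencil_line)          # 5. recorded onto a chip
as_brain = bytes(as_chip).decode("utf-8")    # 6. translated once more

# Every representation carries the identical structure:
assert person == as_decoded == as_brain
print("all encodings structurally identical:", person == as_decoded == as_brain)
```

Nothing about the bits tells you *which* of these stages, if any, is the one where the person "exists", which is exactly the meditation above.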
Meditation: Is logic an ontologically basic thing?
In my book this is all a natural consequence of the reductionist approach to the hard problem of consciousness. That is, unless we want to consider the possibility that electron clouds have little tags which say "consciousness" hanging from... uh-huh... from their amplitudes.
So in other words: from the reductionist perspective there's just physics, which can be described with the help of logic. Whenever there is a physical part of the universe that is correlated with the rest of the universe in such a way that it would resemble consciousness when interacted with, that thing would be just as much a person/zombie as we are. The same goes for simulated people.
edited: I was not aware of that paradox, but it looks like it is created by formulating a false premise and accepting it as true. In the example given on that Wikipedia page, the "paradox of the heap", the second premise is obviously incorrect. If you have 5 dollars in your pocket and create a rule which says "even if you spend money, you'll still have money in your pocket", it's pretty clear that this isn't true after you've spent the 5 dollars.
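The broken induction in the money example can be sketched in a few lines of Python (a toy illustration of my own, not from the original discussion; the function name is made up):

```python
# The (false) inductive rule: "even if you spend a dollar,
# you'll still have money in your pocket."
def still_has_money_after_spending(dollars: int) -> bool:
    """The rule claims this is True for every positive balance."""
    return (dollars - 1) > 0

pocket = 5
while pocket > 0:
    rule_holds = still_has_money_after_spending(pocket)
    pocket -= 1  # spend one dollar
    print(f"spent a dollar, {pocket} left; rule predicted money remains: {rule_holds}")
```

The rule holds on the first four iterations and fails on the fifth, when the pocket reaches zero. The premise was never universally true; it only looked true far from the boundary, which is the same trick the heap version plays.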
I think it's closely related to the ship of Theseus, which questions identity after changing parts; the Sorites paradox questions identity after removing parts. If you have vague definitions for the identity, construct false rules, and accept those false rules as true, then these unnecessary paradoxes will follow.
If you remove boards from a ship, you won't get to "no ship" or "scattered boards" directly; instead, at some point you get to "sinking ship", "broken ship", "incomplete ship", or just "ship missing a board", etc. That isn't about the ship, but rather about the vagueness of our labels for it. That's what I think, at least.
I think it was a good pick in the context of consciousness, because consciousness is really complicated and we only have very vague definitions for it.
I think "Replace the Symbol with the Substance" and "Disputing Definitions" are good LessWrong posts on similar issues.
Well yes, we can clearly see that the second premise is false after some inductive reasoning.
But there's also another route, the non-inductive route: can you give me a single example of a heap of sand that becomes a non-heap when you remove a grain?
The point is not that heaps are magic or induction is broken or anything like that. The point is that humans are awful at finding the boundaries of their categories. And as Wei Dai would note, we can't just get around this by playing taboo when the thing we're supposed to be finding the boundary of enters directly into our utility function.