The confusion seems to stem from the fact that I'm not talking about the pseudo-causal structure of the modeling units comprising the simulation, but rather the causal structure of the underlying physical basis of the computer running the simulation.
The natural objection is, why would the physical substrate matter?
Let's assume you replace somebody's brain with a von Neumann computer running a simulation of that person's brain. You get something that behaves like a conscious person, and even claims to be a conscious person if asked. Would you say that this thing is not conscious?
If you think it is not conscious, then what does "conscious" actually mean in epistemic terms? If I tell you that X is conscious, how do you update your posterior beliefs on the outcomes of future observations about X?
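To make the epistemic question concrete, here is a minimal sketch of what "updating a posterior" means in Bayesian terms. The scenario, the observation model, and all the numbers are invented for illustration; the point is only that if "X is conscious" assigns the same likelihood to every future observation, then learning it moves no posterior at all.

```python
# Hypothetical sketch: treating "X is conscious" as a claim that should
# change the likelihoods we assign to future observations about X.
# All probabilities here are made up for illustration.

def posterior(prior, likelihood_if_true, likelihood_if_false):
    """Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)."""
    evidence = likelihood_if_true * prior + likelihood_if_false * (1 - prior)
    return likelihood_if_true * prior / evidence

# Prior belief in some hypothesis H about X's future behavior.
prior = 0.5

# If the claim carries no predictive content, the likelihoods under H and
# not-H are identical, and the posterior does not move from the prior:
no_update = posterior(prior, 0.9, 0.9)

# If the claim does constrain expected observations, the posterior shifts:
update = posterior(prior, 0.9, 0.3)

print(no_update, update)
```

Under these invented numbers, the first call returns 0.5 (no update) and the second 0.75, which is one way of cashing out the question: if no observation's likelihood changes, the label "conscious" is doing no epistemic work.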
A recently published article in Nature Methods describes a new protocol for preserving mouse brains that allows the neurons to be traced across the entire brain, something that wasn't possible before. This is exciting because in as little as 3 years, the method could be extended to larger mammals (like humans), paving the way for better neuroscience or even brain uploads. From the abstract:
http://blog.brainpreservation.org/2015/04/27/shawn-mikula-on-brain-preservation-protocols/