Thanks for your answers.
Does an animated GIF possess human consciousness?
No, but as others pointed out, an animated GIF is not a simulation of the thing it represents.
Just to be clear, when we are talking of simulations of a computational system, we mean something that computes the same input to output mapping of the system that is simulated, the same mathematical function (or, more precisely the same posterior, if the system is stochastic).
An animated GIF doesn't respond to inputs; therefore, it doesn't compute the same function that the brain computes.
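This distinction can be made concrete with a toy sketch (my own illustration, with made-up functions, not anything from the discussion): a simulation must reproduce the system's input-to-output mapping on every input, while a recording is a fixed output stream that ignores inputs entirely.

```python
def system(x):
    """The original system: some fixed input-to-output mapping."""
    return x * x + 1

def simulation(x):
    """A different implementation (a different 'substrate') that
    computes the same mapping, here via a lookup table."""
    table = {i: i * i + 1 for i in range(100)}
    return table[x]

# A recording, like an animated GIF: a fixed sequence of outputs
# captured while the system was fed the input 3 three times.
recording = [system(x) for x in [3, 3, 3]]

# The simulation agrees with the system on every input we probe...
assert all(system(x) == simulation(x) for x in range(100))

# ...but the recording cannot be probed at all: it only replays the
# outputs for the inputs that happened to be captured.
```

The point of the sketch: `simulation` counts as a simulation of `system` because it computes the same function, even though its internals are completely different; `recording` does not, because it has no input-to-output behavior to compare.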
Think of playing an old console video game on an emulator vs watching a video recorded from the console screen of somebody playing that game. Clearly the emulator and the video are very different objects:
you can legitimately say that the emulator is simulating the game; in fact, you can say that the emulator is actually running the game. "Being a video game" is a property of certain patterns of input-output mappings, and this property is invariant (up to a performance overhead) under simulation; it is independent of the physical substrate.
on the other hand, a video recording of somebody playing a game can't be said to be a game, or even a simulation of a game.
To V_V, I don't think it has human consciousness. If I answer otherwise, I'm pressed to acknowledge that well-coded chatbots have human consciousness, which is absurd. With regard to what "conscious" means in epistemic terms, I don't know, but I do know that the Turing test is insufficient because it only deals with appearances and it's easy to be duped.
Well-coded chatbots don't come anywhere close to simulating the linguistic behavior of humans. There are claims now and then that some chatbot has passed the Turing test, but if you look past the hype, all of these claims are fundamentally false. Here is Scott Aaronson's take on the most recent of these claims.
Seriously, if we really had computer programs passing the Turing test, we would probably also have computer programs working as engineers or lawyers.
To update my posterior beliefs, I would have to know the basis for consciousness, about which I acknowledge uncertainty.
I'm asking how you understand the term at an operational level right now.
Let me introduce you to Foo. Foo may be a human, an animal, a plant, a non-living object, etc. It may be an artifact, a naturally occurring object, or a combination of both. It may be in a normal state for its kind of object or in an abnormal state (e.g. in a coma, out of fuel, out of battery charge). I won't tell you.
If I ask you questions about the behavior of Foo, e.g. "Does Foo move if prodded with a stick?", "Can Foo find the exit of a maze?", "How does Foo behave in front of a mirror?", "Can you train Foo to push a button when a certain light goes on?", "Can you trade with Foo?", "Can you discuss philosophy with Foo?", you can't answer these questions. In Bayesian terms, your subjective probability distribution over possible empirical observations about Foo has a large entropy.
Now I tell you that Foo is conscious. I won't tell you what I mean by "conscious"; I'm leaving that to your interpretation.
I bet that now you can answer many of the questions above, if not with certainty then at least with some significant confidence. In Bayesian terms, after conditioning on the piece of evidence "Foo is conscious", the entropy of your subjective probability distribution over possible empirical observations about Foo becomes smaller.
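The entropy-reduction claim can be illustrated numerically. In this toy sketch, all distributions and likelihoods are made up purely for illustration; the point is only the mechanism: conditioning on informative evidence via Bayes' rule concentrates the distribution and shrinks its Shannon entropy.

```python
import math

def entropy(dist):
    """Shannon entropy (in bits) of a discrete distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Made-up prior over what Foo might be: maximal uncertainty.
prior = {"human": 0.25, "animal": 0.25, "plant": 0.25, "rock": 0.25}

# Made-up likelihoods P("Foo is conscious" | kind of thing).
likelihood = {"human": 0.95, "animal": 0.6, "plant": 0.01, "rock": 0.0}

# Bayesian update: posterior is proportional to prior times likelihood.
unnorm = {k: prior[k] * likelihood[k] for k in prior}
z = sum(unnorm.values())
posterior = {k: v / z for k, v in unnorm.items()}

print(entropy(prior))      # 2.0 bits: a uniform distribution over 4 options
print(entropy(posterior))  # noticeably less than 2.0 bits
```

After the update, the probability mass concentrates on "human" and "animal", so questions like "Can you train Foo to push a button when a certain light goes on?" become answerable with significant confidence, which is exactly the entropy reduction described above.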
Do you agree with that? If so, how do you reconcile that with non-functionalism?
No, but as others pointed out, an animated GIF is not a simulation of the thing it represents.
The animated GIF, as I originally described it, is an "imitation of the operation of a real-world process or system over time", which is the verbatim definition (from Wikipedia) of a simulation. Counterfactual dependencies are not needed for imitation.
...Just to be clear, when we are talking of simulations of a computational system, we mean something that computes the same input to output mapping of the system that is simulated, the same mathematical
A recently published article in Nature Methods describes a new protocol for preserving mouse brains that allows the neurons to be traced across the entire brain, something that wasn't possible before. This is exciting because in as little as 3 years the method could be extended to larger mammals (like humans), paving the way for better neuroscience or even brain uploads. From the abstract:
http://blog.brainpreservation.org/2015/04/27/shawn-mikula-on-brain-preservation-protocols/