A recently published article in Nature Methods describes a new protocol for preserving mouse brains that allows neurons to be traced across the entire brain, something that wasn't possible before. This is exciting because in as little as 3 years, the method could be extended to larger mammals (including humans) and pave the way for better neuroscience or even brain uploads. From the abstract:
Here we describe a preparation, BROPA (brain-wide reduced-osmium staining with pyrogallol-mediated amplification), that results in the preservation and staining of ultrastructural details throughout the brain at a resolution necessary for tracing neuronal processes and identifying synaptic contacts between them. Using serial block-face electron microscopy (SBEM), we tested human annotator ability to follow neural ‘wires’ reliably and over long distances as well as the ability to detect synaptic contacts. Our results suggest that the BROPA method can produce a preparation suitable for the reconstruction of neural circuits spanning an entire mouse brain.
http://blog.brainpreservation.org/2015/04/27/shawn-mikula-on-brain-preservation-protocols/
The animated GIF, as I originally described it, is an "imitation of the operation of a real-world process or system over time", which is the verbatim definition (from Wikipedia) of a simulation. Counterfactual dependencies are not needed for imitation.
Ok, let's go with this definition. As I understand it then, machine functionalism is not about simulation (as imitation) per se but rather about recreating the mathematical function that the human brain is computing. Is this correct?
A brain doesn't necessarily respond to inputs, but sure, we can require that the simulation responds to inputs, though I find this requirement a bit strange.
It sounds like a beautiful idea, being invariant under a simulation that is independent of substrate.
I agree.
In short, it's a combination of a Turing test and the possession of a functioning human brain-like structure. If an entity exhibits awake human-like behavior (i.e., by passing the Turing test or a suitable approximation) and possesses a living human brain (inferred from visual inspection of their biological form) or a human brain-like equivalent (which I've yet to see, except possibly in some non-human primates), then I generally conclude it has human or human-like consciousness.
When I consider your comment here with your previous comment above that "definitions of consciousness which are not invariant under simulation have little epistemic usefulness", I think I understand your argument better. However, the epistemic argument you're advancing is a fallacy because you're demonstrating what you assume: if I run an accurate simulation of a human brain on a computer and ask it whether it has human consciousness, of course it will say 'yes', and it will even pass the Turing test, because we're assuming it's an accurate simulation of a human brain. The reasoning is circular and does not actually inform us whether the simulation is conscious. So your "epistemic usefulness" appears irrelevant to the question of whether machine functionalism is correct. Or am I missing something?
My general question to the machine functionalists here is, why are you assuming it is sufficient to merely simulate the human brain to recreate its conscious experience? The human brain is a chemico-physical system and such systems are generally explained in terms of causal structures involving physical or chemical entities, though such explanations (including simulations) are never mistaken for the thing itself. So why should human consciousness, which is a part of the natural world and whose basis we know first-hand involves the human brain, be any different?
If the question here is whether consciousness is a substrate-independent function that the brain computes, or whether it is instead associated with a unique type of physico-chemical causal (space-time) structure, then I would say the latter is more likely, given the past successes of physics and chemistry in explaining natural phenomena. In any event, our knowledge of the basis of consciousness is still highly speculative. I can attempt further reductio ad absurdum arguments against machine functionalism involving ever more ridiculous scenarios, but will probably not convince anyone who has taken the requisite leap of faith.
So, one reason I pointed you at orthonormal's sequence is that if you read all those posts they seem likely to trigger different intuitions for you.
I would also ask if you think that Aristotle - had he only been smarter - could have figured out his "unique type of physico-chemical causal (space-time) structure" from pure introspection. A negative answer would not automatically prove functionalism. We know of other limits on knowledge. But it does show that the thought experiment in which you are currently a simulation is at least as 'conceivable'...