Neurons perform analog summation, so the space-time diagram or causal structure is stochastic/statistical rather than deterministic

Surely you realize that quibbling over the use of analog vs digital neural summation in my toy example does not address my main argument.

Neural analog computational systems can be simulated perfectly in a probabilistic sense

Anything can be simulated perfectly (and trivially) in a probabilistic sense.

There are no objective tests for consciousness. Of course you can re-define it in terms of self-awareness but this is not the same.

A definition of consciousness which precludes objective testing is outside the realm of scientific inquiry at best, and pseudo-science at worst.

If we knew the basis for consciousness, we would have objective tests. It's possible that studying the brain's structural and connectional organization in detail will provide the clues we need to develop better informed opinions about the basis of consciousness.

This is my final post and I would like to thank everyone for the discussion. If anyone is interested in developing autotracing and autosegmentation programs for connectomics and neural circuit reconstruction in whole-brain volume electron microscopy datasets, please email me at brainmaps at gmail dot com or visit http://connectomes.org for more information. Thanks again.

causal structure of a Turing machine simulating a human brain is very different from an actual human brain.

This statement contravenes universal computability, and is therefore false. A universal computer can instantiate any other causal structure. Remember: the causal structure at the substrate level is irrelevant due to the universality of computation. Causal structures can be embedded within other causal structures (multiple realizability).

My statement does not contravene universal computability since I'm assuming a Turing machine can simulate a human brain. Let me try another approach: Look at the space-time diagram of a Turing machine adding two numbers and compare with the space-time diagram of a neuron performing a similar summation. The causal structures in the space-time diagrams are very different. Yes, you can simulate a causal structure, but this is not the same thing as the causal structure of the underlying physical substrate performing the simulation.
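To make the contrast concrete, here is a toy sketch (purely illustrative; the machine and the "neuron" are stand-ins, not models of real neural computation). The Turing-style adder produces a long chain of sequential head events in its space-time diagram, while the analog summation collapses into a single integration step:

```python
# Toy contrast (illustrative only): a step-by-step unary Turing-machine
# addition versus a one-shot analog summation. Same function, very
# different space-time diagrams.

def turing_style_add(a, b):
    """Unary addition as a one-tape Turing machine does it: write a 1
    over the separator blank, walk to the end, erase the last 1.
    Every head visit is one localized cause-effect event."""
    tape = [1] * a + [0] + [1] * b
    events = []                    # space-time diagram: (time, head position)
    head, t = 0, 0
    while tape[head] == 1:         # scan right over the first block of 1s
        events.append((t, head))
        head += 1
        t += 1
    tape[head] = 1                 # fill in the separator blank
    events.append((t, head))
    t += 1
    while head < len(tape) - 1:    # walk to the rightmost cell
        head += 1
        events.append((t, head))
        t += 1
    tape[head] = 0                 # erase one trailing 1 to restore the count
    events.append((t, head))
    return sum(tape), events

def neuron_style_add(inputs, weights):
    """Analog dendritic summation: the weighted inputs combine in a
    single integration step rather than a chain of head moves."""
    return sum(w * x for w, x in zip(weights, inputs))

total, diagram = turing_style_add(3, 4)
print(total, len(diagram))                    # 7 9: the sum via 9 sequential events
print(neuron_style_add([3, 4], [1.0, 1.0]))   # 7.0: the same sum in one step
```

Both procedures compute addition, but the recorded event chains, i.e., the causal structures of the physical processes, look nothing alike.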

It can be simulated because anything can be simulated!

Anything can be simulated imperfectly. Take the weather or the C. elegans nervous system.

are there any empirical predictions where your viewpoint disagrees with functionalism?

I'm just exhibiting skepticism over claims from machine functionalism relating to Turing (and related) machine consciousness. I'm not promoting a specific viewpoint.

I predict that within a decade or two, computers with about 10^14 ops will run human mind simulations, and these sims will pass any and all objective tests for human intelligence, self-awareness, consciousness, etc.

There are no objective tests for consciousness. Of course you can re-define it in terms of self-awareness but this is not the same.

People will just accept that sims are conscious/self-aware for the exact same reasons that we reject solipsism.

Have we rejected solipsism? Certainly panpsychism is consistent with it and this appears untouched in consciousness research.

Thank you for the thoughtful reply.

I think a large part of what makes me a machine functionalist is an intuition that neurons...aren't that special. Like, you view the China Brain argument as a reductio because it seems so absurd. And I guess I actually kind of agree with that, it does seem absurd that a bunch of people talking to one another via walkie-talkie could generate consciousness. But it seems no more absurd to me than consciousness being generated by a bunch of cells sending action potentials to one another.

Aren't neurons special? At the very least, they're mysterious. We're far from understanding them as physico-chemical systems. I've had the same reaction and incredulity as you to the idea that interacting neurons can 'generate consciousness'. The thing is, we don't understand individual neurons. Yes, neurons compute. The brain computes. But so does every physical system we encounter. So why should computation be the defining feature of consciousness? It's not obvious to me. In the end, consciousness is still a mystery and machine functionalism requires a leap of faith that I'm not prepared to take without convincing evidence.

But even beyond that, it seems intuitively obvious to me that your brain's counterfactual dependencies are what make your brain, your brain.

Yes, counterfactual dependencies appear necessary for simulating a brain (and other systems) but the causal structure of the simulated objects is not necessarily the same as the causal structure of the underlying physical system running the simulation, which is my objection to Turing machines and Von Neumann architectures.

you could just as easily say that neurons are "simulating" consciousness. Essentially machine functionalists think that causal structure is all there is in terms of consciousness, and under that view the line between something being a "simulation" versus being "real" kind of disappears.

It's an interesting thought, and I generally agree with this. The question seems to come down to defining causal structure. The problem is that the causal structure of the computer system running a simulation of an object looks nothing like that of the object. A Turing machine running a human brain simulation appears to have a very different causal structure from that of the human brain.

The algorithmic computations can be instantiated in many different causal structures but only some will

Any sentence of this form is provably false, due to the universality of computation and multiple realizability.

This is incorrect because the causal structure of a Turing machine simulating a human brain is very different from an actual human brain. Of course, you can redefine causality in terms of "simulation causality" but the underlying causal structure of the respective systems will be very different.

Yes it is - causal structure is just computational structure, there is no difference.

If you accept Wheeler's "it from bit" argument, then anything can be instantiated with information. But at this point, you're veering far from science.

No, but as others pointed out, an animated GIF is not a simulation of the thing it represents.

The animated GIF, as I originally described it, is an "imitation of the operation of a real-world process or system over time", which is the verbatim definition (from Wikipedia) of a simulation. Counterfactual dependencies are not needed for imitation.

Just to be clear, when we are talking of simulations of a computational system, we mean something that computes the same input to output mapping of the system that is simulated, the same mathematical function

Ok, let's go with this definition. As I understand it then, machine functionalism is not about simulation (as imitation) per se but rather about recreating the mathematical function that the human brain is computing. Is this correct?
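If that is the definition, the substrate-independence claim can be illustrated with a toy example (the function and both implementations are hypothetical stand-ins): two procedures with entirely different internal event structures compute the same mathematical function, which is all this notion of simulation requires them to share.

```python
# Two implementations of the same input-output mapping (n -> n-th
# Fibonacci number). Their internal causal structures differ, but
# under the "same mathematical function" definition each simulates
# the other.

def fib_iterative(n):
    """Sequential accumulator: a short chain of state updates."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib_matrix(n):
    """Repeated 2x2 matrix squaring: a very different event structure."""
    def mul(p, q):
        return (p[0]*q[0] + p[1]*q[2], p[0]*q[1] + p[1]*q[3],
                p[2]*q[0] + p[3]*q[2], p[2]*q[1] + p[3]*q[3])
    result, base = (1, 0, 0, 1), (1, 1, 1, 0)  # identity, Fibonacci matrix
    while n:
        if n & 1:
            result = mul(result, base)
        base = mul(base, base)
        n >>= 1
    return result[1]  # top-right entry of [[F(n+1), F(n)], [F(n), F(n-1)]]

assert all(fib_iterative(n) == fib_matrix(n) for n in range(20))
```

On this reading, machine functionalism bets that the shared mapping, and not either implementation's internal causal story, is what matters.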

An animated GIF doesn't respond to inputs, therefore it doesn't compute the same function that the brain computes.

A brain doesn't necessarily respond to inputs, but sure, we can require that the simulation responds to inputs, though I find this requirement a bit strange.

"Being a video game" is a property of certain patterns of input-output mappings, and this property is invariant (up to a performance overhead) under simulation; it is independent of the physical substrate.

It sounds like a beautiful idea: a property invariant under simulation and independent of substrate.

There are claims now and then that some chatbot passed the Turing test, but if you look past the hype, all these claims are fundamentally false.

I agree.

About updating posterior beliefs, I would have to know the basis for consciousness, which I acknowledge uncertainty over.

I'm asking how you understand the term at an operational level right now.

In short, it's a combination of a Turing test and the possession of a functioning human brain-like structure. If an entity exhibits awake human-like behavior (i.e., by passing the Turing test or a suitable approximation) and possesses a living human brain (inferred from visual inspection of their biological form) or a human brain-like equivalent (which I've yet to see, except possibly in some non-human primates), then I generally conclude it has human or human-like consciousness.

When I consider your comment here with your previous comment above that "definitions of consciousness which are not invariant under simulation have little epistemic usefulness", I think I understand your argument better. However, the epistemic argument you're advancing is circular: you're assuming what you set out to demonstrate. If I run an accurate simulation of a human brain on a computer and ask it whether it has human consciousness, of course it will say 'yes', and it will even pass the Turing test, because we're assuming it's an accurate simulation of a human brain. This reasoning does not actually inform us whether the simulation is conscious. So your "epistemic usefulness" appears irrelevant to the question of whether machine functionalism is correct. Or am I missing something?

My general question to the machine functionalists here is, why are you assuming it is sufficient to merely simulate the human brain to recreate its conscious experience? The human brain is a chemico-physical system and such systems are generally explained in terms of causal structures involving physical or chemical entities, though such explanations (including simulations) are never mistaken for the thing itself. So why should human consciousness, which is a part of the natural world and whose basis we know first-hand involves the human brain, be any different?

If the question here is, is consciousness a substrate-independent function that the brain computes or is it associated with a unique type of physico-chemical causal (space-time) structure, then I would say the latter is more likely due to the past successes in physics and chemistry in explaining natural phenomena. In any event, our knowledge of the basis of consciousness is still highly speculative. I can attempt further reductio ad absurdums with machine functionalism involving ever more ridiculous scenarios but will probably not convince anyone who has taken the requisite leap of faith.

thanks. I'm not sure if you were pointing me in that direction for a specific reason but found commentator pjeby's explanation for the ineffability of qualia insightful.

It's unclear why counterfactual dependencies would be necessary for machine functionalism, but ok, let's include them in the GIF example. Take the first GIF as the initial condition and let the binary state of pixel Xi at time step t take the form f(i, X1(t-1), X2(t-1), ..., Xn(t-1)). Does this make it any more plausible that the animated GIF has human consciousness? If you think the GIF has human consciousness, then what is the significance of the fact that the system of equations is generally underdetermined? Personally, I don't find it plausible that the GIF has human consciousness, but I would agree that, since it's an extreme example, my intuition could be wrong. Unfortunately, this appears to mean that we must agree to disagree on the validity of machine functionalism, or is there another way forward?
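For concreteness, here is a minimal sketch of that modified construction, with a toy rule standing in for f (everything here is illustrative; an actual f would be one chosen to reproduce the recorded activity):

```python
# Minimal sketch of the modified GIF: the binary state of pixel i at
# step t is f(i, X1(t-1), ..., Xn(t-1)). The rule below is a toy
# stand-in for f, chosen only to exhibit genuine counterfactual
# dependence between frames.

def step(state):
    """Advance all n binary 'pixels' one time step; each pixel depends
    on the entire previous frame (here via its sum)."""
    total = sum(state)
    return [(total + i) % 2 for i in range(len(state))]

frame0 = [1, 0, 1, 1, 0, 0, 1, 0]   # initial condition: the first GIF frame
frames = [frame0]
for _ in range(4):
    frames.append(step(frames[-1]))

# Unlike a prerecorded animation, flipping one pixel in the initial
# frame changes every later frame.
altered = [1 - frame0[0]] + frame0[1:]
print(step(frame0) != step(altered))   # True: counterfactual dependence
```

The loop now carries real counterfactual dependencies between frames, yet the intuition about the looping pixel array seems largely unchanged.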

Thanks for the replies. I will try to answer and expand on the points raised. There are a number of reductio ad absurdums that dissuade me from machine functionalism, including Ned Block's China brain and also the idea that a Turing machine running a human brain simulation would possess human consciousness. Let me try to take the absurdity to the next level with the following example:

Does an animated GIF possess human consciousness?

Imagine we record the activity of every neuron in a human brain at every millisecond; at each millisecond, we record whether each of the 100 billion neurons in the human brain is firing an action potential or not. We record all of this for a 1 second duration. Now, for each of the 1000 milliseconds, we represent the neural firing state of all neurons as a binary GIF image of about 333,000 pixels in height and width (this probably exceeds GIF format specifications, but who cares), where each pixel represents the firing state of a specific neuron. We make one such GIF for each of the 1000 milliseconds, concatenate the 1000 GIFs into an animated GIF, and play it on an endless loop. Since we are now "simulating" the neural activities of all the neurons in the human brain, we might expect that the animated GIF possesses human consciousness... But this view is absurd, and this exercise suggests there is more to consciousness than reproducing neural activities in different substrates.

To V_V, I don't think it has human consciousness. If I answer otherwise, I'm pressed to acknowledge that well-coded chatbots have human consciousness, which is absurd. With regard to what "conscious" means in epistemic terms, I don't know, but I do know that the Turing test is insufficient because it only deals with appearances and it's easy to be duped. About updating posterior beliefs, I would have to know the basis for consciousness, which I acknowledge uncertainty over.

To Kyre, you hit the crux in your second example. The absurdity of China brain and the Turing machine with human consciousness stems from the fact that the causal structures (i.e., space-time diagrams) in these physical systems are completely different from the causal structure of the human brain. As you describe, in a typical computer there are honest-to-god physical cause and effect in the voltage levels in the memory gates, but the causal structure is completely different from wetware, and this is where the absurdity of attributing consciousness to computations (or simulation) comes from, at least for me. Consciousness is not just computational. Otherwise you have absurdities like China brain and animated GIFs with human consciousness. It seems more likely to be physico-computational, as reflected in the causal structure of interactions of the physical system which underlies the computations and simulations.

There may be a computer architecture that reproduces the correct causal structure, but Von Neumann and related architectures do not. And to your last question, yes! A simulation is just an image. If you think it is the real thing, then you must accept that an animated GIF can possess human consciousness. Personally, this conclusion is too absurd for me to accept.

To jacob_cannell, thanks for the congrats. Sure, consciousness has baggage but using self-awareness instead already commits one to consciousness as a special type of computation, which the reductio ad absurdums above try to disprove. I agree it's likely that "Self-awareness is just a computational capability", depending on what you mean by 'Self' and 'awareness'. You state that "The 'causal structure' is just the key algorithmic computations" but this is not quite right. The algorithmic computations can be instantiated in many different causal structures but only some will resemble those of the human brain and presumably possess human consciousness.

TLDR: The basis of consciousness is very speculative and there is good reason to believe it goes beyond computation to the physico-computational and causal (space-time) structure.

Shawn Mikula here. Allow me to clear up the confusion that appears to have been caused by being quoted out of context. I clearly state in the part of my answer preceding the quoted text the following:

"2) assuming you can run accurate simulations of the mind based on these structural maps, are they conscious?".

So this is not a question of misunderstanding universal computation, or of whether a computer simulation can mimic, for practical purposes, the computations of the brain. I am already assuming the computer simulation is mimicking the brain's activity and computations. My point is that a computer works very differently from a brain, which is evident in differences in their underlying causal structures. In other words, the coordinated activity of the binary logic gates underlying the computer running the simulation has a vastly different causal structure from the coordinated activity and massive parallelism of neurons in a brain.

The confusion appears to result from the fact that I'm not talking about the pseudo-causal structure of the modeling units comprising the simulation, but rather the causal structure of the underlying physical basis of the computer running the simulation.

Anyway, I hope this helps.