Comments on The two insights of materialism - Less Wrong
You seem to be begging the question: I suspect that we simply have different models of what the "problem of consciousness" is.
Regardless, physicalism seems to be the most parsimonious theory; computationalism implies that any physical system instantiates all conscious beings, which makes it a non-starter.
Say again? Why should I believe this to be the case?
Basically, the interpretation of a physical system as implementing a computation is subjective, and a sufficiently contrived interpretation can read it as implementing any computation you want, or at least any computation up to the size of the physical system. AKA the "conscious rocks" or "joke interpretations" problem.
Paper by Chalmers criticizing this argument, citing defenses of it by Hilary Putnam and John Searle
Simpler presentation by Jaron Lanier
I can see why someone might think that, but surely the requirement that any interpretation be a homomorphism from the computation to the processes of the object would be a strong restriction on the set of computations it could be said to instantiate?
Intriguing. Could you elaborate? Apparently "homomorphism" is a very general term.
I think the idea is that you can't pick a different interpretation for the rock implementing a specific computation for each instant of time. A convincing narrative of the physical processes in a rock instantiating a consciousness would require a mapping from rock states to the computational process of the consciousness that remains stable over time. With the physical processes going on in rocks being pretty much random, you wouldn't get the moment-to-moment coherence you'd need for this even if you can come up with interpretations for single instants.
One intuition here is that once you come up with a good interpretation, the physical system needs to be able to produce correct results for computations that run longer than the stretch you fitted your interpretation to. If you try to get around the single-instant objection by making a tortured interpretation of rock states that represents, say, 100 consecutive steps of the consciousness's computation, the interpretation is going to have the rock give you garbage at step 101. You're just doing the computation yourself now and painstakingly fitting things to random physical noise in the rock.
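The objection above can be sketched in code: a post-hoc mapping from random "rock" states onto a computation's trace fits exactly the steps it was built from, and is silent about the next one. This is a minimal toy sketch; the mod-7 counter, the random-integer "rock states", and all names here are illustrative assumptions, not anything from the thread.

```python
import random

def computation_step(c):
    # The computation the rock supposedly implements: a counter mod 7.
    return (c + 1) % 7

random.seed(0)
# 101 distinct random integers standing in for successive rock microstates.
rock = random.sample(range(10**6), 101)

# Build the computation's actual trace for 100 steps.
trace = [0]
for _ in range(99):
    trace.append(computation_step(trace[-1]))

# Post-hoc "interpretation": pair each observed rock state with the
# corresponding step of the trace.
interpretation = dict(zip(rock[:100], trace))

# The mapping "works" perfectly for the 100 steps it was fitted to...
assert all(interpretation[rock[t]] == trace[t] for t in range(100))

# ...but it assigns no meaning to the rock's 101st state, and nothing
# forces that state to map to computation_step(trace[-1]).
print(rock[100] in interpretation)  # prints False: the interpretation is silent
```

The mapping carries no predictive content: all the computational work went into constructing `trace` ourselves and then pinning it to noise.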
Try bisimulation.
A homomorphism is a "structure preserving map", and is quite general until you specify what is preserved.
From my brief reading of Chalmers, he's basically captured my objection. As Risto_Saarelma says, the point is that a mapping merely of states should not count. As long as the sets of object states are not overlapping, there's a mapping into the abstract computation. That's boring. To truly instantiate the computation, what has to be put in is the causal structure, the rules of the computation, and these seem to be far more restrictive than one trace of possible states.
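The "causal structure, not just states" point can be made concrete: a state mapping counts as an implementation only if it commutes with the dynamics, i.e. interpreting the next physical state gives the same answer as computing the next abstract state. This is a hypothetical sketch of that homomorphism condition; the four-state system and all names are illustrative assumptions.

```python
def is_implementation(phys_states, phys_step, comp_step, interpret):
    # Homomorphism condition: for every reachable physical state s,
    # interpret(phys_step(s)) == comp_step(interpret(s)).
    # A mere relabeling of one trace need not satisfy this.
    return all(interpret(phys_step(s)) == comp_step(interpret(s))
               for s in phys_states)

# A 4-state physical cycle genuinely implements a 1-bit flip-flop
# under the parity interpretation: advancing the cycle flips parity.
assert is_implementation(range(4),
                         lambda s: (s + 1) % 4,  # physical dynamics: cycle
                         lambda c: 1 - c,        # computation: bit flip
                         lambda s: s % 2)        # interpretation: parity

# An arbitrary "rock-like" dynamics fails the same check as soon as
# the transition rules (not just one trace of states) are consulted.
scrambled = {0: 3, 1: 1, 2: 0, 3: 2}
assert not is_implementation(range(4),
                             scrambled.get,      # arbitrary dynamics
                             lambda c: 1 - c,
                             lambda s: s % 2)
```

The restrictive part is the universal quantifier over states: the mapping must respect the transition rule everywhere, which is exactly what a relabeling of a single state sequence fails to guarantee.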
Chalmers's "clock and dial" seems to get around this in that it can enumerate all possible traces, which seems to be equivalent to capturing the rules, but still feels decidedly wrong.
Having printed it out and read it, it seems that "any physical system instantiates all conscious beings" is fairly well refuted, and what is left reduces to the GLUT problem.
Thanks for the link.
I remember seeing the Chalmers paper before, but never reading far enough to understand his reasoning - I should probably print it out and see if I can understand it on paper.
Edit: Yes, I know that he's criticizing the argument - I'm just saying I got lost last time I tried to read it.
So do you think there is a meaningful difference between computationalism and physicalism if the Church-Turing-Deutsch principle is true? If so, what is it?
Basically, physicalism need not be substrate-independent. For instance, it could be that Pearce is right: subjective experience is implemented by a complex quantum state in the brain, and our qualia, intentionality and other features of subjective experience are directly mapped to the states of this quantum system. This would account for the illusion that our consciousness is "just" our brain, while dramatically simplifying the underlying ontology.
Is that a yes or a no? It seems to me that saying physicalism is not substrate-independent is equivalent to saying the Church-Turing-Deutsch principle is false. In other words, that a Turing machine cannot simulate every physical process. My question is whether you think there is a meaningful difference between physicalism and computationalism if the Church-Turing-Deutsch principle is true. There is obviously a difference if it is false.
Why would this be? Because of free will? Even if free will exists, just replace the input of free will with a randomness oracle and your Turing machine will still be simulating a conscious system, albeit perhaps a weird one.
I don't think free will is particularly relevant to the question. Pearce seems to be claiming that some kind of quantum effects in the brain are essential to consciousness and that a simulation of a brain in a computer therefore cannot be conscious. If you could simulate the quantum processes then the argument falls apart. It only makes sense if the Church-Turing-Deutsch principle is false and there are physical processes that cannot be simulated by a Turing machine. I think that is unlikely but possible and a coherent position.
If all physical processes can be simulated by a Turing machine then I don't see a meaningful difference between physicalism and computationalism. I still don't know what your answer is to that question. If you do think there is still a meaningful difference then please share.
*sigh* You seem to be so committed to computationalism that you're unable to understand competing theories.
Simulating quantum processes on a classical computer is not the same as instantiating them in the real world. And physicalism commits us to giving a special status to the real world, since it's what our consciousness is made of. (Perhaps other "consciousnesses" exist which are made out of something else entirely, but physicalism is silent on this issue.) Hence, consciousness is not invariant under simulation; a classical simulation of a conscious system is similar to a zombie in that it behaves like a conscious being but has no subjective experience.
ETA: I think you are under the mistaken impression that a theory of consciousness needs to explain your heterophenomenological intuitions, i.e. what kinds of beings your brain would model as conscious. These intuitions are a result of evolution, and they must necessarily have a functionalist character, since your models of other beings have no input other than the general form of said beings and their behavior. Philosophy of mind mostly seeks to explain subjective experience, which is just something entirely different.
So you do think there is a difference between physicalism and computationalism even if the Church-Turing-Deutsch principle is true? And this difference is something to do with a special status held by the real world vs. simulations of the real world? I'm trying to understand what these competing theories are but there seems to be a communication problem that means you are failing to convey them to me.
That's what it means to say that physicalism is substrate-dependent. There is a (simple) psycho-physical law which states that subjective experience is implemented on a specific substrate.
It just so happens that evolution has invented some analog supercomputers called "brains" and optimized them for computational efficiency. At some point, it hit on a "trick" for running quantum computations with larger and larger state spaces, and started implementing useful algorithms such as reinforcement learning, aversive learning, perception, cognition etc. on this substrate. As it turns out, the most efficient physical implementations of such quantum algorithms have subjective experience as a side effect, or perhaps as a crucial building block. So subjective awareness got selected for and persisted in the population to this day.
It seems a fairly simple story to me. What's wrong with it?
So is one of the properties of that specific substrate (the physical world) that it cannot be simulated by a Turing machine? I don't know why you can't just give a yes/no answer to that question. I've stated it explicitly enough times now that you just come across as deliberately obtuse by not answering it.
I think I've been fairly clear that I don't deny the possibility that consciousness depends on non-computable physics. I don't think it is the most likely explanation but it doesn't seem to be clearly ruled out given our current understanding of the universe. Your story might be something close to the truth if the Church-Turing-Deutsch principle is false. It appears to me to be incoherent if it is true however.
I think the Church-Turing-Deutsch principle is probably true but I don't think we can rule out the possibility that it is false. If it is true then it seems a simulation of a human running on a conventional computer would be just as conscious as a real human. If it is false then it is not possible to simulate a human being on a conventional computer and it therefore doesn't make sense to say that such a simulation cannot be conscious because a simulation cannot be created. What if anything do you disagree with from those claims?
Are you saying that there is some extra law (on top of the physical laws that explain how our brains implement our cognitive algorithms) that maps our cognitive algorithms, or a certain way of implementing them, to consciousness? So that, in principle, the universe could have not had that law, and we would do all the same things, run all the same cognitive algorithms, but not be conscious? Do you believe that p-zombies are conceptually possible?