mattnewport comments on The two insights of materialism - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (132)
Do people holding this view who call themselves materialists actually exist? It seems an incoherent position to hold and I can't recall seeing anyone express that belief. It seems very similar to the dualist position that consciousness has some magic property that can't be captured outside of a human brain.
John Searle, David Pearce (see the last question), presumably some of the others listed under "Criticism" here.
As far as I can tell from looking at those links, both Searle and Pearce would deny the possibility of simulating a person with a conventional computer. I understand that position, and while I think it is probably wrong, it is not obviously wrong and could turn out to be true. It seems that this is also Penrose's position.
From the Chinese Room Wikipedia entry for example:
From the Pearce link you gave:
So I still wonder whether anyone actually believes that you could simulate a human mind with a computer but that it would not be conscious.
They would deny that a conventional computer simulation can create subjective experience. However, the Church-Turing thesis implies that if physicalism is true then conscious beings can be simulated. AFAICT, it is only Penrose who would deny this.
Do you mean the Church-Turing-Deutsch principle? It appears to me that Pearce, at least in the linked article, is making a claim which effectively denies that principle - his claim implies that physics is not computable.
Why? Pearce is a physicalist, not a computationalist; he ought to accept the possibility of a computation which is behaviorally identical to consciousness but has no conscious experience.
What sense of 'ought' are you using here? That seems like a very odd thing to believe to me. If you think that's what he actually believes you're going to have to point me to some evidence.
So that means you are a computationalist? Fine, but why do you think physicalism may be incoherent?
It's hard to fish for evidence in a single interview, but Pearce says:
To me, this reads as an express acknowledgement of the CT thesis (unless quantum gravity turns out to be uncomputable, in which case the CTT is just plain false).
The distinction seems to hinge on whether physics is computable. I suspect the Church-Turing-Deutsch principle is true, and if it is, then it is possible to simulate a human mind using a classical computer, and that simulation would be conscious. If it is false, however, then it is possible that consciousness depends on some physical process that cannot be simulated in a computer. That seems to me to be what Pearce is claiming, and that is not incoherent. But if we live in such a universe, then it is not possible to simulate a human using a classical computer / universal Turing machine in the first place, and so it is incoherent to claim that you could simulate a human but the simulation would not be conscious - because you can't simulate a human at all.
I honestly don't see how you make that connection. It seems clear to me that Pearce is implying that consciousness depends on non-computable physical processes.
You seem to be begging the question: I suspect that we simply have different models of what the "problem of consciousness" is.
Regardless, physicalism seems to be the most parsimonious theory; computationalism implies that any physical system instantiates all conscious beings, which makes it a non-starter.
Basically, what bogus said.
I'm confused about what you mean by "simulating a person". Presumably you don't mean simulating in a way that is conscious/has mental states (since that would make the claim under discussion trivially, uninterestingly inconsistent), so presumably you do mean just simulating the physics/neurology and producing the same behavior. While AFAIK neither explicitly says so in the links, Searle and Pearce both seem to me to believe the latter is possible. (Searle in particular has never, AFAIK, denied that an unconscious Chinese Room would be possible in principle; and by "strong AI" Searle means the possibility of AI with an 'actual mind'/mental states/consciousness, not just generally intelligent behavior.)
Yes. Equivalently, is uploading possible with conventional computers?
It seems to me that both Searle and Pearce would answer no to both questions. Pearce in particular seems to be saying that consciousness depends on quantum properties of brains that cannot be simulated by a conventional computer. It appears to me that this is equivalent to a claim that physics is not computable but I'm not totally confident of that equivalence. I have trouble reading any other conclusion from anything in those links. Can you point to a quote that makes you think otherwise?
I don't think Pearce or Searle would agree with this, and it sounds like you might be projecting your belief onto them. We already know of philosophers who explicitly endorse the possibility of zombies, so it's not surprising for philosophers to endorse positions that imply the possibility of zombies.
Afraid not, but I think if they thought physics were uncomputable (in the behavioral-simulation sense) they would say so more explicitly.
Way back at the beginning of this thread I was trying to establish whether anybody who calls themselves a materialist actually believes the statement "you can't fully simulate a person without the simulation being conscious" to be false. I still don't feel I have an answer to that question. It seems that bogus might believe that statement to be false but he is frustratingly evasive when it comes to answering any direct questions about what he actually believes. It seems we are not currently in a position to say definitively what Pearce or Searle believe.
The only reason I asked in the first place is that I've tended to assume someone who self-describes as a materialist would also believe that statement to be true. I guess the moral of this thread is that I can't assume that and should ask if I want to know.
Many people want to draw the line at lookup tables - they don't believe simulation by lookup table would be conscious.
-- Daniel Dennett (from here)
The point being that GLUTs are faulty intuition pumps, so we cannot use them to bolster our intuition that "something mechanical that passed the Turing Test might nevertheless not be conscious".
It would take a GLUT as large as the universe just to store all possible replies to questions I might ask of it, but it would flounder on a simple test: if I were to repeat the same question several times, it would give me the same answer each time. You could push me into a less convenient possible world by arguing that the GLUT responds to minute differences in my tone of voice, etc. - but I could also record myself on tape and play the same tape back N times, and the GLUT would expose itself as such, and therefore fail the test, by sphexishly reciting back its stored lines.
There's no way that I can see of getting around this, other than to "extend" the GLUT concept to allow for stored states and conditional branches, at which point we recover Turing completeness. To a programmer, the GLUT concept just isn't credible.
Ok, basic confusion here. The GLUT obviously has to be indexed on conversation histories up to the point of the reply, not just the last statement from the interlocutor. Having it only index using the last statement would make it pretty trivially incapable of passing a good Turing test. It follows that since it's still assumed to be a finite table, it can only do conversations up to a given length, say half an hour. Half an hour, on the other hand, should be quite long enough to pass a Turing test, and since we're dealing with crazy scales here, we might just as well make the maximum length of conversation 80 years or something.
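To make the indexing point concrete, here is a minimal sketch of a history-indexed GLUT. Everything here is illustrative - the table entries, names, and fallback string are invented - but it shows why repeating the same question need not produce the same reply: the key is the entire conversation so far, not the last utterance.

```python
# Hypothetical sketch of a GLUT chatbot indexed on the FULL conversation
# history (a tuple of all prior turns), not just the last statement.
# In the thought experiment the table would have astronomically many
# entries; here it has two.

GLUT = {
    (): "Hello.",
    ("Hello.", "Hi, who are you?"): "Just a table of strings.",
}

def reply(history):
    """Look up the canned reply for the entire conversation so far."""
    return GLUT.get(tuple(history), "I have no entry for that history.")

history = []
first = reply(history)                      # indexed on the empty history
history += [first, "Hi, who are you?"]
second = reply(history)                     # indexed on both prior turns
```

Because the key grows with every turn, asking the same question twice looks up two different keys, so the table can store two different answers - at the cost of a table whose size is exponential in the maximum conversation length.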
The lookup tables are not conscious but the process that produced them was.
What about a randomly generated lookup table that just happens to simulate a person? (They can be found here.)
I think your prior estimate for other people's philosophical competence and/or similarity to you is way too high.
To the best of our knowledge, any "quantum property" can be simulated by a classical computer with approx. exponential slowdown. Obviously, a classical computer is not going to instantiate these quantum properties.
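The source of that exponential slowdown is easy to see in a sketch: a classical simulation of n qubits must track 2**n complex amplitudes, and every gate touches all of them. This is a toy state-vector simulator, not anyone's actual proposal in the thread; the function name and structure are my own.

```python
# Minimal state-vector sketch: an n-qubit state is a list of 2**n
# complex amplitudes, so classical memory and per-gate time grow
# exponentially with n.

def apply_hadamard(state, target, n):
    """Apply a Hadamard gate to qubit `target` of an n-qubit state vector."""
    h = 2 ** -0.5
    new = [0j] * (2 ** n)
    for i, amp in enumerate(state):
        j = i ^ (1 << target)           # index with the target bit flipped
        if (i >> target) & 1 == 0:      # H|0> = (|0> + |1>) / sqrt(2)
            new[i] += h * amp
            new[j] += h * amp
        else:                           # H|1> = (|0> - |1>) / sqrt(2)
            new[j] += h * amp
            new[i] -= h * amp
    return new

n = 3
state = [0j] * (2 ** n)
state[0] = 1 + 0j                       # start in |000>
for q in range(n):
    state = apply_hadamard(state, q, n)
# Result: a uniform superposition - all 2**n amplitudes equal 1/sqrt(2**n).
```

Each added qubit doubles the list and doubles the work per gate, which is the "approx. exponential slowdown" in question - and nothing about this bookkeeping requires the simulating hardware itself to exhibit any quantum effects.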
Is that obvious?
It should be. We can definitely build classical computers where quantum effects are negligible.
(For all we know, the individual transistors of these computers might have some subjective experience; but the computer as a whole won't.)
If the Church-Turing-Deutsch thesis is true and some kind of Digital Physics is an accurate depiction of reality then a simulation of physics should be indistinguishable from 'actual' physics. Saying subjective experience would not exist in the simulation under such circumstances would be a particularly bizarre form of dualism.
The same formal structure will exist, but it will be wholly unrelated to what we mean by "subjective experience". What's dualistic about this claim?
I don't know about consciousness, but the position that subjective experience has some magic property is common sense. Materialism is just a reasonable attempt to ground that magic property in the physical world.