mattnewport comments on The two insights of materialism - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Yes. Equivalently, is uploading possible with conventional computers?
It seems to me that both Searle and Pearce would answer no to both questions. Pearce in particular seems to be saying that consciousness depends on quantum properties of brains that cannot be simulated by a conventional computer. It appears to me that this is equivalent to a claim that physics is not computable but I'm not totally confident of that equivalence. I have trouble reading any other conclusion from anything in those links. Can you point to a quote that makes you think otherwise?
I don't think Pearce or Searle would agree with this, and it sounds like you might be projecting your belief onto them. We already know of philosophers who explicitly endorse the possibility of zombies, so it's not surprising for philosophers to endorse positions that imply the possibility of zombies.
Afraid not, but I think if they thought physics were uncomputable (in the behavioral-simulation sense) they would say so more explicitly.
Way back at the beginning of this thread I was trying to establish whether anybody who calls themselves a materialist actually believes the statement "you can't fully simulate a person without the simulation being conscious" to be false. I still don't feel I have an answer to that question. It seems that bogus might believe that statement to be false but he is frustratingly evasive when it comes to answering any direct questions about what he actually believes. It seems we are not currently in a position to say definitively what Pearce or Searle believe.
The only reason I asked in the first place is that I've tended to assume someone who self-describes as a materialist would also believe that statement to be true. I guess the moral of this thread is that I can't assume that and should ask if I want to know.
Many people want to draw the line at lookup tables - they don't believe simulation by lookup table would be conscious.
-- Daniel Dennett (from here)
The point being that GLUTs are faulty intuition pumps, so we cannot use them to bolster our intuition that "something mechanical that passed the Turing Test might nevertheless not be conscious".
It would take a GLUT as large as the universe just to store all possible replies to questions I might ask of it, but it would founder on a simple test: if I were to repeat the same question several times, it would give me the same answer each time. You could push me into a less convenient possible world by arguing that the GLUT responds to minute differences in my tone of voice, etc. - but I could also record myself on tape and play the same tape back N times, and the GLUT would expose itself as such, and therefore fail the test, by sphexishly reciting back its stored lines.
There's no way that I can see of going around this, other than to "extend" the GLUT concept to allow for stored states and conditional branches, at which point we recover Turing completeness. To a programmer, the GLUT concept just isn't credible.
Ok, basic confusion here. The GLUT obviously has to be indexed on conversation histories up to the point of the reply, not just the last statement from the interlocutor. Having it only index using the last statement would make it pretty trivially incapable of passing a good Turing test. It follows that since it's still assumed to be a finite table, it can only do conversations up to a given length, say half an hour. Half an hour, on the other hand, should be quite long enough to pass a Turing test, and since we're dealing with crazy scales here, we might just as well make the maximum length of conversation 80 years or something.
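(To make the indexing point concrete, here is a minimal sketch in Python of a GLUT keyed on the entire transcript rather than on the last utterance alone. All names and entries are illustrative; a real table covering even a half-hour conversation would dwarf the observable universe.)

```python
# Minimal sketch of a GLUT indexed on the full conversation history.
# There is no computation here, only retrieval: every "thought" was
# stored in advance by whoever built the table.

MAX_TURNS = 4  # stand-in for the "half hour" (or 80-year) cap on the test

# The index is the whole transcript so far, as a tuple of strings.
# Note the second entry: keying on history lets the table answer a
# repeated "Hello" differently, which last-utterance indexing cannot.
glut = {
    ("Hello",): "Hi there!",
    ("Hello", "Hi there!", "Hello"): "You already said that. Are you testing me?",
}

def reply(history):
    """Return the stored response for this transcript, or None if the
    conversation has exceeded the finite table's stipulated length."""
    if len(history) >= MAX_TURNS:
        return None  # the finite table simply runs out past the cap
    return glut.get(tuple(history))

transcript = ["Hello"]
answer = reply(transcript)  # retrieval, not thought
```

The cap also makes the finiteness assumption explicit: past `MAX_TURNS` the table has nothing stored, which is why the test's duration matters to the argument.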
Tut, tut. Assuming the confusion you claim to see is mine: you don't get to tell me that my objection to an intuition pump is incoherent, you are required to show that it is incoherent, and it is preferable to avoid lullaby language in such argumentation.
Yes, the question "what is your index" exposes the GLUT as a confused intuition pump. I am at present looking at the Ned Block (1981) paper Psychologism and Behaviorism which (as best I could ascertain) is the original source for the GLUT concept. It makes a similar claim to yours, namely that "for a Turing Test of any given length, the machine could in principle be programmed in just the same way to pass a Turing Test of that length".
But sauce for the goose is sauce for the gander: for a GLUT of any size, there is a Turing Test of sufficient duration that exposes the GLUT as not conscious, by looping back to the start of the conversation! This shows that the argument from a necessarily finite index does have force to counter the GLUT as an intuition pump.
It is flawed in other ways. You can't blame Ned Block who at the time of writing that paper can't have spent a lot of time on IRC, but someone with that experience would tell you that indexing on character strings wouldn't be enough to pass a 1-hour Turing test: the GLUT as originally specified would be vulnerable to timing attacks. It wouldn't be able to spontaneously say something like "You haven't typed anything back to me for thirty minutes, what's wrong?"
"OK", a GLUT advocate might reply, "we can in principle include timings in the index, to whatever timing resolution you are capable of detecting".
It's tempting to grant this "in principle" counter-objection, especially as I don't have the patience to go to the literature and verify that the "timing attack" objection hasn't been raised and countered before.
But the fact that the timing attack wasn't anticipated by Ned Block is precisely what shows up the GLUT concept as a faulty intuition pump. You don't get to "go back to the drawing board" on the GLUT concept each time an attack is found and iteratively improve it until its index has been generalized enough to cover all possible circumstances: that is tantamount to having an actual, live, intelligent human sit behind the keyboard and respond.
Actually the whole idea of the GLUT machine (dubbed the 'blockhead' in Braddon-Mitchell's and Jackson's book, The Philosophy of Mind and Cognition) IS precisely to use live intelligent humans to store an intelligent response to every response a judge might make under a pre-specified limit (including silence and looping, which is discussed explicitly in the paper). The idea is to show that even though the resulting machine has the capacity to emit an intelligent response to any comment within the finite specified limits, it nonetheless has the intelligence of a juke-box. The point is that the intelligent programmers anticipate anything that the "judge" could say in the finite span. The upshot is that the capacity of a machine to pass a Turing Test of a finite length does not entail actual intelligence.
I confess to having downloaded the paper recently and not given it more attention than was necessary to satisfy my usual habit of having primary sources at hand. I've gone back and read it more carefully, but it probably deserves still longer scrutiny.
(Welcome to Less Wrong, by the way. I don't suppose you need to post an introduction, seeing as you have your own Wikipedia page. Nice to be chatting with you here!)
However, I'm not seeing where this is discussed explicitly, other than (this is perhaps what you mean) under the general heading of using "quantized stimulus parameters" as input to the GLUT-generating process. I grant that this does adequately deal with the most crude timing attacks imaginable.
There do seem to me to be other, more subtle attacks which would still prove fatal - per my earlier argument that having to go back to the drawing board each time such an attack is found leaves the GLUT critique of behaviourism ineffective. For instance, we can consider teachability of the GLUT, to uncover an entire class of attacks.
Suppose there is some theoretical concept, unknown to the putative human programmers of the GLUT (or perhaps we should call them conversation-authors, as the programming involved is minimal), but which can be taught to someone of normal intelligence. I don't want to restrict my argument to any particular domain, but for illustrative purposes let's pick the phenomenon of lasing light. This is a reasonable example, since the GLUT concept would have been implementable as early as Babbage's time and the key insights date from Einstein's.
In this scenario, the GLUT's interviewer chooses as her conversation topic the theoretical background needed to build up to the concept of lasing light. The test comes when she (gender picked by flipping a coin) asks the GLUT to make specific predictions about a given experimental setup that extrapolates relevant physical law into a domain not previously discussed, but where that law still applies.
By my earlier stipulation, the GLUT's builders must discover, in the process of building the GLUT, the physical law of lasing light. They must also prune the conversation tree of "wrong" predictions, since that would alert the interviewer to the fact that the GLUT was "faking" understanding up to the point of the experimental test; this rules out the builders merely "covering all (conversational) bases". They must truly understand the phenomenon themselves.
(One may object that it would take an inordinately long time to teach a person of merely normal intelligence about a phenomenon such as lasing light. But we have earlier stipulated that the length of the test can be extended to human lifespans; that is surely enough for a person of normal intelligence to eventually get there.)
We are led to what is (to me at least) a disturbing conclusion. The building of a GLUT entails the discovery, by the builders, of all experimentally discoverable physical laws of our universe that can be taught to a person of normal intelligence in a reasonable finite lifespan.
I'm not a professional philosopher, so possibly this argument has holes.
Nevertheless it seems to me that this unpalatable conclusion points to one primordial flaw in the GLUT argument: it goes counter to the open-ended nature of the optimization process known as intelligence. You cannot optimize by covering all bases, for the same reason that a theory that can explain all conceivable events has no real content.
The original paper tried to anticipate this objection by offering as a general defense the stipulation that the GLUT should simulate a "desert island" type of castaway, so that the GLUT would be excused from having to converse fluently about current events. But the objection is more general and its force becomes harder to avoid if the duration of the test is extended greatly: we need to imagine that the GLUT can be brought up to date with current events, and afterwards respond appropriately to them, as would a person of normal intelligence. This requires the GLUT builders to anticipate the future with enough precision to prune "inappropriate" responses, and so the defense that the builders would "cover all bases" is untenable.
The domain of physical law is the one where the consequences of the teachability test are brought into sharpest focus, but I suspect that "merely social" tests of the GLUT in everyday life would very quickly expose its supposed intelligence as a sham.
Behaviourism, or God-like GLUT builders: pick your poison.
There is an aspect of the construction that you are not quite taking in. The programmers give a response to EVERY sequence of letters and spaces that a judge COULD type in the remaining segment of the original hour. One or more of those sequences will be a description of a laser, another will be a description of some similar device that goes counter to physical law, etc. The programmers are supposed to respond to each string as an intelligent person would respond. Here is the relevant part of the description: "Suppose the interrogator goes first, typing in one of A1...An. The programmers produce one sensible response to each of these sentences, B1...Bn. For each of B1...Bn, the interrogator can make various replies [every possible reply of all lengths up to the remaining time], so many branches will sprout below each of the Bi. Again, for each of these replies, the programmers produce one sensible response, and so on." The general point is that there is no need for the programmers to "think of" every theory: that is accomplished by exhaustion. Of course the machine is impossible but that is OK because the point is a conceptual one: having the capacity to respond intelligently for any stipulated finite period (as in the Turing Test) is not conceptually sufficient for genuine intelligence.
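(The "by exhaustion" construction is conceptually simple but astronomically large, which is why the point has to be conceptual rather than practical. A back-of-envelope count, with illustrative figures I am assuming - a 27-symbol alphabet of letters plus space, and each interrogator turn capped at 100 characters - already shows the scale:)

```python
# Back-of-envelope count of the branches the programmers must answer
# at a SINGLE interrogator turn. Figures are illustrative assumptions:
# 27 symbols (letters plus space), turns of up to 100 characters.

symbols_per_char = 27
max_chars_per_turn = 100

# Number of possible interrogator strings of length 1 through 100:
strings_per_turn = sum(symbols_per_char ** n
                       for n in range(1, max_chars_per_turn + 1))

# Even one turn's branches exceed the ~10^80 atoms in the observable
# universe; the full conversation tree multiplies this per exchange.
print(strings_per_turn > 10 ** 80)  # True
```

Since 27^100 is roughly 10^143, a single turn already overwhelms any physical storage; the table's impossibility in practice is built into the thought experiment from the start.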
On a different level of objection, I for one would bite the functionalist bullet: something that could talk to me regularly for 80 years, sensibly, who could actually teach me things or occasionally delight me, all the while insisting that it wasn't in fact conscious but merely a GLUT simulating my Aunt Bertha...
Well, I would call that thing conscious in spite of itself.
To simulate Aunt Bertha effectively, and to keep that up for 80 years, it would in all likelihood have to be encoded with Aunt Bertha's memories, Aunt Bertha's wonderful quirks of personality, Aunt Bertha's concerns for my little domestic worries as I gradually moved through my own narrative arc in life, Aunt Bertha's nuggets of wisdom that I would sometimes find deep as the ocean and other times silly relics of a different age, and so on and so forth.
The only difference with Aunt Bertha would be that, when I asked her (not "it") why she thought she answered as she does, she'd tell me, "You know, dear nephew, I don't want to deceive you, for all that I love you: I'm not really your Aunt Bertha, I'm just a GLUT programmed to act like her. But don't fret, dear. You're just an incredibly lucky boy who got handed the jackpot when drawing from the infinite jar of GLUTs. Isn't that nice? Now, about your youngest's allergies..."
Wasn't an objection to these kinds of GLUTs that you'd basically have to make them by running countless actual, conscious copies of Aunt Bertha and recording their incremental responses to each possible conversation chain? So you would be in a sense talking with a real, conscious human, although they might be long dead by the time you query the table.
Though since each path is just a recording of a live person, it wouldn't agree with being a GLUT unless the Aunt Bertha copies used to build the table would have been briefed earlier about just why they are being locked in a featureless white room and compelled to have conversation with the synthetic voice speaking mostly nonsense syllables at them from the ceiling.
(We can do the "the numbers are already ridiculous, so what the hell" maneuver again here, and replace strings of conversation with the histories of total sensory input Aunt Bertha's mind can have received at each possible point in her life at a reasonable level of digitization, map these to a set of neurochemical outputs to her muscles and other outside-world affecting bits, and get a simulacrum we can put in a body with similar sensory capabilities and have it walking around, probably quite indistinguishable from the genuine, Turing-complete article. Although this would involve putting the considerably larger number of Bertha-copies used to build the GLUT into somewhat more unpleasant situations than being forced to listen to gibberish for ages.)
Surely there are multiple possible conscious experiences that could be had by non-GLUT entities with Aunt Bertha's behavior. How would you decide which one to ascribe to the GLUT?
I'm not sure I even understand the question.
If you asked me, "Is GAunt Bertha conscious", I would confidently answer "yes", for the same reason I would answer "yes" if asked that question about you. Namely, both you and she talk fluently about consciousness, about your inner lives, and the parsimonious explanation is that you have inner lives similar to mine.
In the case of GAunt Bertha, it is the parsimonious explanation despite her protestations to the contrary, even though they lower the prior.
In Bayesian terms, I would count those 80 years of correspondence as overwhelming evidence that she has an inner life similar to mine, and the GLUT hypothesis starts out burdened with such a large prior probability against it that the amount of evidence needed to convince me that Aunt Bertha was a GLUT all along would take ages even to convey.
Oh, sorry. I thought you were assuming Aunt Bertha was a GLUT (not just that she claimed to be), and claiming she would be conscious. I agree that if Bertha claims to be a GLUT, she's ridiculously unlikely to actually be one, but I'm not sure why this is interesting.
Regardless....
If something is conscious, it seems like there should be a fact of the matter as to what it is experiencing. (There might be multiple separate experiences associated with it, but then there should be a fact of the matter as to which experiences and with what relative amounts of reality-fluid.) (If you use UDT or some such theory under which ascription of consciousness is observer-dependent, there is still a subjectively objective fact of the matter here.)
Intuitively, it seems likely that behavior underdetermines experience for non-GLUTs: that, for some set of inputs and outputs that some conscious being exhibits, there are probably two different computations that have those same inputs and outputs but are associated with different experiences.
If the totality of Aunt Bertha's possible inputs and outputs has this property — if different non-GLUT computations associated with different experiences could give rise to them — and if GBertha is conscious, which of these experiences (or what weighting over them) does GBertha have?
The lookup tables are not conscious but the process that produced them was.
What about a randomly generated lookup table that just happens to simulate a person? (They can be found here.)
That world is more inconvenient than the one where I wake up with my arm replaced by a purple tentacle. Did you even read the article you linked to?
My specification is the reason we are talking about something improbable. It's not the cause of the improbable thing itself.
The point is that you have specified something so improbable that it is not going to actually happen, so I don't have to explain it, like I don't have to worry about how I would explain my arm being replaced by a purple tentacle.
But you don't actually need to resort to this dodge. You already said the lookup tables aren't conscious; that in itself is a step which is troublesome for a lot of computationalists. You could just add a clause to your original statement, e.g.
"The lookup tables are not conscious, but the process that produced them was either conscious or extremely improbable."
Voila, you now have an answer which covers all possible worlds and not just the probable ones. I think it's what you wanted to say anyway.
Mitchell isn't asking you to explain anything. He's asking you to predict (effectively) what would happen, consciousness-wise, given a randomly generated GLUT. There is a fact of the matter as to what would happen in that situation (in the same sense, whatever that may be, that there are facts about consciousness in normal situations), and a complete theory will be able to say what it is; the best you can say is that you don't currently have a theory that covers that situation (or that the situation is underspecified; maybe it depends on what sort of randomizer you use, or something).
I think your prior estimate for other people's philosophical competence and/or similarity to you is way too high.
To the best of our knowledge, any "quantum property" can be simulated by a classical computer with approx. exponential slowdown. Obviously, a classical computer is not going to instantiate these quantum properties.
Is that obvious?
It should be. We can definitely build classical computers where quantum effects are negligible.
(For all we know, the individual transistors of these computers might have some subjective experience; but the computer as a whole won't.)
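(The "exponential slowdown" claim above can be made concrete with brute-force state-vector simulation: an n-qubit state takes 2^n complex amplitudes, so classical memory and time grow exponentially in n. A minimal sketch, with names and the 3-qubit example being my own illustration:)

```python
# Brute-force classical simulation of a quantum state: an n-qubit
# state vector has 2**n complex amplitudes, so cost is exponential in n.
from math import sqrt

def hadamard(state, target, n):
    """Apply a Hadamard gate to qubit `target` of an n-qubit state vector.
    Touches all 2**n amplitudes - the exponential cost in action."""
    h = 1 / sqrt(2)
    new = state[:]
    for i in range(2 ** n):
        if not (i >> target) & 1:      # i has target bit = 0
            j = i | (1 << target)       # partner index with target bit = 1
            a, b = state[i], state[j]
            new[i] = h * (a + b)
            new[j] = h * (a - b)
    return new

n = 3
state = [0j] * (2 ** n)
state[0] = 1 + 0j                  # start in |000>
state = hadamard(state, 0, n)      # now (|000> + |001>) / sqrt(2)
```

The simulation reproduces the formal structure of the quantum evolution exactly, which is the point at issue: whatever "quantum properties" amount to, a classical machine can track them, just not cheaply.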
If the Church-Turing-Deutsch thesis is true and some kind of Digital Physics is an accurate depiction of reality then a simulation of physics should be indistinguishable from 'actual' physics. Saying subjective experience would not exist in the simulation under such circumstances would be a particularly bizarre form of dualism.
The same formal structure will exist, but it will be wholly unrelated to what we mean by "subjective experience". What's dualistic about this claim?