"But you can't prove it's impossible for my mind to spontaneously generate a belief that happens to be correct!"
Whether the belief happens to be true is irrelevant. What matters is whether the person can justify the belief. If the conviction is spontaneously generated, the person doesn't have a rational argument that shows how the claim arises from previously-accepted statements. Thus, asserting that claim is wrong, regardless of whether it happens to be true or not.
It's not about truth! It's about justification!
I mean, there's got to be more to it than inputs and outputs.
Otherwise even a GLUT would be conscious, right?
Eliezer, I suspect you are not being 100% honest here. I don't have any problems with a GLUT being conscious.
"Otherwise even a GLUT would be conscious, right?"
I have to admit that this sounds crazy, and that I don't really understand what's going on. But it looks like it's logically necessary that lookup tables can be conscious. As far as we know, the Universe, and everything in it, can be simulated on a giant Turing machine. What is a Turing machine, if not a lookup table? Granted, most Turing machines use a much smaller set of symbols than a GLUT- base 5 or base 10 instead of base 10^10^50- but how would that change a system from being "non-conscious" to being "conscious"? And while a Turing machine has a state register, this can be simulated by just using N lookup tables instead of one lookup table. It seems like we have to believe that 1), the mathematical structure of a UTM relative to a giant lookup table, which is very minimal indeed, is the key element required for consciousness, or 2), the Universe is not Turing-computable, or 3), consciousness does not exist.
"Eliezer, I suspect you are not being 100% honest here. I don't have any problems with a GLUT being conscious."

I have problems with a GLUT being conscious. (Actually, the GLUT fails dramatically to satisfy the graph-theoretic requirements for consciousness that I alluded to but did not describe earlier today, but I wouldn't believe that a GLUT could be conscious even if that weren't the case.)
Hrm... as far as no one actually being willing to jump in and say "a GLUT can be/is conscious"... what about Moravec and Egan? (Egan in Permutation City, Moravec in Simulation, Consciousness, Existence.) I don't recall them explicitly coming out and saying it, but it does seem to have been implied.
Anyways, I think I'm about to argue it... Or at least argue that there's something here that's seriously confusing me:
Okay, so you say that it's the generating process of the GLUT that has the associated consciousness, rather than the GLUT itself. Fine...
But exactly where is the breakdown between that and, say, the process that generates a human-equivalent AI? Why not say that process is where the consciousness resides rather than the AI itself? If one takes at least some level of functionalism, allowing some optimizations and so on in the internal computations, then the internal "levers" can end up looking algorithmically very, very different from the external, even if the behavior is identical.
In other words, as I start with the "correct" rods and levers to produce consciousness, then optimize various bits of it incrementally... when does the optimization proces...
Hi Caledonian. Hi Stephen. If I remember correctly, this is where the program that is the three of us having college bull sessions goes HALT and we never get any further, is it not? Once again, Eliezer says clearly what Caledonian was thinking and articulated through metaphor in one-on-one conversations (namely "Well, then it wouldn't be conscious. IMHO." ) but is predictably not understood by same, while I am far from sure. Eliezer: You don't know how much I wanted to see you type essentially the line "Ordinarily, when we're talking to...
"The GLUT is no more a zombie, than a cellphone is a zombie because it can talk about consciousness while being just a small consumer electronic device. The cellphone is just transmitting philosophy speeches from whoever happens to be on the other end of the line. A GLUT generated from an originally human brain-specification is doing the same thing."
You begin by saying that you are using "zombie" in a broader-than-usual sense, to denote something that "behave[s] exactly like a human without being conscious". The GLUT was con...
Isn't the state-space of problems like this known to exceed the number of atoms in the Universe? There is a term for problems which are rendered unsolvable because there just isn't enough possible state-storing matter to represent them, but I can't think of it now.
Pardon me if this is a stupid question, my experience with AI is limited. Funny Eliezer should mention Haskell, I've got to get back to trying to wrap my brain around 'monads'.
I'm not sure what you mean by a GLUT? A static table obviously wouldn't be conscious, since whatever the details, consciousness is obviously a process. But the way you use GLUT suggests that you are including algorithms for processing the look-ups; how would that be different from other algorithmic reasoning systems using stored data (memories)?
"And while a Turing machine has a state register, this can be simulated by just using N lookup tables instead of one lookup table. It seems like we have to believe that 1), the mathematical structure of a UTM relative to a giant lookup table, which is very minimal indeed, is the key element required for consciousness, ..."

TMs also have the notable ability to not halt for some inputs. And if you wanted to precompute those results, writing NULL values into your GLUT, I'd really like to know where the heck you got your Halting Oracle from. The mathematical str...
There was something like a random-yet-working GLUT picked out by sheer luck - abiogenesis. And it did eventually become conscious. The original improbability is a small jump (comparatively) and the rest of the improbability was pumped in by evolution. Still, it's an existence proof of sorts - I don't think you can argue conscious origin as necessary for consciousness. There needs to be an optimizer, or enough time for luck. There doesn't really need to be any mind per se.
A simple GLUT cannot be conscious and/or intelligent because it has no working memory or internal states. For example, suppose the GLUT was written at t = 0. At t = 1, the system has to remember that "x = 4". No operation is taken since the GLUT is already set. At t = 2 the system is queried "what is x?". Since the GLUT was written before the information that "x = 4" was supplied, the GLUT cannot know what x is. If the GLUT somehow has the correct answer then the GLUT goes beyond just having precomputed outputs to precomputed ...
The rule of the rationalist's game is that every improbable-seeming belief needs an equivalent amount of evidence to justify it.
Aren't you already breaking it allowing what you consider improbable GLUTs with no evidence?
Also how would you play this game with someone with a vastly different prior?
Any process can be replaced by a sufficiently-large lookup table with the right elements.
If you accept that a process can be conscious, you must acknowledge that lookup tables can be.
There is no alternative. Resistance is useless.
Let me be the first in this thread to suggest that, for the purposes of GLUTs, we should taboo the word "conscious." This post, in my opinion, is a shining example of Eliezer’s ability to verbally carve reality at its joints. After a remarkably clear discussion of the real problem, the question of “conscious” GLUTs seems like a silly near-boundary case.
Is there a technical reason I should think otherwise?
PK is right. I don't think a GLUT can be intelligent, since it can't remember what it's done. If you let it write notes in the sand and then use those notes as part of the future stimulus, then it's a Turing machine.
The notion that a GLUT could be intelligent is predicated on the good-old-fashioned AI idea that intelligence is a function that computes a response from a stimulus. This idea, most of us in this century now believe, is wrong.
Wow, a lot of things to say at this point.
Eliezer Yudkowsky: First, as I started reading, I was going to correct you and point out that Daniel Dennett thinks a GLUT can be conscious, as that is exactly his response to Searle's Chinese Room argument, thinking that I didn't need to read further. Fortunately, I did read the whole thing and found that, when I look at the substance of what the two of you believe, it's the same. While Dennett would say that the GLUT running in the Chinese Room is conscious, what you were really asking was, what is the source of ...
"Any process can be replaced by a sufficiently-large lookup table with the right elements."
That misses my point. A process is needed to do the look-ups or the table just sits there.
If you abstract away the low-level details of how neurons work, couldn't the brain be considered a very large, multidimensional look-up table with a few rules regarding linkages and how to modify strengths of connections?
Phil: Gluts can certainly learn. A GLUT's program is this:
while (true) {
    x = sensory input
    y, z = GLUT(y, x)
    muscle control output = z
}

Everything a GLUT has learned is encoded into y. Human GLUTs are so big that even their indices are huge.
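A runnable toy of the loop above (the states and phrases are invented for illustration): everything "remembered" lives in the state half of the table key, so a table written in advance can still answer "what is x?" correctly.

```python
# Toy GLUT where memory lives entirely in the state index.
# Maps (state, percept) -> (new_state, response).
glut = {
    ("start", "x = 4"):        ("knows_x", "Noted."),
    ("start", "what is x?"):   ("start", "I don't know yet."),
    ("knows_x", "what is x?"): ("knows_x", "x is 4"),
}

state = "start"
for percept in ["x = 4", "what is x?"]:
    state, response = glut[(state, percept)]
print(response)  # -> "x is 4"
```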
Is the entity that results from gerrymandering together neural firings from different people's brains, so as to produce a pattern of neural firings similar to a brain but not corresponding to any "real person" in this Everett branch, conscious? How about gerrymandering together instructions occurring in different CPUs? Atomic motions in random rocks?
Consider a tiny look-up table mapping a few (input sentence, state) pairs to (output sentence, state) pairs - one small enough to practically be constructed, even. So long as you stick to the few sentences it accepts in the current state, it behaves exactly like a GLUT. If a GLUT is conscious, either this smaller table is conscious too, or it's the never activated entries that make the GLUT conscious.
Personally my response to the one would be similar to Caledonian's; perhaps more extreme. I think the linguistic analysis of philosophers is essentially worthless. Language is a means of communication and the referents a word has are a matter of convention; meaning is a psychological property of no particular value. What concerns me is the person doing the communication. Where have they been and what have they done? You can, of course, follow the improbability on that. But my maxim is just,
Maxim: Language is a means of communication.
If somebody comes to you wi...
"But suppose someone actually did reach into a GLUT-bin and by genuinely pure chance pulled out a GLUT that wrote philosophy papers?"
That misses my point. A process is needed to do the look-ups or the table just sits there.
Ah, I see you're not familiar with the works of Jorge Luis Borges. Permit me to hyperlink: The Library of Babel
PK, Phil Goetz, and Larry D'Anna are making a crucial point here but I'm afraid it is somewhat getting lost in the noise. The point is (in my words) that lookup tables are a philosophical red herring. To emulate a human being they can't just map external inputs to external outputs. They also have to map a big internal state to the next version of that big internal state. (That's what Larry's equations mean.)
If there was no internal state like this, a GLUT couldn't emulate a person with any memory at all. But by hypothesis, it does emulate a person (pe...
Internal state is not necessary. Consider a function f mapping strings to strings by means of a lookup table. Here are some examples of f evaluated with well-chosen inputs:
f("Hi, Dr. S here, how are you now that you're a lookup table?") = "Very well, thank you. I notice no difference."
f("Hi, Dr. S here, how are you now that you're a lookup table? Really, none at all?") = "Yes, really no differences at all."
f("Hi, Dr. S here, how are you now that you're a lookup table? You have insulted my entire family!") = "I know you well enough to know that my last reply could not possibly have insulted you; someone must be feeding me fake input histories again."
There should probably be timestamps in the input histories but that's an implementation detail. For what it's worth, I hold that f is conscious.
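In code, the stateless construction is just a table keyed on entire input histories (entries follow the examples above): no separate state variable is needed, because the history is the state.

```python
# Stateless lookup: the whole input history is the key, so the table
# needs no internal state -- the history *is* the state.
f = {
    "Hi, Dr. S here, how are you now that you're a lookup table?":
        "Very well, thank you. I notice no difference.",
    "Hi, Dr. S here, how are you now that you're a lookup table?"
    " Really, none at all?":
        "Yes, really no differences at all.",
}

history = "Hi, Dr. S here, how are you now that you're a lookup table?"
print(f[history])  # -> "Very well, thank you. I notice no difference."
```

The cost of statelessness is that every entry must key on a complete history, so the table is vastly larger than a (state, input) table covering the same behavior.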
Of course a GLUT can be conscious. A problem some may have with it would be that it is not self-modifying, for the table is set in stone, right? Well, consider it from this perspective:
First of all, I assume that all or some of the output is fed back into the input, directly or indirectly (or is that cheating? why?). Then, we can divide the GLUT in two parts, A and B, that differ only in one input: the fact that the "zombie" has previously heard a particular phrase, for example "You are not conscious, you ugly zombie!".
There is no need ...
People who want to read more about this topic online may find that it is sometimes referred to as a "humongous" (slang for huge) lookup table or HLUT. Googling on that term will find some additional hits.
Psy-Kosh's point about implementations that use lookup tables internally of various sizes I think echoes Moravec's point in Mind Children. The idea is that you could replace various sub-parts of your conscious AI with LUTs, ranging all the way from trivial substitutions up to a GLUT for the whole thing. Then as he says, when and where is the consc...
The more I think about it, the more I am convinced that if any GLUT could ever be made it would be an unspeakably horrible abomination. To explicitly represent the brain states of all the worst things that could happen to a person is a terrible thing. Whether the "internal state" variable is actually pointing at one doesn't seem to make a big moral difference. GLUTs are torture. They are the worst form of torture I've ever heard of. I'm glad they're almost certainly impossible.
I recall several years back Eliezer writing on these topics and at the time he saw this as a major stumbling block for functionalism. I would be interested in hearing how his thoughts have evolved, and I hope he can write about this soon.
Very, very strongly seconded.
Larry gives me another idea. Say the GLUT is implemented as a giant book with a person following instructions a la the Chinese Room. In the course of looking up the current (sentence, state) pair in the book, many other entries will inevitably impinge on the operator's retinas and enter their m...
Hal: Yeah, I actually am inclined toward thinking that something like Permutation City style cosmology/consciousness is actually valid... HOWEVER
If so, that seems to separate consciousness and material reality to the point that one may as well say "what material reality?"
But then, one could say
"hrm, okay, so let's say that physics as we know it is the wrong reduction, and instead there's some other principle that ends up implying/producing consciousness, and something about that fundamental principle and so on causes statistical patterns/reg...
Greg Egan says, in the Permutation City FAQ:
I think the universe we live in provides strong empirical evidence against the “pure” Dust Theory, because it is far too orderly and obeys far simpler and more homogeneous physical laws than it would need to, merely in order to contain observers with an enduring sense of their own existence. If every arrangement of the dust that contained such observers was realised, then there would be billions of times more arrangements in which the observers were surrounded by chaotic events, than arrangements in which there were uniform physical laws.
Nick: oh, hey, cool, thanks. Didn't know about the existence of such a FAQ
Yeah, the uniformity thing (which I thought of in terms of existence of structure in experience) does seem to be a hit against it, and something I've spent time thinking about, still without conclusion though.
On the other hand, the chain of reasoning leading to it seems hard to argue against.
ie, what would have to be true for something like the dust theory to be false? I have trouble thinking of any way of having the dust theory be false and yet also keeping anything like zombies...
Incidentally, I note that the uniformity/structure problem is also, near as I can tell, a hit against Tegmark style "all possible mathematical structures" multiverse
Not necessarily. Tegmark suggests that mathematical structures with higher algorithmic complexity [in what encoding?] have lower weight [is there a Mangled Worlds-like phenomenon that turns this weight into discrete objective frequencies?], and that laws producing an orderly universe have lower complexity than chaotic universes or especially encodings of specific chaotic experiences.
Does Tegmark provide any justification for the lower weight thing or is it a flat out "it could work if in some sense higher complexity realities have lower weight"?
For that matter, what would it even mean for them to be lower weight?
I'd, frankly, expect the reverse. The more "tunable parameters", the more patterns of values they could take on, so...
For that matter, if some means of different weights/measures/whatever could be applied to the different algorithm's, why disallow that sort of thing being applied to different "dust interpretations"?
And any thoughts at all on why it seems like I'm not (at least, most of me seemingly isn't) a Boltzmann brain?
Well, the first point is to discard the idea that orderly perceptions are less probable than chaotic ones in the Dust.
The second is to recognize that probability doesn't matter to the anthropic principle at all. You don't exist in the chaotic perspectives, so you never see them.
Psy-Kosh:
Does Tegmark provide any justification for the lower weight thing or is it a flat out "it could work if in some sense higher complexity realities have lower weight"?
It's the same justification as for the Kolmogorov prior: if you use a prefix-free code to generate random objects, less complex objects will come up more frequently. Descriptions of worlds with more tunable parameters must include those parameters, which adds complexity. (But, yes, if complexity/weight/frequency is ignored, there are infinitely more worlds above any complexit...
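The prefix-free-code point can be checked exactly (this four-word code is a made-up toy example): feeding fair coin flips into a prefix-free decoder gives each object probability 2^-(codeword length), so less complex objects really do come up more frequently.

```python
# A prefix-free code: no codeword is a prefix of any other, so a random
# bit stream decodes unambiguously. Object frequency = 2^-len(codeword).
code = {"0": "A", "10": "B", "110": "C", "111": "D"}
freq = {obj: 2.0 ** -len(word) for word, obj in code.items()}
print(freq)                # -> {'A': 0.5, 'B': 0.25, 'C': 0.125, 'D': 0.125}
print(sum(freq.values()))  # -> 1.0 (the code is complete)
```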
Psy-Kosh : "Yeah, the uniformity thing (which I thought of in terms of existance of structure in experience) does seem to be a hit against it, and something I've spent time thinking about, still without conclusion though.
On the other hand, the chain of reasoning leading to it seems hard to argue against.
ie, what would have to be true for for something like the dust theory to be false? I have trouble thinking of any way of having the dust theory be false and yet also keeping anything like zombies disallowed."
Psy-Kosh, that isn't a chain of reasoni...
It seems that the dust should generate observer-moments with probability according to their algorithmic complexity, which would produce many more chaotic than normal ones.
The full version of the Library of Babel can be generated by "walking" through the versions with a limited number of texts, each of finite length. It contains every possible string that can be composed of a given set of symbols - infinitely many strings, each infinitely long. Any finite string that can appear in the Library, does appear - infinitely many times.
In the Englis...
It's interesting that Eliezer never heard anyone say that a GLUT is conscious before now, but now nearly all the commenters are saying that GLUT is conscious. What is the meaning of this?
Unknown: I was unclear. I meant "rejecting the assumptions involved in the chain of reasoning that leads to the dust hypothesis would seem to require accepting things very much like zombies, and in ways that seem rather preposterous, at least to me"
Yes, obviously if ~zombie -> dust, then ~dust->zombie. Either way, I know I'm very confused about this whole matter.
Caledonian: Yes, AB will be more common than CDEFG as a substring. but ABABABABABAB will be less common than AB(insert-random-sequence-here)
In other words, the number of "me"...
In the FULL version, "AB" and "CDEFG" are equally probable. Each appears infinitely often, but the order of the category of infinities that they belong to is the same.
Would you argue that odd numbers are as probable as even numbers in the set of natural numbers, because the order of the category of infinities that they belong to is the same?
How about squares (1, 4, 9, 16, 25, ...) versus non-square numbers? Prime numbers versus composite numbers?
It depends on how you order it. With the natural numbers in ascending order, squares are less common. Interleaving them like {1, 2, 4, 3, 9, 5, 16, 6, 25, 7, ...}, they're equally common. With a different order type like {2, 3, 5, 6, 7, ..., 1, 4, 9, 16, 25, ...}, I have no idea. This is a problem.
See also Nick Bostrom's Infinite Ethics [PDF].
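The order-dependence is easy to check numerically; a quick sketch (the generators are invented for illustration):

```python
from itertools import count, islice

def is_square(n):
    r = int(n ** 0.5)
    return r * r == n

def square_density(seq, k):
    """Fraction of perfect squares among the first k elements of seq."""
    return sum(map(is_square, islice(seq, k))) / k

def interleaved():
    """1, 2, 4, 3, 9, 5, 16, 6, ...: squares alternating with non-squares."""
    squares = (i * i for i in count(1))
    others = (n for n in count(1) if not is_square(n))
    for s, o in zip(squares, others):
        yield s
        yield o

print(square_density(count(1), 10_000))      # -> 0.01
print(square_density(interleaved(), 10_000)) # -> 0.5
```

Same sets, same cardinalities, but the limiting density depends entirely on the enumeration order; that's exactly why "equally probable" is ill-defined here.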
Would you argue that odd numbers are as probable as even numbers in the set of natural numbers, because the order of the category of infinities that they belong to is the same? How about squares (1, 4, 9, 16, 25, ...) versus non-square numbers? Prime numbers versus composite numbers?
As far as I understand, the sets of odd numbers, squares, and primes are all countable.
As such, a one-to-one correspondence can be established between them and the counting numbers. Therefore, considered across infinity, there are just as many primes as there are odd numbers...
Caledonian,
The part I have a problem with is where you go from the cardinality of the sets to a judgment of "equally probable".
Let me put it this way: you wrote,
In the English version, in any of the truncated (and sufficiently long) versions of the Library, the sequence "AB" is much more common than "CDEFG". It doesn't matter whether the texts are ten thousand letters long, or ten billion - the first is less complex and thus more probable than the second.
The "any" is the problem. I can construct a truncated versio...
My statement doesn't hold in ANY truncated version of the Library - it's not difficult to construct an example, because any finite version automatically serves.
But we're not DEALING with a finite version of the Library. We are dealing with the infinite version. And infinity wreaks some pretty serious havoc on conventional concepts of probability.
So why do you say that all sentences have equal probability, rather than that the probability is undefined, which would seem to be the default option?
Hmmmm...
The set of Turing machines is countably infinite.
If I ran a computer program that systematically emulated every Turing machine, would I thereby create every possible universe?
For example:
n = 1;
max = 1;
while (1) {
    emulate_one_instruction(n);
    n = n + 1;
    if (n > max)
        {max = max + 1; n = 1;}
}
(In other words, the pattern of execution goes 1,1,2,1,2,3,1,2,3,4, and so on. If you wait long enough, this sequence will eventually repeat any number you specify as many times as you specify.)
Of course, you'd need infinite resources to run this for an infinite number of steps...
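The same dovetailing schedule as a generator (a Python restatement of the loop above, yielding machine indices instead of executing instructions):

```python
from itertools import islice

def dovetail():
    """Yield machine indices in rounds: 1; 1,2; 1,2,3; 1,2,3,4; ..."""
    width = 1
    while True:
        for n in range(1, width + 1):
            yield n   # here you'd run one instruction of machine n
        width += 1

print(list(islice(dovetail(), 10)))  # -> [1, 1, 2, 1, 2, 3, 1, 2, 3, 4]
```

Because every index recurs in every later round, each machine gets infinitely many steps, and no single non-halting machine can starve the others.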
Some of those instructions won't halt, so eventually you'll get hung up in an infinite loop without outputting anything. And the Halting Problem has no general solution...
a "logically possible" but fantastic being â a descendent of Ned Block's Giant Lookup Table fantasy...
First, I haven't seen how this figures into an argument, and I see that Eliezer has already taken this in another direction, but...
What immediately occurs to me is that there's a big risk of a faulty intuition pump here. He's describing, I assume, a lookup table large enough to describe your response to every distinguishable sensory input you could conceivably experience during your life. The number of entries is unimaginable. But I sus...
Cyan: not true. As you can see, the non-halting processes don't prevent the others from running; they slow them down, but who cares when you have an infinite computer?
Tom: what do you think of my previous comment about a tiny look-up table?
As far as infinities, well, I think I'll for now stick with the advice of only bringing in infinities via well defined limits unless absolutely needed otherwise.
That's a good strategy and I recommend you stick to it.
The infinities are absolutely needed, here.
Caledonian: But do we here need to go beyond "well behaved limit defined infinities"?
You do if you want to talk about certain sets. Some of those sets are relevant to the Dust hypothesis. Therefore, if you want to talk about the Dust hypothesis, you have to be willing to discuss infinities in a more complex way.
Short answer: yes.
Paul and Patricia Churchland, and Jerry Fodor, and others, have argued that GLUTs would be conscious.
They would be conscious. But they need memory, because the past provides context that changes proper responses to future questions / dialogue.
Amendment: I said GLUTs need memory based on the idea of perfectly duplicating the behavior of some other conscious being, like Eliezer, who does have memory. But there are brain-damaged people with various deficiencies in long- and/or short-term memory who still have conscious experience, so a GLUT without the ability to store new memories could be conscious like those people. Anyhoo.
A person's thoughts are underdetermined by their actions - there's no way, probably even in principle, to know nearly as much about my current thoughts as I do by observing my macro-level behavior (as opposed to micro-scale heat/EM wave output), and definitely no way to do so by observing what I type, even over a long period of interaction. So, since a GLUT is purely behavioral, which of the many possible experiences corresponding to my behavior would arise from a GLUT simulating me?
Nick: a GLUT wouldn't just be a list of actions though, it'd be a list, basically, of all possible outputs for all possible inputs.
In other words, if I simply knew your actions, that may underdetermine you, but if I knew all the ways you would have acted for all possible circumstances, well, it's not obvious to me that that would underdetermine you.
It seems likely to me that even that, for reasonable definitions of "action", couldn't distinguish between e.g. me and a very good improviser with a rich model of my mind (and running at a high subjective speedup) but completely different private thoughts, or a group of such people, or between me and me plus some secret thought I would never tell anyone or act on but regularly think about.
Nick: Are you even reasonably confident that such an impostor wouldn't, effectively, have instantiated a version of you in their head?
Even if they did (and I doubt they would have to, but am less confident), they would also have thoughts that weren't mine.
I'm sure this will come across as naïve or loony, but is anyone else here occasionally terrified by the idea that they might 'wake up' as a Boltzmann brain at some point, with a brain arranged in such a way as to subject them to terrible agony?
Perhaps a GLUT cannot actually pass the Turing Test. Consider the following extension to the thought experiment.
I have a dilemma. I must conduct a Turing Test. I have two identical rooms. You will be in one room. A GLUT will be in the other. At the end of the experiment, I must destroy one of the two rooms. The Turing Test forbids me to peer inside the rooms, and I only communicate with simple textual question/responses.
What can I do to save your life? What I would want to do is create a window between the two rooms. It would allow all the information in e...
Surely the 'bottom line' is this:
Once you've described what a GLUT is and what it does, it's a mistake to think that there's anything more to be said about whether it's "really conscious". (Agreeing with Dennett against Chalmers:) consciousness isn't a fundamental property like electric charge but a 'woolly', 'high level' one like health or war. Clearly there's no reason to think that for every physical system, there is a well-defined answer to the question "is it healthy?" (or "is a war in progress?") You can devise scenarios...
Part of the brain's function is to provide output to itself. Consequently, even though I would be quite happy saying C-3PO is conscious, I wouldn't be so quick to say that about a GLUT.
Still, it seems remarkable to me that everyone is treating consciousness as an either/or. Homo sapiens gradually became conscious after species that weren't. Infants gradually become conscious after a fertilized egg that was not. Let us put essentialism to rest.
And as an aside, I would state roughly that an organism is conscious iff it has theory of mind. That is, consciousness is ToM applied to oneself.
A GLUT consciousness would need to store an internal state for the consciousness it is modeling. This could be as detailed as the region of configuration space describing an equivalent brain. You have a mapping from (sensation, state) to (external output, state). Since this is essentially a precomputed physical simulation, it's trivially capable of consciousness.
Eliminating the state parameter would lead to non-consciousness.
"Unless you think it's possible to program a conscious being in Haskell."
Ahemhem. Haskell is as fine a Turing-complete language as any; we just like to have our side effects explicit!
Also, can we just conclude that "consciousness" is the leakiest of surface generalizations ever? If I one day get the cog-psy skills I am going to run a stack-trace on what makes us say "consciousness" without knowing diddly about what it is.
As a budding AI researcher, I am frankly offended by philosophers pretending to be wise like that. No. There is no suc...
Likewise, EXPTIME doesn't mean Large EXPTIME -- an algorithm running in exp(1e-15*N) seconds is asymptotically slower than one running in N^300 seconds, but it is faster for pretty much all practical purposes.
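To put a rough number on that crossover: comparing logarithms, exp(1e-15 * N) seconds overtakes N^300 seconds only once 1e-15 * N exceeds 300 * ln(N), which works out to somewhere around N ~ 10^20 (a quick sketch):

```python
import math

# Compare T1(N) = exp(1e-15 * N) with T2(N) = N**300 via their logs:
# T1 is the slower algorithm only once 1e-15 * N exceeds 300 * ln(N).
k = 1
while 1e-15 * 10**k < 300 * math.log(10**k):
    k += 1
print(f"the 'exponential' algorithm only loses once N ~ 10^{k}")
```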
I once read a Usenet post or Web page along the lines of “There are two kinds of numbers: those smaller than Graham's number and those larger than Graham's number. Computational complexity theory traditionally only concerns itself with the latter, but only the former are relevant to real-world problems.”
A philosopher says, "This zombie's skull contains a Giant Lookup Table of all the inputs and outputs for some human's brain." This is a very large improbability. So you ask, "How did this improbable event occur? Where did the GLUT come from?"
The philosopher is clearly simulating our universe, since as Eliezer already observed, a Giant Lookup Table won't fit in our universe. So he may as well be simulating 10^10^10^20 copies of our universe, each with a different Giant Lookup Table, so that every possible Giant Lookup Table gets re...
(I know this is an old article; let me know if commenting on it is a faux pas of some sort)
I can't recall ever seeing anyone claim that a GLUT is conscious.
Well, I'd definitely claim it. If we could somehow disregard all practical considerations, and conjure up a GLUT despite the unimaginably huge space requirements -- then we could, presumably, hold conversations with it, read those philosophy papers that it writes, etc. How is that different from consciousness ? Sure, the GLUT's hardware is weird and inefficient, but if we agree that robots and zombi...
Not Conscious? I'd say the GLUT was not only conscious, it has god-like powers. It can solve NP-hard problems in one look-up. It can prove anything in under a second.
It's easy for a human to confuse epsilon for zero. In most cases this would be a useful simplification, but a GLUT can take that simplification and use it against you. A look up table doesn't warp space and time? Well, actually it does, it's just that a normal one would warp it by an insignificant amount. We wouldn't normally think of a look up table as threatening a death star, but even a...
How can you be 100% confident that a look up table has zero consciousness when you don't even know for sure what consciousness is?
Why not just define consciousness in a rational, unambiguous, non-contradictory way, and then use that definition consistently throughout? If we are talking thought experiments here, it is up to us to make the assumption(s) in our hypothesis. I don't recall EY giving HIS definition of consciousness for his thought experiment.
However, if the GLUT behaves exactly like a human, and humans are conscious, then by definition the GLUT is conscious, whatever that means.
As far as I can tell, GLUTs have to fail Turing tests for relativistic reasons.
Presumably lookup tables need to be stored somewhere in the universe. The number of possible lookups a GLUT might have to do to respond to whatever's happened in a Turing test so far grows exponentially with time, so the distance information has to travel from some part of the lookup table to an output device also grows exponentially with time (and Grover's algorithm doesn't change this). Since the information can't travel faster than the speed of light, before long a tester wo...
'a "logically possible" but fantastic being' [Dennett]
I don't see where the top posting is going on the whole. P-zombies are always supposed to be logically possible, as Dennett says. There may be a lot of things wrong with logical possibility: it may be impossible to derive real-world consequences from it, it may not exist... but whatever it is, it is not a level of probability, even a small one. Tell a zombiephile that p-zombies are highly unlikely, and she'll reply "sure, but they're still logically possible".
GLUTs pose a challenge to t...
"No, no!" says the philosopher. "In the thought experiment, they aren't randomly generating lots of GLUTs, and then using a conscious algorithm to pick out one GLUT that seems humanlike! I am specifying that, in this thought experiment, they reach into the inconceivably vast GLUT bin, and by pure chance pull out a GLUT that is identical to a human brain's inputs and outputs! There! I've got you cornered now! You can't play Follow-The-Improbability any further!"
In my (limited) understanding of the way the universe began, it was a...
In this line of business you meet an awful lot of people who think that an arbitrarily generated powerful AI will be "moral".
A good counter to this argument would be to find a culture with morals strongly opposed to our own, and demonstrate that it is logical and internally consistent. My inability to think of such a culture could be interpreted as evidence that a sufficiently-powerful AI would be moral. But I think it's more likely that the morals we agree on are properties common to most moral frameworks that are workable in our particular b...
I can't help but notice that almost all the comments here are dealing with whether or not the GLUT is conscious. Apparently the community didn't find the "It's completely improbable" argument satisfying, and were left still asking the question. If I thought the explanation was correct and complete, just not satisfying, I would try to reason out what sort of mind would even ask that question, and why. I didn't find the explanation to be complete, though, so I'll try to answer the question instead.
As James_C points out, the GLUT can be treated as a...
Probably I am not right, but it looks to me like consciousness can go on without any "inputs" and "outputs". If I sit in a dark room alone and think about some sort of problem, then I am neither taking any inputs at that moment nor generating any outputs, unless I decide to think aloud :) So if you believe I am not a zombie, then I am conscious regardless of whether there are any inputs/outputs.
One more thing. Suppose there is a GLUT and I can talk to it. So I can ask a question: "GLUT, is there a question which you cannot answer?" What do you guys think the GLUT will tell me?
One of the best examples of the GLUT which I used to find very convincing, is by Jaron Lanier. (https://youtu.be/RgfFFRFPvyw) Instead of randomly pulling a computer out of nowhere, it's just the finite set of all possible computers. He uses this not to argue for zombies, but to introduce confusion and show how since hailstorms and asteroids can't be conscious, nobody really knows what they're talking about, therefore dualism is just as valid as reductionism. I now see where the error in reasoning is, thanks.
This is by far the silliest part of the sequences for me. Within this blog post, Yudkowsky briefly went insane and decided thought experiments have to be "probable" or "realistic" in order to be engaged with. He then refuses to answer the prompt until the last four sentences, wherein he basically admits that he doesn't have a framework for answering it.
Suppose someone, that someone indeed being a conscious agent, creates a GLUT and then swiftly dies a horrible death, so you can stop focusing on the person who made the GLUT or how it got there, and answer the damn question. Is the GLUT conscious?
In "The Unimagined Preposterousness of Zombies", Daniel Dennett says:
A Giant Lookup Table, in programmer's parlance, is when you implement a function as a giant table of inputs and outputs, usually to save on runtime computation. If my program needs to know the multiplicative product of two inputs between 1 and 100, I can write a multiplication algorithm that computes the product each time the function is called, or I can precompute a Giant Lookup Table with 10,000 entries and two indices. There are times when you do want to do this, though not for multiplication—times when you're going to reuse the function a lot and it doesn't have many possible inputs; or when clock cycles are cheap while you're initializing, but very expensive while executing.
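The multiplication example above can be sketched in a few lines of Python (an illustrative sketch; the names are mine, not from any particular program):

```python
# On-demand computation: do the multiplication at every call.
def multiply(a, b):
    return a * b

# Giant Lookup Table: precompute all 100 * 100 = 10,000 products once,
# indexed by the two inputs; later calls do no arithmetic at all.
LOOKUP = {(a, b): a * b for a in range(1, 101) for b in range(1, 101)}

def multiply_glut(a, b):
    return LOOKUP[(a, b)]

assert multiply(7, 12) == multiply_glut(7, 12) == 84
assert len(LOOKUP) == 10_000
```

The table trades memory for clock cycles: initialization pays for all 10,000 multiplications up front so that each later call is a single lookup.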
Giant Lookup Tables get very large, very fast. A GLUT of all possible twenty-ply conversations with ten words per remark, using only 850-word Basic English, would require 7.6 * 10^585 entries.
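That figure can be checked directly: each ten-word remark drawn from an 850-word vocabulary is one of 850^10 possibilities, so a twenty-ply conversation is one of (850^10)^20 = 850^200. A quick Python check (illustrative arithmetic only):

```python
from math import log10

vocabulary = 850        # Basic English word list
words_per_remark = 10
plies = 20

# One table entry per possible conversation: (850^10)^20 = 850^200.
entries = vocabulary ** (words_per_remark * plies)

exponent = int(log10(entries))        # 585
mantissa = entries / 10 ** exponent   # ~7.65

print(f"~{mantissa:.2f} * 10^{exponent}")
```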
Replacing a human brain with a Giant Lookup Table of all possible sense inputs and motor outputs (relative to some fine-grained digitization scheme) would require an unreasonably large amount of memory storage. But "in principle", as philosophers are fond of saying, it could be done.
The GLUT is not a zombie in the classic sense, because it is microphysically dissimilar to a human. (In fact, a GLUT can't really run on the same physics as a human; it's too large to fit in our universe. For philosophical purposes, we shall ignore this and suppose a supply of unlimited memory storage.)
But is the GLUT a zombie at all? That is, does it behave exactly like a human without being conscious?
The GLUT-ed body's tongue talks about consciousness. Its fingers write philosophy papers. In every way, so long as you don't peer inside the skull, the GLUT seems just like a human... which certainly seems like a valid example of a zombie: it behaves just like a human, but there's no one home.
Unless the GLUT is conscious, in which case it wouldn't be a valid example.
I can't recall ever seeing anyone claim that a GLUT is conscious. (Admittedly my reading in this area is not up to professional grade; feel free to correct me.) Even people who are accused of being (gasp!) functionalists don't claim that GLUTs can be conscious.
GLUTs are the reductio ad absurdum to anyone who suggests that consciousness is simply an input-output pattern, thereby disposing of all troublesome worries about what goes on inside.
So what does the Generalized Anti-Zombie Principle (GAZP) say about the Giant Lookup Table (GLUT)?
At first glance, it would seem that a GLUT is the very archetype of a Zombie Master—a distinct, additional, detectable, non-conscious system that animates a zombie and makes it talk about consciousness for different reasons.
In the interior of the GLUT, there's merely a very simple computer program that looks up inputs and retrieves outputs. Even talking about a "simple computer program" is overshooting the mark, in a case like this. A GLUT is more like ROM than a CPU. We could equally well talk about a series of switched tracks by which some balls roll out of a previously stored stack and into a trough—period; that's all the GLUT does.
A spokesperson from People for the Ethical Treatment of Zombies replies: "Oh, that's what all the anti-mechanists say, isn't it? That when you look in the brain, you just find a bunch of neurotransmitters opening ion channels? If ion channels can be conscious, why not levers and balls rolling into bins?"
"The problem isn't the levers," replies the functionalist, "the problem is that a GLUT has the wrong pattern of levers. You need levers that implement things like, say, formation of beliefs about beliefs, or self-modeling... Heck, you need the ability to write things to memory just so that time can pass for the computation. Unless you think it's possible to program a conscious being in Haskell."
"I don't know about that," says the PETZ spokesperson, "all I know is that this so-called zombie writes philosophical papers about consciousness. Where do these philosophy papers come from, if not from consciousness?"
Good question! Let us ponder it deeply.
There's a game in physics called Follow-The-Energy. Richard Feynman's father played it with young Richard:
When you get a little older, you learn that energy is conserved, never created or destroyed, so the notion of using up energy doesn't make much sense. You can never change the total amount of energy, so in what sense are you using it?
So when physicists grow up, they learn to play a new game called Follow-The-Negentropy—which is really the same game they were playing all along; only the rules are mathier, the game is more useful, and the principles are harder to wrap your mind around conceptually.
Rationalists learn a game called Follow-The-Improbability, the grownup version of "How Do You Know?" The rule of the rationalist's game is that every improbable-seeming belief needs an equivalent amount of evidence to justify it. (This game has amazingly similar rules to Follow-The-Negentropy.)
Whenever someone violates the rules of the rationalist's game, you can find a place in their argument where a quantity of improbability appears from nowhere; and this is as much a sign of a problem as, oh, say, an ingenious design of linked wheels and gears that keeps itself running forever.
The one comes to you and says: "I believe with firm and abiding faith that there's an object in the asteroid belt, one foot across and composed entirely of chocolate cake; you can't prove that this is impossible." But, unless the one had access to some kind of evidence for this belief, it would be highly improbable for a correct belief to form spontaneously. So either the one can point to evidence, or the belief won't turn out to be true. "But you can't prove it's impossible for my mind to spontaneously generate a belief that happens to be correct!" No, but that kind of spontaneous generation is highly improbable, just like, oh, say, an egg unscrambling itself.
In Follow-The-Improbability, it's highly suspicious to even talk about a specific hypothesis without having had enough evidence to narrow down the space of possible hypotheses. Why aren't you giving equal air time to a decillion other equally plausible hypotheses? You need sufficient evidence to find the "chocolate cake in the asteroid belt" hypothesis in the hypothesis space—otherwise there's no reason to give it more air time than a trillion other candidates like "There's a wooden dresser in the asteroid belt" or "The Flying Spaghetti Monster threw up on my sneakers."
In Follow-The-Improbability, you are not allowed to pull out big complicated specific hypotheses from thin air without already having a corresponding amount of evidence; because it's not realistic to suppose that you could spontaneously start discussing the true hypothesis by pure coincidence.
A philosopher says, "This zombie's skull contains a Giant Lookup Table of all the inputs and outputs for some human's brain." This is a very large improbability. So you ask, "How did this improbable event occur? Where did the GLUT come from?"
Now this is not standard philosophical procedure for thought experiments. In standard philosophical procedure, you are allowed to postulate things like "Suppose you were riding a beam of light..." without worrying about physical possibility, let alone mere improbability. But in this case, the origin of the GLUT matters; and that's why it's important to understand the motivating question, "Where did the improbability come from?"
The obvious answer is that you took a computational specification of a human brain, and used that to precompute the Giant Lookup Table. (Thereby creating uncounted googols of human beings, some of them in extreme pain, the supermajority gone quite mad in a universe of chaos where inputs bear no relation to outputs. But damn the ethics, this is for philosophy.)
In this case, the GLUT is writing papers about consciousness because of a conscious algorithm. The GLUT is no more a zombie, than a cellphone is a zombie because it can talk about consciousness while being just a small consumer electronic device. The cellphone is just transmitting philosophy speeches from whoever happens to be on the other end of the line. A GLUT generated from an originally human brain-specification is doing the same thing.
"All right," says the philosopher, "the GLUT was generated randomly, and just happens to have the same input-output relations as some reference human."
How, exactly, did you randomly generate the GLUT?
"We used a true randomness source—a quantum device."
But a quantum device just implements the Branch Both Ways instruction; when you generate a bit from a quantum randomness source, the deterministic result is that one set of universe-branches (locally connected amplitude clouds) see 1, and another set of universes see 0. Do it 4 times, create 16 (sets of) universes.
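The counting here is just binary branching: generating n bits deterministically yields 2^n sets of branches, one per bit string. A toy enumeration in Python (the strings merely stand in for what each set of branches observes):

```python
from itertools import product

def branch_outcomes(n_bits):
    # Each bit generated splits every existing branch-set in two;
    # after n bits there is one branch-set per length-n bit string.
    return [''.join(bits) for bits in product('01', repeat=n_bits)]

outcomes = branch_outcomes(4)
assert len(outcomes) == 16   # do it 4 times, create 16 (sets of) universes
```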
So, really, this is like saying that you got the GLUT by writing down all possible GLUT-sized sequences of 0s and 1s, in a really damn huge bin of lookup tables; and then reaching into the bin, and somehow pulling out a GLUT that happened to correspond to a human brain-specification. Where did the improbability come from?
Because if this wasn't just a coincidence—if you had some reach-into-the-bin function that pulled out a human-corresponding GLUT by design, not just chance—then that reach-into-the-bin function is probably conscious, and so the GLUT is again a cellphone, not a zombie. It's connected to a human at two removes, instead of one, but it's still a cellphone! Nice try at concealing the source of the improbability there!
Now behold where Follow-The-Improbability has taken us: where is the source of this body's tongue talking about an inner listener? The consciousness isn't in the lookup table. The consciousness isn't in the factory that manufactures lots of possible lookup tables. The consciousness was in whatever pointed to one particular already-manufactured lookup table, and said, "Use that one!"
You can see why I introduced the game of Follow-The-Improbability. Ordinarily, when we're talking to a person, we tend to think that whatever is inside the skull, must be "where the consciousness is". It's only by playing Follow-The-Improbability that we can realize that the real source of the conversation we're having, is that-which-is-responsible-for the improbability of the conversation—however distant in time or space, as the Sun moves a wind-up toy.
"No, no!" says the philosopher. "In the thought experiment, they aren't randomly generating lots of GLUTs, and then using a conscious algorithm to pick out one GLUT that seems humanlike! I am specifying that, in this thought experiment, they reach into the inconceivably vast GLUT bin, and by pure chance pull out a GLUT that is identical to a human brain's inputs and outputs! There! I've got you cornered now! You can't play Follow-The-Improbability any further!"
Oh. So your specification is the source of the improbability here.
When we play Follow-The-Improbability again, we end up outside the thought experiment, looking at the philosopher.
That which points to the one GLUT that talks about consciousness, out of all the vast space of possibilities, is now... the conscious person asking us to imagine this whole scenario. And our own brains, which will fill in the blank when we imagine, "What will this GLUT say in response to 'Talk about your inner listener'?"
The moral of this story is that when you follow back discourse about "consciousness", you generally find consciousness. It's not always right in front of you. Sometimes it's very cleverly hidden. But it's there. Hence the Generalized Anti-Zombie Principle.
If there is a Zombie Master in the form of a chatbot that processes and remixes amateur human discourse about "consciousness", the humans who generated the original text corpus are conscious.
If someday you come to understand consciousness, and look back, and see that there's a program you can write which will output confused philosophical discourse that sounds an awful lot like humans without itself being conscious—then when I ask "How did this program come to sound similar to humans?" the answer is that you wrote it to sound similar to conscious humans, rather than choosing on the criterion of similarity to something else. This doesn't mean your little Zombie Master is conscious—but it does mean I can find consciousness somewhere in the universe by tracing back the chain of causality, which means we're not entirely in the Zombie World.
But suppose someone actually did reach into a GLUT-bin and by genuinely pure chance pulled out a GLUT that wrote philosophy papers?
Well, then it wouldn't be conscious. IMHO.
I mean, there's got to be more to it than inputs and outputs.
Otherwise even a GLUT would be conscious, right?
Oh, and for those of you wondering how this sort of thing relates to my day job...
In this line of business you meet an awful lot of people who think that an arbitrarily generated powerful AI will be "moral". They can't agree among themselves on why, or what they mean by the word "moral"; but they all agree that doing Friendly AI theory is unnecessary. And when you ask them how an arbitrarily generated AI ends up with moral outputs, they proffer elaborate rationalizations aimed at AIs of that which they deem "moral"; and there are all sorts of problems with this, but the number one problem is: "Are you sure the AI would follow the same line of thought you invented to argue human morals, when, unlike you, the AI doesn't start out knowing what you want it to rationalize?"

You could call the counter-principle Follow-The-Decision-Information, or something along those lines. You can account for an AI that does improbably nice things by telling me how you chose the AI's design from a huge space of possibilities, but otherwise the improbability is being pulled out of nowhere—though more and more heavily disguised, as rationalized premises are rationalized in turn.
So I've already done a whole series of posts which I myself generated using Follow-The-Improbability. But I didn't spell out the rules explicitly at that time, because I hadn't done the thermodynamic posts yet...
Just thought I'd mention that. It's amazing how many of my Overcoming Bias posts would coincidentally turn out to include ideas surprisingly relevant to discussion of Friendly AI theory... if you believe in coincidence.