This post is a follow-up to "We are not living in a simulation" and is intended to help me (and you) better understand the claims of those who took a computationalist position in that thread. The questions below are aimed at you if you think the following statement both a) makes sense and b) is true:

"Consciousness is really just computation"

I've made it no secret that I think this statement is hogwash, but I've done my best to make these questions as non-leading as possible: you should be able to answer them without having to dismantle them first. Of course, I could be wrong, and "the question is confused" is always a valid answer. So is "I don't know".

  1. As it is used in the sentence "consciousness is really just computation", is computation:
    a) Something that an abstract machine does, as in "No oracle Turing machine can decide its own halting problem"?
    b) Something that a concrete machine does, as in "My calculator computed 2+2"?
    c) Or, is this distinction nonsensical or irrelevant?
  2. If you answered "a" or "c" to question 1: is there any particular model, or particular class of models, of computation, such as Turing machines, register machines, lambda calculus, etc., that needs to be used in order to explain what makes us conscious? Or, is any Turing-equivalent model equally valid?
  3. If you answered "b" or "c" to question 1: unpack what "the machine computed 2+2" means. What is that saying about the physical state of the machine before, during, and after the computation?
  4. Are you able to make any sense of the concept of "computing red"? If so, what does this mean?
  5. As far as consciousness goes, what matters in a computation: functions, or algorithms? That is, do any two computations that give the same outputs for the same inputs feel the same from the inside (this is the "functions" answer), or do the intermediate steps matter (this is the "algorithms" answer)?
  6. Would an axiomatization (as opposed to a complete exposition of the implications of these axioms) of a Theory of Everything that can explain consciousness include definitions of any computational devices, such as "and gate"?
  7. Would an axiomatization of a Theory of Everything that can explain consciousness mention qualia?
  8. Are all computations in some sense conscious, or only certain kinds?

ETA: By the way, I probably won't engage right away with individual commenters on this thread except to answer requests for clarification.  In a few days I'll write another post analyzing the points that are brought up.

89 comments

I don't know the answer to any of these questions, and I don't know which of them are confused.

Here's a way to make the statement "consciousness is computation" a little less vague; let's call the new version X: "you can simulate a human brain on a fast enough computer, and the simulation will be conscious in the same sense that regular humans are, whatever that means". I'm not completely sure if X is meaningful, but I assign about 80% probability to its being meaningful and true, because current scientific consensus says individual neurons operate in the classical regime: they're too large for quantum effects to be significant.

But even if X turns out to be meaningful and true, I will still have leftover object-level questions about consciousness. In particular, knowing that X is true won't help me solve anthropic problems until I learn more about the laws that govern multiple instantiations of isomorphic conscious thingies, whatever that means. Consciousness could "be" one instantiated computation, or an equivalence class of computations, or an equivalence class plus probability-measure, or something even more weird. I don't believe we can enumerate all the possibilities today, much less choose one.

[-]XiXiDu160

There is too much vagueness involved here. A better question would be whether there is any reason to believe that, even though evolution could create consciousness, we cannot.

No doubt we don't know much about intelligence and consciousness. Do we even know enough to be able to tell that the use of the term "consciousness" makes sense? I don't know. But what I do know is that we know a lot about physics and biological evolution, and that we know we are physical and an effect of evolution.

We know a bit less about the relation between evolutionary processes and intelligence, but we do know that there is an important difference and that the latter can utilize the former.

Given all that we know, is it reasonable to doubt the possibility that we can create "minds", conscious and intelligent agents? I don't think so.

8byrnema
Very good point! Even if consciousness does require something mysterious and metaphysical we don't know about, if it's harnessed within us (and robustly passes from parent to child over billions of births), we can harness it elsewhere.
0Laoch
I reject the "Consciousness is really just computation" if you define computation as the operation of contemporary computers not brains, but I wholeheartedly agree that we are physical and an effect of evolution as is our subjective experience. I just don't think that the mind/consciousness is solely the neural connections of ones brain. Cell metabolism and whole organism metabolism and the environment of that organism define the concious experience also. If it's reduced to a neural net important factors will most certainly be lost.
0Shmi
Does this mean that amputees should be less conscious?
4gwern
Maybe not with humans, but definitely for octopuses! (More seriously, depending on how seriously you take embodied cognition, there may be some small loss. I mean, we know that your gut bacteria influence your mood via the nerves to the gut; so there are connections. And once there are connections, it becomes much more plausible that cut connections may decrease consciousness. After a few weeks in a float tank, how conscious would you be? Not very...)
0Shmi
I'm pretty sure that you agree that none of this means that a human brain in a vat with proper connections to the environment, real or simulated, is inherently less conscious than one attached to a body.
0gwern
I don't take embodiment that far, no, but a simulated amputation in a simulation would seem as problematic as a real amputation in the real world, barring extraordinary intervention on the part of the simulation.
0Laoch
No, but subjective conscious experience would definitely change.
0Dolores1984
Well, that ought to be testable. If we upload a human, and the source of consciousness is lost, they should stop feeling it. Provided they're honest, we can just ask them.
0Laoch
That could very well be the case.
0[anonymous]
Well, you're a p-zombie, you would say that.
0lessdazed
Is there a better word than "consciousness" for the explanation for why (I think I) say "I see red" and "I am conscious"? I do (think I) claim those things, so there is a causal explanation.
3Pfft
I think any word would be better than "consciousness"! :) It really is a very confusing term, since it is often used (vaguely) to refer to quite different concepts.

Cognitive scientists often use it to mean something similar to "attention" or as the opposite of "unconscious". This is an "implementation level" view -- it refers to certain mechanisms used by the brain to process information. Then there is what Ned Block calls "access consciousness", "the phenomenon whereby information in our minds is accessible for verbal report, reasoning, and the control of behavior" (to quote Wikipedia). This is a "functional specification level" view: consciousness is correctly implemented if it lets you accurately describe the world around you or the state of your own mind. Then finally there's "phenomenological consciousness" or qualia or whatever you want to call it -- the mystical secret sauce.

No doubt these are all interrelated in complicated ways, but it certainly does not help matters to use terminology which further blurs the distinction. Especially since they are not equally mysterious: the actual implementation in the brain will take a long time to figure out, and as for the qualia it's hard to say even what a successful answer would look like. But at the functional specification level, it seems quite easy to give a (teleological) explanation. That is, it's easy to see that an agent benefits from being able to represent the world (and be able to say "I see a red thing") and to reason about itself ("each time I see a red thing I feel hungry"). So it's not very mysterious that we have mental concepts for "what I'm currently feeling", etc.
  1. The distinction doesn't make sense to me. But then neither does the statement "Consciousness is really just computation." The only charitable reading I can give that statement is "Consciousness is really just and, as you will notice, the only really powerful or mysterious component of that system is computation". But even with that clarification, I really don't understand what you are getting at with the a vs b distinction. I get the impression that you attach a lot more importance to the (abstract vs concrete) distinction than I

... (read more)
2PhilGoetz
Then why favor torturing and burning them, instead of feeding them ice cream? Please explain - to me, it sounds like you are claiming to be a p-zombie. Even p-zombies shouldn't do that.
[-]ata80

I think you're completely mistaken about what computationalism claims. It's not that consciousness is a mysterious epiphenomenon of computation-in-general; it's more that we expect consciousness to be fully reducible to specific algorithms. "Consciousness is really just computation" left at that would be immediately rejected as a mysterious answer, fake causality, attempting to explain away what needs only to be explained, and other related mistakes; 'computation' only tells us where we should be looking for an explanation of consciousness, it ca... (read more)

Tentatively, my gut reactions are:

  1. (c)
  2. Any Turing-equivalent model, I expect.
  3. To say that a machine computed 2+2 means that it had taken data representing "2" and "2" and performed an operation which, based on the same interpretation that establishes the isomorphism, is equivalent to addition of an arbitrary pair of numbers.
  4. "computing red" makes as much sense as "computing 2", and for roughly the same reasons. "Red" is a symbol representing either emissive or reflective color.
  5. Algorithm, at a guess, but the distinction is
... (read more)
-2Peterdjones
(4) It is easy to see how "red" could be computed in that sense. The OP clearly thinks it is difficult, so presumably had another sense in mind.
1RobinZ
If so, further elaboration would be helpful.
-2Peterdjones
Well, the existence of qualia is a classic objection to computationalism and physicalism, and red is the classic quale.
1RobinZ
I was talking about the experiential phenomenon of seeing redness.
[-][anonymous]40

I think you will find this paper useful--Daniel Dennett answers some of these questions and explains why he thinks Searle is wrong about consciousness. Pretty much all of the positions Dennett endorses therein are computationalist, so it should help you organize your thoughts.

I feel that dfranke's questions make all kinds of implicit assumptions about the reader's worldview, which makes them difficult for most computationalists to answer. I've prepared a different list - I'm not really interested in answers, just an opinion as to whether they're reasonable questions to ask people or whether they only make sense to me.

But you can answer them if you like.

For probability estimates, I'm talking about subjective probability. If you believe it doesn't make sense to give a probability, try answering as a yes/no question and then guess ... (read more)

3dfranke
I'll save my defense of these answers for my next post, but here are my answers:

  1. Both of them.
  2. Yes. The way I understand these words, this is a tautology.
  3. No. Actually, hell no.
  4. N/A
  5. Yes; a. I'm not quite sure how to make sense of "probability" here, but something strictly between 0 and 1; b. Yes.
  6. Negligibly larger than 0.
  7. 1, tautologically.
  8. For the purposes of this discussion, "No". In an unrelated discussion about epistemology, "No, with caveats."
  9. This question is nonsense.
  10. No.
  11. If I answered "yes" to this, it would imply that I did not think question 11 was nonsense, leading to contradiction.
1Giles
I'll try and clarify the questions which came out as nonsense merely due to being phrased badly (rather than philosophical disagreement).

5: I basically meant, "can you simulate a human brain on a computer?". The "any degree of accuracy" thing was just to try and prevent arguments of the kind "well you haven't modelled every single atom in every single neuron", while accepting that a crude chatbot isn't good enough.

7: By "Theory of everything" I mean a set of axioms that will in principle predict the result of any physics experiment. Would you expect to see equations such as "consciousness = f(x), qualia = g(x)"? Or would you instead say "these equations describe the physical world to any required level of detail, yet I still don't see where the consciousness comes from"? (EDIT: I'm still not making sense here, so it may be best just to ignore this one)

8: People seem more eager to taboo the word "real" than the word "conscious". Not sure there's much I can do to rephrase this one. I wrote it in order to frame q9, which was easier to phrase in terms of reality than consciousness.

9: Sorry for the inferential distance. I was basically referring to the concept some people here call "reality fluid". A better question might be: how do you resolve Eliezer Yudkowsky's little confusion here? http://lesswrong.com/lw/19d/the_anthropic_trilemma/

11: This question is referring to q2-10 only.
1TheOtherDave
Oh, all right. I'm bored and suggestible.

1 - Both potentially meaningful
2 - That's a question about the meanings of words. I don't object to those constraints on the meanings of those words, though I don't feel strongly about them.
3 - If "qualia" is meaningful (see 1), then no.
4 - N/A
5 - Ugh. "Any required degree" is damningly vague. Labeling confidence levels as follows:
  * C1 that it's in-principle-possible to build as good a simulation of a particular human as any other human is.
  * C2 that it's ipp to build a good enough simulation of a human that no currently existing test could reliably tell it apart from the original.
  * C3 that it's ipp to build one that could pass an "interview test" (a la Turing) with the most knowledgeable currently available judges.
...I'd say C1 > C2 > C3 > 99%, though C2 would require also implementing the computer in neurons in a cloned body.
5a - Depends on the required level of accuracy: ~0% for a stone statue, for example. For any of the above examples, I'd expect it to do so as much as the original does.
5b - Not in the sense you mean.
6 - I am not sure that question makes sense. If it does, accurate priors are beyond me. For lack of anything better, I go with a universal prior of 50%.
7 - Mostly that's a question about definitions... if it doesn't explain consciousness, is it really a Theory of Everything? But given what I think you mean by ToE: 99+%.
8 - Question about definitions. I'm willing to constrain my definition of "real" that way, for the sake of discussion.
9 - I have no idea and am not convinced the questions make sense, x4.
10 - x5.
11 - Not entirely, though it is a regular student at a nonsensei-run dojo.
1Perplexed
No, though parts of it were. Of course, people here who agree with me on that will likely disagree as to which parts those are. The main virtue of this list, and of dfranke's list that led to its production, is that the list stimulates thinking. For example, your question 9c struck me as somewhat nonsensical, and I think I learned something by trying to read some sense into it. (A space can have many measures. One imposes a particular measure for some purpose. What are we trying to accomplish by imposing a measure here?) Another thought stimulated by your list of questions was whether it might be interesting/useful/fun to produce a LessWrong version of the Philpapers survey. My conclusion was that it would probably require more work than it would be worth. But YMMV, so I will put the idea "out there".
0cousin_it
I like your questionnaire much more than the OP's. I didn't understand question 7, could you rephrase it? Question 8 seems to be about words. Otherwise everything's fine :-)
[-]zaph30

I would describe myself as a computationalist by default, in that I can't come up with an ironclad argument against it. So, here are my stabs:

1) I'm not sure what you mean by an abstract machine (and please excuse me if that's a formal term). Is that a potential or theoretical machine? That's how I'm reading it. If that's the case, I would say that CIRJC ("consciousness is really just computation") means both a and b. It's a computation of an extremely sophisticated algorithm, the way 2 + 2 = 4 is the computation of a "simple" one (that still needs something really big like math to execute)... (read more)

0dfranke
Searle, to a zeroth approximation. His claims need some surgical repair, but you can do that surgery without killing the patient. See my original post for some "first aid".
0dfranke
I'd certainly regard anything defined within the framework of automata theory as an abstract machine. I'd probably accept substitution of a broader definition.
  1. I'm not sure of what exactly you're after with this question, or what the question would even mean.
  2. Any Turing-equivalent model seems equally valid.
  3. In my mind, "a machine computed X" means that we can use the machine to figure out the answer to X. For instance, John Searle claims that any physical process can be interpreted to instantiate any computation, given a complex enough interpretation. According to this view, e.g. an arbitrary wall can be said to be computing 2+2 as well as 583403 + 573493. But the flaw here is that you cannot actually
... (read more)

I suppose I can consider myself a weak computationalist. I think a computer running a human mind will generate qualia, if it's a simple enough computer. After all, you could interpret a rock in such a way that it's a computer running a human mind.

It's the algorithm that matters.

  1. c, or at least I don't understand the distinction.
  2. Any sufficiently simple Turing machine. Since there's nothing that can clearly be called the output, if you didn't limit it in some way, you could say that a clock is a Turing machine if you map each time to the state the Turing m
... (read more)

I'm currently having an exchange with Massimo Pigliucci of Rationally Speaking, who might be known here due to his Bloggingheads debate with Eliezer Yudkowsky, where he was claiming that "you can simulate the 'logic' of photosynthetic reactions in a computer, but you ain't gonna get sugar as output." I have a hard time wrapping my mind around his line of reasoning, but I'll try:

Let's assume that you wanted to simulate gold. What does it mean to simulate gold?

According to Wikipedia to simulate something means to represent certain key characteristics... (read more)

5AdeleneDawner
Don't paper money and electronic money represent gold's 'key characteristic' of being useable for monetary exchange?
4[anonymous]
The key word here is "represent", which is not to be confused with "reproduce". No, we don't need a nuclear reactor or particle accelerator to simulate, i.e. to represent the missing properties. We need them to reproduce the missing properties. But to simulate something is to represent characteristics of it, not reproduce them. Now, there's an obvious opening here for someone to try to build an argument based on the fact that a simulation need not reproduce characteristics. It would then be necessary to argue that mere representation of certain characteristics is sufficient to reproduce others. But that would be a new argument, and I'm just addressing this one.
1jtk3
When I run an old 8-bit game on a Commodore-64 emulator, it seems to me that the emulation functionally reproduces a Commodore-64. The experience of playing the game can clearly be faithfully reproduced. Hasn't something been reproduced if one cannot tell the difference between the operation of the original system and that of the simulation?
2kurokikaze
In the case of the C64 emulator, the game is represented and your experience is reproduced. As for the second point, I think it's purely subjective, since it depends on what level of output you expect from the simulation. For a gamer, the emulated game can be a "reproduction"; for an engineer seeking details of the Commodore's inner workings, it can be just an approximation of the "real thing" and of no use to him.
2timtyler
That just seems confused to me. Simulated gold would be exchanged on simulated gold markets - where it would work just fine. You can simulate anything - at least according to the Church–Turing–Deutsch principle.
0XiXiDu
See my longer comment here.
2luminosity
Gold in a simulation is less useful to us because we can't use it for everything we could use 'real' gold for. However that gold should be just as useful to anything inside the simulation as our gold is to us, barring changes in value due to changes in quantity. Does anyone really think that we would simulate gold in order to use it in exactly the ways we want to use real gold?
1[anonymous]
But what about Eliezer's reply to Pigliucci's photosynthesis argument? As I understand it, Eliezer's counterargument was that intelligence and consciousness are like math in the sense that the simulation is the same as the real thing. In other words, we don't care about simulated sugar because we want the physical stuff itself, but we aren't so particular when it comes to arithmetic--the same answer in any form will do. As far as I can tell, this argument still applies to gold unless there are good reasons to think that consciousness is substrate dependent. But as Eliezer pointed out in that diavlog, that doesn't seem likely.
5mkehrt
That reply is entirely begging the question. Whether consciousness is a phenomenon "like math" or a phenomenon "like photosynthesis" is exactly what is being argued about. So it's not an answering argument; it's an assertion.
3[anonymous]
I completely agree--XiXiDu was summarizing Massimo Pigliucci's argument, so I figured I'd summarize Eliezer's reply. The real heart of the question, then, is figuring out which one consciousness is really like. I happen to think that consciousness is closer to math than sugar because we know that intelligence is so, and it seems to me that the rest follows logically from Minsky's idea that minds are simply what brains do. That is, if consciousness is what an intelligent algorithm feels like from the inside, then it wouldn't make much sense for it to be substrate-dependent.
0XiXiDu
This morning I followed another discussion on Facebook between David Pearce and someone else about the same topic, and he mentioned a quote by Stephen Hawking about the "fire" in the equations. What David Pearce and others seem to be saying is that physics doesn't disclose the nature of that "fire". For this and other reasons I am increasingly getting the impression that the disagreement all comes down to the question of whether the Mathematical universe hypothesis is correct, i.e. whether Platonism is correct.

None of them seem to doubt that we will eventually be able to "artificially" create intelligent agents. They don't even doubt that we will be able to use different substrates. The basic disagreement seems to be that, as Constant notices in another comment, a representation is distinct from a reproduction. People like David Pearce or Massimo Pigliucci seem to be arguing that we don't accept the crucial distinction between software and hardware.

For us, the only difference between a mechanical device, a physical object, and software is that the latter is the symbolic (formal language) representation of the former. Software is just the static description of the dynamic state sequence exhibited by an object. One can then use that software (algorithm) and some sort of computational hardware to evoke the same dynamic state sequence, so that the machine (computer) mimics the relevant characteristics of the original object. Massimo Pigliucci and others actually agree about the difference between a physical thing and its mathematical representation, but they don't agree that you can represent the most important characteristic as long as you do not reproduce the physical substrate.

The position held by those people who disagree with the Less Wrong consensus on this topic is probably best represented by the painting La trahison des images. It is a painting of a pipe. It represents a pipe, but it is not a pipe; it is an image of a pipe. Why would people concerned with artificial intelligence care ab

1) I don't know. I also think there is a big difference between c) "nonsensical" and c) "irrelevant". To me, "irrelevant" means all possible worlds are instantiated, and those also computed by machines within such worlds are unfathomably thicker.

2) I don't know.

3) Probably causation between before and after is important, because I doubt a single time slice has any experience due to the locality of physics.

Traditionally I go point at things, a stop sign, a fire truck, an apple, and say "red" each time. Then I poin... (read more)

0Peterdjones
(4) The question of identical inputs and outputs is a tricky one. No two physically different systems produce unconditionally identical inputs and outputs under all circumstances, since that would imply that there are no circumstances under which their physical difference could be observed or measured. The "identity" of outputs required by functional equivalence means either (a) identity under an abstract definition which subsumes a number of physical differences (e.g. a "1" or "0" can be multiply realised), or (b) absolute identity of a subset of outputs, with the rest being deemed to be irrelevant; e.g. we can regard two systems as being computationally equivalent although they produce different amounts of heat and noise when running.
0lessdazed
How, exactly? I am allowing any section of the system to be treated as a black box, replaceable with a different black box. As the insides of the boxes are different, they are not identical. Open the boxes, and see the differences. All I'm arguing is that so long as the boxes are closed, they may do the same thing. As an example, imagine a pair of motors that take in sunlight and oil and create heat and energy. One has inefficient sunlight-to-energy and oil-to-energy converters; the other has an efficient oil engine and simply wastes the sunlight as heat. Arbitrarily, its program regulates its efficiency as a function of the sunlight it receives. Or imagine a modern PC emulating a Mac OS emulating Windows, as against a slightly older PC. Bear in mind that I didn't understand your a) or b).
-1Peterdjones
One black box is equivalent to another so long as you don't peek inside. So the outputs you get if, for instance, you X-ray it are not part of the subset of outputs under which they are equivalent. If such official, at-the-edge outputs are all that matters for computationalism, then dumb-but-fast Look Up Tables could be conscious, which is a problem. If the inner workings of black boxes count, then the Turing Test is flawed, for similar reasons.
2TheOtherDave
Sincere question: why would this be a problem? I mean, I get that LUTs violate our intuitions about what ought to be necessary to get genuine consciousness, but then they also violate my intuitions about what ought to be necessary to get a convincing simulation of it. If I throw out the latter intuitions to accept a convincing LUT, I'm not sure why I shouldn't be willing to throw out the former intuitions as well. Is there more here than just dueling intuitions?
1PhilGoetz
See my lower bound for consciousness. Lookup tables don't satisfy the lower bound. The lower bound is that point at which Quine's theory of ontological relativity / confirmation holism is demonstrably false, and so "meaning" can exist.
0TheOtherDave
Do you expect lookup tables to be able to demonstrate convincing consciousnesslike behavior (a la Searle's Chinese Room), while still not satisfying your lower bound? If not, would encountering such a convincing GLUT-based system (that is, one that violated your expectations) change your opinions at all about where the lower bound actually is? Because in general, I agree with you that there exists a lower bound and GLUTs don't satisfy it, but I don't think a GLUT can convincingly simulate consciousness, and if I encountered one that did (as I initially understood Peter to be proposing) I'd have to significantly update my beliefs in this whole area.
1PhilGoetz
I expect them to be theoretically able to exhibit conscious-like behavior, but don't endorse the idea that Searle's Chinese Room is a lookup table, or unconscious. Searle's Chinese Room is carrying out algorithms; and Searle's commentary on it is incoherent, and I disagree with his definitions, assumptions, arguments, and conclusions. In practice, I don't expect a lookup table to produce any such behavior until long after we have learned much more about consciousness. A lookup table might be theoretically incapable of exhibiting human-like behavior due to the limited memory and computational capacity of this universe.
0TheOtherDave
Yeah, that's my expectation. So confirming the actual existence of a human-like GLUT would cause me to sharply revise many of my existing beliefs on the whole subject. My confidence, in that scenario, that the GLUT was not conscious would not be very high.
-1Peterdjones
You shouldn't, because they are different intuitions. In fact I don't know why you have the intuition that you can't simulate complex processing with a Giant Look Up Table. All you have to do is record a series of inputs and outputs from a piece of software, and there is the database for your GLUT. Of course, that GLUT will only be convincing if it is asked the right questions. If any software is Gluttable up to a point, the Consciousness Programme is Gluttable UTAP. But we don't have to believe a programme that is spitting out pre-recorded digits of pi is calculating pi. We can keep that intuition.
0rwallace
That's not a lookup table, that's just a transcript. I only ever heard of one person believing a transcript is conscious. A lookup table gives the right answers for all possible inputs. The reason we have the intuition that you can't simulate complex processing with a lookup table is that it's physically impossible - the size would be exponential in the amount of state, making it larger than the visible universe for anything nontrivial. But it is logically possible to have a lookup table for, say, a human mind for the duration of a human lifespan; and such a thing would, yes, contain consciousness.
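To make the transcript-versus-lookup-table distinction concrete, here is a minimal sketch in Python. Everything in it (the `toy_brain` stand-in and the three-phrase input alphabet) is a hypothetical illustration, not anything proposed in the thread: a transcript only replays one recorded conversation, while a genuine lookup table is keyed by every possible input history, which is why its size blows up.

```python
# Minimal sketch (hypothetical): a recorded transcript vs. a genuine lookup table.

def toy_brain(history):
    """Stand-in for the system being imitated; its answer depends on the whole input history."""
    return f"reply-{len(history)}-{sum(map(len, history)) % 7}"

# A "transcript": record one particular conversation and replay it.
recorded_inputs = ["hello", "what is 2+2?", "are you conscious?"]
transcript = [toy_brain(recorded_inputs[:i + 1]) for i in range(len(recorded_inputs))]
# The transcript is only "convincing" if the questioner repeats recorded_inputs exactly.

# A GLUT: one entry for every possible input history up to some length.
ALPHABET = ["hello", "what is 2+2?", "are you conscious?"]  # tiny toy input space

def all_histories(depth):
    """Yield every possible sequence of inputs of the given length."""
    if depth == 0:
        yield ()
        return
    for prefix in all_histories(depth - 1):
        for word in ALPHABET:
            yield prefix + (word,)

glut = {h: toy_brain(list(h)) for d in range(1, 4) for h in all_histories(d)}
# len(glut) grows as len(ALPHABET) ** depth -- exponential in the length of the
# exchange, which is the point about physical impossibility for anything nontrivial.

print(len(transcript), len(glut))
```

Even with three possible utterances and exchanges only three turns long, the toy table already has 39 entries; at realistic vocabulary sizes and conversation lengths a full table would dwarf the visible universe, while the transcript stays small but answers only one fixed line of questioning.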
0TheOtherDave
Note that the piece of the original comment you don't quote attempts to engage with this, by admitting that such a GLUT "will only be convincing if it is asked the right questions" and thus only simulates the original "up to a point." Which is trivially true with even a physically possible LUT. Heck, a one-line perl script that prints "Yes" every time it is given input simulates the behavior of any person you might care to name, as long as the questioner is sufficiently constrained. Whether Peterdjones intends to generalize from that to a less trivial result, I don't know.
0lessdazed
If I say a GLUT can't compute the output that is consciousness (suppose we have a consciousness detecting machine, the output will be whatever causes the needle on that machine to jump) without a model of a person equivalent to a person, you'll probably say I'm begging the question. I can't think of a way around that, but if you could refute that thought of mine, that would probably resolve a lot for me.
0TheOtherDave
I agree that if the questioner is sufficiently constrained, then a GLUT (or even a Tiny Lookup Table) can simulate any process's responses to that questioner, however complex or self-referential the process. So, yes, any process -- including conscious processes -- can be simulated UTAP by a simple look-up table, in the same sense that living biological systems can be simulated by rocks UTAP. I've lost track of why that is important.
-2Peterdjones
If the intuition that look-up is not sufficient computation for consciousness is correct, then a flaw in the Turing Test is exposed. If a complex Computation Programme could pass the TT, then a GLUT version must be able to as well.
0TheOtherDave
Sure, I agree that with a sufficiently constrained questioner, the Turing Test is pretty much useless.
0Nornagest
The values of the GLUT have to be populated somehow, which means matching an instance of the associated computation against an identical stimulus by some means at some point in the past. Intuitively it seems likely that a GLUT is too simple to instantiate consciousness on its own, but it seems to be better viewed as one component of a larger system that must in practice include a conscious agent, albeit one temporally and spatially removed from the thought experiment's present. Isn't this basically a restatement of the Chinese Room?
0lessdazed
That's not what I claimed; in fact, I was trying to be careful to discredit that. I said the system can be arbitrarily divided, and replacing any part with a different part/black box that gives the same outputs as the original would have would not affect the rest of the system. Some patterns of replacement of parts remove the conscious parts. Some do not. This is important because I am trying to establish "red" and other phenomena as relational properties of a system containing both me and a red object. This is something that I think distinguishes my answer from others'. I'm distinguishing further between removing my eyes and the red object and replacing them with a black box sending inputs into my optic nerves, which preserves consciousness, and replacing my brain with a black box lookup table and keeping my eyes and the object intact, which removes the conscious subsystem of the larger system. Note that some form of the larger system is a requirement for seeing red. My answer highlights how only some parts of the conscious system are necessary for the output we call consciousness, and makes sure we don't confuse ourselves into thinking that all elements of the conscious computing system are essential to consciousness, or that all may be replaced. The algorithm is sensitive to certain replacements of its parts with functions, but not others.
0dfranke
I didn't mean to imply otherwise. I meant the "or" there as a logical inclusive or, not a claim of synonymy.

Hm. I am not a 100% computationalist, but let me try.

  1. b) 80% c) 15% a) 5% (there should be a physical structure, but its details probably don't matter. I can imagine several intuition pumps supporting all answers).
  2. Don't know.
  3. There is an isomorphism between instantaneous physical states of the machine and (a subclass of) mathematical formulae, and the machine went from a state representing "2+2" to a state representing "4".
  4. There is an isomorphism between the physical states of the machine and colors (say represented by RGB) and the m
... (read more)

Here is another attempt to rephrase one of the opinions held within the philosophy camp:

Imagine 3 black boxes, each of them containing a quantum-level emulation of some existing physical system. Two boxes contain the emulations of two different human beings and one box the emulation of an environment.

Assume that if you were to connect all 3 black boxes and observe the behavior of the two humans and their interactions you would be able to verify that the behavior of the humans, including their utterances, would equal that of the originals.

If one was to dis... (read more)

0Dolores1984
...This argument strikes me as, pardon me, tremendously silly. Just off the top of my head, it seems to still hold if you replace the 'quantum level simulation of a person' with an exact duplicate of the original brain in a saline bath, hooked up to a feed of oxygenated blood. Should we therefore conclude that human brains are not conscious? EDIT: Oh blast, didn't realize this was from months ago.

(2) Humans can manually compute any algorithm that a TM can compute (this is just the Church-Turing conjecture in reverse), so a human has to be at least a UTM. The significant part of the computationalist claim is that humans are at most a UTM.

(4) No.

(5) If intermediate steps matter, the Turing Test is invalidated, since a Giant Lookup Table could produce the same results with trivial intermediate steps. However, computationalists do not have to subscribe to the TT.

(7) An "And gate" looks like a piece of hardware, but it is really anything that co... (read more)

  1. (b) Consciousness is something that a concrete machine does, as in "My calculator computed 2+2".

  2. Instructed to skip.

  3. Unpack what "the machine computed 2+2" means. (I'll try.) A machine computes 2+2 if it has an algorithm (perhaps a subroutine) that accepts two inputs a and b (where a and b are drawn from some set of numbers containing at least the natural numbers through 5) and generally (almost always) outputs the sum a+b. The machine may output a+b by any means whatsoever -- even just using a lookup table or appending two strings of symbol

... (read more)
[-]Cyan10

My own answers, before reading anyone else's, were:

  1. (b)
  2. The calculator processes information. In the same system that gives the interpretation of inputs and outputs as rational numbers, the information processing of the calculator can be seen to be isomorphic to addition. (A toy sketch of this idea appears below.)
  3. I can make many senses of the phrase, depending on the context in which it is used. Here's one: light with wavelength around 650 nm was reflected off the petals and entered the two-year-old's eyes; electrochemical signal processing occurred in her brain such that she reported, "The
... (read more)
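A minimal sketch of the interpretation point in answer 2 above, using an invented unary encoding (nothing here is the commenter's actual model): the "isomorphism to addition" lives in the mapping between machine states and numbers, not in the hardware itself.

```python
# Hypothetical toy machine whose states are strings of '|' marks.
def encode(n):              # interpretation: number -> machine state
    return "|" * n

def decode(state):          # interpretation: machine state -> number
    return len(state)

def machine_op(s1, s2):     # what the machine physically does: concatenate marks
    return s1 + s2

# Under this interpretation, the machine's operation is isomorphic to addition:
for a in range(6):
    for b in range(6):
        assert decode(machine_op(encode(a), encode(b))) == a + b

# Under a different interpretation (say, decode2 = lambda s: len(s) % 3), the very
# same physical process would be "computing" something else; the claim "the machine
# computed 2+2" is relative to the chosen interpretation.
```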

Okay, here's my answers. Please take note that full answers will be too big, so expect some vagueness:

1) B
3) Big topic. For me, it can use the result of "computation".
4) Invoking memory or associations? Mostly no.
5) Hard to say yet. I'll take a guess that it's mostly functions, with maybe some parts where steps really matter.
6) I think it's possible.
7) I guess so.
8) They have something in common, but I think it depends on your definition of "conscious". They are most certainly not self-conscious, though.

I think the logic behind this argument is actually much, much simpler.

Let us suppose that consciousness is not a type of computation.

Rational argument, and hence rational description IS a type of computation - it can be made into forms that are computable.

Therefore consciousness, if it is not a type of computation, is also not describable within, or reducible to, rational argument.

I call this type of thing the para-rational - it's not necessarily against rationality to suppose that something exists which isn't rationally describable. What doesn't make sen... (read more)

  1. There is no such thing as an abstract machine, nor an abstract computation. If you imagine a machine adding two and two, the computation is implemented in your brain, which holds a representation of the operations. Physics is information; information is also physics. There is no information without a physical embodiment; there is no computation without physical operations.

  2. Humans don't have infinite memory, and thus are less-powerful than Turing machines.

  3. "Computing red": Please put more words into that phrase. It's too ambiguous to deal

... (read more)
[-]see00

Either "qualia" are ultimately a type of experience that can be communicated to a conscious being who hasn't had the experience, or they cannot. If they can be, they cease to have any distinction from any other communicable fact. If they cannot, you can't actually use them to determine if something is conscious, because nobody can communicate to you their own individual qualia. Either way, qualia by necessity drop out of any theory of consciousness that can classify whether something as inert as a brick is a conscious being or not. And if a theory of consciousness does not predict, either way, whether or not a brick is conscious, then it is a waste of time.

6ArisKatsaris
That's the sort of dilemma I don't trust as a reasoning step. What if they can partially or vaguely or approximately (but not precisely and entirely) be communicated to a conscious being who hasn't had the experience?
0PhilGoetz
Then they are partially Enlightened.
0see
Insofar as they are communicable, the communication can be emitted by someone who doesn't experience them, and thus doesn't serve as evidence that the communicating being experiences the quale. (In the classic "Mary the color scientist" formulation, Mary, who has never experienced seeing red, can tell people partially/vaguely what it's like to see red, since she knows every communicable fact about seeing red, including how people describe it.)
1ArisKatsaris
Let's say you speak to an alien from another universe, and they give you mathematical equations for a phenomenon that only people in that universe experience. For example, a weird slight periodic shift in the gravitational constant. I can communicate this information further, even though I don't experience such shifts to the gravitational constant myself. And yet you're saying that, for the alien who first originated those equations, they aren't evidence of their own experiences either? Perhaps you mean they aren't proof, but to say they're not evidence at all is a rather big claim.
0see
How would his formulating equations give me any evidence that he feels the shift in the gravitational constant? Newton's laws weren't evidence that Newton ever experienced orbiting another body. Look, back to the basic point of the sterility of qualia, how would you go about distinguishing whether I actually experience qualia, or whether I am just programmed by evolution to mimic the responses of other people when asked about their experiences of qualia?
0ArisKatsaris
Newton did orbit the sun, while riding the Earth; and his laws were certainly evidence for that rather than against it. Saying that event A is zero evidence for event B, really means that the two events are completely uncorrelated with each other -- do you really mean to argue that the existence of Newton's equations is completely uncorrelated with the fact Newton lived in a (to the limits of his understanding) Newtonian universe? Occam's razor can be useful there, I think, until we have enough understanding of neuroscience to be able to tell between a brain doing mimicry, and a brain doing an honest and lucid self-evaluation.
0see
And he had no particular qualia that would distinguish that from any of a billion other arrangements. No, I mean to argue that the existence of Newton's equations is completely uncorrelated with whether Newton experienced any qualia. A properly-designed curve-fitting algorithm, given the right data, could produce them as well; there is no evidence of consciousness (at least distinct from computation) as a result. Aliens arrive to visit Earth. Their knowledge of their own neural architecture is basically useless when evaluating ours. How do they determine that humans "actually experience" qualia, rather than humans simulating the results of experience of qualia as a result of evolution? The Occam's Razor result that "they act in a manner consistent with having qualia, therefore they probably experience qualia, therefore they are probably conscious" is immediately displaced by the Occam's Razor result that "they act in a manner consistent with being conscious, therefore they probably are conscious". The qualia aren't necessary, and therefore drop out of the axiomatization of a theory of consciousness.
0ArisKatsaris
You misunderstood my argument. I wasn't talking about qualia when I talked about Newton, I was talking about gravity, another phenomenon. Newton was affected by gravity -- this was highly correlated with the fact he talked about gravity. We talk about qualia -- this is therefore evidence in favour of us being affected by qualia. What would be the evolutionary benefit of simulating the results of experience of qualia, in a world where nobody experiences qualia for real? That's like an alien parrot simulating the voice of a human on a planet where there exist no humans. Highly unlikely to be stumbled upon coincidentally by evolution. What do you mean by "conscious"? Self-aware? Not sleeping or knocked out? These seem different and more complex constructs than qualia, which have the benefit of currently seeming irreducible at some level (I might be able to reduce individual color qualia to separate qualia of red/green/blue and brightness, but not further).
0[anonymous]
What makes qualia problematic - the only thing that makes it problematic - is that it's tied up with the notion of subjectivity. Subjective facts are not 'objective'. Any attempt to define qualia objectively, as something a scientist could detect by careful study of your behaviour and/or neurophysiology, will give you a property X such that Chalmers' hard question remains "and why does having property X feel like this from the inside?"

I think it's helpful to consider the analogy (perhaps it's more than an analogy) between subjectivity and indexicality. Obviously science is not going to explain why the universe views itself through my eyes, or why the year is 2011. It's only by 'borrowing' the existence of something called 'you', who is 'here', that indexical statements can have truth values. I think that similarly, you need to 'borrow' the fact that red looks like this in order for red to look like this. The statements that you make in between 'borrowing' subjectivity and 'paying it back' simply do not belong to science - they are not "objectively true or false".

Of course the question of who or what does the 'borrowing' is Deeply Mysterious - in fact it's something that even in principle we can have no knowledge of, because it's not something that happens within the universe. (Gee, this is getting dangerously theological. I guess I'm confused about something...)

(On this view, whatever kind of fact it is that 'rabbits have colour qualia', it cannot be a fact with an evolutionary explanation. It's not really a fact at all, except from the perspective of a rabbit. And there isn't even such a thing as 'the perspective of a rabbit' except from the perspective of a rabbit.)
0PhilGoetz
I agree; but I don't think it's relevant to the question.
  1. c. The abstraction wouldn't be a very good abstraction if it fails to be similar to real machines except in what it abstracts away.
  2. Tautologically, all equivalent models of computation are equivalent for this purpose.
  3. The portion of the machine which is responsible for that computation was in a state which is isomorphic to "2+2" and is now in a state which is isomorphic to "4".
  4. The phrase “computing red” is too vague/lacking context to interpret.
  5. Functions. Your report of how your algorithm feels from the inside is part of the output o
... (read more)

My $0.02, without reading other answers:

1. I'm not sure, but I lean towards (b).

Unpacking a bit: As it is used in the sentence "the sum of 1 and 1 to yield 2 is a computation", my intuition is that something like (a) is meant. That said, it seems likely that this intuition comes from me reasoning about a category of computations as a cognitive shortcut, and then sloppily reifying the category. Human brains do that a lot. So I'm inclined to discard that intuition and assert that statements about 1+1=2 are statements about an abstract category in... (read more)

Would an axiomatization of a Theory of Everything that can explain consciousness mention qualia?

LessWrong User:Mitchell_Porter has made some headway on this very interesting question. See his submissions How to think like a quantum monadologist and Consciousness.

[-][anonymous]00

My answers:

  1. Your terminology is confused and the question is ill-formed. There is a difference between mathematically abstract computation and implementation. Implementation usually requires energy to carry out, and (based on concerns around reversible computing) it will always take energy to communicate the output of an implemented computation to some other physical system.

  2. The Church-Turing Thesis is probably correct. Moreover, any one of these formalisms can emulate any other with a runtime hit of some constant plus a scalar multiplier. (A small illustration appears after this comment.)

  3. That a c

... (read more)
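To illustrate the emulation claim in answer 2 above, here is a minimal sketch: a single-tape Turing machine interpreter written in Python, run on an invented example machine (a unary incrementer). None of this comes from the original comment; it only shows one formalism straightforwardly emulating another, with each simulated step costing a fixed handful of dictionary operations in this direct encoding.

```python
# Minimal sketch of one formalism emulating another: a single-tape Turing machine
# interpreter in Python. The specific machine (a unary incrementer) is an invented example.

def run_tm(delta, start, accept, tape, blank="_", max_steps=10_000):
    """delta maps (state, symbol) -> (new_state, new_symbol, move), with move in {-1, +1}."""
    tape = dict(enumerate(tape))          # sparse tape: cell index -> symbol
    state, head = start, 0
    for _ in range(max_steps):
        if state == accept:
            break
        sym = tape.get(head, blank)
        state, tape[head], move = delta[(state, sym)]
        head += move
    cells = [tape[i] for i in sorted(tape)]
    return state, "".join(cells).strip(blank)

# Invented machine: move right over the 1s, write one more 1, then halt.
delta = {
    ("scan", "1"): ("scan", "1", +1),
    ("scan", "_"): ("done", "1", +1),
}

print(run_tm(delta, start="scan", accept="done", tape="111"))
# -> ('done', '1111'): the emulated machine computed "add one" in unary.
```

The same interpreter runs any machine given as a `delta` table, which is the sense in which a general-purpose language can emulate the Turing machine formalism wholesale; how the overhead scales between arbitrary pairs of formalisms is a separate question not settled by this sketch.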