Or the information on the tape, or - whatever it is that's conscious, it is a static thing, not a living, dynamic thing. If computation is an output, process doesn't matter. Time doesn't enter into it.
Time is right where it should be: in the tape. We, the Matrix-Lords, are above the time of the tape, but that doesn't mean the consciousness within the tape isn't living and dynamic and all that.
The question presented is very intriguing, thank you for it.
This question arises when I consider the moral status of intelligent agents. If I encounter a morally-significant dormant Turing machine with no input devices, do I need to turn it on?
If yes, notice that state N of the machine can be encoded as the initial state of the machine plus the number N. Would it suffice to just start incrementing a counter and say that the machine is running?
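To make the counter idea concrete, here is a minimal sketch (the step function and configuration encoding are illustrative assumptions, not a full Turing machine):

```python
# A toy deterministic machine: configuration N is a pure function of
# (initial configuration, N), so "running the machine" and "incrementing
# a counter from a fixed start" pick out the same sequence of states.

def step(config):
    head, tape = config
    tape = dict(tape)
    tape[head] = 1 - tape.get(head, 0)  # flip the cell under the head
    return (head + 1, tape)

def config_at(initial, n):
    c = initial
    for _ in range(n):
        c = step(c)
    return c

start = (0, {})
# The "running machine" after 3 steps is just the value indexed by N=3:
print(config_at(start, 3))   # (3, {0: 1, 1: 1, 2: 1})
```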
If I do not need to turn anything on, I might as well destroy the machine, because the Turing machine will still exist in a Platonic sense, and the Platonic machine won't noti...
ADDED: Many people are replying by saying, "Obviously, option 2 is correct," then listing arguments for it, without addressing the problems with option 2. That's cheating.
Phil, I have to say that I don't think the problems with option 2 are actually presented in your post. But that does not mean we are allowed to dodge the question implicit in your post: how to formally distinguish between two computational processes, one conscious, the other not. Let me start my attempt with a quote:
..."Consciousness is overrated. What we call consciou
What is consciousness, and what kinds of things are conscious?
I've seen this debated many times, and I suspect that people are merely arguing over the meaning of a word that does not happen to carve reality at the joints.
What is it in the physical universe that you call consciousness and which might be present or absent in a computational device? What prediction is being made by any of the theories 1 through 3 in this post, that I can go out and test?
I endorse the first alternative; the intuition at first felt wrong (in a Chinese Room sort of way), but that feeling disappeared when I realized the following:
I was envisioning a tape (call it Tape A) which only recorded some very small end result of Turing Machine A, like the numerical output of a calculation or the move Deep Blue makes. And that seems too "small" somehow to encapsulate consciousness - I felt that I needed the moving Turing machine to make it "live" in all its detail.
But of course, it's trivial to write a different Turi...
Your question is well-posed, but I doubt that it really attacks the problem of consciousness.
I don't understand what it could possibly mean for an output or an algorithm to be consciousness. Consciousness, whatever it might be caused by or composed of, means subjective awareness of qualia.
In this sense, "3" is not even slightly conscious. Neither is "output=0; if input=3, let output=1." Neither is "output=0; if input=3, let output=1, get input by scanning source code of self and reducing it to a number." The last example will ...
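For concreteness, the third toy program might be rendered like this in Python (a hedged sketch; `inspect.getsource` stands in for "scanning source code of self"):

```python
import inspect

def f():
    # "get input by scanning source code of self and reducing it to a number"
    src = inspect.getsource(f)
    n = int.from_bytes(src.encode(), "big")
    # "output=0; if input=3, let output=1"
    return 1 if n == 3 else 0

print(f())  # 0: the self-derived number is astronomically larger than 3
```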
In humans today, the output is not produced all at once - but from a computationalist perspective, that isn't important.
I don't think this is valid; the complexity you're discarding here may actually be essential. In particular, interacting with some sort of universe that provides inputs and responds to outputs may be a necessary condition for consciousness. Perhaps consciousness ought to be a two-place predicate. If you have a universe U1 containing a mind M, and you simulate the physics of U1 and save the final state on a tape that exists in universe U2, then conscious(M,U1) but ~conscious(M,U2). On the other hand, if U1 interacts with U2 while it's running, then conscious(M,U2).
Suppose we view consciousness as both a specific type of computation and a specific range of computable functions. For any N, there will always be a lookup table that appears conscious for a length of time N, in particular, "the lifetime of the conscious creature being simulated". A lookup table, as Eliezer once argued, is more like a cellphone than a person - it must have been copied off of some sort of real, conscious entity.
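To see why the table is "more like a cellphone than a person", here is a hedged sketch: the table below reproduces a process's behavior for every input history up to length N, yet all it does is replay answers copied from the real thing (all names are illustrative):

```python
from itertools import product

def real_process(history):
    """Stand-in for the conscious creature being simulated."""
    return sum(history) % 2  # any deterministic I/O behavior will do

N = 10  # "lifetime" in steps
table = {h: real_process(h)
         for n in range(N + 1)
         for h in product((0, 1), repeat=n)}

def lookup_table(history):
    return table[tuple(history)]  # no computation, just a copy

# Indistinguishable from the real process for a lifetime of N steps:
assert all(lookup_table(h) == real_process(h) for h in table)
```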
Is the feature of a lookup table that makes it unconscious its improbability without certain types of computation, that ...
A powerful computer in a sealed box is about to be fired away from the Earth at the speed of light; it will never produce output and we'll never see it again. From the point of view of perspective 1, the whole program is thus equivalent to a gigantic no-op. Nonetheless, I'd rather that the program running on it simulated conscious beings in Utopia than conscious beings in Hell. This I think forces me to perspective 2: that actually doing the calculations makes a moral difference.
EDIT: the "speed of light" thing was a mistake. Make that "close to the speed of light".
"Computed" means that only the input and output are important: as long as you can get from "2+2" to "4", it doesn't matter how you do it. "Computation" means that it's the algorithm you use that is important.
If a computer can give the response a human would give to a given situation, even though the computer uses an AI that operates on principles different from the human brain's (simulating a universe containing a human brain is sufficient), is that computer thinking/conscious? If yes, then thought/consciousness can be computed. If no, then thought/consciousness is the computation.
This is related to the Turing Test, in which a computer is deemed conscious if it can produce responses indistinguishable from those of a human, regardless of the algorithm used.
I think the best definition of consciousness I've come across is Hofstadter's, which is something like "when you are thinking, you can think about the fact that you're thinking, and incorporate that into your conclusions. You can dive down the rabbit hole of meta-thinking as many times as you like." Even there, though, it's hard to tell if it's a verb, a noun, or something else.
If we want to talk about it in computing terms, you can look at the stored-program architecture we use today. Software is data, but it's also data that can direct the hard...
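A toy illustration of that point, assuming nothing more than a list of instructions and an interpreter loop (a hypothetical mini-VM, not any real architecture):

```python
# The "software" is plain data (a list), yet it directs what the
# interpreter loop does next - the stored-program idea in miniature.

program = [("push", 2), ("push", 2), ("add", None), ("print", None)]

stack = []
for op, arg in program:          # data directing the machine
    if op == "push":
        stack.append(arg)
    elif op == "add":
        stack.append(stack.pop() + stack.pop())
    elif op == "print":
        print(stack[-1])         # prints 4
```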
I understand Computationalism as a direct consequence of digital physics and materialism: all of physics is computable and all that exists is equivalent to the execution of the universal algorithm of physics (even if an exact description of said algorithm is unknowable to us).
Thus strictly speaking everything that exists in the universe is computation, and everything is computable. Your option 1 seems to make a false distinction that something non-computational could exist - that consciousness could be something that is computable but is not itself comput...
(Warning: I expect that the following comment has at least one major error, since this topic is well outside my usual area of knowledge. Please read it as a request for edification, not as an attempt to push forward the envelope.)
Until we can detect or explain qualia in the wild, how can we make rational claims about their computability?
To make a simple analogy, suppose we have a machine which consists of a transparent box, a switch, and a speaker. Inside the box is a lightbulb and a light sensor. The switch controls the light, and the light sensor is hooked...
Whatever it is that's conscious, you can compute it and represent it in a static form. The simplest interpretation is that the output itself is conscious. So this leads to the conclusion that, if a Turing machine computes consciousness and summarizes its output in a static representation on a tape, the tape is conscious.
I don't see the contradiction here, although the sheer scale might be throwing off intuition. I'm perfectly willing to say that if you actually had such a tape, it would be conscious. Needless to say, the tape would be inconceivably huge if you really wanted to represent the astronomical number of inputs and outputs that make up something we would call "conscious."
Methinks the discussion suffers from lacking the distinction between consciousness-as-property: "subsystem C of system X is conscious-in-X" and consciousness-as-ontological-fact: "C exists in a conscious way". Consciousness (of "C" in "X") is option-1-computed in the sense that it is a Platonic entity (as a property of platonically considered "subsystem C of system X"). It is option-2-computation in the sense that all such entities "C" could be said to perform various computations in "X" (and it is the ensemble of computations that the property detects). To draw moral conclusions ("C exists in a conscious way"), one needs to take X=reality.
Sort of both. I think they reconcile more easily than you think.
Conscious entities have behavior, including internal behavior (thoughts). Behavior obviously doesn't exist in stasis, which seems to be the point that you don't like about 1.
Consciousness is not the algorithm that generates that behavior, but rather that algorithm (or any equivalent algorithm) in action. It requires inputs, so that behavior can't just be read as a series of outputs and determined to be "the consciousness"; rather, consciousness is a property that entities operati...
Option 4: there is no such thing as consciousness. It's all just an elaborate hoax our minds play on us. In reality we are 'just' state machines with complicated caches that give the appearance of a conscious mind acting. The illusion is good enough to fool us all, and works well enough for real life.
The more I learn of neuroscience, the more I get the impression that there is no real 'person' hidden in the body - just a shell that runs lots of software over data storage and seems to act somewhat consistently. So the closer you look, the less consciousness is left to see.
Please note that I still have trouble understanding how qualia work.
My problem with that sort of explanation is that I don't see how there can be illusion without a consciousness present to be mistaken.
Consciousness is a roughly defined (and leaky) abstraction.
So this leads to the conclusion that, if a Turing machine computes consciousness and summarizes its output in a static representation on a tape, the tape is conscious.
Without context, the content of the tape has no meaning. So the consciousness that has been output on the tape is only a consciousness in a context that can use it to generate the consciousness abstraction.
It is the set of "stuff" that produces the consciousness abstraction that can be called conscious. In a Turing ma...
If I were to attempt to characterize consciousness in computational terms, I would probably start with a diagram like that for the Mealy machine in this pdf. I would label the top box simply "computation" and the lower box "short term memory". I would speculate that consciousness has something to do with that feedback loop through short term memory. I might even go so far as to claim that the information flowing through short term memory constitutes the "stream of consciousness".
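As a hedged sketch of that picture (the transition and output functions here are arbitrary stand-ins):

```python
# A minimal Mealy-style loop: output depends on the input plus a
# fed-back "short term memory" state. The sequence of values flowing
# through `memory` is the proposed "stream of consciousness".

def mealy_run(inputs, transition, output, memory):
    stream = []
    for x in inputs:
        y = output(memory, x)           # the "computation" box
        memory = transition(memory, x)  # feedback through short term memory
        stream.append((y, memory))
    return stream

# Toy instantiation: memory holds the last input; output compares.
trace = mealy_run([1, 1, 0, 1],
                  transition=lambda m, x: x,
                  output=lambda m, x: int(m == x),
                  memory=None)
print(trace)  # [(0, 1), (1, 1), (0, 0), (0, 1)]
```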
If this approach is taken, there are some consequen...
This post comprises one question and no answers. You have been warned.
I was reading "How minds can be computational systems", by William Rapaport, and something caught my attention. He wrote,
Rapaport was talking about cognition, not consciousness. The contention between these hypotheses is, however, only interesting if you are talking about consciousness; if you're talking about "cognition", it's just a choice between two different ways to define cognition.
When it comes to consciousness, I consider myself a computationalist. But I hadn't realized before that my explanation of consciousness as computational "works" by jumping back and forth between those two incompatible positions. Each one provides part of what I need; but each, on its own, seems impossible to me; and they are probably mutually exclusive.
Option 1: Consciousness is computed
If consciousness is computed, then there are no necessary dynamics. All that matters is getting the right output. It doesn't matter what algorithm you use to get that output, or what physical machinery you use to compute it. In the real world, it matters how fast you compute it; but surely you can provide a simulated world at the right speed for your slow or fast algorithm. In humans today, the output is not produced all at once - but from a computationalist perspective, that isn't important. I know "emergence" is wonderful, but it's still Turing-computable. Whatever a "correct" sequence of inputs and outputs is, even if they overlap in time, you can summarize the inputs over time in a single static representation, and the outputs in a static representation.
So what is conscious, in this view? Well, the algorithm doesn't matter - remember, we're not asking for O(consciousness); we're saying that consciousness is computed, and therefore is the output of a computation. The machine doing the computing is one step further removed than the algorithm, so it's certainly not eligible as the seat of consciousness; it can be replaced by an infinite number of computationally-equivalent different substrates.
Whatever it is that's conscious, you can compute it and represent it in a static form. The simplest interpretation is that the output itself is conscious. So this leads to the conclusion that, if a Turing machine computes consciousness and summarizes its output in a static representation on a tape, the tape is conscious. Or the information on the tape, or - whatever it is that's conscious, it is a static thing, not a living, dynamic thing. If computation is an output, process doesn't matter. Time doesn't enter into it.
The only way out of this is to claim that an output that, when coming out of a dynamic real-time system, is conscious, becomes unconscious when it's converted into a static representation, even if the two representations contain exactly the same information. (X and Y have the same information if an observer can translate X into Y, and Y into X. The requirement for an observer may be problematic here.) This strikes me as not being computationalist at all. Computationalism means considering two computational outputs equivalent if they contain the same information, whether they're computed with neurons and represented as membrane potentials, or computed with Tinkertoys and represented by rotations of a set of wheels. Is the syntactic transformation from a dynamic to a static representation a greater qualitative change than the transformation from tinkertoys to neurons? I don't think so.
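To pin down the "same information" criterion used above, here is a minimal sketch in which the observer is just a pair of inverse translation functions (all details illustrative):

```python
# X is a list of time-stamped events produced by a running process;
# Y is a static string. They carry the same information because an
# observer can translate each into the other without loss.

def to_static(events):               # X -> Y
    return ";".join(f"{t}:{e}" for t, e in events)

def to_dynamic(tape):                # Y -> X
    return [(int(t), e) for t, e in
            (item.split(":", 1) for item in tape.split(";"))]

x = [(0, "input arrives"), (1, "state updates"), (2, "output emitted")]
y = to_static(x)
assert to_dynamic(y) == x            # lossless both ways
```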
Option 2: Consciousness is computation
If consciousness is computation, then we have the satisfying feeling that how we do those computations matters. But then we're not computationalists anymore!
A computational analysis will never say that one algorithm for producing a series of outputs produces an extra computational effect (consciousness) that another method does not. If it's not output, or internal representational state, it doesn't count. There are no other "by-products of computation". If you use a context-sensitive grammar to match a regular expression, it doesn't make the answer more special than if you used a regular grammar.
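A miniature version of the regular-expression point, using Python's re module against a hand-rolled automaton (the language a*b is an arbitrary choice):

```python
import re

# Two very different procedures deciding the same language. A
# computational analysis sees only the identical input-output behavior;
# neither run produces an extra "by-product" the other lacks.

def match_with_library(s):
    return bool(re.fullmatch(r"a*b", s))

def match_with_dfa(s):
    state = 0                          # hand-rolled automaton
    for ch in s:
        if state == 0 and ch == "a":
            state = 0
        elif state == 0 and ch == "b":
            state = 1
        else:
            return False
    return state == 1

for s in ["b", "ab", "aaab", "", "ba", "abb"]:
    assert match_with_library(s) == match_with_dfa(s)
```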
Don't protest that a human talks and walks and thereby produces side-effects during the computation. That is not a computational analysis. A computational analysis will give the same result if you translate whatever the algorithm and machine running it is, onto tape in a Turing machine. Anything that gives a different result is not a computational analysis. If these side-effects don't show up on the tape, it's because you forgot to represent them.
An analysis of the actual computation process, as opposed to its output, could be a thermodynamic analysis, which would care about things like how many bits the algorithm erased internally. I find it hard to believe that consciousness is a particular pattern of entropy production or waste heat. Or it could be a complexity or runtime analysis, that cared about how long it took. A complexity analysis has a categorical output; there's no such thing as a function being "a little bit recursively enumerable", as I believe there is with consciousness. So I'd be surprised if "conscious" is a property of an algorithm in the same way that "recursively enumerable" is. A runtime analysis can give more quantitative answers, but I'm pretty sure you can't become conscious by increasing your runtime. (Otherwise, Windows Vista would be conscious.)
Option 3: Consciousness is the result of quantum effects in microtubules
Just kidding. Option 3 is left as an exercise for the reader, because I'm stuck. I think a promising angle to pursue would be the necessity of an external observer to interpret the "conscious tape". Perhaps a conscious computational device is one that observes itself and provides its own semantics. I don't understand how any process can do that; but a static representation clearly can't.
ADDED
Many people are replying by saying, "Obviously, option 2 is correct," then listing arguments for it, without addressing the problems with option 2. That's cheating.