This post comprises one question and no answers. You have been warned.
I was reading "How minds can be computational systems", by William Rapaport, and something caught my attention. He wrote,
Computationalism is - or ought to be - the thesis that cognition is computable ... Note, first, that I have said that computationalism is the thesis that cognition is computable, not that it is computation (as Pylyshyn 1985 p. xiii characterizes it). ... To say that cognition is computable is to say that there is an algorithm - more likely, a collection of interrelated algorithms - that computes it. So, what does it mean to say that something 'computes cognition'? ... cognition is computable if and only if there is an algorithm ... that computes this function (or functions).
Rapaport was talking about cognition, not consciousness. The contention between these hypotheses is, however, only interesting if you are talking about consciousness; if you're talking about "cognition", it's just a choice between two different ways to define cognition.
When it comes to consciousness, I consider myself a computationalist. But I hadn't realized before that my explanation of consciousness as computational "works" by jumping back and forth between those two incompatible positions. Each one provides part of what I need; but each, on its own, seems impossible to me; and they are probably mutually exclusive.
Option 1: Consciousness is computed
If consciousness is computed, then there are no necessary dynamics. All that matters is getting the right output. It doesn't matter what algorithm you use to get that output, or what physical machinery you use to compute it. In the real world, it matters how fast you compute it; but surely you can provide a simulated world at the right speed for your slow or fast algorithm. In humans today, the output is not produced all at once - but from a computationalist perspective, that isn't important. I know "emergence" is wonderful, but it's still Turing-computable. Whatever a "correct" sequence of inputs and outputs is, even if they overlap in time, you can summarize the inputs over time in a single static representation, and the outputs in a static representation.
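As a toy illustration of that last claim (nothing here is specific to consciousness; the agent and its numbers are invented for the example), here is a "dynamic" process whose entire interleaved input/output history can be frozen into a single static record:

```python
# A toy "dynamic" agent: it consumes inputs one at a time and interleaves
# its outputs with them over (simulated) time.
def dynamic_agent(inputs):
    state = 0
    outputs = []
    for x in inputs:
        state += x              # internal dynamics unfold step by step
        outputs.append(state)   # an output is emitted at each step
    return outputs

# The same input/output history, frozen into a single static record:
# an immutable tuple of (input, output) pairs, with no process left in it.
inputs = [3, 1, 4, 1, 5]
transcript = tuple(zip(inputs, dynamic_agent(inputs)))
print(transcript)   # ((3, 3), (1, 4), (4, 8), (1, 9), (5, 14))
```

The transcript contains everything the running process ever took in or put out, but it just sits there.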
So what is conscious, in this view? Well, the algorithm doesn't matter - remember, we're not asking for O(consciousness); we're saying that consciousness is computed, and therefore is the output of a computation. The machine doing the computing is one step further removed than the algorithm, so it's certainly not eligible as the seat of consciousness; it can be replaced by an infinite number of computationally-equivalent different substrates.
Whatever it is that's conscious, you can compute it and represent it in a static form. The simplest interpretation is that the output itself is conscious. So this leads to the conclusion that, if a Turing machine computes consciousness and summarizes its output in a static representation on a tape, the tape is conscious. Or the information on the tape, or - whatever it is that's conscious, it is a static thing, not a living, dynamic thing. If consciousness is an output, the process doesn't matter. Time doesn't enter into it.
The only way out of this is to claim that an output which is conscious when it comes out of a dynamic, real-time system becomes unconscious when it's converted into a static representation, even if the two representations contain exactly the same information. (X and Y have the same information if an observer can translate X into Y, and Y into X. The requirement for an observer may be problematic here.) This strikes me as not being computationalist at all. Computationalism means considering two computational outputs equivalent if they contain the same information, whether they're computed with neurons and represented as membrane potentials, or computed with Tinkertoys and represented by rotations of a set of wheels. Is the syntactic transformation from a dynamic to a static representation a greater qualitative change than the transformation from Tinkertoys to neurons? I don't think so.
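To make the parenthetical notion of "same information" concrete, here is a toy sketch. The encodings ("membrane potentials" in mV, "wheel rotations" in degrees) are invented for the example and are not real neuroscience; the point is only that an observer who knows both encodings can translate either one into the other without loss.

```python
# The same bit string encoded in two made-up "substrates".
def bits_to_potentials(bits):
    return [-55.0 if b else -70.0 for b in bits]   # 1 -> spiking, 0 -> resting

def bits_to_rotations(bits):
    return [90 if b else 0 for b in bits]          # 1 -> quarter turn, 0 -> none

def potentials_to_bits(volts):
    return [1 if v == -55.0 else 0 for v in volts]

def rotations_to_bits(angles):
    return [1 if a == 90 else 0 for a in angles]

bits = [1, 0, 1, 1, 0]
potentials = bits_to_potentials(bits)
rotations = bits_to_rotations(bits)

# Either representation can be recovered from the other, so by the
# definition above they carry the same information.
assert potentials_to_bits(potentials) == rotations_to_bits(rotations) == bits
```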
Option 2: Consciousness is computation
If consciousness is computation, then we have the satisfying feeling that how we do those computations matters. But then we're not computationalists anymore!
A computational analysis will never say that one algorithm for producing a series of outputs produces an extra computational effect (consciousness) that another method does not. If it's not output, or internal representational state, it doesn't count. There are no other "by-products of computation". If you use a context-sensitive grammar to match a regular expression, it doesn't make the answer more special than if you used a regular grammar.
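A trivial concrete version of that point: Python's re engine is a backtracking matcher, strictly more powerful than a finite automaton (though not a context-sensitive grammar); used on a regular language it gives exactly the same answers as a plain state-machine scan, and the extra machinery leaves no trace in the output. The language (ab)* and the code are chosen only for brevity.

```python
import re

# Two recognizers for the regular language (ab)*.
def fsm_match(s):
    state = 0                       # 0: expecting 'a', 1: expecting 'b'
    for ch in s:
        if state == 0 and ch == 'a':
            state = 1
        elif state == 1 and ch == 'b':
            state = 0
        else:
            return False
    return state == 0               # accept only after complete 'ab' pairs

def regex_match(s):
    return re.fullmatch(r'(ab)*', s) is not None

for s in ['', 'ab', 'abab', 'aba', 'ba', 'abb']:
    assert fsm_match(s) == regex_match(s)   # identical answers, no extra "by-product"
```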
Don't protest that a human talks and walks and thereby produces side-effects during the computation. That is not a computational analysis. A computational analysis gives the same result if you translate the algorithm, and whatever machine is running it, onto the tape of a Turing machine. Anything that gives a different result is not a computational analysis. If these side-effects don't show up on the tape, it's because you forgot to represent them.
An analysis of the actual computation process, as opposed to its output, could be a thermodynamic analysis, which would care about things like how many bits the algorithm erased internally. I find it hard to believe that consciousness is a particular pattern of entropy production or waste heat. Or it could be a complexity or runtime analysis, which cares about how long the computation takes. A complexity analysis has a categorical output; there's no such thing as a function being "a little bit recursively enumerable", whereas I believe consciousness does come in degrees. So I'd be surprised if "conscious" is a property of an algorithm in the same way that "recursively enumerable" is. A runtime analysis can give more quantitative answers, but I'm pretty sure you can't become conscious by increasing your runtime. (Otherwise, Windows Vista would be conscious.)
Option 3: Consciousness is the result of quantum effects in microtubules
Just kidding. Option 3 is left as an exercise for the reader, because I'm stuck. I think a promising angle to pursue would be the necessity of an external observer to interpret the "conscious tape". Perhaps a conscious computational device is one that observes itself and provides its own semantics. I don't understand how any process can do that; but a static representation clearly can't.
ADDED
Many people are replying by saying, "Obviously, option 2 is correct," then listing arguments for it, without addressing the problems with option 2. That's cheating.
Phil, I have to say that I don't think the problems with option 2 are actually presented in your post. But that does not mean that we are allowed to dodge the question implicit in your post: how to formally distinguish between two computational processes, one conscious, the other not. Let me start my attempt with a quote:
I believe with Minsky that consciousness is a very anthropocentric concept, inheriting much of the complexity of its originators. I actually have no problem with an anthropocentric approach to consciousness, so I like the following intuitive "definition": X is conscious if it is not silly to ask "what is it like to be X?". The subtle source of anthropocentrism here, of course, is that it is humans who do the asking. As materialists, we just can't formalize this intuitive definition without mapping specific human brain functions to processes of X. In short, we inherently need human neuroscience. So it is not too surprising that we will not find a nice, clean decision procedure to distinguish between two computational processes, one conscious, the other not.
Most probably you are not happy with this anthropocentric approach. Then you will have to distill some clean, mathematically tractable concept from the messy concept of consciousness. If you agree with Hofstadter and Minsky, then you will probably reach something related to self-reflection. This may or may not work, but I believe that you will lose the spirit of the original concept during such a formalization. Your decision procedure will probably give unexpected results for many things: various simple, very unintelligent computer programs, hive minds, and maybe even rooms full of people.