This post comprises one question and no answers. You have been warned.
I was reading "How minds can be computational systems", by William Rapaport, and something caught my attention. He wrote,
Computationalism is - or ought to be - the thesis that cognition is computable ... Note, first, that I have said that computationalism is the thesis that cognition is computable, not that it is computation (as Pylyshyn 1985 p. xiii characterizes it). ... To say that cognition is computable is to say that there is an algorithm - more likely, a collection of interrelated algorithms - that computes it. So, what does it mean to say that something 'computes cognition'? ... cognition is computable if and only if there is an algorithm ... that computes this function (or functions).
Rapaport was talking about cognition, not consciousness. The contention between these hypotheses is, however, only interesting if you are talking about consciousness; if you're talking about "cognition", it's just a choice between two different ways to define cognition.
When it comes to consciousness, I consider myself a computationalist. But I hadn't realized before that my explanation of consciousness as computational "works" by jumping back and forth between those two incompatible positions. Each one provides part of what I need; but each, on its own, seems impossible to me; and they are probably mutually exclusive.
Option 1: Consciousness is computed
If consciousness is computed, then there are no necessary dynamics. All that matters is getting the right output. It doesn't matter what algorithm you use to get that output, or what physical machinery you use to compute it. In the real world, it matters how fast you compute it; but surely you can provide a simulated world at the right speed for your slow or fast algorithm. In humans today, the output is not produced all at once - but from a computationalist perspective, that isn't important. I know "emergence" is wonderful, but it's still Turing-computable. Whatever a "correct" sequence of inputs and outputs is, even if they overlap in time, you can summarize the inputs over time in a single static representation, and likewise the outputs.
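To make that last claim concrete, here's a minimal sketch (Python, with a made-up trace format): a temporally extended, overlapping input/output history serialized into one static string and recovered losslessly.

```python
import json

# A hypothetical interaction trace: timestamped inputs and outputs,
# possibly overlapping in time.
trace = [
    {"t": 0.0, "dir": "in",  "data": "light flashes"},
    {"t": 0.1, "dir": "out", "data": "pupil contracts"},
    {"t": 0.1, "dir": "in",  "data": "sound starts"},
    {"t": 0.3, "dir": "out", "data": "head turns"},
]

# The entire dynamic history, summarized as one static representation.
static_form = json.dumps(trace)

# Nothing is lost: the dynamic sequence is recoverable from the static string.
assert json.loads(static_form) == trace
```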
So what is conscious, in this view? Well, the algorithm doesn't matter - remember, we're not asking for O(consciousness); we're saying that consciousness is computed, and therefore is the output of a computation. The machine doing the computing is one step further removed than the algorithm, so it's certainly not eligible as the seat of consciousness; it can be replaced by any of an infinite number of computationally equivalent substrates.
Whatever it is that's conscious, you can compute it and represent it in a static form. The simplest interpretation is that the output itself is conscious. So this leads to the conclusion that, if a Turing machine computes consciousness and summarizes its output in a static representation on a tape, the tape is conscious. Or the information on the tape, or - whatever it is that's conscious, it is a static thing, not a living, dynamic thing. If consciousness is an output, the process doesn't matter. Time doesn't enter into it.
The only way out of this is to claim that an output which is conscious when it comes out of a dynamic real-time system becomes unconscious when it's converted into a static representation, even if the two representations contain exactly the same information. (X and Y have the same information if an observer can translate X into Y, and Y into X. The requirement for an observer may be problematic here.) This strikes me as not being computationalist at all. Computationalism means considering two computational outputs equivalent if they contain the same information, whether they're computed with neurons and represented as membrane potentials, or computed with Tinkertoys and represented by rotations of a set of wheels. Is the syntactic transformation from a dynamic to a static representation a greater qualitative change than the transformation from Tinkertoys to neurons? I don't think so.
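As a toy illustration of "same information" (the encodings here are invented), the same bits can be rendered as membrane potentials or as wheel rotations, with each representation recoverable from the other:

```python
# Two invented encodings of the same bits: voltages above/below a threshold,
# and wheel rotations of 0 or 90 degrees.
bits = [1, 0, 1, 1, 0]

def to_voltages(bs):             # "neurons": -70 mV resting, -55 mV firing
    return [-55.0 if b else -70.0 for b in bs]

def to_rotations(bs):            # "Tinkertoys": wheel angle in degrees
    return [90 if b else 0 for b in bs]

def voltages_to_bits(vs):
    return [1 if v > -60.0 else 0 for v in vs]

def rotations_to_bits(rs):
    return [1 if r == 90 else 0 for r in rs]

# An observer with the translation functions can convert either representation
# into the other without loss - so, by the definition above, they carry
# the same information.
assert voltages_to_bits(to_voltages(bits)) == rotations_to_bits(to_rotations(bits)) == bits
```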
Option 2: Consciousness is computation
If consciousness is computation, then we have the satisfying feeling that how we do those computations matters. But then we're not computationalists anymore!
A computational analysis will never say that one algorithm for producing a series of outputs produces an extra computational effect (consciousness) that another method does not. If it's not output, or internal representational state, it doesn't count. There are no other "by-products of computation". If you use a context-sensitive grammar to recognize a regular language, that doesn't make the answer any more special than if you had used a regular grammar.
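To make the grammar example concrete, here is a toy sketch: the regular language a*b recognized by a two-state DFA and by a structurally different recursive matcher. The answers are bit-for-bit identical; neither procedure leaves any extra computational residue behind:

```python
def dfa_match(s):
    # Minimal DFA for a*b: state 0 consumes a's, a single b accepts.
    state = 0
    for ch in s:
        if state == 0 and ch == "a":
            state = 0
        elif state == 0 and ch == "b":
            state = 1
        else:
            return False
    return state == 1

def recursive_match(s):
    # Same language, recognized by a structurally different (recursive) procedure.
    if s == "b":
        return True
    return s[:1] == "a" and recursive_match(s[1:])

for s in ["b", "ab", "aaab", "", "ba", "abb"]:
    assert dfa_match(s) == recursive_match(s)   # no observable by-product
```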
Don't protest that a human talks and walks and thereby produces side-effects during the computation. That is not a computational analysis. A computational analysis will give the same result if you translate the algorithm, and the machine running it, onto the tape of a Turing machine. Anything that gives a different result is not a computational analysis. If these side-effects don't show up on the tape, it's because you forgot to represent them.
An analysis of the actual computation process, as opposed to its output, could be a thermodynamic analysis, which would care about things like how many bits the algorithm erased internally. I find it hard to believe that consciousness is a particular pattern of entropy production or waste heat. Or it could be a complexity or runtime analysis, which would care about how long it took. A complexity analysis has a categorical output; there's no such thing as a function being "a little bit recursively enumerable", whereas I believe consciousness does come in degrees. So I'd be surprised if "conscious" is a property of an algorithm in the same way that "recursively enumerable" is. A runtime analysis can give more quantitative answers, but I'm pretty sure you can't become conscious by increasing your runtime. (Otherwise, Windows Vista would be conscious.)
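For the runtime point, a standard toy example (nothing specific to consciousness): an exponential-time and a linear-time Fibonacci compute exactly the same function; only the quantitative resource consumption differs, and there is no runtime threshold at which a new computational property appears:

```python
import time

def fib_slow(n):                 # exponential time
    return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

def fib_fast(n):                 # linear time
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

n = 30
t0 = time.perf_counter(); slow = fib_slow(n); t1 = time.perf_counter()
fast = fib_fast(n);              t2 = time.perf_counter()

assert slow == fast              # same function computed
print(f"slow: {t1 - t0:.3f}s, fast: {t2 - t1:.6f}s")  # wildly different runtimes
```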
Option 3: Consciousness is the result of quantum effects in microtubules
Just kidding. Option 3 is left as an exercise for the reader, because I'm stuck. I think a promising angle to pursue would be the necessity of an external observer to interpret the "conscious tape". Perhaps a conscious computational device is one that observes itself and provides its own semantics. I don't understand how any process can do that; but a static representation clearly can't.
ADDED
Many people are replying by saying, "Obviously, option 2 is correct," then listing arguments for it, without addressing the problems with option 2. That's cheating.
I understand Computationalism as a direct consequence of digital physics and materialism: all of physics is computable and all that exists is equivalent to the execution of the universal algorithm of physics (even if an exact description of said algorithm is unknowable to us).
Thus strictly speaking everything that exists in the universe is computation, and everything is computable. Your option 1 seems to make a false distinction that something non-computational could exist - that consciousness could be something that is computable but is not itself computation. This is impossible - everything that is computable and exists is necessarily some form of computation. Computation is the underlying essence of reality - another word for physics.
But the word 'consciousness', to the extent it has useful meaning, implies a particular dynamic process of computation. Consciousness is the state of being conscious, the state of being conscious of things, a state of active cognition. Thus any static system cannot be conscious. A mind frozen in time would necessarily be unconscious.
This doesn't seem quite correct. There are necessary dynamics - out of the space of all dynamics (all potential computational or physical processes), some set of them are 'conscious'. There is no single correct output. There is a near-infinite set of correct outputs, defined by what the correct dynamics compute from the inputs.
Consciousness is not the output any more than a car is its exhaust or Microsoft Windows is a Word document.
You can use the input->output mappings to understand and define the black-box process within, because the black-box process is physical and so it is governed by some algorithm. But the algorithm is not just its output.
No, not quite - computationalism via functionalism means considering two processes functionally equivalent if they produce the same outputs for the same inputs. It's not just about outputs.
A key idea in functionalism is that one physical system can realize many different functional algorithms simultaneously. A computer is a computational system running physics at the most basic level, but it can also have other programs running at an entirely different functional encoding level - like letters that compose words, which in turn compose sentences or entire books.
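A toy way to picture this (the encodings are invented, nothing canonical): one "physical" trajectory, read under two different encodings, realizes two distinct computations:

```python
# One "physical" process: a counter stepping from 0 to 7.
physical_trajectory = list(range(8))

# Two functional readings of the same states, under different encodings.
parity_readout    = [s % 2 for s in physical_trajectory]        # realizes a parity computation
threshold_readout = [int(s >= 4) for s in physical_trajectory]  # realizes a threshold detector

# Same physical states, two distinct computations, depending on the
# encoding an observer uses to interpret them.
print(parity_readout)     # [0, 1, 0, 1, 0, 1, 0, 1]
print(threshold_readout)  # [0, 0, 0, 0, 1, 1, 1, 1]
```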
Err, yes and no. Consciousness is always strictly defined in relation to an environment, so speed is always important. But beyond that, how you do the computations matters not at all. There are an infinite number of equivalent algorithms and computations in theory, but the set of realizable equivalent algorithms, and of computational processes that enact them, is finite in reality because of physics.
Another way of looking at it:
There are many possible patterns of matter/energy that are all automobiles, or dinosaurs, or brains.
There are many possible patterns of matter/energy that are all conscious - defined as patterns of matter/energy that enact a set of intelligence algorithms we label "conscious". The label is necessarily functional.
I think you're basically saying "Option 2".