This post is a follow-up to "We are not living in a simulation" and is intended to help me (and you) better understand the claims of those who took a computationalist position in that thread. The questions below are aimed at you if you think the following statement both a) makes sense, and b) is true:
"Consciousness is really just computation"
I've made it no secret that I think this statement is hogwash, but I've done my best to make these questions as non-leading as possible: you should be able to answer them without having to dismantle them first. Of course, I could be wrong, and "the question is confused" is always a valid answer. So is "I don't know".
- As it is used in the sentence "consciousness is really just computation", is computation:
a) Something that an abstract machine does, as in "No oracle Turing machine can compute a decision to its own halting problem"?
b) Something that a concrete machine does, as in "My calculator computed 2+2"?
c) Or, is this distinction nonsensical or irrelevant?
- If you answered "a" or "c" to question 1: is there any particular model, or particular class of models, of computation (Turing machines, register machines, the lambda calculus, etc.) that needs to be used in order to explain what makes us conscious? Or is any Turing-equivalent model equally valid?
- If you answered "b" or "c" to question 1: unpack what "the machine computed 2+2" means. What is that saying about the physical state of the machine before, during, and after the computation?
- Are you able to make any sense of the concept of "computing red"? If so, what does this mean?
- As far as consciousness goes, what matters in a computation: functions, or algorithms? That is, do any two computations that give the same outputs for the same inputs feel the same from the inside (this is the "functions" answer), or do the intermediate steps matter (this is the "algorithms" answer)?
- Would an axiomatization (as opposed to a complete exposition of the implications of these axioms) of a Theory of Everything that can explain consciousness include definitions of any computational devices, such as "and gate"?
- Would an axiomatization of a Theory of Everything that can explain consciousness mention qualia?
- Are all computations in some sense conscious, or only certain kinds?
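The "functions versus algorithms" question above can be made concrete with a small sketch (the function names and the example are mine, purely illustrative, not anyone's actual argument): two programs that compute the same mathematical function while passing through different intermediate states. The "functions" answer implies there could be no conscious difference between them; the "algorithms" answer leaves open that the intermediate steps matter.

```python
# Illustrative sketch: two computations of the same function
# (identical outputs for identical inputs) with different
# intermediate steps.

def sum_iterative(n: int) -> int:
    """Sum 1..n by stepping through every intermediate partial sum."""
    total = 0
    for i in range(1, n + 1):
        total += i  # intermediate states: 1, 3, 6, 10, ...
    return total

def sum_closed_form(n: int) -> int:
    """Sum 1..n in a single arithmetic step, with no partial sums."""
    return n * (n + 1) // 2

# Extensionally, these are the same function:
assert all(sum_iterative(n) == sum_closed_form(n) for n in range(100))
```

On the "functions" view, these two programs are one and the same computation; on the "algorithms" view, they are different computations that happen to agree on all inputs.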
ETA: By the way, I probably won't engage right away with individual commenters on this thread except to answer requests for clarification. In a few days I'll write another post analyzing the points that are brought up.
This morning I followed another discussion on Facebook, between David Pearce and someone else, about the same topic, in which Pearce brought up Stephen Hawking's well-known question about what "breathes fire into the equations".
What David Pearce and others seem to be saying is that physics doesn't disclose the nature of the "fire" in the equations. For this and other reasons, I am increasingly getting the impression that the disagreement all comes down to whether the mathematical universe hypothesis is correct, i.e. whether Platonism is correct.
None of them seem to doubt that we will eventually be able to "artificially" create intelligent agents. They don't even doubt that we will be able to use different substrates. The basic disagreement seems to be that, as Constant notes in another comment, a representation is distinct from a reproduction.
People like David Pearce or Massimo Pigliucci seem to be arguing that we fail to accept the crucial distinction between software and hardware.
For us, the only difference between a physical object (or mechanical device) and software is that the latter is a symbolic (formal-language) representation of the former. Software is just the static description of the dynamic state sequence exhibited by an object. One can then take that software (algorithm) and some sort of computational hardware and evoke the same dynamic state sequence, so that the machine (computer) mimics the relevant characteristics of the original object.
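This "static description of a dynamic state sequence" idea can be sketched in a few lines (a toy example of my own, not anyone's actual argument): the code below is a symbolic description of a cooling cup of coffee's state sequence; running it on hardware evokes that sequence step by step without reproducing any thermal physics.

```python
# Toy sketch (illustrative only): software as a static description of a
# dynamic state sequence. Newton's law of cooling, discretized with a
# simple Euler step.

def cooling_sequence(temp0, ambient, k, dt, steps):
    """Yield successive temperatures of a cooling object."""
    temp = temp0
    for _ in range(steps):
        yield temp
        temp += -k * (temp - ambient) * dt  # Euler step

# Running this reproduces the object's *state sequence*, but nothing in
# the machine is hot: a representation, not a reproduction.
states = list(cooling_sequence(temp0=90.0, ambient=20.0, k=0.1, dt=1.0, steps=5))
```

The substrate-dependence question is then whether, for consciousness specifically, evoking the state sequence this way is enough, or whether something about the original physical substrate is indispensable.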
Massimo Pigliucci and others actually agree that there is a difference between a physical thing and its mathematical representation, but they deny that you can capture its most important characteristic unless you reproduce the physical substrate.
The position held by those who disagree with the Less Wrong consensus on this topic is probably best represented by the painting La trahison des images. It is a painting of a pipe: it represents a pipe, but it is not a pipe; it is an image of a pipe.
Why would people concerned with artificial intelligence care about all this? That depends on the importance and nature of consciousness, and on the extent to which general intelligence depends on the brain as a biological substrate and its properties (e.g. the chemical properties of carbon versus silicon).
(Note that I am just trying to account for the different positions here and not argue in favor of substrate-dependence.)