I’ve updated quite hard against computational functionalism (CF) recently (as an explanation for phenomenal consciousness), from ~80% to ~30%. Of course it’s more complicated than that, since there are different ways to interpret CF and having credences on theories of consciousness can be hella slippery.

So far in this sequence, I’ve scrutinised a couple of concrete claims that computational functionalists might make, which I called theoretical and practical CF. In this post, I want to address CF more generally.

Like most rationalists I know, I used to basically assume some kind of CF when thinking about phenomenal consciousness. I found a lot of the arguments against functionalism, like Searle’s Chinese room, unconvincing. They just further entrenched my functionalism. But as I came across and tried to explain away more and more criticisms of CF, I started to wonder: why did I start believing in it in the first place? So I decided to trace back my intellectual steps by properly scrutinising the arguments for CF.

In this post, I’ll first give a summary of the problems I have with CF, then summarise the main arguments for CF and why I find them sus, and finally briefly discuss what my view means for AI consciousness and mind uploading.

My assumptions

  • I assume realism about phenomenal consciousness: Given some physical process, there is an objective fact of the matter whether or not that process is having a phenomenal experience, and what that phenomenal experience is. I am in camp #2 of Rafael’s two camps.
  • I assume a materialist position: that there exists a correct theory of phenomenal consciousness that specifies a map between the third-person properties of a physical system and whether or not it has phenomenal consciousness (and if so, the nature of that phenomenal experience).
  • I assume that phenomenal consciousness is a sub-component of the mind.

Defining computational functionalism

Here are some definitions of CF I found:

  • Computational functionalism: the mind is the software of the brain. (Piccinini 2010)
  • [Putnam] proposes that mental activity implements a probabilistic automaton and that particular mental states are machine states of the automaton’s central processor. (SEP)
  • Computational functionalism is the view that mental states and events – pains, beliefs, desires, thoughts and so forth – are computational states of the brain, and so are defined in terms of “computational parameters plus relations to biologically characterized inputs and outputs” (Shagrir 2005)

Here’s my version:

Computational functionalism: the activity of the mind is the execution of a program.

I’m most interested in CF as an explanation for phenomenal consciousness. Insofar as phenomenal consciousness is a real thing, and can be considered an element of the mind,[1] CF then says about phenomenal consciousness:

Computational functionalism (applied to phenomenal consciousness): phenomenal consciousness is the execution of a program.

What I want from a theory of phenomenal consciousness is for it to tell me what third-person properties to look for in a system to decide whether phenomenal consciousness is present and, if so, what that experience is.

Computational functionalism (as a classifier for phenomenal consciousness): The right program running on some system is necessary and sufficient for the presence of phenomenal consciousness in that system.[2] If phenomenal consciousness is present, all aspects of the corresponding experience are specified by the program.

The second sentence might be contentious, but this follows from the last definition. If this sentence isn’t true, then you can’t say that conscious experience is that program, because the experience has properties that the program does not. If the program does not fully specify the experience, then the best we can say is that the program is but one component of the experience, a weaker statement.
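To make the classifier explicit, here is one way to write it down (the notation is my own shorthand, not anything standard). Let $S$ be a physical system, $P^{*}$ the right program, and $E(S)$ the phenomenal experience $S$ is having:

$$\text{Conscious}(S) \iff S \text{ executes } P^{*}, \qquad \text{Conscious}(S) \Rightarrow E(S) = E(P^{*}).$$

The second clause is the “fully specified” condition: the experience is a function of the program alone. If it fails, the program underdetermines the experience.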

When I use the phrase computational functionalism (or CF) below, I’m referring to the “classifier for phenomenal consciousness” version I’ve defined above.

Arguments against computational functionalism so far

Previously in the sequence, I defined and argued against two things computational functionalists tend to say:

  1. Theoretical CF: A simulation of a human brain on a computer, with physics perfectly simulated down to the atomic level, would cause the same conscious experience as that brain.
  2. Practical CF: A simulation of a human brain on a classical computer, capturing the dynamics of the brain on some coarse-grained level of abstraction, that can run on a computer small and light enough to fit on the surface of Earth, with the simulation running at the same speed as base reality[3], would cause the same conscious experience as that brain.

I argued against practical CF here and theoretical CF here. These two claims are two potential ways to cash out the CF classifier I defined above. Practical CF says that a particular conscious experience in a human brain is identical to the execution of a program that is simple enough to run on a classical computer on Earth, which requires it to be implemented on a level of abstraction of the brain higher than biophysics. Theoretical CF says the program that creates the experience in the brain is (at least a submodule of) the “program” that governs all physical degrees of freedom in the brain.[4]

These two claims both have the strong subclaim that the simulation must have the same conscious experience as the brain it is simulating. We could weaken the claims to instead say “the simulation would have a similar conscious experience”, or even just “it would have a conscious experience at all”. These weaker claims are much less vulnerable to my arguments against theoretical & practical CF.

But as I said above, if the conscious experience is different, that tells me that the experience cannot be fully specified by the program being run, and therefore the experience cannot be fully explained by that program. If we loosen the requirement of the same experience happening, this signals that the experience is also sensitive to other details like hardware, which constitutes a weaker statement than my face-value reading of the CF classifier.

I hold that a theory of phenomenal consciousness should probably have some grounding in observations of the brain, since that’s the one datapoint we have. So if we look at the brain, does it look like something implementing a program? In the practical CF post, I argue that the answer is no, by calling into question the presence of a “software level of abstraction” (cf. Marr’s levels) below behavior and above biophysics.

In the theoretical CF post, I give a more abstract argument against the CF classifier. I argue that computation is fuzzy: it’s a property of our map of a system rather than the territory. In contrast, given my realist assumptions above, phenomenal consciousness is not a fuzzy property of a map; it is the territory. So consciousness cannot be computation.

When I first realized these problems, it updated me only a little bit away from CF. I still said to myself “well, all this consciousness stuff is contentious and confusing. There are these arguments against CF I find convincing, but there are also good arguments in favor of it, so I don’t know whether to stop believing in it or not.” But then I actually scrutinized the arguments in favor of CF, and realized I don’t find them very convincing. Below I’ll give a review of the main ones, and my problems with them.

Arguments in favor of computational functionalism

Please shout if you think I’ve missed an important one!

We can reproduce human capabilities on computers, why not consciousness?

AI is achieving more and more things that we used to think were exclusively human. First they came for chess. Then they came for Go, visual recognition, art, natural language. At each step the naysayers dragged their heels, saying “But it’ll never be able to do that thing that only humans can do!”.

Are the non-functionalists just making the same mistake by claiming that consciousness will never be achieved on computers?

First of all, I don’t think computational functionalism is strictly required for consciousness on computers.[5] But sure, computational functionalism feels closely related to the claim that AI will become conscious, so let’s address it here.

This argument assumes that phenomenal consciousness is in the same reference class as the set of previously-uniquely-human capabilities that AI has achieved. But is it? Is phenomenal consciousness a cognitive capability?

Hang on a minute. Saying that phenomenal consciousness is a cognitive capability sounds a bit… functionalist. Yes, if consciousness is nothing but a particular function that the brain performs, and AI is developing the ability to reproduce more and more functions of the human brain, then it seems reasonable to expect the consciousness function to eventually arrive in AI.

But if you don’t accept that phenomenal consciousness is a function, then it’s not in the same reference class as natural language etc. Then the emergence of those capabilities in AI does not tell us about the emergence of consciousness.

So this argument is circular. Consciousness is a function, AI can do human functions, so consciousness will appear in AI. Take away the functionalist assumption, and the argument breaks down.

The computational lens helps explain the mind

Functionalism, and computational functionalism in particular, was also motivated by the rapid progress of computer science and early forms of AI in the second half of the 20th Century (Boden 2008; Dupuy 2009). These gains helped embed the metaphor of the brain as a computer, normalise the language of information processing as describing what brains do, and arguably galvanise the discipline of cognitive science. But metaphors are in the end just metaphors (Cobb 2020). The fact that mental processes can sometimes be usefully described in terms of computation is not a sufficient basis to conclude that the brain actually computes, or that consciousness is a form of computation.

(Seth 2024)

Human brains are the main (only?) datapoint from which we can induct a theory of phenomenal consciousness. So when asking what properties are required for phenomenal consciousness, we can investigate what properties of the human brain are necessary for the creation of a mind.

The human brain seems to give rise to the human mind at least a bit like how a computer gives rise to the execution of programs. Modelling the brain as a computer has proven to have a lot of explanatory power, via many algorithmic models: models of visual processing, memory, attention, decision-making, perception & learning, and motor control.

These algorithms are useful maps of the brain and mind. But is computation also the territory? Is the mind a program? Such a program would need to exist as a high-level abstraction of the brain that is causally closed and fully encodes the mind.

In my previous post assessing practical CF, I explored whether or not such an abstraction exists. I concluded that it probably doesn’t. There is probably no software/hardware separation in the brain. Such a separation is not genetically fit since it is energetically expensive and there is no need for brains to download new programs in the way computers do. There is some empirical evidence consistent with this: the mind and neural spiking are sensitive to many biophysical details like neurotransmitter trajectories and mitochondria.

The computational lens is powerful for modelling the brain. But if you look close enough, it breaks down. Computation is a map of the mind, but it probably isn’t the territory.

Chalmers’ fading qualia

David Chalmers argued for substrate independence with his fading qualia thought experiment. Imagine you woke up in the middle of the night to find out that Chalmers had kidnapped you and tied you to a hospital bed in his spooky lab at NYU. “I’m going to do a little experiment on you, but don’t worry it’s not very invasive. I’m just going to remove a single neuron from your brain, and replace it with one of these silicon chips my grad student invented.”

He explains that the chip reproduces the exact input/output behavior of the real neuron he’s going to remove, right down to the electrical signals and neurotransmitters. “Your experience won’t change,” he claims, “your brain will still function just as it used to, so your mind will still all be there”. Before you get the chance to protest, his grad student puts you under general anesthetic and performs the procedure.

When you wake up David asks “Are you still conscious?” and you say “yes”. “Ok, do another one,” he says to his grad student, and you’re under again. The grad student continues to replace neurons with silicon chips one by one, checking each time if you are still conscious. Since each time it’s just a measly neuron that was removed, your answer never changes.


After one hundred billion operations, every neuron has been replaced with a chip. “Are you still conscious?” he asks, and you answer “Yes”, because of course you do: your brain is functioning exactly as it did before. “Aha! I have proved substrate independence once and for all!” Chalmers exclaims. “Your mind is running on different hardware, yet your conscious experience has remained unchanged.”

Surely there can’t be a single neuron replacement that turns you into a philosophical zombie? That would mean your consciousness was reliant on that single neuron, which seems implausible.

The other option is that your consciousness gradually fades over the course of the operations. But surely you would notice that your experience was gradually fading and report it? To not notice the fading would be a catastrophic failure of introspection.

There are a couple of rebuttals to this I want to focus on.

But non-functionalists don’t trust cyborgs?

Schwitzgebel points out that Chalmers has an “audience problem”. Those he is trying to convince of functionalism are those who do not yet believe in functionalism. These non-functionalists are skeptical that the final product of the experiment, the person made of silicon chips, is conscious. So despite the fully silicon person reporting consciousness, the non-functionalist does not believe them[6], since behaving like you are conscious is not conclusive evidence that you’re conscious.

The non-functionalist audience is also not obliged to trust the introspective reports at intermediate stages. A person with 50% neurons and 50% chips will report unchanged consciousness, but for the same reason as for the final state, the non-functionalist need not believe that report. Therefore, for a non-functionalist, it’s perfectly possible that the patient could continue to report normal consciousness while in fact their consciousness is fading.

Are neuron-replacing chips physically possible?

In how much detail would Chalmers’ silicon chips have to simulate the in/out behavior of the neurons? If the neuron doctrine were true, each chip could simply have protruding wires that give and receive electrical impulses, plus a tiny computer on board that has learned the correct in/out mapping.

But as I brought up in a previous post, neurons do not only communicate via electrical signals.[7] The precise trajectories of neurotransmitters might also be important. When neurotransmitters arrive, where on the surface they penetrate, and how deep they get all influence the pattern of firing of the receiving neuron and what neurotransmitters it sends on to other cells.

The silicon chip would need detectors for each species of neurotransmitter on every point of its surface. It must use that data to simulate the neuron’s processing of the neurotransmitters. To simulate many precise trajectories within the cell could be very expensive. Could any device ever run such simulations quickly enough (so as to keep up with the pace of the biological neurons) on a chip small enough (so as to fit in amongst the biological neurons)?

It must also synthesize new neurotransmitters to send out to other neurons. To do so, the chip needs a supply of the chemicals those neurotransmitters are built from. So as not to run out, the chip will have to re-use the chemicals from incoming neurotransmitters.

And hey, since a large component of our expensive simulation is going to be tracking the transformation of old neurotransmitters to new neurotransmitters, we can dispose of that simulation since we’re actually just running those reactions in reality. Wait a minute, is this still a simulation of a neuron? Because it’s starting to just feel like a neuron.

Following this to its logical conclusion: when it comes down to actually designing these chips, a designer may end up discovering that the only way to reproduce all of the relevant in/out behavior of a neuron, is just to build a neuron![8]

Putnam’s octopus

So I’m not convinced by any of the arguments so far. This makes me start to wonder, where did CF come from in the first place? What were the ideas that first motivated it? CF was first defined and argued for by Hilary Putnam in the 60s, who justified it with the following argument.

An octopus mind is implemented in a radically different way than a human mind. Octopuses have a decentralized nervous system with most neurons located in their tentacles, a donut-shaped brain, and a vertical lobe instead of hippocampi (hippocampuses?) or neocortexes (neocortexi?).

But octopuses can have experiences like ours. Namely: octopuses seem to feel pain. They demonstrate aversive responses to harmful stimuli and they learn to avoid situations where they have been hurt. So we have the pain experience being created by two very different physical implementations. So pain is substrate-independent! Therefore multiple realizability, therefore CF.

I think this only works when we interpret multiple realizability at a suitably coarse-grained level of description of mental states (Bechtel & Mundale 2022). You can certainly argue that octopus pain and human pain are of the same “type” (they play a similar function or have similar effects on the animal’s behavior). But since we’re interested in the phenomenal texture of that experience, we’re left with the question: how can we assume that octopus pain and human pain have the same quality?

If you want to use octopuses to argue that phenomenal consciousness is a program, the best you can do is a circular argument. How might we argue that human pain and octopus pain are the same experience? They seem to be playing the same function - both experiences are driving the animal to avoid harmful stimuli. Oh, so octopus pain and human pain are the same because they play the same function, in other words, because functionalism?

This concludes the tour of (what I interpret to be) the main arguments for CF.

What does a non-functionalist world look like?

What this means for AI consciousness

I still think conscious AI is possible even if CF is wrong.

If CF is true then AI might be conscious, since the AI could be running the same algorithms that make the human mind conscious. But does negating CF make AI consciousness impossible? To claim this without further argument is a denying the antecedent fallacy.[9]

To say that CF is false is to say that consciousness isn’t totally explained by computation. But it’s another thing to say that the computational lens doesn’t tell you anything about how likely a system is to be conscious. To claim that consciousness cannot emerge from silicon, one cannot just deny functionalism; one must also explain why biology has the secret sauce while chips do not.[10]

If computation isn’t the source of consciousness, it could still be correlated with whatever that true source is. Cao 2022 argues that since function constrains implementation, function tells us at least something about other properties of the system (like its physical makeup):

From an everyday perspective, it may seem obvious that function constrains material make-up. A bicycle chain cannot be made of just anything—certain physical properties are required in order for it to perform as required within the functional organisation of the bicycle. Swiss cheese would make for a terrible bicycle chain, let alone a mind.

(Cao 2022)

For example, even though consciousness isn’t itself a function under this view, it probably still plays a function in biology.[11] If that function is useful for future AI, then we can predict that consciousness will eventually appear in AI systems, since whatever property creates consciousness will be engineered into AI to improve its capabilities.

What this means for mind uploading

There is probably no simple abstraction of brain states that captures the necessary dynamics that encode consciousness. Scanning your brain and finding the “program of your mind” might be impractical because your mind, memories, personality etc. are deeply entangled with the biophysical details of your brain. Geoffrey Hinton calls this mortal computation, a kind of information processing that involves an inseparable entanglement between software and hardware. If the hardware dies, the software dies with it.

Perhaps we will still be able to run coarse-grained simulations of our brains that capture various traits of ourselves, but if CF is wrong, those simulations will not be conscious. This makes me worried about a future where the lightcone is tiled with what we think to be conscious simulations, when in fact they are zombies with no moral value.

Conclusion

Computational functionalism has some problems: the most pressing one (in my book) is the problem of algorithmic arbitrariness. But are there strong arguments in favour of CF to counterbalance these problems? In this post, I went through the main arguments and have argued that they are not strong. So the overall case for CF is not strong either.

I hope you enjoyed this sequence. I’m going to continue scratching my head about computational functionalism, as I think it’s an important question and it’s tractable to update on its validity. If I find more crucial considerations, I might add more posts to this sequence.

Thanks for reading!

 
  1. ^

     I suspect this jump might be a source of confusion and disagreement in this debate. “The mind” could be read in a number of ways. It seems clear that many aspects of the mind can be explained by computation (e.g. its functional properties). In this article I’m only interested in the phenomenal consciousness element of the mind.

  2. ^

     CF could accommodate “degrees of consciousness” rather than a binary on/off conception of consciousness, by saying that the degree of consciousness is defined by the program being run. Some programs are not conscious at all, some are slightly conscious, and some are very conscious.

  3. ^

     1 second of simulated time is computed at least every second in base reality.

  4. ^

     down to some very small length scale.

  5. ^

    See the "What this means for AI consciousness" section.

  6. ^

     Do non-functionalists say that we can’t trust introspective reports at all? Not necessarily. A non-functionalist would believe the introspective reports of other fully biological humans, because they are biological humans themselves and they are extrapolating the existence of their own consciousness to the other person. We’re not obliged to believe all introspective reports. A non-functionalist could pooh-pooh the report of the 50/50 human for the same reason that they pooh-pooh the reports of LaMDA: reports are not enough to guarantee consciousness.

  7. ^

     Content warning: contentious claims from neuroscience. Feel free to skip or not update much, I won't be offended.

  8. ^

     Perhaps it doesn’t have to be exactly a neuron; there may still be some substrate flexibility - e.g., we have the freedom to rearrange certain internal chemical processes without changing the function. But in this case we have less substrate flexibility than computational functionalists usually assume: the replacement chip still looks very different to a typical digital computer chip.

  9. ^

     Denying the antecedent: A implies B, and not A, so not B. In our case: CF implies conscious AI, so not CF implies not conscious AI.

  10. ^

     The AI consciousness disbeliever must state a “crucial thesis” that posits a link between biology and consciousness tight enough to exclude the possibility of consciousness on silicon chips, and argue for that thesis.

  11. ^

     For example, modelling the environment in an energy efficient way.

Comments

These algorithms are useful maps of the brain and mind. But is computation also the territory? Is the mind a program? Such a program would need to exist as a high-level abstraction of the brain that is causally closed and fully encodes the mind.

I said it in one of your previous posts but I’ll say it again: I think causal closure is patently absurd, and a red herring. The brain is a machine that runs an algorithm, but algorithms are allowed to have inputs! And if an algorithm has inputs, then it’s not causally closed.

The most obvious examples are sensory inputs—vision, sounds, etc. I’m not sure why you don’t mention those. As soon as I open my eyes, everything in my field of view has causal effects on the flow of my brain algorithm.

Needless to say, algorithms are allowed to have inputs. For example, the mergesort algorithm has an input (namely, a list). But I hope we can all agree that mergesort is an algorithm!

The other example is: the brain algorithm has input channels where random noise enters in. Again, that doesn’t prevent it from being an algorithm. Many famous, central examples of algorithms have input channels that accept random bits—for example, MCMC.

And in regards to “practical CF”, if I run MCMC on my computer while sitting outside, and I use an anemometer attached to the computer as a source of the random input bits entering the MCMC run, then it’s true that you need an astronomically complex hyper-accurate atmospheric simulator in order to reproduce this exact run of MCMC, but I don’t understand your perspective wherein that fact would be important. It’s still true that my computer is implementing MCMC “on a level of abstraction…higher than” atoms and electrons. The wind flowing around the computer is relevant to the random bits, but is not part of the calculations that comprise MCMC (which involve the CPU instruction set etc.). By the same token, if thermal noise mildly impacts my train of thought (as it always does), then it’s true that you need to simulate my brain down to the jiggling atoms in order to reproduce this exact run of my brain algorithm, but this seems irrelevant to me, and in particular it’s still true that my brain algorithm is “implemented on a level of abstraction of the brain higher than biophysics”. (Heck, if I look up at the night sky, then you’d need to simulate the entire Milky Way to reproduce this exact run of my brain algorithm! Who cares, right?)
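To make that concrete, here is a minimal sketch (illustrative only; the sampler and the choice to pass the entropy source in as a parameter are just one way to set it up). Swapping where the random bits physically come from changes the exact run, not which algorithm is being executed:

```python
import math
import random

def metropolis_hastings(log_prob, x0, steps, uniform=random.random, noise=random.gauss):
    """Minimal Metropolis-Hastings sampler.

    `uniform` and `noise` supply the algorithm's random inputs. They could be a
    pseudorandom generator, thermal noise, or an anemometer feed; the algorithm
    is the same either way, only the exact run differs.
    """
    x = x0
    samples = []
    for _ in range(steps):
        proposal = x + noise(0.0, 1.0)                        # propose a nearby state
        log_accept_ratio = log_prob(proposal) - log_prob(x)
        if math.log(uniform() + 1e-300) < log_accept_ratio:   # accept with prob min(1, ratio)
            x = proposal
        samples.append(x)
    return samples

# Sample from a standard normal; the mean should come out near 0 regardless of
# which physical process supplied the random bits.
samples = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, steps=10_000)
print(sum(samples) / len(samples))
```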

I think the simplest objection to your practical CF part is that (here and further, phenomenal) consciousness is physiologically and evolutionarily robust: an infinite number of factors can't be required to have consciousness because the probability of having an infinite number of factors right is zero.

On the one hand, we have evolutionary robustness: it seems very unlikely that any single mutation could cause Homo sapiens to become otherwise intellectually capable zombies.

You can consider two extreme possibilities. Let's suppose that Homo sapiens is conscious and Homo erectus isn't. Therefore, there must be a very small number of structural changes in the brain that cause consciousness among a very large range of organisms (different humans), and ATP is not included here, as both Homo sapiens and Homo erectus have ATP.

Consider the opposite situation: all organisms with a neural system are conscious. In that case, there must be a simple property (otherwise, not all organisms in the range would be conscious) common among neural systems causing consciousness. Since neural systems of organisms are highly diverse, this property must be something with a very short description.

For everything in between: if you think that hydras don't have consciousness but proconsuls do, there must be a finite change in the genome, mRNAs, proteins, etc., between a hydra egg and a proconsul egg that causes consciousness to appear. Moreover, this change is smaller than the overall distance between hydras and proconsuls because humans (descendants of proconsuls) have consciousness too.

From a physiological point of view, there is also extreme robustness. You need to be hit in the head really hard to lose consciousness, and you preserve consciousness under relatively large ranges of pH. Hemispherectomy often doesn't even lead to cognitive decline. Autism, depression, and schizophrenia are associated with significant changes in the brain, yet phenomenal consciousness still appears to be here.

EDIT: in other words, imagine that we have a certain structure in the Homo sapiens brain, absent in the Homo erectus brain, which makes us conscious. Take all possible statements distinguishing this structure from all structures in the Homo erectus brain. If we exclude all statements logically equivalent to "this structure implements such-and-such computation", what exactly are we left with? Probably something like "this structure is a bunch of lipid bubbles pumping sodium ions in a certain geometric configuration", and I don't see any reason for ion pumping in lipid bubbles to be relevant to phenomenal consciousness, even if it happens in a fancy geometric configuration.

I'm on board with being realist about your own consciousness. Personal experience and all that. But there's an epistemological problem with generalizing - how are you supposed to have learned that other humans have first-person experience, or indeed that the notion of first-person experience generalizes outside of your own head?

In Solomonoff induction, the mathematical formalization of Occam's razor, it's perfectly legitimate to start by assuming your own phenomenal experience (and then look for hypotheses that would produce that, such as the external world plus some bridging laws). But there's no a priori reason those bridging laws have to apply to other humans. It's not that they're assumed to be zombies, there just isn't a truth of the matter that needs to be answered.
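(For reference, the standard weighting behind this: Solomonoff induction scores each hypothesis, encoded as a program p for a universal machine U, by

$$M(x) \;=\; \sum_{p \,:\, U(p) \text{ outputs a string beginning with } x} 2^{-\ell(p)},$$

where \ell(p) is the program's length in bits, so every extra bit spent explicitly encoding something like a bridging law halves that hypothesis's weight. This is a sketch; exact formulations vary.)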

To solve this problem, let me introduce you to schmonsciousness, the property that you infer other people have based on their behavior and anatomy. You're conscious, they're schmonscious. These two properties might end up being more or less the same, but who knows.

Where before one might say that conscious people are moral patients, now you don't have to make the assumption that the people you care about are conscious, and you can just say that schmonscious people are moral patients.

Schmonsciousness is very obviously a functional property, because it's something you have to infer about other people (you can infer it about yourself based on your behavior as well, I suppose). But if consciousness is different from schmonsciousness, you still don't have to be a functionalist about consciousness.

In Solomonoff induction, the mathematical formalization of Occam's razor, it's perfectly legitimate to start by assuming your own phenomenal experience (and then look for hypotheses that would produce that, such as the external world plus some bridging laws). But there's no a priori reason those bridging laws have to apply to other humans.

You can reason that a universe in which you are conscious and everyone else is not is more complex than a universe in which everyone is equally conscious, therefore Solomonoff Induction privileges consciousness for everyone.

If consciousness is not functional, then Solomonoff induction will not predict it for other people even if you assert it for yourself. This is because "asserting it for yourself" doesn't have a functional impact on yourself, so there's no need to integrate it into the model of the world - it can just be a variable set to True a priori.

As I said, if you use induction to try to predict your more fine-grained personal experience, then the natural consequence (if the external world exists) is that you get a model of the external world plus some bridging laws that say how you experience it. You are certainly allowed to try to generalize these bridging laws to other humans' brains, but you are not forced to, it doesn't come out as an automatic part of the model.

If consciousness is not functional, then Solomonoff induction will not predict it for other people even if you assert it for yourself.

Agreed. But who's saying that consciousness isn't functional? "Functionalism" and "functional" as you're using it are similar sounding words, but they mean two different things. "Functionalism" is about locating consciousness on an abstracted vs. fundamental level. "Functional" is about consciousness being causally active vs. passive.[1] You can be a consciousness realist, think consciousness is functional, but not a functionalist.

You can also phrase the "is consciousness functional" issue as the existence or non-existence of bridging laws (if consciousness is functional, then there are no bridging laws). Which actually also means that Solomonoff Induction privileges consciousness being functional, all else equal (which circles back to your original point, though of course you can assert that consciousness being functional is logically incoherent and then it doesn't matter if the description is shorter).


  1. I would frame this as dual-aspect monism [≈ consciousness is functional] vs. epiphenomenalism [≈ consciousness is not functional], to have a different sounding word. Although there are many other labels people use to refer to either of the two positions, especially for the first, these are just what I think are clearest. ↩︎

You can also phrase the "is consciousness functional" issue as the existence or non-existence of bridging laws (if consciousness is functional, then there are no bridging laws). Which actually also means that Solomonoff Induction privileges consciousness being functional, all else equal.

Just imagine using your own subjective experience as the input to Solomonoff induction. If you have subjective experience that's not connected by bridging laws to the physical world, Solomonoff induction is happy to try to predict its patterns anyhow.

Solomonoff induction only privileges consciousness being functional if you actually mean schmonsciousness.

You're using 'bridging law' differently from how I was, so let me rephrase.

To explain subjective experience, you need bridging-laws-as-you-define-them. But it could be that consciousness is functional and the bridging laws are implicit in the description of the universe, rather than explicit. Differently put, the bridging laws follow as a logical consequences of how the remaining universe is defined, rather than being an additional degree of freedom.[1]

In that case, since bridging laws do not add to the length of the program,[2] Solomonoff Induction will favor a universe in which they're the same for everyone, since this is what happens by default (you'd have a hard time imagining that bridging laws follow by logical necessity but are different for different people). In fact, there's a sense in which the program that SI finds is the same as the program SI would find for an illusionist universe; the difference is just about whether you think this program implies the existence of implicit bridging laws. But in neither case is there an explicit set of bridging laws that adds to the length of the program.


  1. Most of Eliezer's anti-zombie sequence, especially Zombies Redacted, can be viewed as an argument for bridging laws being implicit rather than explicit. He phrases this as "consciousness happens within physics" in that post. ↩︎

  2. Also arguable but something I feel very strongly about; I have an unpublished post where I argue at length that and why logical implications shouldn't increase program length in Solomonoff Induction. ↩︎

Chalmers’ fading qualia [...]

Worth noting that Eliezer uses this argument as well in The Generalized Anti-Zombie Principle, as the first line of his Socratic dialogue (I don't know if he has it from Chalmers or thought of it independently):

Albert: "Suppose I replaced all the neurons in your head with tiny robotic artificial neurons that had the same connections, the same local input-output behavior, and analogous internal state and learning rules."

He also acknowledges that this could be impossible, but only considers one reason why (which at least I consider highly implausible):

Sir Roger Penrose: "The thought experiment you propose is impossible. You can't duplicate the behavior of neurons without tapping into quantum gravity. That said, there's not much point in me taking further part in this conversation." (Wanders away.)

Also worth noting that another logical possibility (which you sort of get at in footnote 9) is that the thought experiment does go through, and a human with silicon chips instead of neurons would still be conscious, but CF is still false. Maybe it's not the substrate but the spatial location of neurons that's relevant. ("Substrate-independence" is not actually a super well-defined concept, either.)

If you do reject CF but do believe in realist consciousness, then it's interesting to consider what other property is the key factor for human consciousness. If you're also a physicalist, then whichever property that is probably has to play a significant computational role in the brain, otherwise you run into contradictions when you compare the brain with a system that doesn't have the property and is otherwise as similar as possible. Spatial location has at least some things going for it here (e.g., ephaptic coupling and neuron synchronization).

For example, even though consciousness isn’t itself a function under this view, it probably still plays a function in biology.[12] If that function is useful for future AI, then we can predict that consciousness will eventually appear in AI systems, since whatever property creates consciousness will be engineered into AI to improve its capabilities.

This is a decent argument for why AI consciousness will happen, but actually "AI consciousness is possible" is a weaker claim. And it's pretty hard to see how that weaker claim could ever be false, especially if one is a physicalist (aka what you call "materialist" in your assumptions of this post). It would imply that consciousness in the brain depends on a physical property, but that physical property is impossible to instantiate in an artificial system; that seems highly suspect.

If we deny practical computational functionalism (CF), we need to pay a theoretical cost:

1. One such possible cost is that we have to assume that the secret of consciousness lies in some 'exotic transistors,' meaning that consciousness depends not on the global properties of the brain, but on small local properties of neurons or their elements (microtubules, neurotransmitter concentrations, etc.).

1a. Such exotic transistors are also internally unobservable. This makes them similar to the idea of soul, as criticized by Locke. He argued that change or replacement of the soul can't be observed. Thus, Locke's argument against the soul is similar to the fading qualia argument.

1b. Such exotic transistors should be inside the smallest animals and even bacteria. This paves the way to panpsychism, but strong panpsychism implies that computers are conscious because everything is conscious. (There are theories that a single electron is the carrier of consciousness – see Argonov).

There are theories which suggest something like a "global quantum field" or "quantum computer of consciousness" and thus partially escape the curse of exotic transistors. They assume a global physical property which is created by many small exotic transistors.

2 and 3. If we deny exotic transistors, we are left with either exotic computations or a soul.

"Soul" here includes non-physicalist world models, e.g., qualia-only world, which is a form of solipsism or requires the existence of God who produces souls and installs them in minds (and can install them in computers).

Exotic computations can be either extremely complex or require very special computational operations (such as Hofstadter's strange loops).

I don't like this writing style. It feels like you are saying a lot of things, without trying to demarcate boundaries for what you actually mean, and I also don't see you criticizing your sentences before you put them down. For example, with these two paragraphs:

Surely there can’t be a single neuron replacement that turns you into a philosophical zombie? That would mean your consciousness was reliant on that single neuron, which seems implausible.

The other option is that your consciousness gradually fades over the course of the operations. But surely you would notice that your experience was gradually fading and report it? To not notice the fading would be a catastrophic failure of introspection.

If you're aware that there is a map and a territory, you should never be dealing with absolutes like, "a single neuron..." You're right that the only other option (I would say, the only option) is your consciousness gradually fades away, but what do you mean by that? It's such a loose idea, which makes it harder to look at it critically. I don't really understand the point of this thought experiment, because if it wasn't phrased in such a mysterious manner, it wouldn't seem relevant to computational functionalism.

I also don't understand a single one of your arguments against computational functionalism, and that's because I think you don't understand them either. For example,

In the theoretical CF post, I give a more abstract argument against the CF classifier. I argue that computation is fuzzy, it’s a property of our map of a system rather than the territory. In contrast, given my realist assumptions above, phenomenal consciousness is not a fuzzy property of a map, it is the territory. So consciousness cannot be computation.

You can't just claim that consciousness is "real" and computation is not, and thus they're distinct. You haven't even defined what "real" is. Besides, most people actually take the opposite approach: computation is the most "real" thing out there, and the universe—and any consciousnesses therein—arise from it. Finally, how is computation being fuzzy even related to this question? Consciousness can be the same way.

In response to the two reactions:

  1. Why do you say, "Besides, most people actually take the opposite approach: computation is the most "real" thing out there, and the universe—and any consciousnesses therein—arise from it."

Euan McLean said at the top of his post he was assuming a materialist perspective. If you believe there exists "a map between the third-person properties of a physical system and whether or not it has phenomenal consciousness" you believe you can define consciousness with a computation. In fact, anytime you believe something can be explicitly defined and manipulated, you've invented a logic and computer. So, most people who take the materialist perspective believe the material world comes from a sort of "computational universe", e.g. Tegmark IV.

  2. Soldier mindset.

Here's a soldier mindset: you're wrong, and I'm much more confident on this than you are. This person's thinking is very loosey-goosey and someone needed to point it out. His posts are mostly fluff with paradoxes and questions that would be completely answerable (or at least interesting) if he deleted half the paragraphs and tried to pin down definitions before running rampant with them.

Also, I think I can point to specific things that you might consider soldier mindset. For example,

It's such a loose idea, which makes it harder to look at it critically. I don't really understand the point of this thought experiment, because if it wasn't phrased in such a mysterious manner, it wouldn't seem relevant to computational functionalism.

If you actually want to know the answer: when you define the terms properly (i.e. KL-divergence from the firings that would have happened), the entire paradox goes away. I wasn't giving him the answer, because his entire post is full of this same error: not defining his terms, running rampant with them, and then being shocked when things don't make sense.

I reacted locally invalid (but didn't downvote either comment) because I think "computation" as OP is using it is about the level of granularity/abstraction at which consciousness is located, and I think it's logically coherent to believe both (1) materialism[1] and (2) consciousness is located at a fundamental/non-abstract level.

To make a very unrealistic analogy that I think nonetheless makes the point: suppose you believed that all ball-and-disk integrators were conscious. Do you automatically believe that consciousness can be defined with a computation? Not necessarily -- you could have a theory according to which a digital computer computing the same integrals is not conscious (since, again, consciousness is about the fine-grained physical steps, rather than the abstracted computational steps, and a digital computer performs very different physical steps than a ball-and-disk integrator computing the same integrals). The only way you now care about "computation" is if you think "computation" does refer to low-level physical steps. In that case, your implication is correct, but this isn't what OP means, and OP did define their terms.


  1. as OP defines the term; in my terminology, materialism means something different ↩︎