[Thanks to Charlie Steiner, Richard Kennaway, and Said Achmiz for helpful discussion. Extra special thanks to the Long-Term Future Fund for funding research related to this post.]
[Epistemic status: confident]
There's a common pattern in online debates about consciousness. It looks something like this:
One person will try to communicate a belief or idea to someone else, but they cannot get through no matter how hard they try. Here's a made-up example:
"It's obvious that consciousness exists."
-Yes, it sure looks like the brain is doing a lot of non-parallel processing that involves several spatially distributed brain areas at once, so-
"I'm not just talking about the computational process. I mean qualia obviously exist."
-Define qualia.
"You can't define qualia; it's a primitive. But you know what I mean."
-I don't. How could I if you can't define it?
"I mean that there clearly is some non-material experience stuff!"
-Non-material, as in defying the laws of physics? In that case, I do get it, and I super don't-
"It's perfectly compatible with the laws of physics."
-Then I don't know what you mean.
"I mean that there's clearly some experiential stuff accompanying the physical process."
-I don't know what that means.
"Do you have experience or not?"
-I have internal representations, and I can access them to some degree. It's up to you to tell me if that's experience or not.
"Okay, look. You can conceptually separate the information content from how it feels to have that content. Not physically separate them, perhaps, but conceptually. The what-it-feels-like part is qualia. So do you have that or not?"
-I don't know what that means, so I don't know. As I said, I have internal representations, but I don't think there's anything in addition to those representations, and I'm not sure what that would even mean.
and so on. The conversation can also get ugly, with the replying author accusing the quoted author of being unscientific and/or the quoted author accusing the replying author of being willfully obtuse.
On LessWrong, people are arguably pretty good at not talking past each other, but the pattern above still happens. So what's going on?
The Two Intuition Clusters
The basic model I'm proposing is that core intuitions about consciousness tend to cluster into two camps, with most miscommunication being the result of someone failing to communicate with the other camp. I will call the camp of the replying author Camp #1 and the camp of the quoted author Camp #2.
Characteristics
Camp #1 tends to think of consciousness as a non-special high-level phenomenon. Solving consciousness is then tantamount to solving the Meta-Problem of consciousness, which is to explain why we think/claim to have consciousness. In other words, once we've explained the full causal chain that ends with people uttering the sounds kon-shush-nuhs, we've explained all the hard observable facts, and the idea that there's anything else seems dangerously speculative/unscientific. No complicated metaphysics is required for this approach.
Conversely, Camp #2 is convinced that there is an experience thing that exists in a fundamental way. There's no agreement on what this thing is – some postulate causally active non-material stuff, whereas others agree with Camp #1 that there's nothing operating outside the laws of physics – but they all agree that there is something that needs explaining. Therefore, even if consciousness is compatible with the laws of physics, it still poses a conceptual mystery relative to our current understanding. A complete solution (if it is even possible) may also have a nontrivial metaphysical component.
The camps are ubiquitous; once you have the concept, you will see it everywhere consciousness is discussed. Even single comments often betray allegiance to one camp or the other. Apparent exceptions are usually from people who are well-read on the subject and may have optimized their communication to make sense to both sides.
The Generator
So, why is this happening? I don't have a complete answer, but I think we can narrow down the disagreement. Here's a somewhat indirect explanation of the proposed crux.
Suppose your friend John tells you he has a headache. As an upstanding citizen and Bayesian agent, how should you update your beliefs here? In other words, what is the explanandum – the thing-your-model-of-the-world-needs-to-explain?
You may think the explanandum is "John has a headache", but that's smuggling in some assumptions. Perhaps John was lying about the headache to make sure you leave him alone for a while! So a better explanandum is "John told me he's having a headache", where the truth value of the claim is unspecified.
(If we want to get pedantic, the claim that John told you anything is still smuggling in some assumptions since you could have also hallucinated the whole thing. But this class of concerns is not what divides the two camps.)
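To make the update concrete, here is a minimal Python sketch of conditioning on the claim rather than on the headache itself; all of the numbers are made up purely for illustration.

```python
# Made-up numbers, purely illustrative: we condition on the *claim*
# "John says he has a headache", not on the fact "John has a headache".

p_headache = 0.10                  # prior that John has a headache
p_claim_given_headache = 0.60      # he mentions it if he has one
p_claim_given_no_headache = 0.02   # e.g., he lies to be left alone for a while

# Total probability of hearing the claim.
p_claim = (p_claim_given_headache * p_headache
           + p_claim_given_no_headache * (1 - p_headache))

# Posterior probability that John actually has a headache, given the claim.
p_headache_given_claim = p_claim_given_headache * p_headache / p_claim
print(f"P(headache | claim) = {p_headache_given_claim:.2f}")  # ~0.77
```

The point is simply that the evidence you actually receive is the claim; how strongly it supports the headache itself depends on your model of John.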
Okay, so if John tells you he has a headache, the correct explanandum is "John claims to have a headache", and the analogous thing holds for any other sensation. But what if you yourself seem to experience something? This question is what divides the two camps:
- According to Camp #1, the correct explanandum is only slightly more than "I claim to have experienced X" (where X is the apparent experience). After all, if we can explain exactly why you, as a physical system, uttered the words "I experienced X", then there's nothing else to explain. The reason it's slightly more is that you do still have some amount of privileged access to your own experience: a one-sentence testimony doesn't communicate the full set of information contained in a subjective state – but this additional information remains metaphysically non-special. (HT: wilkox.)
- According to Camp #2, the correct explanandum is "I experienced X". After all, you perceive your experience/consciousness directly, so it is not possible to be wrong about its existence.
In other words, the two camps disagree about the epistemic status of apparently perceived experiences: for Camp #2, they're epistemic bedrock, whereas for Camp #1, they're model outputs of your brain, and like all model outputs of your brain, they can be wrong. The axiom of Camp #1 can be summarized in one sentence as "you should treat your own claims of experience the same way you treat everyone else's".
From the perspective of Camp #1, Camp #2 is quite silly. People have claimed that fire is metaphysically special, then intelligence, then life, and so on, and their success rate so far is 0%. Consciousness is just one more thing on this list, so the odds that they are right this time are pretty slim.
From the perspective of Camp #2, Camp #1 is quite silly. Any apparent evidence against the primacy of consciousness necessarily backfires as it must itself be received as a pattern of consciousness. Even in the textbook case where you're conducting a scientific experiment with a well-defined result, you still need to look at your screen to read the result, so even science bottoms out in predictions about future states of consciousness!
An even deeper intuition may be what precisely you identify with. Are you identical to your physical brain or body (or the program/algorithm implemented by your brain)? If so, you're probably in Camp #1. Are you a witness of, or identical to, the set of conscious experiences exhibited by your body at any given moment? If so, you're probably in Camp #2. That said, this paragraph is pure speculation, and the two-camp phenomenon doesn't depend on it.
Representations in the Literature
If you ask GPT-4 about the two most popular academic books about consciousness, it usually responds with
- Consciousness Explained by Daniel Dennett; and
- The Conscious Mind by David Chalmers.
If the camps are universal, we'd expect the two books to represent one camp each because economics: each camp is a large audience, so the market tends to serve both. As it happens, this is exactly right!
Dennett devotes an entire chapter to the proper evaluation of experience claims, and the method he champions (called "heterophenomenology") is essentially a restatement of the Camp #1 axiom. He suggests that we should treat experience claims like fictional worldbuilding, where such claims are then "in good standing in the fictional world of your heterophenomenology". Once this fictional world is complete, it's up to the scientist to evaluate how its components map to the real world. Crucially, you're supposed to apply this principle even to yourself, so the punchline is again that the epistemic status of experience claims is always up for debate.
Conversely, Chalmers says this in the introductory chapter of his book (emphasis added):
Some say that consciousness is an "illusion," but I have little idea what this could even mean. It seems to me that we are surer of the existence of conscious experience than we are of anything else in the world. I have tried hard at times to convince myself that there is really nothing there, that conscious experience is empty, an illusion. There is something seductive about this notion, which philosophers throughout the ages have exploited, but in the end it is utterly unsatisfying. I find myself absorbed in an orange sensation, and something is going on. There is something that needs explaining, even after we have explained the processes of discrimination and action: there is the experience.
True, I cannot prove that there is a further problem, precisely because I cannot prove that consciousness exists. We know about consciousness more directly than we know about anything else, so "proof" is inappropriate. The best I can do is provide arguments wherever possible, while rebutting arguments from the other side. There is no denying that this involves an appeal to intuition at some point; but all arguments involve intuition somewhere, and I have tried to be clear about the intuitions involved in mine.
In other words, Chalmers is having none of this heterophenomenology stuff; he wants to condition on "I experience X" itself.
On Researching Consciousness
Before we return to the main topic of communication, I want to point out that the camps also play a major role in research programs on consciousness. The reason is that the work you do to make progress on understanding a phenomenon is different if you expect the phenomenon to be low-level vs. high-level. A mathematical equation is a good goal if you're trying to describe planetary motions, but less so if you're trying to describe the appeal of Mozart.
The classic example of how this applies to consciousness is the infamous[1] Integrated Information Theory (IIT). For those unfamiliar, IIT is a theory that takes as input a description of a system as a set of elements with a state space and a probability transition matrix,[2] which it uses to construct a mathematical object that is meant to correspond to the system's consciousness. The math to construct this object is extensive but precisely defined. (The object includes a qualitative description and a scalar quantity meant to describe the 'amount' of consciousness.) As far as I know, IIT is the most formalized theory of consciousness in the literature.
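To give a sense of the input format (though not of IIT's actual construction, which I won't reproduce), here is a minimal, hypothetical sketch of the kind of system description such a formalism operates on; the elements and numbers are invented for illustration.

```python
import itertools

# Hypothetical toy input of the kind an IIT-style formalism operates on:
# a set of elements, their joint state space, and a transition probability
# matrix (TPM). All numbers are invented; none of IIT's math appears here.

elements = ["A", "B"]  # two binary elements
states = list(itertools.product([0, 1], repeat=len(elements)))  # (0,0)...(1,1)

# TPM: for each current joint state, a distribution over next joint states.
tpm = {
    (0, 0): {(0, 0): 0.7, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.1},
    (0, 1): {(0, 0): 0.2, (0, 1): 0.5, (1, 0): 0.1, (1, 1): 0.2},
    (1, 0): {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.5, (1, 1): 0.2},
    (1, 1): {(0, 0): 0.1, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.7},
}

# Sanity checks: one row per joint state, each row a probability distribution.
assert set(tpm) == set(states)
for row in tpm.values():
    assert abs(sum(row.values()) - 1.0) < 1e-9

# IIT's (extensive) construction would take a description like this and build
# the mathematical object meant to correspond to the system's consciousness.
```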
Attempting to describe consciousness with a mathematical object assumes that consciousness is a low-level phenomenon. What happens if this assumption is incorrect? I think the answer is that the approach becomes largely useless. At best, IIT's output could be a correlate of consciousness (though there probably isn't much reason to expect so), but it cannot possibly describe consciousness precisely because no precise description exists. In general, approaches that assume a Camp #2 perspective are in bad shape if Camp #1 ends up correct.
Is the reverse also true? Interestingly, the answer is no. If Camp #2 is correct, then research programs assuming a Camp #1 perspective are probably not optimal, but they aren't useless, either. The reason is that attempting to formalize a high-level property is not as big a mistake as trying to informally describe a low-level property. (This is true even for our leading example: a mathematical equation for the appeal of Mozart will very likely be unhelpful, whereas an informal description of planetary motions could plausibly still be useful.) With respect to consciousness, the most central example of a Camp #1 research agenda is Global Workspace Theory, which is mostly a collection of empirical results and, as such, is still of interest to Camp #2 people.
So there is an inherent asymmetry where Camp #1 reasoning tends to appeal to the opposing camp in a way that Camp #2 reasoning does not, which is also a segue into our next section.
On Communication
In light of the two camp model, how does one write or otherwise communicate effectively about consciousness?
Because of the asymmetry we've just talked about, a pretty good strategy is probably "be in Camp #1". This is also borne out empirically:
- Consciousness Explained is more popular than The Conscious Mind (or any other Camp #2 book).
- Global Workspace Theory is more popular than Integrated Information Theory (or any other Camp #2 theory).
- Virtually every high-karma post about consciousness ever published on LessWrong takes a Camp #1 perspective, with the possible exception of Eliezer's posts in the sequences.
If you're going to write something from the Camp #2 perspective, I advise making it explicit that you're doing so (even though I don't have empirical evidence that this is enough to get a positive reception on LessWrong). One thing I've seen a lot is people writing from a Camp #2 perspective while assuming that everyone agrees with them. Surprisingly often, this is even explicit, with sentences like "everyone agrees that consciousness exists and is a mystery" (in a context where "consciousness" clearly refers to Camp #2 style consciousness). This is probably a bad idea.
If you're going to respond to something about consciousness, I very much advise trying to figure out which perspective the author has taken. Chances are this is easy to figure out even if they haven't made it explicit.
On Terminology
(Section added on 2025/01/14.)
I think one of the main culprits of miscommunication is overloaded terminology. When someone else uses a term, a very understandable assumption is that they mean the same thing you do, but when it comes to consciousness, this assumption is false surprisingly often. Here is a list of what I think are the most problematic terms.
- Consciousness itself is overloaded (go figure!) since it can refer to both "a high-level computational process" and "an ontologically fundamental property of the universe". I recommend making the meaning explicit. Ways to signal the former meaning include stating that you're in Camp #1, calling yourself an Illusionist or Eliminativist, or mentioning that you like Dennett. Ways to clarify the latter meaning include stating that you're in Camp #2 or calling yourself a (consciousness) realist.
- Emergence can mean either "Camp #2 consciousness appears when xyz happens due to a law of the universe" or "a computational process matches the 'consciousness' cluster in thingspace sufficiently to deserve the label". I recommend either not using this term, or specifying strong vs. weak emergence, or (if the former meaning is intended) using "epiphenomenalism" instead.
- Materialist can mean "I agree that the laws of physics exhaustively describe the behavior of the universe" or "the above, plus I am an Illusionist" or "the above, plus I think the universe is a priori unconscious" (which may be compatible with epiphenomenalism). I recommend never using this term.
- Qualia can be a synonym for consciousness (if you are in Camp #2) or mean something like "this incredibly annoying and ill-defined concept that confused people insist on talking about" (if you're in Camp #1). I recommend only using this term if you're talking to a Camp #2 audience.
- Functionalist can mean "I am a Camp #2 person and additionally believe that a functional description (whatever that means exactly) is sufficient to determine any system's consciousness" or "I am a Camp #1 person who takes it as reasonable enough to describe consciousness as a functional property". I would nominate this as the most problematic term since it is almost always assumed to have a single meaning while actually describing two mutually incompatible sets of beliefs.[3] I recommend saying "realist functionalism" if you're in Camp #2, and just not using the term if you're in Camp #1.
Whenever you see any of these terms used by other people, alarm bells should go off in your head. There is a high chance that they mean something different from what you mean, especially if what they're saying doesn't seem to make sense.
I'm calling it infamous because it has a very bad reputation on LessWrong specifically. In the broader literature, I think a lot of people take it seriously. In fact, I think it's still the most popular Camp #2 proposal. ↩︎
You can think of this formalism as a strictly more expressive description than specifying the system as a graph. While edges are not specified explicitly, all relevant information about how any two nodes are connected should be implicit in how the probability of transitioning to any next state depends on the system's current state. IIT does have an assumption of independence, meaning that the probability of landing in a certain state is just the product of the probabilities of landing in the corresponding states for each element/node. This is written as $p(s' \mid s) = \prod_i p(s'_i \mid s)$, where $s$ and $s'$ are the current and next states of the total system of elements $S$, and $s'_i$ is the corresponding state of element $i$. ↩︎
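To spell out the independence assumption in the previous footnote with made-up numbers: for a two-element binary system where, given the current total state $s$, element 1 switches on with probability $0.3$ and element 2 with probability $0.6$, the joint transition probabilities are just the products,

$$p\big((1,1)\mid s\big) = 0.3 \cdot 0.6 = 0.18, \quad p\big((1,0)\mid s\big) = 0.3 \cdot 0.4 = 0.12, \quad p\big((0,1)\mid s\big) = 0.7 \cdot 0.6 = 0.42, \quad p\big((0,0)\mid s\big) = 0.7 \cdot 0.4 = 0.28,$$

which sum to $1$ as required.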
For example, I think both Daniel Dennett and Giulio Tononi (the creator of IIT) could reasonably be described as functionalists (precisely because IIT relies on an abstract description of a system). However, the approaches that the two of them defend could hardly be more different. ↩︎
I'm two years late to the discussion, but I think I can clear this up. The idea is that a person without qualia might still have sensory processing that leads to the construction of percepts which can inform our actions, but without any consciousness of sensation. There is also a distinction between sensory data and sensation. Consider this scenario:
I am looking at a red square on a white wall. The light from some light source reflects off the wall and enters my eye, where it activates cone and rod cells. This is sensory data, but it is not sensation, in that I do not feel the activation of my cone and rod cells. My visual cortex processes the sensory data, and generates a sensory experience (qualia) corresponding in some way to the wall I am looking at. I analyze this sensory experience and thus derive percepts like "white wall" and "red square". The generation of these percepts will typically also lead to a sensory experience (qualia) in the form of an inner monologue: "that's a red square on a white wall". But sometimes it won't, since I don't always have an inner monologue. Yet, even when it doesn't, I am still able to act on the basis of having seen a red square on a white wall. For example, if I am subsequently quizzed on what I saw, I will be able to answer it correctly.
Well, that's my formulation of how qualia work, having thought about it a great deal. But there are people who profess that they experience qualia and yet suspect that the generation of percepts does not come from the analysis of conscious sensory experience, but from the processing of sensory data itself, and that the analysis of sensory experience just happens to coincide with it (akin to Leibniz's pre-established harmony).
Finally, we could also imagine cases where the sensory experience is not generated at all; where there is merely sensory data that, despite being processed by the visual cortex, never becomes sensory experience (never generates the visual analogue of an internal monologue), but still crystallises into sufficiently ordered sensory data that it can give rise to percepts. This would be the hypothetical "philosophical zombie".
I don't think this last scenario is possible, because I don't think qualia are epiphenomena; I think they are an intrinsic part of the process by which human beings (and probably other entities with metacognition) make decisions on the basis of sensory data. Without this, I do not believe our cognition could advance significantly beyond that of infancy (I do not think infants possess qualia), but there are certain cases where our instincts can respond to sensory data in a manner that does not require attention to qualia, and may indeed not require qualia at all.