As someone who could be described as "pro-qualia": I think there are still a number of fundamental misconceptions and confusions that people bring to this debate. We could have a more productive dialogue if these confusions were cleared up. I don't think that clearing up these confusions will make everyone agree with me on everything, but I do think that we would end up talking past each other less if the confusions were addressed.
First, a couple of misconceptions:
1.) Some people think that part of the definition of qualia is that they are necessarily supernatural or non-physical. This is false. A quale is just a sense perception. That's it. The definition of "qualia" is completely, 100% neutral as to the underlying ontological substrate. It could certainly be something entirely physical. By accepting the existence of qualia, you are not thereby committing yourself to anti-physicalism.
2.) An idea I sometimes see repeated is that qualia are this sort of ephemeral, ineffable "feeling" that you get over and above your ordinary sense perception. It's as if you see red, and the experience of seeing red gives you a certain "vibe", a...
I have a simple, yet unusual, explanation for the difference between camp #1 and camp #2: we have different experiences of consciousness. Since each of us assumes that everyone has our kind of consciousness, of course we talk past each other.
I’ve noticed that in conversations about qualia, I’m always in the position of Mr Boldface in the example dialog: I don’t think there is anything that needs to be explained, and I’m puzzled that nobody can tell me what qualia are using sensible words. (I’m not particularly stupid or ignorant; I got a degree in philosophy and linguistics from MIT.) I suggest a simple explanation: some of us have qualia and some of us don’t. I’m one of those who don’t. And when someone tries to point at them, all I can do is react with obtuse incomprehension, while they point at the most obvious thing in the world. It apparently is the most obvious thing in the world, to a lot of people.
Obviously I have sensory impressions; I can tell you when something looks red. And I have sensory memories; I can tell you when something looked red yesterday. But there isn’t any hard-to-explain extra thing there.
One might object that qualia are...
Alternative explanation: everyone has qualia, but some people lack the mental mechanism that makes them feel like qualia require a special metaphysical explanation. Since qualia are almost always represented as requiring such an explanation (or at least as ineffable, mysterious and elusive), these latter people don't recognize their own qualia as that which is being talked about.
How can people lack such a mental mechanism? Either
I don't have a clue about the relative prevalences of these groups, nor do I mean to make a claim about which group you personally are in.
That's interesting, but I doubt it's what's going on in general (though maybe it is for some camp #1 people). My instinct is also strongly camp #1, but I feel like I get the appeal of camp #2 (and qualia feel "obvious" to me on a gut level). The difference between the camps seems to me to have more to do with differences in philosophical priors.
D) the idea that the word must mean something weird, since it is a strange word -- it cannot be an unfamiliar term for something familiar.
You said you had the experience of redness. I told you that's a quale. Why didn't that tell you what "qualia" means?
“There’s some confusing extra thing on top of behavior, namely sensations.” Wow, that’s a fascinating notion. But presumably if we didn’t have visual sensations, we’d be blind, assuming the rest of our brain worked the same, right? So what exactly requires explanation? You’re postulating something that acts just like me but has no sensations, i.e., is blind, deaf, etc. I don’t see how that can be a coherent thing you’re imagining.
When I read you saying “is like something to be,” I get the same feeling I get when someone tries to tell me what qualia are: it’s a peculiar collection of familiar words. It seems to me that you’re trying to turn a two-place predicate “A imagines what it feels like to be B” into a one-place predicate “B is like something to be”, where it’s a pure property of B.
Integrated Information Theory is peak Camp #2 stuff
As a Camp #2 person, I just want to remark that from my personal viewpoint, Integrated Information Theory shares the key defect of Global Workspace Theory, and hence is no better.
Namely, I think that the Hard Problem of Consciousness has a hard part: the Hard Problem of Qualia. As soon as the Hard Problem of Qualia is solved, the rest of the Hard Problem of Consciousness becomes much less mysterious. Perhaps the rest can be treated in the spirit of the "Easy Problems of Consciousness": e.g., the question of why I am me and not another person might be treatable as a symmetry violation, a standard mechanism in physics, and the question of why human qualia normally seem to cluster as belonging to a particular subject (my qualia vs. all other qualia) might not be excessively mysterious either.
So a theory purporting to actually solve the Hard Problem of Consciousness needs to shed some light on the nature and structure of the space of qualia in order to be a viable contender, from my personal viewpoint.
Unfortunately, I am not aware of any such viable contenders, i.e. of any theories shedding much light onto the nature and the...
I think a lot of Camp #2 people would agree with you that IIT doesn't make meaningful progress on the hard problem. As far as I remember, it doesn't even really try to; it just states that consciousness is the same thing as integrated information and then argues why this is plausible based on intuition/simplicity/how it applies to the brain and so on.
I think IIT "is Camp #2 stuff" in the sense that being in Camp #2 is necessary to appreciate IIT - it's definitely not sufficient. But it does seem necessary because, for Camp #1, the entire approach of trying to find a precise formula for "amount of consciousness" is just fundamentally doomed, especially since the math doesn't require any capacity for reporting on your conscious states, or really any of the functional capabilities of human consciousness. In fact, Scott Aaronson claims here (I haven't read the construction myself) that
the system that simply applies the matrix W to an input vector x—has an enormous amount of integrated information Φ
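For concreteness, here is a minimal sketch (in Python, with an arbitrary matrix chosen by me for illustration; this is not Aaronson's actual construction) of the kind of "system" the quoted claim describes: its entire behavior is applying a fixed matrix W to an input vector x.

```python
import numpy as np

# Toy illustration only: an arbitrary fixed binary matrix standing in for the W
# of the quoted claim. Not Aaronson's actual construction.
rng = np.random.default_rng(0)
n = 16
W = rng.integers(0, 2, size=(n, n))

def step(x: np.ndarray) -> np.ndarray:
    """The entire 'system': multiply the input vector by W (over GF(2) here,
    purely for concreteness)."""
    return (W @ x) % 2

x = rng.integers(0, 2, size=n)  # an arbitrary input state
print(step(x))  # this is all the system ever does: no reporting, no cognition
```

Nothing about such a system involves reporting on internal states or any other functional signature of consciousness, which is why Camp #1 readers tend to treat a formula that assigns it an enormous Φ as a reductio.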
So yeah, Camp #2 is necessary but not sufficient. I had a line in an older version of this post where I suggested that the Camp #2 memeplex is so large that, even if you're firmly in Camp #2, you'll probably find some things in there that are just as absurd to you as the Camp #1 axiom.
This is a really clear breakdown. Thank you for writing it up!
I'm struck by the symmetry between (a) these two Camps and (b) the two hemispheres of the brain as depicted by Iain McGilchrist. Including the way that one side can navigate the relationship between both sides while the other thinks the first is basically just bonkers!
It's a strong enough analogy that I wonder if it's causal. E.g., I expect someone from Camp 1 to have a much harder time "vibing". I associate Camp 1 folk with rejecting ineffable insights, such that "The Tao that can be said is not the true Tao" sounds to them like "the Tao" is just incoherent gibberish.
In which case the "What is this 'qualia' thing you're talking about?" has an awful lot in common with the daughter's arm phenomenon. The whole experience of seeing a rainbow, knowing it's beautiful, and then witnessing the thought "Wow, what a beautiful rainbow!" would be hard to pin down because the only way to pin it down in Camp 1 is by modeling the experiential stream and then talking about the model. The idea that there could be a direct experience that is itself being modeled and is thus prior to any thoughts about it… just doesn't make sense to the left hemisphere. It's like talking about the Tao.
I don't know how big a role this plays, if any, in the two-camps thing. It's just such a striking analogy that it seems worth bringing up.
This is a clear and convincing account of the intuitions that lead to people either accepting or denying the existence of the Hard Problem. I’m squarely in Camp #1, and while I think the broad strokes are correct there are two places where I think this account gets Camp #1 a little wrong on the details.
...According to Camp #1, the correct explanandum is still "I claim to have experienced X" (where X is the apparent experience). After all, if we can explain exactly why you, as a physical system, uttered the words "I experienced X", then there's nothing else to
Good writeup, I certainly agree with the frustration of people talking past each other with no convergence in sight.
First, I don't understand why IIT is still popular, Scott Aaronson showed its fatal shortcomings 10 years ago, as soon as it came out.
Second, I do not see any difference between experiencing something and claiming to experience something, outside of intentionally trying to deceive someone.
Third, I don't know which camp I am in, beyond "of course consciousness is an emergent concept, like free will and baseball". Here by emergence ...
I think there's a pervasive error being made by both camps, although more especially Camp 2 (and I count myself in Camp 2). There is a frantic demand for and grasping after explanations, to the extent of counting the difficulty of the problem as evidence for this or that solution. "What else could it be [but my theory]?"
We are confronted with the three buttons labelled "Explain", "Ignore", and "Worship". A lot of people keep on jabbing at the "Explain" button, but (in my view) none of the explanations get anywhere. Some press the "Ignore" button and procla...
I'm going to argue a complementary story: the basic reason it's so hard to talk about consciousness has to do with two issues present in consciousness research, both of which make productive research on it impossible:
Extraordinarily terrible feedback loops, almost reminiscent of the pre-deep-learning alignment work on LW (I'm looking at you, MIRI; albeit even then it achieved more than consciousness research to date, and LW is slowly shifting to a mix of empirical and governance work, which is quite a lot faster than any consciousness rel
Great post! I think this captures most of the variance in consciousness discussions.
I've been interested in consciousness throughout a 23-year career in computational cognitive neuroscience. I think making progress on bridging the gap between camp 1 and camp 2 requires more detailed explanations of neural dynamics. Those can be inferred from empirical data, but not easily, so I haven't seen any explanations similar to the one I've been developing in my head. I haven't published on the topic because it's more of a liability for a neuroscience career than an as...
Strong agree all around—this post echoes a comment I made here (me in Camp #1, talking to someone in Camp #2):
...If you ask me a question about, umm, I’m not sure the exact term, let’s say “3rd-person-observable properties of the physical world that have something to do with the human brain”…then I feel like I’m on pretty firm ground, and that I’m in my comfort zone, and that I’m able to answer such questions, at least in broad outline and to some extent at a pretty gory level of detail. (Some broad-outline ingredients are in my old post here, and I’m open to
-It's obvious that conscious experience exists.
-Yes, it sure looks like the brain is doing a lot of non-parallel processing that involves several spatially distributed brain areas at once, so
-You mean, it looks from the outside. But I'm not just talking about the computational process, which I am not even aware of as such, I am talking about conscious experience.
-Define qualia
-Look at a sunset. The way it looks is a quale. Taste some chocolate. The way it tastes is a quale.
-Well, I got my experimental subject to look at a sunset and taste some chocolate,...
There are many features you get right about the stubbornness of the problem/discussion. Certainly, modulo the choice to stop the count at two camps, you've highlighted some crucial facts about these clusters. But now I'm going to complain about what I see as your missteps.
Moreover, even if consciousness is compatible with the laws of physics, ... [camp #2 holds] it's still metaphysically tricky, i.e., it poses a conceptual mystery relative to our current understanding.
I think we need to be careful not to mush together metaphysics and epistemics...
I think you are a bit off the mark.
As a reductive materialist expecting to find a materialistic explanation for consciousness, I'd be Camp 2 in your model. And yet in the dialogue
..."It's obvious that consciousness exists."
-Yes, it sure looks like the brain is doing a lot of non-parallel processing that involves several spatially distributed brain areas at once, so-
"I'm not just talking about the computational process. I mean qualia obviously exists."
-Define qualia.
"You can't define qualia; it's a primitive. But you know what I mean."
-I don't. How could I if
Thanks for that comment. Can you explain why you think you're Camp #2 according to the post? Because based on this reply, you seem firmly (in fact, quite obviously) in Camp #1 to me, so there must be some part of the post where I communicated very poorly.
(... guessing at the reason here ...) I wrote in the second-to-last section that consciousness, according to Camp #1, has fuzzy boundaries. But that just means the definition of the phenomenon has fuzzy boundaries, i.e., it's unclear when consciousness would stop being consciousness if you changed the architecture slightly (or built an AI with a similar architecture). I definitely didn't mean to say that there's fuzziness in how the human brain produces consciousness; I think Camp #1 would overwhelmingly hold that we can, in principle, find a full explanation that precisely maps out the role of every last neuron.
Was that section the problem, or something else?
I don't feel like I fall super hard into one of these camps or the other, although I agree they exist. I think from the outside folks would probably say I'm a very camp 2 person, but as I see it that's only insofar as I'm not willing to give up and say that there's nothing of value beyond the camp 1 approach.
This is perhaps reflected in my own thinking about "consciousness". I think the core thing going on is not complex, but instead quite simple: negative feedback loops that create information that's internal to a system. I identify signals within a feedb...
I agree that the epistemic status of experience is important, but... First of all, does anyone actually disagree with concrete things that Dennett says? That people are often wrong about their experiences is obviously true. If that were the core disagreement, it would be easy to persuade people. The only persistent disagreement seems to be about whether there is something additional to the physical explanation of experience (hence the zombies argument), or whether fundamental consciousness is even a coherent concept at all - just replacing absolute certainty with uncertainty wouldn't solve it, when you can't even communicate what your evidence is.
The disagreement is about whether qualia exist enough to need explaining. A rainbow is ultimately explained as a kind of illusion, but to arrive at the explanation, you have to accept that rainbows appear to exist, that people aren't lying about them.
Dennett doesn't just think you can be wrong about what's going on in your mind; he thinks qualia don't exist at all, and that he is a zombie ... but his opponents don't all think that qualia are fundamental, indefinable, non-physical, etc. It's important to remember that the camp #2 argument given here is very exaggerated.
Great post, I felt it really defined and elaborated on a phenomenon I've seen recur on a regular basis.
It's funny how consciousness is so difficult to understand, to the point that it seems pre-paradigmatic to me. At this point, I, like presumably many others, evaluate claims of consciousness by setting the prior that I'm personally conscious to near 1, and then evaluating the consciousness of other entities primarily by their structural similarity to my own computational substrate, the brain.
So another human is almost certainly conscious, most mam...
All this sounds correct to me.
Reflecting on some previous conversations which parallel the opening vignette, I now suspect that many people are just not conscious in the way that I am / seem to be.
The LessWrong Review runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2024. The top fifty or so posts are featured prominently on the site throughout the year.
Hopefully, the review is better than karma at judging enduring value. If we have accurate prediction markets on the review results, maybe we can have better incentives on LessWrong today. Will this post make the top fifty?
I'm wondering where Biological Naturalism[1] falls within these two camps? It seems like sort of a "third way" in between them, and incidentally, is the explanation that I personally have found most compelling.
Here's GPT-4's summary:
...Biological Naturalism is a theory of mind proposed by philosopher John Searle. It is a middle ground between two dominant but opposing views of the mind: materialism and dualism. Materialism suggests that the mind is completely reducible to physical processes in the brain, while dualism posits that the mind and body are di
[Thanks to Charlie Steiner, Richard Kennaway, and Said Achmiz for helpful discussion. Extra special thanks to the Long-Term Future Fund for funding research related to this post.]
[Epistemic status: my best guess after having read a lot about the topic, including all LW posts and comment sections with the consciousness tag]
There's a common pattern in online debates about consciousness. It looks something like this:
One person will try to communicate a belief or idea to someone else, but they cannot get through no matter how hard they try. Here's a made-up example:
"It's obvious that consciousness exists."
-Yes, it sure looks like the brain is doing a lot of non-parallel processing that involves several spatially distributed brain areas at once, so-
"I'm not just talking about the computational process. I mean qualia obviously exist."
-Define qualia.
"You can't define qualia; it's a primitive. But you know what I mean."
-I don't. How could I if you can't define it?
"I mean that there clearly is some non-material experience stuff!"
-Non-material, as in defying the laws of physics? In that case, I do get it, and I super don't-
"It's perfectly compatible with the laws of physics."
-Then I don't know what you mean.
"I mean that there's clearly some experiential stuff accompanying the physical process."
-I don't know what that means.
"Do you have experience or not?"
-I have internal representations, and I can access them to some degree. It's up to you to tell me if that's experience or not.
"Okay, look. You can conceptually separate the information content from how it feels to have that content. Not physically separate them, perhaps, but conceptually. The what-it-feels-like part is qualia. So do you have that or not?"
-I don't know what that means, so I don't know. As I said, I have internal representations, but I don't think there's anything in addition to those representations, and I'm not sure what that would even mean.
and so on. The conversation can also get ugly, with boldface author accusing quotation author of being unscientific and/or quotation author accusing boldface author of being willfully obtuse.
On LessWrong, people are arguably pretty good at not talking past each other, but the pattern above still happens. So what's going on?
The Two Intuition Clusters
The basic model I'm proposing is that core intuitions about consciousness tend to cluster into two camps, with most miscommunication being the result of someone failing to communicate with the other camp. For this post, we'll call the camp of boldface author Camp #1 and the camp of quotation author Camp #2.
Characteristics
Camp #1 tends to think of consciousness as a non-special high-level phenomenon. Solving consciousness is then tantamount to solving the Meta-Problem of consciousness, which is to explain why we think/claim to have consciousness. (Note that this means explaining the full causal chain in terms of the brain's physical implementation.) In other words, once we've explained why people keep uttering the sounds kon-shush-nuhs, we've explained all the hard observable facts, and the idea that there's anything else seems dangerously speculative/unscientific. No complicated metaphysics is required for this approach.
Conversely, Camp #2 is convinced that there is an experience thing that exists in a fundamental way. There's no agreement on what this thing is – some postulate causally active non-material stuff, whereas others agree with Camp #1 that there's nothing operating outside the laws of physics – but they all agree that there is something that needs explaining. Moreover, even if consciousness is compatible with the laws of physics, it still poses a conceptual mystery relative to our current understanding. A complete solution (if it is even possible) may also have a nontrivial metaphysical component.
The camps are ubiquitous; once you have the concept, you will see it everywhere consciousness is discussed. Even single comments often betray allegiance to one camp or the other. Apparent exceptions are usually from people who are well-read on the subject and may have optimized their communication to make sense to both sides.
The Generator
With the description out of the way, let's get to the interesting question: why is this happening? I don't have a complete answer, but I think we can narrow down the disagreement. Here's a somewhat indirect explanation of the proposed crux.
Suppose your friend John tells you he has a headache. As an upstanding ~~citizen~~ Bayesian agent, how should you update your beliefs here? In other words, what is the explanandum – the thing-your-model-of-the-world-needs-to-explain?

You may think the explanandum is "John has a headache", but that's smuggling in some assumptions. Perhaps John was lying about the headache to make sure you leave him alone for a while! So a better explanandum is "John told me he's having a headache", where the truth value of the claim is unspecified.
(If we want to get pedantic, the claim that John told you anything is still smuggling in some assumptions since you could have also hallucinated the whole thing. But this class of concerns is not what divides the two camps.)
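As a minimal sketch of the update being described, here is the Bayes computation with made-up numbers (all probabilities below are illustrative assumptions, not figures from the post):

```python
# Toy Bayesian update on the explanandum "John told me he has a headache",
# rather than on "John has a headache". All numbers are made up for illustration.

p_headache = 0.1            # prior that John actually has a headache
p_report_if_headache = 0.9  # he usually mentions it when he has one
p_report_if_not = 0.05      # occasionally he says it just to be left alone

p_report = (p_report_if_headache * p_headache
            + p_report_if_not * (1 - p_headache))

# Posterior probability that John really has a headache, given only the report.
p_headache_given_report = p_report_if_headache * p_headache / p_report
print(round(p_headache_given_report, 3))  # ~0.667: strong evidence, not certainty
```

The report raises the probability that John really has a headache without pinning it to 1, which is exactly the sense in which the report, rather than the headache itself, is the explanandum.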
Okay, so if John tells you he has a headache, the correct explanandum is "John claims to have a headache", and the analogous thing holds for any other sensation. But what if you yourself seem to experience something? This question is what divides the two camps:
According to Camp #1, the correct explanandum is only slightly more than "I claim to have experienced X" (where X is the apparent experience). After all, if we can explain exactly why you, as a physical system, uttered the words "I experienced X", then there's nothing else to explain. The reason it's slightly more is that you do still have some amount of privileged access to your own experience: a one-sentence testimony doesn't communicate the full set of information contained in a subjective state – but this additional information remains metaphysically non-special. (HT: wilkox.)
According to Camp #2, the correct explanandum is "I experienced X". After all, you perceive your experience/consciousness directly, so it is not possible to be wrong about its existence.
In other words, the two camps disagree about the epistemic status of apparently perceived experiences: for Camp #2, they're epistemic bedrock, whereas for Camp #1, they're model outputs of your brain, and like all model outputs of your brain, they can be wrong. The axiom of Camp #1 can be summarized in one sentence as "you should treat your own claims of experience the same way you treat everyone else's".
From the perspective of Camp #1, Camp #2 is quite silly. People have claimed that fire is metaphysically special, then intelligence, then life, and so on, and their success rate so far is 0%. Consciousness is just one more thing on this list, so the odds that they are right this time are pretty slim.
From the perspective of Camp #2, Camp #1 is quite silly. Any apparent evidence against the primacy of consciousness necessarily backfires as it must itself be received as a pattern of consciousness. Even in the textbook case where you're conducting a scientific experiment with a well-defined result, you still need to look at your screen (or other output device) to read the result, so even science bottoms out in predictions about future states of consciousness!
An even deeper intuition may be what precisely you identify with. Are you identical to your physical brain or body (or the program/algorithm implemented by your brain)? If so, you're probably in Camp #1. Are you a witness of/identical to the set of conscious experiences exhibited by your body at any moment? If so, you're probably in Camp #2. That said, this paragraph is pure speculation, and the two-camp phenomenon doesn't depend on it.
Representations in the literature
If you ask GPT-4 about the two most popular academic books about consciousness, it usually responds with
Consciousness Explained by Daniel Dennett; and
The Conscious Mind by David Chalmers.
If the camps are universal, we'd expect the two books to represent one camp each because economics. As it happens, this is exactly right!
Dennett devotes an entire chapter to the proper evaluation of experience claims, and the method he champions (called "heterophenomenology") is essentially a restatement of the Camp #1 axiom. He suggests that we should treat experience claims like fictional worldbuilding, where such claims are then "in good standing in the fictional world of your heterophenomenology". Once this fictional world is complete, it's up to the scientist to evaluate how its components map to the real world. Crucially, you're supposed to apply this principle even to yourself, so the punchline is again that the epistemic status of experience claims is always up for debate.
Conversely, Chalmers says this in the introductory chapter of his book (emphasis added):
In other words, Chalmers is having none of this heterophenomenology stuff; he wants to condition on "I experience X" itself.
Why it matters
While my leading example was about miscommunication, I think the camps have consequences in other areas as well, which are arguably more significant. To see why, suppose we
For someone in Camp #1, the answer has to be something like this:
I.e., consciousness is [the part of our brain that creates a unified narrative and produces our reports about "consciousness"].[1] So consciousness will be a densely connected part of this network – that is, unless you dispute that it's even possible to restrict it to just a part of the network, in which case it's more "some of the activity of the full network". Either way, consciousness is identified with its functional role, which makes the concept inherently fuzzy. If we built an AI with a similar architecture, we'd probably say it also had consciousness – but if someone came along and claimed, "wait a minute, that's not consciousness!", there'd be no fact of the matter as to who is correct, any more than there's a fact of the matter about the precise number of pebbles required to form a heap. The concept is inherently fuzzy, so there's no right or wrong here.
Conversely, Camp #2 views consciousness as a precisely defined phenomenon. And if this phenomenon is causally responsible for our talking about it,[2] then you can see how this view suggests a very different picture: consciousness is now a specific thing in the brain (which may or may not be physically identifiable with a part of the network), and the reason we talk about it is that we have it – we're reporting on a real thing.
These two views suggest substantially different approaches to studying the phenomenon – whether or not something has clear boundaries is an important property! So the camps don't just matter for esoteric debates about qualia but also for attempts to reverse-engineer consciousness, and to a lesser extent, for attempts to reverse-engineer the brain...
... and also for morality, which is a case where the camps are often major players even if consciousness isn't mentioned. Camp #2 tends to view moral value as mostly or entirely reducible to conscious states, an intuition so powerful that they sometimes don't realize it's controversial. But the same reduction is problematic for Camp #1 since consciousness is now an inherently fuzzy phenomenon – and there's no agreed-upon way to deal with this problem. Some want to tie morality to consciousness anyway, which can arguably work under a moral anti-realist framework. Others deny that morality should be about consciousness to begin with. And some bite the bullet and accept that their views imply moral nihilism. I've seen all three views (plus the one from Camp #2) expressed on LessWrong.
Discussion/Conclusions
Given the gulf between the two camps, how does one avoid miscommunication?
The answer may depend on which camp you're in. For the reasons we've discussed, it tends to be easier for ideas from Camp #1 to make sense to Camp #2 than vice-versa. If you study the brain looking for something fuzzy, there's no reason you can't still make progress if the thing actually has crisp boundaries – but if you bake the assumption of crisp boundaries into your approach, your work will probably not be useful if the thing is fuzzy. Once again, we need only look at the two most prominent theories in the literature for an example of this. Global Workspace Theory is peak Camp #1 stuff,[3] but it tends to be at least interesting to most people in Camp #2. Integrated Information Theory is peak Camp #2 stuff,[4] and I have yet to meet a Camp #1 person who takes it seriously. Global Workspace Theory is also the more popular of the two, even though Camp #1 is supposedly in the minority among researchers.[5]
The same pattern seems to hold on LessWrong across the board: Consciousness Explained gets brought up a lot more than The Conscious Mind, Global Workspace Theory gets brought up a lot more than Integrated Information Theory, and most high karma posts (modulo those of Eliezer) are Camp #1 adjacent – even though there are definitely a lot of Camp #2 people here. Kaj Sotala's Multi Agent Models of Mind series is a particularly nice example of a Camp #1 idea[6] with cross-camp appeal, and there's nothing analogous out of Camp #2.
So if you want to share ideas about this topic, it's probably a good idea to be in Camp #1. If that's not possible, I think just having a basic understanding of how ~half your audience thinks is helpful. There are a lot of cases where asking, "does this argument make sense to people with the other epistemic starting point?" is all you need to avoid the worst misunderstandings.
You can also try to convince the other side to switch camps, but this tends to work only around 0% of the time, so it may not be the best practical choice.
This doesn't mean anything that claims to be conscious is conscious. Under this view, consciousness is about the internal organization of the system, not just about its output. After all, a primitive chatbot can be programmed to make arbitrary claims about consciousness. ↩︎
This assumption is not trivial. For example, David Chalmers' theory suggests that consciousness has little to no impact on whether we talk about it. The class of theories that model consciousness as causally passive is called epiphenomenalism. ↩︎
Global Workspace Theory is an umbrella term for a bunch of high-level theories that attempt to model the observable effects of consciousness under a computational lens. ↩︎
Integrated Information Theory holds that consciousness is identical to the integrated information of a system, modeled as a causal network. There are precise rules to determine which part(s) of a network are conscious, and there is a scalar quantity called Φ ("big phi") that determines the amount of consciousness of a system, as well as a much more complex object (something like a set of points in high-dimensional Euclidean space) that determines its character. ↩︎
According to David Chalmers's book, the proportion skews about 2/3 vs. 1/3 in favor of Camp #2, though he provides no source for this, merely citing "informal surveys". The phenomenon he describes isn't exactly the same as the two-camp model, but it's so similar that I expect high overlap. ↩︎
I'm calling it a Camp #1 idea because Kaj defines consciousness as synonymous with attention for the purposes of the sequence. Of course, this is just a working definition. ↩︎