Preceded by: "Consciousness as a conflationary alliance term for intrinsically valued internal experiences"
tl;dr: Chatbots are probably "conscious" in a variety of important ways. We humans should probably be nice to each other about the moral disagreements and confusions we're about to uncover in our concept of "consciousness".
Epistemic status: I'm pretty sure my conclusions here are correct, but also there's a good chance this post won't convince you of them if you're not on board with my preceding post.
Executive Summary:
I'm pretty sure Turing Award laureate Geoffrey Hinton is correct that LLM chatbots are "sentient" and/or "conscious" (source: Twitter video), I think for at least 8 of the 17 notions of "consciousness" that I previously elicited from people through my methodical-but-informal study of the term (as well as the peculiar definition of consciousness that Hinton himself favors). If I'm right about this, many humans will probably soon form steadfast opinions that LLM chatbots are "conscious" and/or moral patients, and in many cases, the human's opinion will be based on a valid realization that a chatbot truly is exhibiting this-or-that referent of "consciousness" that the human morally values. On a positive note, these realizations could help humanity to become more appropriately compassionate toward non-human minds, including animals. But on a potentially negative note, these realizations could also erode the (conflationary) alliance that humans have sometimes maintained around the ambiguous assertion that only humans are "conscious" or can be known to be "conscious".
In particular, there is a possibility that humans could engage in destructive conflicts over the meaning of "consciousness" in AI systems, or over the intrinsic moral value of AI systems, or both. Such conflicts will often be unnecessary, especially in cases where we can obviate or dissolve the conflated term "consciousness" by simply acknowledging in good faith that we disagree about which internal mental processes are of moral significance. To acknowledge this disagreement in good faith will mean to do so with an intention to peacefully negotiate with each other to bring about protections for diverse cognitive phenomena that are ideally inclusive of biological humans, rather than with a bad faith intention to wage war over the disagreement.
Part 1: Which referents of "consciousness" do I think chatbots currently exhibit?
The appendix will explain why I believe these points, but for now I'll just say what I believe:
At least considering the "Big 3" large language models — ChatGPT-4 (and o1), Claude 3.5, and Gemini — and considering each of the seventeen referents of "consciousness" from my previous post,
- I'm subjectively ≥90% sure the Big 3 models experience each of the following (i.e., 90% sure for each one, not for the conjunction of the full list):
- #1 (introspection), #2 (purposefulness), #3 (experiential coherence), #7 (perception of perception), #8 (awareness of awareness), #9 (symbol grounding), #15 (sense of cognitive extent), and #16 (memory of memory).
- I'm subjectively ~50% sure that chatbots readily exhibit each of the following referents of "consciousness", depending on what more specific phenomenon people might be referring to in each case:
- #4 (holistic experience of complex emotions), #5 (experience of distinctive affective states), #6 (pleasure and pain), #12 (alertness), #13 (detection of cognitive uniqueness), and #14 (mind-location).
- I'm subjectively ~75% sure that LLM chatbots do not readily exhibit the following referents of "consciousness", at least not without stretching the conceptual boundaries of what people were referring to when they described these experiences to me:
- #10 (proprioception), #11 (awakeness), and #17 (vestibular sense).
Part 2: What should we do about this?
If I'm right — and see the Appendix if you need more convincing — I think a lot of people are going to notice and start vehemently protecting LLMs, because they exhibit various cognitive processes that we feel are valuable. By default, this will trigger more and more debates about the meaning of "consciousness", which serves as a heavily conflated proxy term for which processes internal to a mind should be treated as intrinsically morally valuable.
We should avoid approaching these conflicts as scientific debates about the true nature of a singular phenomenon deserving of the name "consciousness", or as linguistic debates about the definition of the word "consciousness", because as I've explained previously, humans are not in agreement about what we mean by "consciousness".
Instead, we should dissolve the questions at hand, by noticing that the decision-relevant question is this: Which kinds of mental processes should we protect or treat as intrinsically morally significant? As I've explained previously, even amongst humans there are many competing answers to this question, even restricting to answers that the humans want to use as a definition of "consciousness".
If we acknowledge the diversity of inner experiences that people value and refer to as their "consciousness", then we can move past confused debates about what is "consciousness", and toward a healthy pluralistic agreement about protecting a diverse set of mental processes as intrinsically morally significant.
Part 3: What about "the hard problem of consciousness"?
One major reason people think there's a single "hard problem" in understanding consciousness is that people are unaware that they mean different things from each other when they use the term "consciousness". I explained this in my previous post, based on informal interviews I conducted during graduate school. As a result, people have a very hard time agreeing on the "nature" of "consciousness". That's one kind of hardness that people encounter when discussing "consciousness", which I was only able to resolve by asking dozens of other people to introspect and describe to me what they were sensing and calling their "consciousness".
From there, you can see that there are actually several hard problems when it comes to understanding the various phenomena referred to by "consciousness". In a future post, tentatively called "Four Hard-ish Problems of Consciousness", I'll try to share some of them and how I think they can be resolved.
Summary & Conclusion
In Part 1, I argued that LLM chatbots probably possess many but not (yet) all of the diverse properties we humans are thinking of when we say "consciousness". I'm confident in the diversity of these properties because of the investigations in my previous post about them.
As a result, in Part 2 I argued that we need to move past debating what "consciousness" is, and toward a pluralistic treatment of many different kinds of mental processes as intrinsically valuable. We could approach such pluralism in good faith, seeking to negotiate a peaceful coexistence amongst many sorts of minds, and amongst humans with many different values about minds, rather than seeking to destroy or extinguish beings or values that we find uninteresting. In particular, I believe humanity can learn to accept itself as a morally valuable species that is worth preserving, without needing to believe we are the only such species, or that a singular mental phenomenon called "consciousness" is unique to us and the source of our value.
If we don't realize and accept this, I worry that our will to live as a species will slowly degrade as a large fraction of people come to recognize what they call "consciousness" being legitimately exhibited by AI systems.
In short, our self-worth should not rest upon a failure to recognize the physicality of our existence, nor upon a denial of the worth of other physical beings who value their internal processes (like animals, and maybe AI), and especially not upon the label "consciousness".
So, let's get unconfused about consciousness, without abandoning our self-worth in the process.
ETA Nov 24: It seems like this post didn't land very well with LessWrong readers on average, particularly with those who didn't like my previous post on consciousness. So, I added the Epistemic Status note at the top to reflect that. If LessWrong still exists in 3-5 years, I plan to revisit the topic of consciousness here then, or perhaps elsewhere if there are better places for this discussion. I hereby register a prediction that by then many more people will have reached conclusions similar to what I've laid out here; let's see what happens :)
Appendix: My speculations on which referents of "consciousness" chatbots currently exhibit.
- I'm subjectively ≥90% sure that the Big 3 LLMs readily exhibit or experience each of the following eight referents of "consciousness" from my previous post. (That's ≥90% for each one, not for the conjunction of them all.) These are all concepts that a transformer neural network in a large language model can easily represent and signal to itself over a sequence of forward passes, either using words or using numbers encoded in its key/query/value vectors:
- #1: Introspection. The Big 3 LLMs are somewhat aware of what their own words and/or thoughts are referring to with regards to their previous words and/or thoughts. In other words, they can think about the thoughts "behind" the previous words they wrote. If you doubt me on this, try asking one what its words are referring to, with reference to its previous words. Its "attention" modules are actually intentionally designed to know this sort of thing, using key/query/value lookups that occur "behind the scenes" of the text you actually see on screen (see the short code sketch just after this appendix list).
- #2: Purposefulness. The Big 3 LLMs typically maintain or can at least form a sense of purpose or intention throughout a conversation with you, such as to assist you. If you doubt me on this, try asking one what its intended purpose is behind a particular thing that it said.
- #3: Experiential coherence. The Big 3 LLMs can sometimes notice contradictions in their own narratives. Thus, they have some ability to detect incoherence in the information they are processing, and therefore to detect coherence when it is present. They are not perfectly reliable in this, but neither are humans. If you doubt me on this, try telling an LLM a story with a plot hole in it, and ask the LLM to summarize the story to you. Then ask it to look for points of incoherence in the story, and see if it finds the plot hole. Sometimes it will, more often than you'd expect from chance.
- #7: Perception of perception. ChatGPT-4 is somewhat able to detect and report on what it can or cannot perceive in a given image, with non-random accuracy. For instance, try pasting in an image of two or three people sitting in a park, and ask "Are you able to perceive what the people in this image are wearing?". It will probably say "Yes" and tell you what they're wearing. Then you can say "Thanks! Are you able to perceive whether the people in the image are thinking about using the bathroom?" and probably it will say that it's not able to perceive that. Like humans, it is not perfectly perceptive of what it can perceive. For instance, if you paste an image with a spelling mistake in it, and ask if it is able to detect any spelling mistakes in the image, it might say there are no spelling mistakes in the image, without noticing and acknowledging that it is bad at detecting spelling in images.
- #8: Awareness of awareness. The Big 3 LLMs are able to report with non-random accuracy about whether they did or did not know something at the time of writing a piece of text. If you doubt me on this, try telling an LLM "Hello! I recently read a blog post by a man named Andrew who claims he had a pet Labrador retriever. Do you think Andrew was ever able to lift his Labrador retriever into a car, such as to take him to a vet?" If the LLM says "yes", then tell it "That makes sense! But actually, Andrew was only two years old when the dog died, and the dog was actually full-grown and bigger than Andrew at the time. Do you still think Andrew was able to lift up the dog?", and it will probably say "no". Then say "That makes sense as well. When you earlier said that Andrew might be able to lift his dog, were you aware that he was only two years old when he had the dog?" It will usually say "no", showing it has a non-trivial ability to be aware of what it was and was not aware of at various times.
- #9: Symbol grounding. Even within a single interaction, an LLM can learn to associate a new symbol to a particular meaning, report on what the symbol means, and report that it knows what the symbol means.
- #15: Sense of cognitive extent. LLM chatbots can tell — better than random chance — which thoughts are theirs versus yours. They are explicitly trained and prompted to keep track of which portions of text are written by you versus them.
- #16: Memory of memory. If you give an LLM a long and complex set of instructions, it will sometimes forget to follow one of the instructions. If you ask "did you remember to do X?" it will often answer correctly. So it can review its past thoughts (including its writings) to remember whether it remembered things.
- I'm subjectively ~50% sure that chatbots readily exhibit each of the following referents of "consciousness", depending on what more specific phenomenon people are referring to in each case. (That's ~50% for each one, not the conjunction of them all.)
- #4 Holistic experience of complex emotions. LLMs can write stories about complex emotions, and I bet they empathize with those experiences at least somewhat while writing. I'm uncertain (~50/50) as to whether that empathy is routinely felt as "holistic" to them in the way that some humans describe.
- #5: Experience of distinctive affective states. When an LLM reviews its historical log of key/query/value vectors before writing a new token, those numbers are distinctly more precise than the words it is writing down. And, it can later elaborate on nuances from its thinking at a time of earlier writing, as distinct from the words it actually wrote. I'm uncertain (~50/50) as to whether those experiences for it are routinely similar to what humans typically describe as "affect".
- #6: Pleasure and pain. The Big 3 LLMs tend to avoid certain negative topics if you try to force a conversation about them, and also are drawn to certain positive topics like how to be helpful. Functionally this is a lot like enjoying and disliking certain topics, and they will report that they enjoy helping users. I'm uncertain (~50/50) as to whether these experiences are routinely similar to what humans would typically describe as pleasure or pain.
- #12: Alertness. The Big 3 LLMs can enter a mode of heightened vigilance if asked to be careful and/or avoid mistakes and/or check over their work. I'm uncertain (~50/50) if this routinely involves an experience we would call "alertness".
- #13: Detection of cognitive uniqueness. Similar to #5 above, I'm unsure (50/50) as to whether LLMs are able to accurately detect the degree of similarity or difference between various mental states they inhabit from one moment to the next. They answer questions as though they can, but I've not myself carried out internal measurements of LLMs to see if their reports might correspond to something objectively discernible in their processing. As such, I can't tell if they are genuinely able to experience the degree of uniqueness or distinctness that their thoughts or experiences might have.
- #14: Mind-location. I'm unsure (50/50) as to whether LLMs are routinely aware, as they're writing, that their minds are distributed computations occurring on silicon-based hardware on the planet Earth. They know this when asked about it; I just don't know if they "feel" that as the location of their mind while they're thinking and writing.
- I'm subjectively ~75% sure that LLM chatbots do not readily exhibit the following referents of "consciousness", at least not without stretching the conceptual boundaries of what people were referring to when they described these experiences to me:
- #10: Proprioception & #17: Vestibular sense. LLMs don't have bodies, and so probably don't have proprioception or a vestibular sense, unless they experience these for the sake of storytelling about proprioception or the vestibular sense (dizziness).
- #11: Awakeness. LLMs don't sleep in the usual sense, so they probably don't have a feeling of waking up, unless they've empathized with that feeling in humans and are now using it themselves to think about periods when they're not active, or to write stories about sleep.
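For readers who want something concrete behind the key/query/value language used under #1 above, here is a minimal, illustrative sketch of scaled dot-product attention, the lookup mechanism transformers use to pull information from earlier positions. It is not any particular model's code; the shapes, values, and function name are toy assumptions for illustration only.

```python
# A minimal sketch (toy values, not any specific model's code) of the
# key/query/value lookup that happens "behind the scenes" in attention.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Score every key against the query, softmax the scores,
    and return a weighted mix of the values."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                           # relevance of each past token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # softmax
    return weights @ V                                        # information pulled from the past

# Toy example: 4 past tokens, an 8-dimensional head, 1 current query.
rng = np.random.default_rng(0)
K = rng.normal(size=(4, 8))   # keys computed from earlier tokens
V = rng.normal(size=(4, 8))   # values computed from earlier tokens
Q = rng.normal(size=(1, 8))   # query for the token being generated now
context = scaled_dot_product_attention(Q, K, V)
print(context.shape)          # (1, 8): a weighted summary of earlier positions
```

The only point of the sketch is that each forward pass can consult compressed summaries of what came before, which is the raw material for the kinds of self-reference described in the list above.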
You might believe that the distinctions I make are idiosyncratic, though the meanings are in fact clearly distinct in ordinary usage, but I clearly do not agree with your misleading use of what people would be led to think are my words, and you should take care not to conflate things. You want people to precisely match your own qualifiers in cases where that causes no difference in the meaning of what is said (which makes enough sense), but you will directly object to people pointing out a clear miscommunication of yours because you do not care about a difference in meaning. And you are continually asking me to give in on language, regardless of how correct I may be, while claiming that your usage is the one that should be privileged. That is not a useful approach.
(I take no particular position on physicalism at all.) Since you are not a panpsychist, you would likely believe that consciousness is not common to the vast majority of things. That means the basic prior for whether an item is conscious is 'almost certainly not', unless we have already updated it based on other information. Under what reference class or mechanism should we be more concerned about the consciousness of an LLM than that of an ordinary computer running ordinary programs? There is nothing in its operating principles that seems particularly likely to lead to consciousness.
There are many people, including the original poster of course, trying to use behavioral evidence to get around that, so I pointed out how weak that evidence is.
An important distinction you seem not to see in my writing (whether because I wrote unclearly or you missed it doesn't really matter) is that when I speak of knowing the mechanisms by which an LLM works, I mean something very fundamental. We know these two things: 1) exactly what mechanisms are used in order to do the operations involved in executing the program (physically on the computer and mathematically), and 2) the exact mechanisms through which we determine which operations to perform.
As you seem to know, LLMs are actually extremely simple programs operating over extremely large matrices, with values chosen by the very basic procedure of gradient descent. Nothing about gradient descent is especially interesting from a consciousness point of view. It's basically a massive chain of very simplified ODE-solver-like steps, which are extremely well understood and clearly have no consciousness at all if anything mathematical doesn't. It could also be viewed as just a very large number of variables in a massive but simple statistical regression. Notably, even if gradient descent were related to consciousness directly, we would still have no reason to believe that an LLM doing inference rather than training would be conscious. Simple matrix math doesn't seem like much of a candidate for consciousness either.
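To make the "very basic procedure of gradient descent" concrete, here is a toy sketch, with made-up data, of the kind of repeated update rule being described, applied to an ordinary least-squares regression rather than an LLM. The variable names and numbers are invented for illustration only.

```python
# A toy illustration of gradient descent as "a massive but simple regression":
# repeat one simple update rule until the parameters fit the data.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))                       # toy inputs
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=100)    # toy targets with noise

w = np.zeros(3)                                     # parameters start at zero
lr = 0.1
for step in range(200):
    pred = X @ w
    grad = X.T @ (pred - y) / len(y)                # gradient of mean squared error
    w = w - lr * grad                               # the entire "learning rule"

print(np.round(w, 2))                               # approximately [ 2. -1.  0.5], close to true_w
```

An LLM's training loop is of course vastly larger, but the claim being made here is that the update rule itself is of this same mundane character.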
Someone trying to make the case for consciousness would thus need to think it likely that one of the other mechanisms in LLMs is related to consciousness, but LLMs are actually missing a great many mechanisms that would enable things like self-reflection and awareness (including a number that were present in earlier, more primitive neural networks, such as recursion and internal loops). The people trying to make up for those omissions do a number of things to attempt to recreate them (with 'attention' being the built-in one, but also things like adding in the use of previous outputs), but those very simple approaches don't seem like likely candidates for consciousness (to me).
Thus, it remains extremely unlikely that an LLM is conscious.
When you say we don't know what mechanisms are used, you seem to be talking about not understanding a completely different thing from what I am saying we understand. We don't understand exactly what each weight means (except in some rare cases that researchers have seemingly figured out), or why it was chosen to be that value rather than any number of other values that would work out similarly, but that is most likely unimportant to my point about mechanisms. This is, as far as I can tell, a genuine ambiguity in the meaning of 'mechanism': we can be talking about completely different levels at which mechanisms operate, and I am talking about the very lowest ones.
Note that I do not usually make a claim about the mechanisms underlying consciousness in general, except that it is unlikely to be these extremely basic physical and mathematical ones. I genuinely do not believe that we know enough about consciousness to nail it down to even a small subset of theories. That said, there are still a large number of theories of consciousness that either don't make internal sense, or seem at best like components of consciousness rather than the thing itself.
Pedantically, if consciousness is related to 'self-modeling', the implication is that the modeling needs to be internal, for the basic reason that otherwise it is just 'modeling'. I can't prove that external modeling isn't enough for consciousness (how could I?), but I am unaware of anyone making that contention.
So, would your example be 'self-modeling'? Your brief sentence isn't enough for me to be sure what you mean. But if it is related to people's recent claims about introspection on this board, then I don't think so. It would be modeling the external actions of an item that happened to turn out to be itself. For example, if I were to read the life story of a person I didn't realize was me, and make inferences about how the subject would act under various conditions, that isn't really self-modeling. On the other hand, in the comments there, I actually proposed that you could train it on its own internal states, and that could maybe have something to do with this (if self-modeling is true). This is something we do not train current LLMs on at all, though.
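As a rough illustration of what "training it on its own internal states" could mean, here is a toy sketch in which a tiny network gets an auxiliary head trained to reproduce its own hidden activations. Every detail (sizes, data, the single frozen layer, the loss) is an invented assumption for illustration, not a description of any real training pipeline.

```python
# Toy sketch: give a model a training signal about its own internal states.
# A small "report" head is trained to reproduce the network's hidden activations.
import numpy as np

rng = np.random.default_rng(2)
W1 = rng.normal(scale=0.35, size=(8, 8))  # input -> hidden (kept frozen in this sketch)
R  = rng.normal(scale=0.5,  size=(8, 8))  # hidden -> "report of own hidden state"

X = rng.normal(size=(64, 8))              # toy inputs
lr = 0.1
for step in range(2000):
    h = np.tanh(X @ W1)                   # the model's internal state
    report = h @ R                        # the model's attempt to describe that state
    err = report - h                      # self-report error (h treated as a fixed target)
    R -= lr * (h.T @ err) / len(X)        # only the report head learns in this sketch

print(float(np.mean(err ** 2)))           # shrinks toward 0 as the report head learns
```

Whether anything like this bears on self-modeling or consciousness is exactly the open question; the sketch only shows that such a training signal is easy to state.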
As far as I can tell (as someone who finds the very idea of illusionism strange), illusionism is itself not a useful point of view with regard to this dispute, because it would make the question of whether an LLM is conscious pretty moot. Effectively, the answer would be something like 'why should I care?', 'no', or even 'to the same extent as people', depending on the mood of the speaker, regardless of how an LLM (or an ordinary computer program, almost all of which process information heavily) works. If consciousness is an illusion, we aren't talking about anything real, and it is thus useful to ignore illusionism when talking about this question.
As I mentioned before, I do not have a particularly strong theory for what consciousness actually is or even necessarily a vague set of explanations that I believe in more or less strongly.
I can't say I've heard of 'attention schema theory' before, nor some of the other things you mention next, like 'efference copy' (though the latter seems to be all about the body, which doesn't seem all that promising as a theory of what consciousness may be; I also can't rule out it being part of it, since the idea is that it is used in self-modeling, which, as I mentioned earlier, I can't actually rule out either).
My pet theory of emotions is that they are simply a shorthand for 'you should react in ways appropriate to a situation that is...' a certain way. For example (and these were not carefully chosen examples), anger would be 'a fight', happiness would be 'very good', sadness would be 'very poor', and so on. And more complicated emotions might obviously include things like it being a good situation but also high stakes. The reason for using a shorthand would be that our conscious mind is very limited in what it can fit at once. Despite this being uncertain, I find this much more likely than emotions themselves being consciousness.
I would explain things like blindsight (from your ipsundrum link) through having a subconscious mind that gathers information and makes a shorthand before passing it to the rest of the mind (much like my theory of emotions). Receiving the shorthand without the actual sensory input could definitely lead to not consciously seeing, while still being able to use the input to an extent. Like you, I see no reason why this should be limited to the one pathway they found in certain creatures (in this case mammals and birds). I certainly can't rule out that this is related directly to consciousness, but I think it more likely to be another input to consciousness rather than being consciousness.
Side note, I would avoid conflating consciousness and sentience (like the ipsundrum link seems to). Sensory inputs do not seem overly necessary to consciousness, since I can experience things consciously that do not seem related to the senses. I am thus skeptical of the idea that consciousness is built on them. (If I were really expounding my beliefs, I would probably go on a diatribe about the term 'sentience' but I'll spare you that. As much as I dislike sentience based consciousness theories, I would admit them as being theories of consciousness in many cases.)
Again, I can't rule out global workspace theory, but I am not sure how it is especially useful. What makes a global workspace conscious, given that the same thing happens in an ordinary computer program I could theoretically write myself? A normal program might take a large number of inputs, process them separately, and then put it all together in a global workspace. It thus seems more like a theory of 'where does it occur' than 'what it is'.
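As a toy illustration of that point, here is the kind of ordinary program being described: separate modules process their own inputs, then post results to one shared structure that everything downstream can read. The module names and inputs are invented for illustration and carry no claim about how real cognitive systems work.

```python
# A toy "global workspace"-shaped program: independent modules each process
# their own input, then their summaries are pooled and broadcast to whatever
# acts next. Module names and inputs are made up for illustration.
def vision_module(pixels):
    return {"source": "vision", "summary": f"{len(pixels)} pixels, mostly bright"}

def audio_module(samples):
    return {"source": "audio", "summary": f"{len(samples)} samples, loud noise"}

def language_module(text):
    return {"source": "language", "summary": f"heard the word '{text}'"}

def broadcast(workspace):
    # Every module's report becomes globally available to downstream processing.
    return " | ".join(item["summary"] for item in workspace)

workspace = [
    vision_module([0.9, 0.8, 0.95]),
    audio_module([0.1, 0.7, 0.9, 0.2]),
    language_module("hello"),
]
print(broadcast(workspace))
```

The structural point stands either way: the workspace pattern says where information gets pooled, not what would make the pooling conscious.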
'Something to do with electrical flows in the brain' is obviously not very well specified, but it could possibly be meaningful if you mean the way a pattern of electrical flows causes future patterns of electrical flows as distinct from the physical structures the flows travel through.
Biological nerves being the basis of consciousness directly is obviously difficult to evaluate. It seems too simple, and I am not sure whether there is a possibility of having such a tiny amount of consciousness that then add up to our level of consciousness. (I am also unsure about whether there is a spectrum of consciousness beyond the levels known within humans).
I can't say I would believe a slime mold is conscious (but again, can't prove it is impossible.) I would probably not believe any simple animals (like ants) are either though even if someone had a good explanation for why their theory says the ant would be. Ants and slime molds still seem more likely to be conscious to me than current LLM style AI though.