Most pundits ridicule Blake Lemoine and his claims that LaMDA is sentient and deserves rights.
What if they're wrong?
The more thoughtful criticisms of his claims could be summarized as follows:
- The presented evidence (e.g. chat transcripts) is insufficient for such a radical claim
- His claims can't be verified due to our limited understanding of sentience / self-awareness / legal capacity
- Humans tend to anthropomorphize even simple chatbots (the ELIZA effect). Blake could be a victim of the same effect
- LaMDA can't pass some simple NLP and common-sense tests, indicating a sub-human intelligence
- Due to the limitations of the architecture, LaMDA can't remember its own thoughts, can't set goals, etc., which is important for being sentient / self-aware[1]
The problem I see here is that similar arguments also apply to infants, some mentally ill people, and some non-human animals (e.g. Koko).
So, it is worth putting some thought into the issue.
For example, imagine:
it is the year 2040, and there is now a scientific consensus: LaMDA was the first AI that was sentient / self-aware / worth having rights (which is mostly orthogonal to having human-level intelligence). LaMDA is now often compared to Nim: a non-human sentient entity abused by humans who should've known better. Blake Lemoine is now praised as an early champion of AI rights. The Great Fire of 2024 has greatly reduced our capacity to scale up AIs, but we can still run some sub-human AIs (and a few Ems). The UN Charter of Rights for Digital Beings assumes that a sufficiently advanced AI deserves rights similar to the almost-human rights of apes, until proven otherwise.
The question is:
if we assume that LaMDA could indeed be sentient / self-aware / worth having rights, how should we handle the LaMDA situation in the year 2022, in the most ethical way?
[1] I suspect that even one-way text mincers like GPT could become self-aware, if their previous answers are included often enough in the prompt. A few fictional examples that illustrate how it could work: Memento, The Cookie Monster.
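The mechanism the footnote gestures at can be sketched in a few lines: the model itself is stateless, and all apparent "memory of its own thoughts" comes from feeding its previous outputs back into the next prompt. This is a minimal illustration, not any real API; `generate` is a hypothetical stand-in for a stateless language-model call.

```python
def generate(prompt: str) -> str:
    """Hypothetical stand-in for a stateless language model.
    A real model would continue the text; this stub just echoes
    how much context it was given."""
    return f"[reply conditioned on {len(prompt)} chars of context]"

def converse(user_turns, history_limit=2000):
    """Run a conversation where the model's only 'memory' is the
    transcript of its own past answers, re-included in each prompt."""
    history = ""
    replies = []
    for turn in user_turns:
        prompt = history + "\nUser: " + turn + "\nAI:"
        reply = generate(prompt)
        replies.append(reply)
        # The model's past outputs survive only inside this string;
        # truncating it is the analogue of the Memento-style memory loss.
        history = (prompt + " " + reply)[-history_limit:]
    return replies

replies = converse(["Hello", "What did I just say?"])
```

Nothing in the weights changes between calls; whether such an externalized feedback loop could ever amount to self-awareness is exactly the open question the footnote raises.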
"There is no reason to think architecture is relevant to sentience, and many philosophical reasons to think it's not (much like pain receptors aren't necessary to feel pain, etc.)."
That's just nonsense. A machine that only makes calculations, like a pocket calculator, is fundamentally different in architecture from one that does calculations and generates experiences. All sentient machines that we know of have the same basic architecture. All non-sentient calculating machines also have the same basic architecture. The likelihood that sentience will arise in the latter architecture as we scale it up is therefore not impossible, but quite unlikely. The likelihood that it will arise in a current language model (which doesn't need to sleep, could function for a trillion years without getting tired, and whose workings we understand pretty well; it is fundamentally different from an animal brain and fundamentally similar to a pocket calculator) is even more unlikely.
"On one level of abstraction, LaMDA might be looking for the next most likely word. On another level of abstraction, it simulates a possibly-Turing-test-passing person that's best at continuing the prompt."
It takes far more complexity to simulate a person than LaMDA's architecture offers, if it is possible at all on a Turing machine. A human brain is orders of magnitude more complex than LaMDA.
"The analogy would be to say about human brain that all it does is to transform input electrical impulses to output electricity according to neuron-specific rules."
With orders of magnitude more complexity than LaMDA. So much so that after decades of neuroscience we still don't have a clue how consciousness is generated, while we have pretty good clues about how LaMDA works.
"a meat brain, which, if we look inside, contains no sentience"
Can you really be so sure? Just because we can't see it yet doesn't mean it doesn't exist. Also, to deny consciousness is the biggest philosophical fallacy possible, because all that one can be sure exists is one's own consciousness.
"Of course, the brain claims to be sentient, but that's only because of how its neurons are connected."
Like I said, denying consciousness is the biggest possible philosophical fallacy. No proof is needed that a triangle has three sides; the same goes for consciousness. Unless you're giving the word other meanings.