Review

“AI Wellbeing” by Goldstein & Kirk-Giannini (linked in the AI Safety Newsletter #18) presents arguments for viewing wellbeing as a moral concern separable from consciousness. I believe these arguments lead to the right conclusions for the wrong reasons. We don’t need a new concept of moral value. However, they highlight (at least indirectly) a real problem with the existing AI consciousness debate: We don’t know where consciousness starts, and our implicit optimism about being able to observe it in the future is entirely unfounded.

Terminology: Throughout, I use consciousness to refer to phenomenal consciousness or sentience, the experience of “it-feeling-like-something”. I write cold to emphasize the absence of consciousness. Wellbeing encompasses mental states such as knowledge or desire. Wellbeing-ists like Goldstein & Kirk-Giannini argue that wellbeing is morally relevant independently of whether it involves consciousness. Keeping this terminology precisely in mind helps avoid confusion between these otherwise easily conflated terms.

Wellbeing and Consciousness

I had initially been surprised by the claim in “AI Wellbeing” that little attention had been paid to the question: What conditions lead to an AI with (presumably ethically relevant) wellbeing? Haven’t we spent the past decade or so discussing this on LessWrong, in Effective Altruism, and, as of late, ad nauseam in all media? Eventually I realized my mistake. We have been discussing consciousness, much less wellbeing – and the two are not synonyms.

Then again, isn’t the critical issue for “AI wellbeing” still: What possesses phenomenal consciousness? Surely we need some hint about this first. Only then may we fully devote ourselves to whether the wellbeing of that conscious agent is high or low. There is hope that this second question is simpler than the first. At a high level, we likely agree on how wellbeing may differ across diverse human experiences, or between you, your cat, and a factory-farmed chicken – assuming they all possess consciousness.

Wellbeing as an independent moral concern

How, then, is the critical relevance of (cold) wellbeing justified? The answer in “AI Wellbeing” is simple: It rejects the Consciousness Requirement. Put bluntly, this rejection means: You may be mistreating your computer even if it doesn’t develop phenomenal consciousness. This rejection complicates the problem of machine moral status, as it requires us to address, right from the start, consciousness and wellbeing as two entirely separable concepts.

Although this distinction may seem peculiar, the authors of “AI Wellbeing” note that the Consciousness Requirement is rejected by the majority of “philosophers of welfare.” We could question whether selection bias affects this statistic, but let’s address it with a simple counterpoint: many philosophers have been mistaken on many topics.

Addressing the arguments against the Consciousness Requirement

A first specific argument provided against the Consciousness Requirement is that it is a live question whether language agents are phenomenally conscious. It is indeed deeply frustrating that we have no good grounds to decide what exactly does or doesn’t have consciousness. But I disagree that this problem implies that, for a being assumed to lack consciousness, other features of its mind matter morally. Even if I cannot answer what feels, I still rather happily insist that what matters morally is exactly what does feel somehow – and only that. Despite my disagreement, the argument rightly calls out the weakness in all our consciousness theorizing, which we'll take up below in the overall takeaways from the paper.

The second argument states that only a minority of philosophers believe that mental states like knowledge and desire require phenomenal consciousness. While it's true that a non-conscious robot may possess some types of knowledge and desire – by definition those of the cold, non-feely sort – I don't see why we can't restrict the morally relevant forms of desire to sentient agents. This perspective could still align with the majority view of philosophers without questioning the Consciousness Requirement. How strongly or loosely one relates the concept of wellbeing to knowledge and desire doesn’t affect this conclusion.[1]

A further argument is based on the insight that dominant theories of consciousness are implausible sources for necessary conditions on wellbeing. For example, higher-order representations are an implausible sine qua non for wellbeing. We may agree. But I reckon I'm no exception if I agree just as easily with anyone stating that higher-order representations may be an unsatisfactory condition for consciousness itself. We may extend this caveat to just about any of the postulated requirements of well-known theories of consciousness. Lacking majority-winning sufficient conditions for consciousness, we indeed cannot convincingly explain what exactly leads to morally relevant states of mind.

Does it follow, then, that because consciousness theories cannot tell us what exactly leads to wellbeing, morally relevant wellbeing must be independent of consciousness? I see this as an invalid shortcut, a sort of free-riding on the hardness of the hard problem. Logically equivalent would be a claim of the sort: ‘A morally relevant “very happy” feeling cannot depend on consciousness, as none of the conditions so far proposed for consciousness are convincing candidate prerequisites for a “very happy” feeling.’ Generalized, this would be tantamount to saying that nothing can ever depend on anything we have not (yet) understood – which is of course absurd. So we can also reject this attempt to show that morally relevant wellbeing without consciousness must exist. Yes, the friend of the Consciousness Requirement must be embarrassed: he can convincingly explain neither a very happy feeling nor wellbeing. But this is no reason to be more embarrassed than he has always been, with his unsolved hard problem; concluding that wellbeing (and happiness) do not depend on consciousness would not reduce the mess.

The next argument starts with a zombie with human desires – cold desires devoid of feelings. She is then warped into a zombie+ with a single fixed phenomenal trait: a persistent phenomenal experience of mild pleasure. The paper rightly explains that this mutation need not affect the moral relevance of her desires. I understand the line of argument to suggest that, now that the zombie+ has a single inert spot of phenomenology, the advocate of the Consciousness Requirement would face a challenge: for him, the desires originally judged irrelevant in the zombie would have to become part of the morally relevant features of zombie+. While one could agree that in this case our consciousness friend would be claiming something absurd, this story hinges on an overly simplified consciousness-based view. It seems perfectly natural for the Consciousness Requirement advocate to regard only the single-spot phenomenal experience as morally salient, leaving the rest of the still-cold desires morally irrelevant. So the Consciousness Requirement friend does not have to claim any implausible change in the moral relevance of the desires, and there is no cause for concern for the consciousness camp.

For the final thought experiment, let’s call it X, we imagine someone in an unconscious sleep whose desires are satisfied, and who later wakes up. Without going into the detail of the paper’s argumentation, I postulate that we need not expect any trouble here for consciousness as the sole basis of moral relevance, as long as we strictly distinguish what the generally ambiguous term “unconscious” means here. First, if “unconscious” means the sleeper has no phenomenal consciousness, we may safely dismiss moral concern about his actual state, but the consequences of his time asleep for his experience once he wakes up remain of concern. Second, if “unconscious” only means he’s unaware of the real world around him, so that he’s still feeling something, maybe his dream, then the Consciousness Requirement friend may care about both the person’s during-sleep feelings and his potential future feelings after waking up. Even though the paper introduces the thought experiment as a counterargument against the Consciousness Requirement, I do not see it refuting this simple way in which the Consciousness Requirement friend can happily live with X.[2]

An attempt to empathize with cold wellbeing

When trying to understand independently why one might insist that wellbeing matters in the absence of consciousness, I stumble upon the difficulty of imagining a person having ‘wellbeing’ – say, desires that may be fulfilled or frustrated – without having consciousness. We may easily end up inadvertently imagining the person feeling something like sadness when her desires are frustrated.

We may at least slightly more easily entertain that a cat lacks any consciousness. This cat still desperately wants to explore the house or the neighborhood. Locking it in a drawer may feel cruel, and we’d much rather unlock it, “just to be sure”, despite its supposed unconsciousness. Does this reveal that we are not, after all, confident that a non-conscious mind lacks moral relevance? I don't think so. Give me a small toy that is likewise programmed – that desires – to roam around and map the space around it, and that turns on a siren when locked in. We won’t get bad feelings about it – other than for the draining battery and the unpleasant noise. Our feeling about the unconscious, locked-in cat therefore seems more an artifact of our failure to fully adjust all our senses to the abstract and implausible premise of the fictional story (namely, that we are absolutely certain of the cat’s unconsciousness). So even our (potential) unease about the imaginary cold cat does not necessarily reveal a problem with the Consciousness Requirement.

Caution against confidence about phenomenal consciousness

"AI Wellbeing" concludes with a call for caution when dealing with today's AI, as we may unknowingly create countless AIs with suffering or negative wellbeing. The authors may have not intended to argue that each of their points strictly refutes the Consciousness Requirement, but rather to emphasize that many questions warrant exploration before accepting it – although I find some of the above, specific questions they raise have relatively straightforward answers.

Interpreted a bit freely, “AI Wellbeing” serves as a reminder: Whenever we assert that complex stuff is required for sentience, we had better think twice. The morally relevant stuff – sentience, or wellbeing, depending on definitions and convictions – might arise earlier than we think. Aside from the fact that we humans are somewhat complex, there is no evidence that morally salient states require our level of complexity, or even only that of mammals or insects. Maybe consciousness is even too simple for us to understand, rather than the other way round. We really, truly don't know much.

Nevertheless, the reasons provided for adding a new term or concept to the mix when talking about these issues seem unconvincing. Good old phenomenal consciousness, with its “does it feel like something?” question, seems just fine for a start.

I find particular value in the emphasis on our lack of knowledge about what gives rise to consciousness, because I sense we're being overconfident. In much of the commentary on AI and sentience, there seems to be an implicit assumption that we will be able to observe when AIs reach levels of sophistication that imply phenomenal consciousness. This confidence seems wholly unfounded. What we’re likely to observe is when AIs become more human-like in their capabilities, or conscious in the sense of material awareness. However, consciousness as a phenomenal state will not be strictly observable – or, rather, it will strictly not be observable. An AI talking deeply about consciousness only tells us we’ve created a machine that generates thoughts about consciousness. We can’t observe whether it feels them.[3] The paper’s emphasis on our uncertainties is a good reminder of the unsolved, and potentially unsolvable, questions in this area – as well as of how easily we may end up talking past each other, each convinced of understanding key aspects of moral valence, while in reality we're as clueless as ever.
 


[1] Of course, this basic rejection of the argument would be different if a majority of philosophers claimed that intrinsically morally relevant knowledge and desires require no consciousness, but this is not what the paper argues (nor what I would expect, though I have not researched it).

[2] I do not attempt to reject a given argument more strictly here, as we’re not in fact provided with a complete argument, based on X, against the Consciousness Requirement. The logical structure of what the paper instead proposes is: ‘Author Z uses version B of the Consciousness Requirement, as version A of the Consciousness Requirement is vulnerable to X. But B is weak.’ This is my abbreviated paraphrase, but I don’t think it is a strawman of what we get in the paper. We never learn exactly why (version A of) the Consciousness Requirement is vulnerable to X, the satisfied sleeper, and, as I propose, the most natural look at that sleeper does not raise any obvious concerns for the Consciousness Requirement – though I’m keen to learn if I have overlooked something.

[3] After all, even for us humans, the main thing we can bring forward against illusionism about consciousness – if anything – may be our own introspective insight.

Comments

“You may be mistreating your computer even if it doesn’t develop phenomenal consciousness.”

I realize you are not endorsing this yourself, but regardless, it's a classic example of "rationalists" following ideas to their ultimate conclusion without sanity-checking them or applying Chesterton's fence.

Indeed, just as you do, I very much reject that statement, which is only how I very bluntly put what the paper's authors really imply.

Then again, I find your claim slightly too strong. I would not want to claim to know for sure that the authors have not tried to sanity-check their conclusions, and I'm not 100% sure they have not thought quite deeply about the consciousness concept and its origins (despite my puzzlement about their conclusions), so I wouldn't have dared to state that it's a classic Chesterton's fence trespassing. That said, I find the incompatibility between their claims and how I understand consciousness so fundamental that I guess you're quite spot on (assuming I don't myself fully misinterpret your point).

It's a Chesterton's Fence trespassing because every single other person would say that you can't mistreat a computer. If you don't understand why everyone thinks this way, beyond just "well, they're ignorant", you shouldn't be treating the opposite view seriously.

lc:

I'm not sure what you think Chesterton's fence is, but I've never heard it used straightforwardly as a plea to do what everybody else is doing for Modest Epistemology reasons.

TAG:

Everyone is ignorant about consciousness and ethics, even the experts.

I understand your concern about the authors deviating from a consensus without good reasons. However, from the authors' perspective, they probably believe they have compelling arguments to support their view, and therefore think they're rejecting the consensus for valid reasons. In this case, just pointing to Chesterton's fence isn't going to resolve the disagreement.

Since so much around consciousness is highly debated and complex (or, as some might hold, simple and trivial but difficult for others to see), departing from the consensus isn't automatically a mistake – which I think is the same as, or close to, what @lc points out.