In this recent paper, the author studies whether octopuses can suffer. Well, how would you answer that? Ask them?

It turns out there are two ways to decide whether an animal can suffer or not. The first is simple: you just define that vertebrates can suffer (and therefore experiments on them must follow ethical regulations) and that all other animals cannot. Simple, but rather arbitrary. And yes, by that definition octopuses obviously can't suffer.

Another approach is functional. If an animal can learn to avoid pain (not just avoid it in the moment, but remember what happened under given conditions and avoid those conditions), then the animal can suffer. According to this definition and the experiments conducted in the above-mentioned paper, octopuses can suffer, and so should be treated like vertebrates.

The second definition looks more logical to me. However, there is a problem with it as well. If we make it purely functional, we have to agree that an artificial neural network undergoing reinforcement learning suffers. Otherwise, instead of the arbitrary "vertebrate vs. invertebrate" threshold, we introduce a no less arbitrary "biological neurons vs. artificial neurons" threshold.
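To make the worry concrete, here is a minimal sketch (my own toy example, not code from the paper) of a tabular Q-learning agent in a made-up two-location world. It "remembers" which conditions were followed by a noxious signal and learns to avoid them; the state names, rewards, and parameters are all hypothetical.

```python
import random

# Toy illustration (not from the paper): a two-location world where one
# location delivers a "noxious" stimulus (negative reward). A tabular
# Q-learner quickly learns to stay away from it -- the kind of
# "remember the conditions and avoid them" behavior the functional
# definition points to.

LOCATIONS = ["shelter", "hot_plate"]   # hypothetical state names
ACTIONS = ["stay", "move"]             # "move" toggles the location

def reward(location):
    return -1.0 if location == "hot_plate" else 0.0

def step(location, action):
    if action == "move":
        return "shelter" if location == "hot_plate" else "hot_plate"
    return location

q = {(s, a): 0.0 for s in LOCATIONS for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.1

location = "shelter"
for _ in range(5000):
    if random.random() < epsilon:                        # explore
        action = random.choice(ACTIONS)
    else:                                                # exploit learned values
        action = max(ACTIONS, key=lambda a: q[(location, a)])
    nxt = step(location, action)
    r = reward(nxt)
    best_next = max(q[(nxt, a)] for a in ACTIONS)
    q[(location, action)] += alpha * (r + gamma * best_next - q[(location, action)])
    location = nxt

# After training, the agent prefers actions that keep it off the hot plate:
print(q[("hot_plate", "move")], q[("hot_plate", "stay")])
print(q[("shelter", "stay")], q[("shelter", "move")])
```

Under a purely functional reading, these few dozen lines already satisfy the "remember the conditions and avoid them" criterion, which is exactly the problem.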

Does this mean we should worry about the suffering of our models? And that is not the whole story: from a mathematical point of view, many processes can be mapped onto learning. Do they all involve suffering?

So far, I see the following possible answers. Maybe there are more that I am unaware of.

  1. All of the above indeed suffer.
  2. There is a magical biological threshold.
  3. The ability to suffer is determined in an anthropocentric way. I think that a cat suffers because I see it, I feel really sorry for it, I can relate to it. For an octopus, I can kind of relate too, but less than to a cat. For code running on my laptop, I can't relate at all. An interesting corollary would be that a teddy bear suffers.


     
Comments

I think the functional approach is ultimately correct, but that suffering is a much more complex way of being than simply having a learnable negative feedback signal. A simple way to illustrate this is to notice that the amount of suffering changes drastically (in my own experience) across functionally very similar negative feedback loops. Walking along a path involves negative feedback (which I can learn from), but I don't feel like I'm suffering at all when I notice I'm deviating slightly (and even if it turns out I am, the amount of suffering is still much, much lower than for standard suffering). In fact, it suspiciously seems like the amount of suffering is correlated with experiences which would likely have harmed my reproductive fitness 20,000 years ago. It even disconnects from the sensation of pain: e.g. I suffer much less from painful experiences which I believe are healing compared to ones which I believe are harmful, even if the raw sensation feels the same. Another strange thing about suffering is that it increases the more attention I pay to a constant pain signal. On the other hand, emotional pain doesn't seem (in my experience) to be as separated from suffering. Anyway, the point is that we need to vet definitions of suffering against our own experiences before trying to apply them to animals.

Upvoted for being a great reply with opinion, argument, example, and context.

I disagree though. I think a functional approach is ultimately the most likely to be adopted in the long term - being the only feasible one. But I think the correct answer is

The ability to suffer is determined in an anthropocentric way.

There is no natural category of suffering outside of humans. And that is mainly because what is meant by suffering is no longer just something that goes on in the brain, but also, to a significant degree, a social construct: a linguistic concept and a political agenda. Probably we will factor it into multiple clearly defined sub-parts sooner or later, one of them being a functional part like the one Adele seems to mean. But at that point that cuts out most of what is actually going on.

Thanks!

Suffering seems like a natural category to me, at least in the way it's used to classify experiences.

Even if it is a social construct, that doesn't mean that animals or AIs couldn't have a meaningfully similar experience. I'd be quite surprised if it turns out that e.g. chimps truly do not suffer in any of the common senses of the word.

For sure chimps perceive pain and avoid it. But there seem to be quite significant differences between pain and suffering. You mention this yourself:

It even disconnects from the sensation of pain: e.g. I suffer much less from painful experiences which I believe are healing compared to ones which I believe are harmful, even if the raw sensation feels the same. Another strange thing about suffering is that it increases the more attention I pay to a constant pain signal.

This, together with my personal experience, seems to imply that suffering (but not pain) depends on consciousness and maybe even on social identity expectations.

Yeah, I meant what I said about chimps experiencing suffering. To the extent that consciousness and social identity are relevant, I believe chimps have those to a sufficient degree.

Maybe. Chimps and gorillas for sure have some consciousness. They can recognize themselves and they have social cognition. They can express frustration. I am not sure they can represent frustration.  

https://wiki.c2.com/?LeibnizianDefinitionOfConsciousness

Though arguing about whether that is required to call it suffering is haggling over the definition of a word. I don't want to do that. I want to defend the claim

The ability to suffer is determined in an anthropocentric way.

We may disagree on where to draw the line or how to assign weight to what we call suffering, but the key point is not about the is but about the ought. And at least the ought is anthropocentric: whether some structure in nature ('suffering') compels us to act in a certain way toward it ('minimize it') is a social construct. It results from empathy and social expectations that are generalized.

Note that just saying this doesn't make it any less so. I do have empathy with chimps and other animals, and I would do (some) things to reduce their suffering. For sure, if everybody around me agreed that reducing suffering is the right thing to do, I would take that as strong evidence in its favor. I'm just aware of it.

PS: Thank you for continuing to engage in a controversial discussion.

Let me try to rephrase this in terms of something that can be done in a lab, and see if I get your point correctly. We should conduct experiments with humans, identifying what causes suffering, at which intensity, and what happens in the brain during it. Then, if an animal has the same brain regions, it is capable of suffering; otherwise, it is not. But that won't be the functional approach, and we can't extrapolate it blindly to AI.

If we want the functional approach, we can only look at behavior: what we do when we suffer, what we do afterwards, etc. Then a being suffers if it demonstrates the same behavior. Here the problem will be how to generalize from human behavior to animals and AI.

I think the experiments you describe on humans are a reasonable start, but you would then need to ask: "Why did suffering evolve as a sensation distinct from pain?" I don't think you can determine the function of suffering without being able to answer that. Then you could look at other systems and see if something with the same functionality exists. I think that's how you could generalize to both other animals and AI.

You left out the category of possible answers "such-and-such type of computational process corresponds to suffering", in which case octopuses and ML algorithms might or might not qualify, depending on how exactly the octopus brain works, what the exact ML algorithm is, and what exactly that "such-and-such" criterion is. I definitely put far more weight on this category of answers than on the two you suggested.

I like the idea. Basically, you suggest taking the functional approach and advancing it. What do you think this type of process could be?

If an animal can learn to avoid pain (not just avoid it in the moment, but remember what happened under given conditions and avoid those conditions), then the animal can suffer.

Is it any easier to determine if an animal is in "pain" than if it is "suffering"? The quotes are because I can't be sure what you mean by these words, which have a range of uses. "Pain" is sometimes broadened to mean the same as "suffering", and "suffering" broadened so far as to mean "not getting what one wanted."

How do other entities fit into this, such as plants, self-driving cars, and configurations in Conway's Life?

First of all, it is my mistake: in the paper they use pain more or less as a synonym for suffering. They wanted to show that the animal not only avoids tissue damage (heat, pinching, electric shock, etc.) on the spot, but learns to avoid it. Avoiding it only in the moment is simply nociception, which can be seen in many much simpler animals.
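A toy contrast (my own illustration, not code or terminology from the paper) may help: pure nociception is a reflex with no memory, while learned avoidance requires remembering which conditions preceded the damage. The names ("chamber_A", thresholds, etc.) are hypothetical.

```python
def nociceptive_reflex(stimulus_intensity, threshold=0.5):
    """Withdraw on the spot if the stimulus is strong; nothing is stored."""
    return "withdraw" if stimulus_intensity > threshold else "stay"

class AvoidanceLearner:
    """Remembers locations where damage occurred and avoids them later."""
    def __init__(self):
        self.bad_places = set()

    def experience(self, place, tissue_damage):
        if tissue_damage:
            self.bad_places.add(place)   # the "remember the conditions" part

    def choose(self, candidate_places):
        safe = [p for p in candidate_places if p not in self.bad_places]
        return safe[0] if safe else candidate_places[0]

learner = AvoidanceLearner()
learner.experience("chamber_A", tissue_damage=True)
print(nociceptive_reflex(0.9))                     # -> "withdraw"
print(learner.choose(["chamber_A", "chamber_B"]))  # -> "chamber_B"
```

The reflex reacts the same way every time, while the learner changes its future choices after one bad experience; the paper's point, as I understand it, is that octopuses show the second kind of behavior, not just the first.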

I don't know much about the examples you mentioned. Bacteria, for example, certainly can't learn to avoid stimuli associated with something bad for them. (Well, they can on the scale of evolution, but not as a single bacterium.)

Presumably people think that at some point an AI is able to suffer. So why wouldn't a neural network be able to suffer?

If it is, does that mean we should treat all artificial neural network training as animal experiments? Should we adopt something like "code welfare is also animal welfare"?

You have to define "suffer" more precisely.  Which is the whole problem with qualia and experiential judgement.  Personally, my answer is "anything with two neurons rubbing together can feel reward/loss results".  And I'm not sure other humans feel the same things the same way I do, let alone flatworms (though I generally stipulate that humans are close enough to reason about). 

It seems likely (based on priors and analogy, not any measure I can think of) that, as brains get more complicated, emotions and experiences become more intense. If so, one could set an arbitrary threshold of complexity for what one labels "suffering". But why? The question should be "what and how much does X experience, and how do I aggregate that experience across beings in my moral calculations?" I don't have an answer, and I don't currently believe that there is an objective answer.

I agree with the point about a continuous ability to suffer rather than a threshold. I also agree that there is no objective answer; we can't measure suffering. The problem, however, is that this leaves a practical question with no clear solution: how should we treat other animals and our code?