Matthew Adelstein recently published a post on arthropod (specifically shrimp) sentience. He defends a degree of pain in shrimp comparable to that in humans (shrimp = 20% of a human). My position is that arthropod consciousness is “too small to measure”: there is no conscious self on which pain can be inflicted, so there is no point in any intervention for their welfare, no matter how cheap. I have argued in this direction before, so I will freely reuse my previous texts without further (self-)citation.

The “pretty hard” problem of consciousness

In “Freedom under naturalistic dualism” (forthcoming in the Journal of Neurophilosophy) I argued that consciousness is radically noumenal: it is the most real thing in the Universe, yet totally impossible for others to observe.

Under physicalist epiphenomenalism the mind is superimposed on reality, perfectly synchronized with it and parallel to it. Physicalist epiphenomenalism is the only philosophy compatible with both the autonomy of matter and my experience of consciousness, so it has no competitors as a cosmovision. Understanding why some physical systems make an emergent consciousness appear (the so-called “hard problem” of consciousness), or finding a procedure that quantifies the intensity of the consciousness emerging from a physical system (the so-called “pretty hard” problem of consciousness), is not directly possible: the most Science can do is build a Laplacian demon that replicates and predicts reality. But even the Laplacian demon (the most phenomenally knowledgeable being possible) is impotent to assess consciousness. In fact, regarding Artificial Intelligence we are in the position of Laplace's demon: we have the perfectly predictive source code, but we do not know how to use this (complete) scientific knowledge of the system to assess consciousness.

Matthew suggests in his post that there is strong “scientific evidence” of fish consciousness, but of course there is no scientific evidence of any sentience beyond your (my!) own. Beyond your own mind, consciousness is neither “proven” nor “observed” but postulated: we have direct access to our own stream of consciousness, and given our physical similarity with other humans and the existence of language, we can confidently accept the consciousness of other humans and their reports of their mental states.

Even if you are a generous extrapolator and freely consider both dogs and pigs to be conscious beings (I do), they cannot report their experience, so they are of limited use for empirical work on sentience. All promising research programs on the mind-body problem (collectively known as the search for the “neural correlates of consciousness”) are based on a combination of self-reporting and neurological measurement: you must simultaneously address the two metaphysically opposite sides of reality, relying on trust in the reporting of mental states.

I am an external observer of this literature, but in my opinion empirical Integrated Information Theory (IIT) had an incredible success with the development of a predictive model (“Sizing Up Consciousness” by Massimini and Tononi) that was able to distinguish between conscious (wakefulness and dreams) and non-conscious (dreamless sleep) states by neurological observation, using a (crude) measure of information integration.
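To make the flavor of that measure concrete, here is a minimal, purely illustrative Python sketch (not the actual perturbational complexity index pipeline of Massimini and Tononi, and the data are made up): binarize a response signal and use a Lempel-Ziv parsing as a crude proxy for how differentiated it is, so that a rich, variable “wake-like” response scores higher than a stereotyped “sleep-like” one.

```python
import random

def lempel_ziv_complexity(bits: str) -> int:
    """Number of distinct phrases in a left-to-right Lempel-Ziv parsing:
    a crude proxy for how differentiated a binary response is."""
    phrases, i = set(), 0
    while i < len(bits):
        j = i + 1
        while bits[i:j] in phrases and j <= len(bits):
            j += 1
        phrases.add(bits[i:j])
        i = j
    return len(phrases)

random.seed(0)
# Hypothetical binarized responses to a cortical perturbation (made-up data):
wake_like = "".join(random.choice("01") for _ in range(512))   # rich and variable
sleep_like = "01" * 256                                        # stereotyped and regular
print(lempel_ziv_complexity(wake_like), lempel_ziv_complexity(sleep_like))
```

The only point is that a single scalar computed from neural data can already separate very different dynamical regimes; the real empirical work is, of course, far more careful.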

Pain and penalty

Matthew devotes a few pages to piling up evidence of behavioral similarity between humans and arthropods, and obviously there is a fundamental similarity: we are neural networks trained by natural selection. We avoid destruction and pursue reproduction, and we are both effective and desperate in both goals. The (Darwinian) reinforcement learning process that has led to our behavior implies strong rewards and penalties, and being products of the same process (animal kingdom evolution), external similarity is inevitable. But to turn the penalty in the utility function of a neural network into pain, you need the neural network to produce a conscious self. Pain is penalty to a conscious self. Philosophers know that philosophical zombies are conceivable, and external similarity is far from enough to guarantee noumenal equivalence.
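To see how cheap “penalty without pain” is, here is a minimal sketch (the reward values are assumptions for illustration, with no connection to any real shrimp experiment): a two-action tabular learner that acquires robust avoidance behavior from a scalar penalty, with no plausible conscious self anywhere in the loop.

```python
import random

# A minimal tabular learner: action 0 is "touch the hot plate" (penalty -1),
# action 1 is "move away" (reward +1). A handful of numbers is enough to
# produce robust avoidance; the penalty term never needs a self to "hurt".
random.seed(1)
q = [0.0, 0.0]          # value estimates for the two actions
alpha, epsilon = 0.1, 0.1

def reward(action: int) -> float:
    return -1.0 if action == 0 else 1.0

for _ in range(1000):
    a = random.randrange(2) if random.random() < epsilon else q.index(max(q))
    q[a] += alpha * (reward(a) - q[a])

print(q)  # q[0] near -1 (avoided), q[1] near +1 (preferred)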

Consequently, all the examples of pain avoidance and neural excitement that Matthew describes are irrelevant: they prove penalty, not pain. Other “penalty reduction” behaviors (such as the release of amphetamines) are equally irrelevant, for the same reason.

On the other hand, complex and flexible behavior is more suggestive of the kind of complexity we associate with the existence of a self, and Matthew cites a long list of papers. Many of them are openly bad because they are “checklist based”: you take a series of qualitative properties and tick those that are present. For example, if you compare me with John von Neumann you can tick “supports American hegemony” and “good at mathematics”: that is the magic of binarization. It is true that shrimp and humans both “integrate information”, but of course it matters how much. Checklists are the ultimate red flag of scientific impotence, and look how many of them there are in the Rethink Priorities Moral Weights report and in Matthew's selected papers.
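A toy illustration of the binarization problem, with entirely made-up numbers: once you tick boxes instead of measuring magnitudes, a shrimp and a human become indistinguishable on the checklist even when the underlying quantities differ by many orders of magnitude.

```python
# Entirely made-up numbers, for illustration only.
features = {
    "neurons":                 {"human": 8.6e10, "shrimp": 1.0e5},
    "information_integration": {"human": 1.0,    "shrimp": 1e-6},
}

def checklist(entity):
    # Tick the box if the trait is present at all, ignoring magnitude.
    return [features[trait][entity] > 0 for trait in features]

print(checklist("human") == checklist("shrimp"))  # True: identical checklists
print({trait: features[trait]["shrimp"] / features[trait]["human"]
       for trait in features})                    # yet the ratios are about a millionth
```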

Matthew also describes many cases of brain damage that are compatible with active behavior, to support the claim that no concrete part of the brain can be considered a necessary condition for consciousness. I do not have a very strong opinion on this: information processing is the biological function that can adopt the widest variety of external forms. While for locomotion form and size are extremely important, computation can be implemented in many shapes and formats. But in the end, you need the neural basis to have the conscious experience. A brain that remains conscious with 90% fewer neurons (which ones matters!) is massively different from one that remains conscious with 99.9% fewer neurons.

Superadditivity of consciousness

Of course, we do not measure computers by mass, but by speed, number of processors, and information integration. But if you simply do not have enough computing capacity, your neural network is small and its information processing is limited. Shrimp have ultra-tiny brains, with less than 0.1% of human neurons. The most important theories of consciousness are based on the integration of information: Integrated Information Theory (IIT) is the leader of the pack, but close contenders such as Global Neuronal Workspace Theory (GNWT) and Higher-Order Thought (HOT) theory are equally based on neural complexity. Even the best counterexample to (a theoretical version of) IIT consists in building a simple system with a high measure of “integrated information”: I entirely agree with that line of attack, which is fatal both for large monotonous matrices and for tiny shrimp brains.

I am not a big fan of the relative lack of dynamism of the more theoretical IIT models (and of the abuse of formalism over simulation!), but in the end, while it is the dynamics of the network that creates consciousness, you need a large network to support complex dynamics. If you are interested in the state of the art of consciousness research, you should read Erik Hoel (see here his discussion of a letter against IIT), and probably his books rather than his Substack.

As a rule, measures of information integration are superadditive (that is, the complexity of two neural networks that connect to each other is far bigger than the sum of the complexities of the original networks), so neuron count ratios (shrimp = 0.01% of human) are likely to underestimate differences in consciousness. The ethical consequence of superadditivity is that, ceteris paribus, a given pool of resources shall be allocated in proportion not to the number of subjects but to the number of neurons (in fact, more steeply than that, because the “super” in superadditivity can be substantial). Of course, this is only a Bayesian prior: behavioral complexity, neuron speed or connectome density could change my mind, but, if I am to decide, better to bring me multi-panel graphs than checklists.
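As a rough sketch of what that allocation rule does (the neuron counts and the exponent are assumptions for illustration, not measurements): give each individual a welfare weight w(n) = n^γ on neuron count n, where γ = 1 is “proportional to neurons” and γ > 1 stands in for superadditivity.

```python
# Assumed per-individual neuron counts; gamma is an arbitrary illustration parameter.
neurons = {"human": 8.6e10, "shrimp": 1e5}

def shrimp_to_human_weight(gamma: float) -> float:
    """Relative welfare weight of one shrimp versus one human under w(n) = n**gamma."""
    return (neurons["shrimp"] / neurons["human"]) ** gamma

for gamma in (1.0, 1.2, 1.5):
    print(gamma, shrimp_to_human_weight(gamma))
```

Even at γ = 1 the per-shrimp weight is about a millionth of the per-human weight; any superadditive exponent pushes it further down.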

In any case, the main problem is inescapable: a broad extension of the moral circle must be based on a broad theory of consciousness. For the time being we do not know “what it is like to be a bat”, and shrimp are like bats even for the bats.

Comments

Shrimp have ultra-tiny brains, with less than 0.1% of human neurons.

Humans have 1e11 neurons; what's the source for the shrimp neuron count? The closest I can find is lobsters having 1e5 neurons and crabs having 1e6 (all from a Google AI overview), which is a factor of much more than 1,000.

This is the kind of criticism I gladly welcome. I used the cockroach (forebrain) data here as a proxy:

https://en.m.wikipedia.org/wiki/List_of_animals_by_number_of_neurons
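For concreteness, a quick back-of-the-envelope check under that proxy (both figures assumed, roughly 1e6 neurons for a whole cockroach brain and 8.6e10 for a human):

```python
human_neurons = 86e9      # assumed, whole human brain
cockroach_neurons = 1e6   # assumed, whole cockroach brain, used as a shrimp proxy
print(f"{cockroach_neurons / human_neurons:.4%}")   # about 0.001%, i.e. well under 0.1%
```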


Physicalist epiphenomenalism is the only philosophy compatible with both the autonomy of matter and my experience of consciousness, so it has no competitors as a cosmovision

No, identity theory and illusionism are competitors. And epiphenomenalism is dualism, not physicalism. As I have pointed out before.

Illusionism is not a competitor, because consciousness is obviously an illusion. That has been immediate since Descartes. That is why you cannot distinguish between "the true reality" and "the Matrix": both produce a legitimate stream of illusory experience ("you").

Epiphenomenalism is physicalist in the sense that it respects the autonomy and causal closure of the physical world. Given that we are not p-zombies (because there is an "illusory" but immediate difference between real humans and p-zombies), that difference is precisely what we call "consciousness".

Descartes+Laplace=Chalmers. 

In fact, there is only one escape: consciousness could play an active role in the fundamental Laws of Physics. That would break the Descartes/Laplace orthogonality, making philosophy interesting again.

If the number of neurons is so important, what about elephants or whales? Perhaps compared to them, humans are morally insignificant.

I appreciate the discussion, but I'm disappointed by the lack of rigor in proposals, and somewhat expect failure for the entire endeavor of quantifying empathy (which is the underlying drive for discussing consciousness in these contexts, as far as I'm concerned).
 

Of course, we do not measure computers by mass, but by speed, number of processors, and information integration. But if you simply do not have enough computing capacity, your neural network is small and its information processing is limited.

It's worth going one step further here - how DO we measure computers, and how might that apply to consciousness? Computer benchmarking is a pretty complex topic: most of the trivially objective measures (FLOPS, IOPS, data throughput, etc.) are well known not to tell the important details, and specific usage benchmarks are required to really evaluate a computing system. Number of transistors is a marketing datum, not a measure of value for any given purpose.
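As a toy illustration of that gap (the numbers depend entirely on your machine; this is not a real benchmark suite), here is a sketch that estimates raw dense-matmul throughput and then times a memory-bound task that the FLOP/s figure says nothing about:

```python
import time
import numpy as np

n = 1024
a, b = np.random.rand(n, n), np.random.rand(n, n)

t0 = time.perf_counter()
a @ b                                  # dense matmul: ~2 * n**3 floating-point ops
gflops = 2 * n**3 / (time.perf_counter() - t0) / 1e9

t0 = time.perf_counter()
np.sort(np.random.rand(10_000_000))    # a memory- and branch-bound "usage" proxy
sort_s = time.perf_counter() - t0

print(f"dense matmul: ~{gflops:.1f} GFLOP/s; sorting 1e7 floats: {sort_s:.2f} s")
```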

Until we get closer to actual measurements of cognition and emotion, we're unlikely to get any agreement on the relative importance of different entities' experiences.

I agree with this criticism for the difference between humans and pigs, but there are too many orders of magnitude of difference between shrimp and humans for detailed measures of computing power to be very necessary.

Quantifying empathy is intrinsically hard, because everything begins by postulating (not observing) consciousness in a group of beings, and that is only well grounded for humans. So, in the end, even if you are totally successful in developing a theory of human sentience, for other beings you are extrapolating. Anything beyond solipsism is a leap of faith (unless you find St. Anselm's ontological proof credible).