I've had conversations with many EAs and EA-adjacent people who believe things about qualia that seem wrong to me. I've met one who assigned double-digit probabilities to bacteria having qualia and said they wouldn't be surprised if a balloon flying through a gradient of air experiences pain because it's trying to get away from hotter air towards colder air. Some say that they see that shrimp have pain receptors and clearly react to "pain"[1], just like humans, and try to avoid future "pain", so they must be experiencing pain, and we should care about their welfare. (A commenter says they think any visual information processing is qualia to some extent, even in neural networks[2].)

I think the way they're making these inferences is invalid. In this post, I'll try to explain why. I'll also suggest a direction for experiments that could produce valid evidence one way or the other.

Epistemic status: Having disentangled the models some people hold, I'm relatively confident I see where many of them make invalid inferences as part of their worldviews. But I'm not a biologist, and this is not my area of expertise. A couple of people I talked to agreed that the experiment I suggest below could potentially resolve the crux.

I'm using the word "qualia" to point at subjective experience. I don't use the word "consciousness" because different people mean completely different things by it.

I tried to keep the post short while communicating the idea. I think this is an important conversation to have. I believe many in the community make flawed arguments and claim that animal features are evidence for consciousness, even though they aren't.

TL;DR: If a being can describe qualia, we know this is caused by qualia existing somewhere, so we can be pretty sure that humans have qualia. But when our brains identify emotions in things, they can attribute feelings both to humans and to geometric shapes in cartoons. I argue that when we look at humans and feel like they feel something, this feeling is probably correct, because we can make a valid inference that humans have qualia (they would talk about having conscious experiences if asked). I further argue that when we look at non-human things, our circuits' recognition of feeling in others is no longer linked to a valid way of inferring that these others have qualia, and we need other evidence.

No zombies among humans

We are a collection of atoms interacting in ways that make us feel and make inferences. The level of neurons is likely the relevant level of abstraction: if the structure of neurons is approximately identical, but the atoms are different, we expect that inputs and outputs will probably be similar, which means that whatever determines the outputs runs on the level of neurons.

If you haven't read the Sequences, I highly recommend doing so. The posts on zombies (example) are relevant here.

In short, there are some neural circuits in our brains that run qualia. These circuits have inputs and outputs: signals get into our brains, get processed, and then, in some form, get inputted into these circuits. These circuits also have outputs: we can talk about our experience, and the way we talk about it corresponds to how we actually feel.

If a monkey you observe types perfect Shakespeare, you should suspect it's not doing that at random and someone who has access to Shakespeare is messing with the process. If every single monkey you observe types Shakespeare, you can be astronomically confident someone got copies of Shakespeare's writings into the system somehow.

Similarly, we can be pretty confident other people have qualia because other people talk about qualia. Hearing a description of a subjective experience that matches ours is really strong evidence that outputs from qualia circuits are in the causal tree of this description. If an LLM talks about qualia, either it has qualia or qualia somewhere else caused some texts to exist, and the LLM read those. When we hear someone talk about qualia, we can make a valid inference that this is caused by qualia existing or having existed in the past: it would be surprising for such a strong match between our internal experience and the description we hear from others to arise at random, without being caused by their own internal experience.

In a world where nothing else has qualia that affect its actions, hearing about qualia happens only rarely, at random. If you see everyone around you talking about qualia, this is astronomically strong evidence that qualia caused it.
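As a minimal formalisation of this point (my framing, not notation from the post), the strength of the update is just a likelihood ratio, and roughly independent reports multiply:

```latex
% R = "we hear a detailed description of subjective experience matching our own"
% Q = "qualia exist (or existed) somewhere in the causal history of that report"
\frac{P(Q \mid R)}{P(\neg Q \mid R)}
  \;=\; \frac{P(R \mid Q)}{P(R \mid \neg Q)} \cdot \frac{P(Q)}{P(\neg Q)}
% If such a report is vastly more likely when qualia are in the causal chain than
% when it arises "at random" (P(R|Q) >> P(R|not Q)), the posterior odds on Q become
% large even from modest priors; with many independent reports the likelihood
% ratios multiply, which is the sense in which the evidence becomes "astronomical".
```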

Note that we don't infer that humans have qualia because they all have "pain receptors": mechanisms that, when activated in us, make us feel pain; we infer that other humans have qualia because they can talk about qualia.

Furthermore, note that lots of stuff that happens in human brains isn't transparent to us at all. We experience many things after the brain processes them. Experiments demonstrated that our brains can make decisions seconds before we experience making these decisions[3].

When we see humans having reactions that we can interpret as painful, we can be confident that they indeed experience that pain: we have strong reasons to believe they have qualia, so we expect information about pain to be fed into their qualia circuits.

Reinforcement learning

We experience pain and pleasure when certain processes happen in our brains. Many of these processes are there for reinforcement learning. Reacting to positive and negative rewards in ways that make the brain more likely to get positive rewards and less likely to get negative rewards in the future is a really useful mechanism that evolution came up with. These mechanisms of reacting to rewards don't require the qualia circuits. They appear even if you train simple neural networks with reinforcement learning: they learn to pursue what gives positive reward and avoid what gives negative reward. They can even learn to react to reward signals in-episode: to avoid what gives negative reward after receiving information about the reward, without updating the network weights. It is extremely useful, from an evolutionary angle, to react to rewards. Having something that experiences information about these rewards wouldn't help the update procedure. For subjective experience to be helpful, the outputs of the circuits that run it must play some beneficial role.
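To illustrate how little machinery this requires, here is a minimal toy sketch (my illustration, not anything from the post): an epsilon-greedy bandit that ends up pursuing the rewarding option and avoiding the punishing one using nothing but running value estimates and an argmax.

```python
# A tiny epsilon-greedy bandit: it learns to seek reward and avoid "pain"
# (negative reward) with no circuitry beyond value estimates and an argmax.
import random

n_arms = 3
true_reward = [-1.0, 0.0, +1.0]   # arm 0 is "painful", arm 2 is "pleasant"
q = [0.0] * n_arms                # learned value estimates
counts = [0] * n_arms

for step in range(5000):
    # explore occasionally, otherwise exploit the current best estimate
    if random.random() < 0.1:
        arm = random.randrange(n_arms)
    else:
        arm = max(range(n_arms), key=lambda a: q[a])
    reward = true_reward[arm] + random.gauss(0, 0.5)   # noisy reward signal
    counts[arm] += 1
    q[arm] += (reward - q[arm]) / counts[arm]          # incremental mean update

print(q)  # the agent ends up preferring arm 2 and avoiding arm 0
```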

What if someone doesn't talk about qualia?

Having observed many humans being able to talk about qualia, we can strongly suspect that it is a universal property of humans. We suspect that any human, when asked, would talk about qualia. We expect that even someone who can't respond right now (e.g., they can't talk at all) would talk about qualia if we asked them in writing or restored their ability to respond. This is probabilistic but strong evidence, and the inference is valid.

It is valid to infer that, likely, qualia has been beneficial in human evolution, or it is a side effect of something that has been beneficial in human evolution.

It is extremely easy for us to anthropomorphize everything. We can see a cartoon about geometric shapes and feel like these shapes must be experiencing something. A significant portion of our brain is devoted to that sort of thing.

When we interpret other humans as feeling something when we see their reactions or events happening to them, when we imagine what it must be like to be them, feel something we think they must be feeling, and infer that there's something they're feeling in that moment, our neural circuits are making an implicit assumption that other people have qualia. This assumption happens to be correct: we can infer in a valid way that the neural circuits of other humans run subjective experiences, because they output words about qualia, and we wouldn't expect the similarity between what we see in ourselves when we reflect and what we hear from other humans to arise by coincidence, in the absence of qualia existing elsewhere.

So, we strongly expect things happening to people to be processed and then experienced by the qualia circuits in their brains. And when we see a person's reaction to something, our brains think this person experiences that reaction, and that thought is correct.

But when we see animals that don't talk about qualia, we can no longer consciously make direct and strong inferences the way we can with humans. Looking at a human reacting to something and inferring that the reaction is to something experienced works because we know they'd talk about having subjective experience if asked; looking at an animal reacting to something and making the same inference, that it is experiencing what it reacted to, is invalid, because we don't know it experiences anything in the first place. Our neural circuits still recognise emotion in animals like they do in humans, but that recognition is no longer tied to a valid way of inferring that there must be an experience of this emotion. In the future (if other problems don't prevent us from solving this one), we could figure out how qualia actually works, and then scan brains and see whether there are circuits implementing it or not. But currently, we have to rely on indirect evidence. We can make theories about the evolutionary reasons for qualia to exist in humans and about how it works, and then look for signs that:

  • evolutionary reasons for the appearance of subjective experience existed in some animal species' evolution,
  • something related to the role we think qualia plays is currently demonstrated by that species, or
  • something that we think could be a part of how qualia works exists in that species.

I haven't thought about this long enough, but I'm not sure there's anything outside of these categories that can be valid evidence for qualia existing in animals that can't express having subjective experiences.

To summarise: when we see animals reacting to something, our brains rush to expect there's something experiencing that reaction in these animals, and we feel like these animals are experiencing something. But actually, we don’t know whether there are neural circuits running qualia in these animals at all, and so we don’t know whether whatever reactions we observe are experienced by some circuits. The feeling that animals are experiencing something doesn't point towards evidence that they're actually experiencing something.

So, what do we do?

Conduct experiments that'd provide valid evidence

After a conversation with an EA about this, they asked me to come up with an experiment that would provide valid evidence for whether fish have qualia.

After a couple of minutes of thinking, the first thing I came up with was an experiment that I thought might give evidence on whether fish feel empathy (feel what they model others as feeling), something I expect to be correlated with qualia[4]:

Find a fish such that you can scan its brain while showing it stuff. Scan its brain while showing it:

  • Nothing or something random
  • Its own kids
  • A fish of another species with its kids
  • Just the kids of another fish species

See which circuits activate when the fish sees its own kids. If those circuits activate more when it sees another fish with its kids than when it sees just the kids of another fish species, that's evidence the fish has empathy towards other fish parents: it has some parental feelings when it sees its own children, and it feels more of them when it sees another parent (whom it processes as having these feelings) with children than when it sees just that parent's children.
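To make the key comparison concrete, here is a hypothetical analysis sketch. The condition names, effect sizes, and numbers below are invented placeholders (there is no real data here, and this is not a proposed protocol); it only illustrates the contrast the empathy prediction cares about.

```python
# Hypothetical sketch: compare activation of the candidate "parental" circuits
# (those singled out by the "own kids" condition) across the two control
# conditions. All data are simulated placeholders.
import numpy as np

rng = np.random.default_rng(0)

conditions = {
    "baseline (nothing / random)":        rng.normal(0.0, 1.0, size=40),
    "own kids":                           rng.normal(2.0, 1.0, size=40),
    "other-species parent with its kids": rng.normal(0.2, 1.0, size=40),
    "other-species kids alone":           rng.normal(0.1, 1.0, size=40),
}
for name, x in conditions.items():
    print(f"{name}: mean activation {x.mean():.2f}")

# The empathy prediction: "parent with kids" > "kids alone" on these circuits.
a = conditions["other-species parent with its kids"]
b = conditions["other-species kids alone"]
observed = a.mean() - b.mean()

# Simple permutation test of that difference.
pooled = np.concatenate([a, b])

def perm_diff():
    shuffled = rng.permutation(pooled)
    return shuffled[:len(a)].mean() - shuffled[len(a):].mean()

perm = np.array([perm_diff() for _ in range(10_000)])
p_value = (np.abs(perm) >= abs(observed)).mean()   # two-sided
print(f"observed difference = {observed:.2f}, permutation p = {p_value:.3f}")
```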

A couple of EAs were happy to bet 1:1 that this experiment would show that fish have empathy. I'm more than happy to bet this experiment would show fish don't have empathy (and stop eating fish that this experiment shows to possess empathy). 

I think there are some problems with this experiment, but I think it might be possible to design actually good experiments in this direction and potentially stop wasting resources on improving lives that don't need improving. 

Reflect and update

I hope some people will update and, by default, stop assuming that things they wouldn't expect to talk about qualia can have qualia. If a dog reacts to something in a really cute way, remember that humans have selected its ancestors for being easy to feel empathy towards. Dogs could be zombies that don't feel anything, having only reactions produced by reinforcement learning mechanisms and programmed into them by an evolution shaped by humans; you need actual evidence, not just a feeling that they feel something, to think they feel something.

Personally, I certainly wouldn't eat anything that passes the mirror test, as that seems to me to point at something related to why and how I think qualia appears in evolution. I currently don't eat most animals (including all mammals and birds), as I'm uncertain enough about many of them. I do eat fish and shrimp (though not octopuses): I think the evolutionary reasons for qualia didn't exist in the evolution of fish, I strongly expect experiments to show fish have no empathy, and so on. So I'm certain there's no actual suffering in shrimp, that it's ok to eat them, and that the efforts directed at shrimp welfare could be directed elsewhere with greater impact.

  1. ^

    See, e.g., the research conducted by Rethink Priorities.

  2. ^

    `I think it's likely that even simple "RL algorithms" might have a very limited, very shallow, non-self-aware kinds of experience: an image-classifier is doing visual-information-processing, so it probably also produces isolated "experiences of vision"`

    The comment: EA Forum, LW. My reply: EA Forum, LW.

  3. ^

    I think there are better versions of Libet's experiment, e.g., maybe this one (paywalled)

  4. ^

It's possible to model stuff about others by reusing circuits for modelling stuff about yourself without having experience, and it's also possible to have experience without modelling others similarly to yourself. But I expect the evolutionary role of qualia and of things associated with subjective experience to potentially correlate with empathy, so I'd be surprised if an experiment like that showed that fish have empathy, and it'd be enough evidence for me to stop eating fish.

Comments

Why would showing that fish "feel empathy" prove that they have inner subjective experience?  It seems perfectly possible to build a totally mechanical, non-conscious system that nevertheless displays signs of empathy.  Couldn't fish just have some kind of built-in, not-necessarily-conscious instinct to protect other fish (for instance, by swimming together in a large school) in order to obtain some evolutionary benefit?

Conversely, isn't it possible for fish to have inner subjective experience but not feel empathy?  Fish are very simple creatures, while "empathy" is a complicated social emotion.  Especially in a solitary creature (like a shark, or an octopus), it seems plausible that you might have a rich inner world of qualia alongside a wide variety of problem-solving / world-modeling skills, but no social instincts like jealousy, empathy, loyalty, etc.  Fish-welfare advocates often cite studies that seem to show fish having an internal sense of pain vs pleasure (eg, preferring water that contains numbing medication), or that bees can have an internal sense of being optimistic/risky vs pessimistic/cautious -- if you think that empathy proves the existence of qualia, why are these similar studies not good enough for you?  What's special about the social emotion of empathy?

Personally, I am more sympathetic to the David Chalmers "hard problem of consciousness" perspective, so I don't think these studies about behaviors (whether social emotions like jealousy or more basic emotions like optimism/pessimism) can really tell us that much about qualia / inner subjective experience.  I do think that fish / bees / etc probably have some kind of inner subjective experience, but I'm not sure how "strong", or vivid, or complex, or self-aware, that experience is, so I am very uncertain about the moral status of animals.  (Personally, I also happily eat fish & shrimp all the time.)

In general, I think this post is talking about consciousness / qualia / etc in a very confused way -- if you think that empathy-behaviors are ironclad proof of empathy-qualia, you should also think that other (pain-related, etc) behaviors are ironclad proof of other qualia.

Both (modeling stuff about others by reusing circuits for modeling stuff about yourself without having experience; and having experience without modelling others similarly to yourself) are possible, and the reason why I think the suggested experiment would provide indirect evidence is related to the evolutionary role I consider qualia to possibly play. It wouldn't be extremely strong evidence and certainly wouldn't be proof, but it'd be enough evidence for me to stop eating fish that has these things.

The studies about optimistic/pessimistic behaviour tell us nothing about whether these things experience optimism/pessimism, as they are an adaptation an RL algorithm would implement without the need to implement circuits that would also experience these things, unless you can provide a story for why circuitry for experience is beneficial or a natural side effect of something beneficial.

One of the points of the post is that any evidence we can have, except for what we have about humans, would be indirect, and people call things evidence for confused reasons. Pain-related behaviour is something you'd see in neural networks trained with RL, because it's good to avoid pain, and you need a good explanation for how exactly it can be evidence for qualia.

(Copied from EA Forum)

(Copied from EA Forum for the benefit of lesswrongers following the discussion here)

Definitely agree that empathy and other social feelings provide indirect evidence for self-awareness (ie, "modeling stuff about yourself" in your brain) in a way that optimism/pessimism or pain-avoidance doesn't.  (Although wouldn't a sophisticated-enough RL circuit, interacting with other RL circuits in some kind of virtual evolutionary landscape, also develop social emotions like loyalty, empathy, etc?  Even tiny mammals like mice/rats display sophisticated social behaviors...)

I tend to assume that some kind of panpsychism is true, so you don't need extra "circuitry for experience" in order to turn visual-information-processing into an experience of vision.  What would such extra circuitry even do, if not the visual information processing itself?  (Seems like maybe you are a believer in what Daniel Dennett calls the "fallacy of the second transduction"?)
Consequently, I think it's likely that even simple "RL algorithms" might have a very limited, very shallow, non-self-aware kinds of experience: an image-classifier is doing visual-information-processing, so it probably also produces isolated "experiences of vision"!  But of course it would not have any awareness of itself as being a thing-that-sees, nor would those isolated experiences of vision be necessarily tied together into a coherent visual field, etc.

So, I tend to think that fish and other primitive creatures probably have "qualia", including something like a subjective experience of suffering, but that they probably lack any sophisticated self-awareness / self-model, so it's kind of just "suffering happening nowhere" or "an experience of suffering not connected to anything else" -- the fish doesn't know it's a fish, doesn't know that it's suffering, etc, the fish is just generating some simple qualia that don't really refer to anything or tie into a larger system.  Whether you call such a disconnected & shallow experience "real qualia" or "real suffering" is a question of definitions.

I think this personal view of mine is fairly similar to Eliezer's from the Sequences: there are no "zombies" (among humans or animals), there is no "second transduction" from neuron activity into a mythical medium-of-consciousness (no "extra circuitry for experience" needed), rather the information-processing itself somehow directly produces (or is equivalent to, or etc) the qualia.  So, animals and even simpler systems probably have qualia in some sense.  But since animals aren't self-aware (and/or have less self-awareness than humans), their qualia don't matter (and/or matter less than humans' qualia).

...Anyways, I think our core disagreement is that you seem to be equating "has a self-model" with "has qualia", versus I think maybe qualia can and do exist even in very simple systems that lack a self-model.  But I still think that having a self-model is morally important (atomic units of "suffering" that are just floating in some kind of void, unconnected to a complex experience of selfhood, seem of questionable moral relevance to me), so we end up having similar opinions about how it's probably fine to eat fish.

I guess what I am objecting to is that you are acting like these philosophical problems of qualia / consciousness / etc are solved and other people are making an obvious mistake.  I agree that I see a lot of people being confused and making mistakes, but I don't think the problems are solved!

I appreciate this comment.

Qualia (IMO) certainly is "information processing": there are inputs and outputs. And it is a part of a larger information-processing thing, the brain. What I'm saying is that there's information processing happening outside of the qualia circuits, and some of the results of the information processing outside of the qualia circuits are inputs to our qualia. 

I think it's likely that even simple "RL algorithms" might have a very limited, very shallow, non-self-aware kinds of experience: an image-classifier is doing visual-information-processing, so it probably also produces isolated "experiences of vision"

Well, how do you know that visual information processing produces qualia? You can match the algorithms implemented by other humans' brains to the algorithms implemented by your brain, because all of you talk about subjective experience; but how do you, inside your neural circuitry, make the inference that a similar thing happens in neurons that just process visual information?

You know you have subjective experience, self-evidently. You can match the computation run by the neural circuitry of your brain to the computation run by the neural circuitry of other humans: since they talk about subjective experience, you can expect this to be caused by similar computation. This is valid. Thinking that visual information processing is part of what makes qualia (i.e., that there's no way to replace a bunch of your neurons with something that outputs the same stuff without first seeing and processing something, such that you'll experience seeing as before) is something you can make theories about, but it is not a valid inference: you don't have a way of matching the computation of qualia to the whole of your brain.

And how can you match it to matrix multiplications that don't talk about qualia, didn't have evolutionary reasons for experience, etc.? Do you think an untrained or a small convolutional neural network experiences images to some extent, or only a large and trained one? Where does that expectation come from?

I'm not saying that qualia is solved. We don't yet know how to build it, and we can't yet scan brains and say which circuits implement it. But some people seem more confused than is warranted, and they spend resources less effectively than they could.

And I'm not equating qualia with a self-model. Qualia is just the experience of information. It doesn't require a self-model, though on Earth, so far, I expect these things to have been correlated.

If there's suffering and experience of extreme pain, in my opinion, it matters even if there isn't reflectivity.

You know you have subjective experience, self-evidently. You can match the computation run by the neural circuitry of your brain to the computation run by the neural circuitry of other humans: since they talk about subjective experience, you can expect this to be caused by similar computation.

Similarity is subjective. There is no fundamental reason that the ethical threshold must be at the level of similarity between humans and not at the level of similarity between humans and shrimps.

I've met one who assigned double-digit probabilities to bacteria having qualia and said they wouldn't be surprised if a balloon flying through a gradient of air experiences pain because it's trying to get away from hotter air towards colder air.

though this may be an arguable position (see, e.g., https://reducing-suffering.org/is-there-suffering-in-fundamental-physics/), the way you've used it (and the other anecdotes) in the introduction, decontextualized, as a 'statement of position' without justification, is in effect a clown attack fallacy.

on the post: remember that absence of evidence is not evidence of absence when we do not yet have the technologies to collect relevant evidence. the conclusion in the title does not follow: it should be 'whether shrimp suffer is uncertain'. under uncertainty, eating shrimp is taking a risk whose downsides are suffering, and whose upsides (for individuals for whom there are any) might be, e.g., taste preference satisfaction, and the former is much more important to me. a typical person is not justified in 'eating shrimp until someone proves to them that shrimp can suffer.'

The justification that I've heard for that position wouldn't make the statement better; I'd be able to pass an ITT for the specific person who told it to me, and I understand why it is wrong. I consider the mistake they're making and the mistake Rethink Priorities is making to be the same, and I try to argue why in the post.

I'm separately pretty sure the evolutionary reasons for qualia didn't exist in fish evolution (added this to the post, thanks!), and from my experience talking to a couple of EAs about this, they agreed with some correlations enough to consider the suggested experiment to be a crux, and I'm pretty certain about the result of the experiment and think they're wrong for reasons described in the post.

It's not obvious how to figure out the priors here, but my point is that people update on things that aren't valid evidence. The hope is that people will spend their resources more effectively after correctly considering shrimp welfare to be orders of magnitude less important and deprioritizing it. Maybe they'll still avoid eating shrimp because they don't have intuitions about evolutionary reasons for qualia similar to mine, but that seems less important to me than reducing as much actual suffering as possible, other things being equal.

I suspect there is no good way to "short-circuit" the fact that the "hard problem of consciousness" and, in particular, its truly hard core, the "hard problem of qualia" is unsolved.

Disclaimer: there has been a LessWrong post, Why it's so hard to talk about Consciousness, which states that on this group of issues people are mostly divided into 2 camps which don't really understand each other:

The basic model I'm proposing is that core intuitions about consciousness tend to cluster into two camps, with most miscommunication being the result of someone failing to communicate with the other camp. For this post, we'll call the camp of boldface author Camp #1 and the camp of quotation author Camp #2.

So, the epistemological disclaimer is that this comment (just like the original post) would probably make sense only to people belonging to Camp #2 (like myself), that is, people who think that it makes sense to talk about qualia.

When I ponder the problem of qualia, I usually think that it will eventually be solved by a two-pronged approach.

On the theoretical side, people will start to require that a viable candidate theory predicts some non-trivial subjectively observable novel effects, just like we require that a viable candidate theory for new physics predicts some non-trivial observable effects. For example, a requirement like that might be satisfied by predicting a novel, non-trivial way to induce "visual illusions" (with the condition that this way does not readily follow from the known science).

Basically, instead of engaging in purely philosophical speculations about the "nature of consciousness", people (or collaborations of people and AIs) will start finding ways to ground the new theories in experiments, not just in explaining the existing experiments, but in novel non-trivial predictions of experimental phenomena.

On the experimental side, a natural starting point (which has tons of safety and ethical caveats) would be creation of hybrid systems between biological entities having qualia and electronic circuits of various nature (digital, analog, running verbally intelligent software, running clever fluid simulations, running audio-visual synthesizers, etc). For practical reasons, people would probably aim for technologies based on things like non-invasive BCI to create tight coupling between biological and electronic entities (only if that proves impossible, people would have to resort to drastic Neuralink-like steps, but the more can be done without surgery or other invasive methods the better). While tight coupling of this kind presents formidable ethical and safety issues even with non-invasive interfaces, this route should eventually enable qualia-possessing entities to "look subjectively from the inside at the dynamics of electronic circuits", and that's how we can try to start experimentally assessing which electronic circuits are or are not capable of supporting qualia.

Also, this would likely eventually enable coupling of different biological entities to each other via coupling each of them to an interconnected electronic circuit (ethical and safety issues are getting even more formidable, as we move along this route). If this coupling is tight enough, we might learn something about qualia (or lack thereof) in various biological entities as well.

I think technically this is likely to be eventually doable. Whether a way can be found to do this in an acceptably safe and sufficiently ethical manner is an open question. But if we want to actually figure out qualia, we probably have to do more on both the theoretical and the experimental sides.

Minor quibble, but:

I currently don't eat animals, as I'm uncertain enough about many of them. I eat fish and shrimp

Fish and shrimp are animals; did you mean "mammals"? Or something else?

Oops, English! Thanks

I don't think the title of this post is consistent with your self professed epistemic status, or the general claims you make.

You seem to be stating that, in your (non-expert) opinion, some EAs are overconfident in the probabilities they'd assign to shrimp having the capacity to experience qualia?

If we assumed that's correct, that doesn't imply that it's okay to eat shrimp. It just means there's more uncertainty.

I think unless you take a very linguistics-heavy understanding of the emergence of qualia, you are over-weighting your arguments around how being able to communicate with an agent relates to how likely they are to have consciousness.

___________________________________________________________________________________________

You say:

In short, there are some neural circuits in our brains that run qualia. These circuits have inputs and outputs: signals get into our brains, get processed, and then, in some form, get inputted into these circuits. These circuits also have outputs: we can talk about our experience, and the way we talk about it corresponds to how we actually feel.

And: 

It is valid to infer that, likely, qualia has been beneficial in human evolution, or it is a side effect of something that has been beneficial in human evolution.

I think both of the above statements are very likely true.  From that, it is hard to say that a chimpanzee is likely to lack those same circuits.  Neither our mental circuits nor our ancestral environments are that different.  Similarly, it is hard to say "OK, this is what a lemur is missing, as compared to a chimpanzee".

I agree that as you go down the list of potentially conscious entities (e.g. Humans -> Chimpanzees -> Lemurs -> Rats -> Bees -> Worms -> Bacteria -> Virus -> Balloon) it gets less likely that each has qualia, but I am very hesitant to put anything like an order of magnitude jump at each level.  

Note that we don't infer that humans have qualia because they all have "pain receptors": mechanisms that, when activated in us, make us feel pain; we infer that other humans have qualia because they can talk about qualia.

The way I decide this, and how presumably most people do (I admit I could be wrong) revolves around the following chain of thought:

  1. I have qualia with very high confidence.*

  2. To the best of my knowledge, the computational substrate as well as the algorithms running on them are not particularly different from other anatomically modern humans. Thus they almost certainly have qualia. This can be proven to most people's satisfaction with an MRI scan, if they so wish.

  3. Mammals, especially the intelligent ones, have similar cognitive architectures, which were largely scaled up for humans, not differing much in qualitative terms (our neurons are still actually more efficient, mice modified to have genes from human neurons are smarter). They are likely to have recognizable qualia.

  4. The further you diverge from the underlying anatomy of the brain (and the implicit algorithms), the lower the odds of qualia, or at least the same type of qualia. An octopus might well be conscious and have qualia, but I suspect the type of consciousness as well as that of their qualia will be very different from our own, since they have a far more distributed and autonomous neurology.

  5. Entities which are particularly simple and don't perform much cognitive computation are exceedingly unlikely to be conscious or have qualia in a non-tautological sense. Bacteria and single transistors, or slime mold.

More speculatively (yet I personally find more likely than not):

  1. Substrate independent models of consciousness are true, and a human brain emulation in-silico, hooked up to the right inputs and outputs, has the exact same kind of consciousness as one running on meat. The algorithms matter more than the matter they run on, for the same reason an abacus or a supercomputer are both Turing Complete.

  2. We simply lack an understanding of consciousness well grounded enough to decide whether or not decidedly non-human yet intelligent entities like LLMs are conscious or have qualia like ours. The correct stance is agnosticism, and anyone proven right in the future is only so by accident.

Now, I diverge from Effective Altruists on point 3, in that I simply don't care about the suffering of non-humans or entities that aren't anatomically modern humans/ intelligent human derivatives (like a posthuman offshoot). This is a Fundamental Values difference, and it makes concerns about optimizing for their welfare on utilitarian grounds moot as far as I'm concerned.

In the specific case of AGI, even highly intelligent ones, I posit it's significantly better to design them so they don't have capability to suffer, no matter what purpose they're put to, rather than worry about giving them rights that we assign to humans/transhumans/posthumans.

But what I do hope is ~universally acceptable is that there's an unavoidable loss of certainty or Bayesian probability with each leap of logic down the chain, such that by the time you get down to fish and prawns, it's highly dubious to be very certain of exactly how conscious or qualia-possessing they are, even if the next link, bacteria and individual transistors lacking qualia, is much more likely to be true (it flows downstream of point 2, even if presented in sequence).

*Not infinite certitude, I have a non-negligible belief that I could simply be insane, or that solipsism might be true, even if I think the possibility of either is very small. It's still not zero.

Some people expressed a reaction of scepticism over this:

assigned double-digit probabilities to bacteria having qualia and said they wouldn't be surprised if a balloon flying through a gradient of air experiences pain

Here's something from a comment on the EA Forum:

I think it's likely that even simple "RL algorithms" might have a very limited, very shallow, non-self-aware kinds of experience: an image-classifier is doing visual-information-processing, so it probably also produces isolated "experiences of vision"

Not sure if they expect a small CNN to possess qualia (and do they then think that when physics makes essentially equivalent matrix multiplications to compute rocks, there are a lot of qualia of random visions in rocks?), but maybe it's easy to underestimate how confused many people are about all that stuff.

When we hear someone talk about qualia, we can make a valid inference that this is caused by qualia existing or having existed in the past.

When we hear someone talking about a god, we can make a valid inference that this is caused by a god existing or having existed in the past.

we could figure out how qualia actually works, and then scan brains and see whether there are circuits implementing it or not.

Whether circuits implement something is subjective - on the physical level the circuits in other humans' brains don't implement your qualia. If you generalize to other humans' implementations, what's stopping you from generalizing to anything with pain receptors?

something that we think could be a part of how qualia works exists in that species.

So pain receptors?

When we hear someone talking about a god, we can make a valid inference that this is caused by a god existing or having existed in the past.

It is valid Bayesian evidence, yes. We can't consistently expect that people talking about gods is less likely in worlds where gods do exist. (Of course, other explanations remain far more probable, given background knowledge; it's hardly a proof.)

yeah, I got a similar impression that this line of reasoning doesn't add up...

we interpret other humans as feeling something when we see their reactions

we interpret other eucaryotes as feeling something when we see their reactions 🤷