Promoted to curated: The general topic of moral patienthood strikes me as important on two fronts. One, it's important for understanding our values and taking good actions; two, it's an area where I think it's pretty clear that human thinking is still confused, so for many people it's a good place to try to dissolve confused questions and train the relevant skills of rationality. While this post is less polished than your much longer moral patienthood report, I think for most people it will be a better place to start engaging with this topic, at least in part because its length isn't as daunting.
On the more object level, I think this post makes some quite interesting points that have changed my thinking a good bit. I think most people have not considered the hypothesis that animals could be assigned a higher moral value than humans, and independently of the truth value of that hypothesis, I think the evidence presented helps people realize a bunch of implicit constraints in their thinking around moral patienthood. I've heard similar things from other people who've read the post.
It's also great to see you write a post on LW again, and I strongly recommend that newcomers read lukeprog's other writing on LW if they haven't done so.
I agree with this, and I agree with Luke that non-human animals could plausibly have much higher (or much lower) moral weight than humans, if they turned out to be moral patients at all.
It may be worth emphasizing that "plausible ranges of moral weight" are likely to get a lot wider when we move from classical utilitarianism to other reasonably-plausible moral theories (even before we try to take moral uncertainty into account).
I thought I'd try to re-state the three types of unity in my own words, to test my understanding.
I have not read much in this field before, and expect all three of my descriptions are wrong in some significant way. I wrote this comment so that someone else would have a datapoint to triangulate any misconceptions off (i.e. correcting me could help communicate the core concepts).
Edit: An actually true mathematical example for representational unity, thanks Daniel Filan.
Interesting historical footnote from Louis Francini:
This issue of differing "capacities for happiness" was discussed by the classical utilitarian Francis Edgeworth in his 1881 Mathematical Psychics (pp 57-58, and especially 130-131). He doesn't go into much detail at all, but this is the earliest discussion of which I am aware. Well, there's also the Bentham-Mill debate about higher and lower pleasures ("It is better to be a human being dissatisfied than a pig satisfied"), but I think that may be a slightly different issue.
Are moral weights supposed to be real properties that are out there? If they are not, if they are just projections of human concern, then what would be right about the right answer? How will you know when you have solved the problem?
It was probably addressed somewhere in the links above, but I would like to mention that two ants are more likely to be exact copies of each other (in terms of similarity of their observer-moments) than two humans are. Thus, although the number of ants is larger than the number of humans, the universe of possible human experiences is larger than that of ant experiences, including the different observer-moments of suffering.
If we assume that two instances of the same observer-moment should be regarded as one, human suffering-moments would dominate ant suffering-moments in the set of all possible experiences.
How many distinct possible ant!observer-moments are there? What is the entropy of their distribution in the status quo?
How many distinct possible human!observer-moments are there? What is the entropy of their distribution in the status quo?
(Confidence intervals okay; I just have no intuition about these quantities, and you seem to have considered them, so I'm curious what estimates you're working with.)
Just some preliminary thoughts.
In 1984 there was a big problem in the Soviet Union: the butterfly population declined because children were hunting them with butterfly nets. To solve this problem, the Soviet government banned the sale of such nets. I remember that one summer my parents were not able to buy me such a net.
Today it is obvious that this was not the biggest problem for the Soviet Union, which was facing its own existential catastrophe in just a few years. In the same way, discussing the moral value of insects now may be an opportunity cost if we want to prevent x-risks. Anyway, let's go.
Let's look first at the number of an ant's possible observer-moments. One way to estimate it is to use the number of facets in an insect's eye, which is about 30,000 for a dragonfly. Assuming binary vision in insects, that gives 2^30,000 different images an ant could see. Humans have about 7 million color cone cells in each eye, which implies 8^7,000,000 different possible images a human eye could see. This is 8^6,990,000 times more than the ant's possible observer states. A similar result could be reached by comparing the brain sizes, in neurons, of a human and an ant.
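A minimal sketch in Python that just redoes this arithmetic in log space, taking the comment's own assumptions at face value (binary facets for the insect, and the implicit eight distinguishable states per human cone behind the 8^7,000,000 figure):

```python
# Redoing the comment's arithmetic in log space. The assumptions are the
# comment's own: each of a dragonfly's ~30,000 eye facets is binary, and each
# of a human's ~7,000,000 cones per eye takes one of 8 distinguishable states.
import math

ant_bits = 30_000 * math.log2(2)        # 30,000 bits -> 2^30,000 possible images
human_bits = 7_000_000 * math.log2(8)   # 21,000,000 bits -> 8^7,000,000 possible images

ratio_log2 = human_bits - ant_bits      # log2 of (human states / ant states)
print(ratio_log2, ratio_log2 / 3)       # 20,970,000 bits, i.e. 8^6,990,000
```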
Good question about entropy. It could also be assumed that the "normal" states of consciousness for humans are more diverse than for ants. "Normal states" are those which one experiences during a normal life, not under the effect of random generators combined with powerful hallucinogens. The less diverse a species is in its experience, the more likely it is that exact copies of observer-moments occur between its members.
Another part of the entropy question is the ability of a human (or an ant) to distinguish two states of its consciousness as different, probably by reacting differently to them. Here humans enormously outperform ants, as we can give long textual descriptions of all the nuances of our experiences. This could also be estimated by combining the complexity of all possible phrases describing human situations with all of an ant's typical reactions to new objects (here it is assumed that ants are only capable of typical reactions, which may not be true).
Right, so if we're using a uniform distribution over 2^30000, there should be exactly zero ants sharing observer-moments, so in order to argue that ants' overlap in observer-moments should discount their total weight, we're going to need to squeeze that space a lot harder than that.
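To put a rough number on "exactly zero": a standard birthday-problem bound, with the comment's (disputed) 2^30,000 equally likely states and a loose guess at the number of ants that have ever lived, gives a collision probability indistinguishable from zero.

```python
# Birthday-problem bound: with K equally likely visual states and n ants,
# P(any two ants ever share a state) <= n^2 / (2K).
# n = 1e20 is a loose guess at the number of ants that have ever lived;
# K = 2^30_000 is the comment's (disputed) state count.
import math

n = 1e20
log2_K = 30_000

log2_collision_bound = 2 * math.log2(n) - 1 - log2_K
print(log2_collision_bound)   # about -29868, i.e. P(collision) < 2^-29868
```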
I've also spent some time recently staring at ~randomly generated grids of color for an unrelated project, and I think there's basically no way that the human visual system is getting so much as 5000 bits of entropy (i.e., a 50x50 grid of four-color choices) out of the observer-experience of the visual field. So I think using 2^#receptors is just the wrong starting point. Similarly, assuming that neurons operate independently is going to give you a number in entirely the wrong realm. (Wikipedia says an ant has ~250,000 neurons.)
I think that if you want to get to the belief that two ants might ever actually share an experience, you're going to need to work in a significantly smaller domain, like your suggestion of output actions, though applying the domain of "typical reactions of a human to new objects" is going to grossly undercount the number of human possible observer-experiences, so now I'm back to being stuck wondering how to do that at all.
If we take the multiverse view, there will be copies, but what we need is not actual copies but a measure of the uniqueness of each observer-moment, which could be calculated from the relative frequencies of copies for humans and for ants.
The problem may be made more practical by asking how much computational resource we (or a future FAI) would need to resurrect all possible humans and all possible ants.
by the possibility concerning faster clock speeds for smaller animals, the possibility of lesser unity in non-human animals (which one might value at >1x for the same reason one might value a dually-conscious split-brain patient at ~2x), and the possibility for greater intensity of experience in simpler animals.
Interestingly, some variant of each of these would also seem to apply when comparing the moral weight of adult humans vs. infant/toddler humans; while human infants probably don't have a higher clock speed than adults in the same sense that small animals might have a higher clock speed than humans, there is the widely-known point that young children nonetheless seem to have a much higher subjective speed than adults.
And if we are willing to ascribe moral weight to fruit flies, there must also be some corresponding non-zero moral weight to early-term human fetuses.
This whole conversation makes me deeply uncomfortable. I expect to strongly disagree at pretty low levels with almost anyone else trying to have this conversation, I don't know how to resolve those disagreements, and meanwhile I worry about people seriously advocating for positions that seem deeply confused to me and those positions spreading memetically.
For example: why do people think consciousness has anything to do with moral weight?
why do people think consciousness has anything to do with moral weight?
Is there anything that it seems to you likely does have to do with moral weight?
I feel pretty confused about these topics, but it's hard for me to imagine that conscious experience wouldn't at least be an input into judgments I would endorse about what's valuable.
For anyone who is curious, I cite much of the literature arguing over criteria for moral patienthood/weight in the footnotes of this section of my original moral patienthood report. My brief comments on why I've focused on consciousness thus far are here.
(You have to press space after finishing some markdown syntax to have it be properly parsed. Fixed it for you, and sorry for the confusion.)
why do people think consciousness has anything to do with moral weight?
One of my strongest moral intuitions is that suffering is bad, meaning that it's good to help other minds not-suffer. Minds can only suffer if they are conscious.
Interesting, that is not a terribly strong intuition for me. I'm willing to suffer some amount for some causes, so at least it's not fundamental and universal.
The intuition that feels more fundamental is that joy should be maximized, and suffering is (in many cases) a reduction in joy. Which gets to "useless suffering is bad", but that's a lot weaker than "suffering is bad".
Anyhow, I suspect this difference in intuition is a deep enough disagreement that it makes it difficult to fully agree on moral values. Both are about consciousness, though, so we at least agree there. I wonder what the moral intuitions are that make one think consciousness is not central.
I suspect this difference in intuition is a deep enough disagreement that it makes it difficult to fully agree on moral values.
It's not clear to me, from what's written here, that you two even disagree at all. Kaj says, "suffering is bad." You say, "useless suffering is bad."
Are you sure Kaj wouldn't also agree that suffering can sometimes be useful?
Yeah, "suffering is bad" doesn't mean that I would never accept trades which involved some amount of suffering. Especially since trying to avoid suffering tends to cause more of it in the long run, so even if you only cared about reducing suffering (which I don't), you'd still want to take actions involving some amount of suffering.
Compare: even if you want to have a lot of money, never spending any of it (e.g. on investments) isn't a very good strategy, even though your stated goal implies that spending money is bad.
Hmm, the money analogy misses me too. I'd never say "spending money is bad", even as shorthand for something, as it's simply not a base-level truth. I think of money as a lifetime flow rather than an instantaneous stock, and failing in your goals when you have unspent money is clearly a mistake.
I suspect we do agree on a lot of intuitions, but also disagree on the modeling of which of those are fundamental vs situational.
This does exactly what it sets out to do: presents an issue, shows why we might care, and lays out some initial results (including both intuitive and counterintuitive ones). It's not world-shaking for me, but it certainly carries its weight.
It's fairly rare (lately) that I've read something that meaningfully shifted my distribution of "what sorts of moral and/or consciousness theories I'm likely to subscribe to after more learning/reflection."
I think this probably mostly has to do with me being in a valley where there's a lot of "relatively easy" concepts I've already learned, and then [potentially] harder concepts that I'd have to put a lot of work into understanding. (I did kinda bounce off Luke's longer post on consciousness, although I think that had more to do with length than being over my head.)
But this post seemed well targeted towards 2018_raemon's background. I had thought about high-clockspeed being relevant for the moral relevance of digital-minds, but somehow hadn't considered that this might also make hummingbirds more morally relevant than humans.
(To be clear, all of this is hedged with massive uncertainty, and I currently don't expect to end up believing hummingbirds are more relevant. But it felt like a big shift in how I carved up the space of possibilities)
Interesting points! I hadn't seriously considered the possibility of animals having more moral weight per capita than humans, but I guess it makes some sense, even if it's implausible. Two points:
1. Are the ranges conditional on each species being moral patients at all? If not, it seems like there'd be enough probability mass on 0 for some of the less complex animals that any reasonable confidence interval should include it.
2. What are your thoughts on pleasure/pain asymmetries? Would your ranges for the moral weight of positive experiences alone be substantially different to the ones above? To me, it makes intuitive sense that animals can feel pain in roughly the same way we do, but the greatest happiness I experience is so wrapped up in my understanding of the overall situation and my expectations for the future that I'm much less confident that they can come anywhere close.
Yes, I meant to be describing ranges conditional on each species being moral patients at all. I previously gave my own (very made-up) probabilities for that here. Another worry to consider, though, is that many biological/cognitive and behavioral features of a species are simultaneously (1) evidence about their likelihood of moral patienthood (via consciousness), and (2) evidence about features that might affect their moral weight *given* consciousness/patienthood. So, depending on how you use that evidence, it's important to watch out for double-counting.
I'll skip responding to #2 for now.
I don't think the two-elephants problem is as fatal to moral weight calculations as you suggest (e.g. "this doesn't actually work"). The two-envelopes problem isn't a mathematical impossibility; it's just an interesting example of mathematical sleight-of-hand.
Brian's discussion of two-envelopes is just to point out that moral weight calculations require a common scale across different utility functions (e.g. the decision to fix the moral weight of a human at 1 whether you're using brain size, all-animals-are-equal, unity-weighting, or any other weighing approach). It's not to say that there's a philosophical or mathematical impossibility in doing these calculations, as far as I understand.
FYI I discussed this a little with Brian before commenting, and he subsequently edited his post a little, though I'm not yet sure if we're in agreement on the topic.
I think the moral-uncertainty version of the problem is fatal unless you make further assumptions about how to resolve it, such as by fixing some arbitrary intertheoretic-comparison weights (which seems to be what you're suggesting) or using the parliamentary model.
Regardless of whether the problem can be resolved, I confess that I don't see how it's related to the original two-envelopes problem, which is a case of doing incorrect expected-value calculations with sensible numbers. (The contents of the envelopes are entirely comparable and can't be rescaled.)
Meanwhile, it seems to me that the elephants problem just comes about because the numbers are fake. You can do sensible EV calculations, get (a + b/4) for saving two elephants versus (a/2 + b/2) for saving one human, but because a and b are mostly-unconstrained (they just have to be positive), you can't go anywhere from there.
These strike me as just completely unrelated problems.
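To spell out the comparison in the comment above, where a and b are whatever positive values the two hypotheses assign and are otherwise unconstrained (the specific numbers below are made up purely for illustration):

```python
# EV(save two elephants) = a + b/4 vs. EV(save one human) = a/2 + b/2,
# so two elephants win exactly when a > b/2, and nothing pins down a/b.
def prefers_elephants(a, b):
    return (a + b / 4) > (a / 2 + b / 2)   # algebraically equivalent to a > b/2

print(prefers_elephants(1.0, 1.0))   # True  (a > b/2)
print(prefers_elephants(1.0, 4.0))   # False (a < b/2)
```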
The naive form of the argument is the same between the classic and moral-uncertainty two-envelopes problems, but yes, while there is a resolution to the classic version based on taking expected values of absolute rather than relative measurements, there's no similar resolution for the moral-uncertainty version, where there are no unique absolute measurements.
There's nothing wrong with using relative measurements, and using absolute measurements doesn't resolve the problem. (It hides from the problem, but that's not the same thing.)
The actual resolution is explained in the wiki article better than I could.
I agree that the naive version of the elephants problem is isomorphic to the envelopes problem. But the envelopes problem doesn't reveal an actual difficulty with choosing between two envelopes, and the naive elephants problem as described doesn't reveal an actual difficulty with choosing between humans and elephants. They just reveal a particular math error that humans are bad at noticing.
I think most thinkers on this topic wouldn't think of those weights as arbitrary (I know you and I do, as hardcore moral anti-realists), and they wouldn't find it prohibitively difficult to introduce those weights into the calculations. Not sure if you agree with me there.
I do agree with you that you can't do moral weight calculations without those weights, assuming you are weighing moral theories and not just empirical likelihoods of mental capacities.
I should also note that I do think intertheoretic comparisons become an issue in other cases of moral uncertainty, such as with infinite values (e.g. a moral framework that absolutely prohibits lying). But those cases seem much harder than moral weights between sentient beings under utilitarianism.
Seconding Ray. This was a bunch of important hypotheses about consciousness I had never heard of.
This was an awesome read. Can you perhaps explain the listed intuition to care more about things like clock speeds than higher cognitive functions?
The way I see it, higher cognitive functions allow long-term memories and their resurfacing, and a cognitive interpretation of direct suffering, like physical pain. A hummingbird might have 3x a human's clock speed, but it might be far less emotionally scarred than a human when subjected to maximum pain for, let's say, 8 objective seconds ("emotionally scarred" is a not-well-defined way of saying that more suffering will arise later due to the pain caused in the hypothetical event). That is why, IMO, most people do assign relevance to more complicated cognitions.
Thanks for the post!
I was trying to use the lower and upper estimates of 5*10^-5 and 10, guessed for the moral weight of chickens relative to humans, as the 10th and 90th percentiles of a lognormal distribution. This resulted in a mean moral weight of 1000 to 2000 (the result is not stable), which seems too high, and a median of 0.02.
1- Do you have any suggestions for a more reasonable distribution?
2- Do you have any tips for stabilising the results for the mean?
I think I understand the problems of taking expectations over moral weights (E(X) is not equal to 1/E(1/X)), but believe that it might still be possible to determine a reasonable distribution for the moral weight.
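A minimal sketch of the fit described above, assuming (as the comment does) that 5*10^-5 and 10 are the 10th and 90th percentiles of a lognormal distribution:

```python
# Reconstructing the fit described above: treat 5e-5 and 10 as the 10th and
# 90th percentiles of a lognormal and back out the log-scale parameters.
import math
from statistics import NormalDist

p10, p90 = 5e-5, 10.0
z90 = NormalDist().inv_cdf(0.9)               # ~1.2816

mu = (math.log(p10) + math.log(p90)) / 2      # midpoint of the log-quantiles
sigma = (math.log(p90) - math.log(p10)) / (2 * z90)

median = math.exp(mu)                         # ~0.022
mean = math.exp(mu + sigma**2 / 2)            # ~1.9e3, driven by the right tail
print(median, mean)
```

With these percentiles the log-scale sigma comes out around 4.8, so the mean is dominated by the extreme right tail; small changes to the assumed percentiles move it by large factors while the median stays near 0.02, which looks like the instability described above.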
hot take: utilitarianism is broken, the only way to fix it is to invent economics - you can't convert utility between agents, and when you try to do anything resembling that, you get something that works sort of like (but not exactly the same as) money.
That sounds like it's talking about some version of preference utilitarianism (in which utility is defined the way economics does it, and we try to maximize the sum of each agent's own utility function), whereas this post says that it's talking about classical utilitarianism. I think that for classical utilitarianism, it's enough to just know your own ideal exchange rates for different kinds of pain and pleasure, and then you can try to take actions which shift the world's overall ratio of pain/pleasure towards something that's good according to your own utility function.
It's fun to take these calculations and apply some existential value to seriously amplify the repugnant conclusion. We should tile the universe with whatever creature has the highest moral weight per resource consumed. It's unlikely to be humans.
My actual honest reaction to this sort of thing: Please, please stop. This kind of thinking actively drives me and many others I know away from LW/EA/Rationality. I see it strongly as asking the wrong questions with the wrong moral frameworks, and using it to justify abominable conclusions and priorities, and ultimately the betrayal of humanity itself - even if people who talk like this don't write the last line of their arguments, it's not like the rest of us don't notice it. I don't have any idea what to say to someone who writes 'if I was told one pig was more important morally than one human I would not be surprised.'
That's not me trying to convince anyone of anything beyond that I have that reaction to this sort of thing, and that it seemed wrong for me not to say it given I'm writing reviews. No demon threads please, if I figure out how to say this in a way that would be convincing and actually explain, I'll try and do that. This is not that attempt.
This kind of thinking actively drives me and many others I know away from LW/EA/Rationality
And that kind of thinking (appeal to the consequence of repelling this-and-such kind of person away from some alleged "community") has been actively driving me away. I wonder if there's some way to get people to stop ontologizing "the community" and thereby reduce the perceived need to fight for control of the "LW"/"EA"/"rationalist" brand names? (I need to figure out how to stop ontologizing, because I'm exhausted from fighting.) Insofar as "rationality" is a thing, it's something that Luke-like optimization processes and Zvi-like optimization processes are trying to approximate, not something they're trying to fight over.
I see it strongly as asking the wrong questions with the wrong moral frameworks, and using it to justify abominable conclusions and priorities, and ultimately the betrayal of humanity itself—even if people who talk like this don’t write the last line of their arguments, it’s not like the rest of us don’t notice it. I don’t have any idea what to say to someone who writes ‘if I was told one pig was more important morally than one human I would not be surprised.’
Entirely seconded; this is my reaction also.
This post adapts some internal notes I wrote for the Open Philanthropy Project, but they are merely at a "brainstorming" stage, and do not express my "endorsed" views nor the views of the Open Philanthropy Project. This post is also written quickly and not polished or well-explained.
My 2017 Report on Consciousness and Moral Patienthood tried to address the question of "Which creatures are moral patients?" but it did little to address the question of "moral weight," i.e. how to weigh the interests of different kinds of moral patients against each other:
Thus far, philosophers have said very little about moral weight (see below). In this post I lay out one approach to thinking about the question, in the hope that others might build on it or show it to be misguided.
Proposed setup
For the simplicity of a first-pass analysis of moral weight, let's assume a variation on classical utilitarianism according to which the only thing that morally matters is the moment-by-moment character of a being's conscious experience. So e.g. it doesn't matter whether a being's rights are respected/violated or its preferences are realized/thwarted, except insofar as those factors affect the moment-by-moment character of the being's conscious experience, by causing pain/pleasure, happiness/sadness, etc.
Next, and again for simplicity's sake, let's talk only about the "typical" conscious experience of "typical" members of different species when undergoing various "canonical" positive and negative experiences, e.g. consuming species-appropriate food or having a nociceptor-dense section of skin damaged.
Given those assumptions, when we talk about the relative "moral weight" of different species, we mean to ask something like "How morally important is 10 seconds of a typical human's experience of [some injury], compared to 10 seconds of a typical rainbow trout's experience of [that same injury]?"
For this exercise, I'll separate "moral weight" from "probability of moral patienthood." Naively, you could then multiply your best estimate of a species' moral weight (using humans as the baseline of 1) by P(moral patienthood) to get the species' "expected moral weight" (or whatever you want to call it). Then, to estimate an intervention's potential benefit for a given species, you could multiply [expected moral weight of species] × [individuals of species affected] × [average # of minutes of conscious experience affected across those individuals] × [average magnitude of positive impact on those minutes of conscious experience].
However, I say "naively" because this doesn't actually work, due to two-envelope effects.
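A toy illustration of the effect, with invented numbers rather than anything from this post: suppose two equally likely hypotheses about a chicken's moral weight relative to a human, and compare the same two interventions under the two obvious normalizations.

```python
# Invented toy example of a two-envelope effect in moral weights.
# Hypothesis 1 (p=0.5): chicken experience matters as much as human experience.
# Hypothesis 2 (p=0.5): chicken experience matters 1/100 as much.
p = 0.5

# Normalization A: fix the human's weight at 1 under both hypotheses.
expected_chicken_A = p * 1.0 + p * 0.01       # 0.505
print(10 * expected_chicken_A, 1.0)           # helping 10 chickens (5.05) beats 1 human (1.0)

# Normalization B: fix the chicken's weight at 1 under both hypotheses.
expected_human_B = p * 1.0 + p * 100.0        # 50.5
print(10 * 1.0, expected_human_B)             # helping 1 human (50.5) beats 10 chickens (10.0)
```

The ranking flips depending on which species is pinned at weight 1, even though the hypotheses and probabilities are identical; that is the sense in which the naive multiplication above can mislead.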
Potential dimensions of moral weight
What features of a creature's conscious experience might be relevant to the moral weight of its experiences? Below, I describe some possibilities that I previously mentioned in Appendix Z7 of my moral patienthood report.
Note that any of the features below could be (and in some cases, very likely are) hugely multidimensional. For simplicity, I'm going to assume a unidimensional characterization of them, e.g. what we'd get if we looked only at the principal component in a principal component analysis of a hugely multidimensional phenomenon.
Clock speed of consciousness
Perhaps animals vary in their "clock speed." E.g. a hummingbird reacts to some things much faster than I ever could. If any of that is under conscious control, its "clock speed" of conscious experience seems like it should be faster than mine, meaning that, intuitively, it should have a greater number of subjective "moments of consciousness" per objective minute than I do.
In general, smaller animals probably have faster clock speeds than larger ones, for mechanical reasons:
My impression is that it's a common intuition to value experience by its "subjective" duration rather than its "objective" duration, with no discount. So if a hummingbird's clock speed is 3x as fast as mine, then all else equal, an objective minute of its conscious pleasure would be worth 3x an objective minute of my conscious pleasure.
Unities of consciousness
Philosophers and cognitive scientists debate how "unified" consciousness is, in various ways. Our normal conscious experience seems to many people to be pretty "unified" in various ways, though sometimes it feels less unified, for example when one goes "in and out of consciousness" during a restless night's sleep, or when one engages in certain kinds of meditative practices.
Daniel Dennett suggests that animal conscious experience is radically less unified than human consciousness is, and cites this as a major reason he doesn't give most animals much moral weight.
For convenience, I'll use Bayne (2010)'s taxonomy of types of unity. He talks about subject unity, representational unity, and phenomenal unity — each of which has a "synchronic" (momentary) and "diachronic" (across time) aspect of unity.
Subject unity
Bayne explains:
Representational unity
Bayne explains:
I suspect many people wouldn't treat representational unity as all that relevant to moral weight. E.g. there are humans with low representational unity of a sort (e.g. visual agnosics); are their sensory experiences less morally relevant as a result?
Phenomenal unity
Bayne explains:
Unity-independent intensity of valenced aspects of consciousness
A common report of those who take psychedelics is that, while "tripping," their conscious experiences are "more intense" than they normally are. Similarly, different pains feel similar but have different intensities, e.g. when my stomach is upset, the intensity of my stomach pain waxes and wanes a fair bit, until it gradually fades to not being noticeable anymore. Same goes for conscious pleasures.
It's possible such variations in intensity are entirely accounted for by their degrees of different kinds of unity, or by some other plausible feature(s) of moral weight, but maybe not. If there is some additional "intensity" variable for valenced aspects of conscious experience, it would seem a good candidate for affecting moral weight.
From my own experience, my guess is that I would endure ~10 seconds of the most intense pain I've ever experienced to avoid experiencing ~2 months of the lowest level of discomfort that I'd bother to call "discomfort." That very low level of discomfort might suggest a lower bound on "intensity of valenced aspects of experience" that I intuitively morally care about, but "the most intense pain I've ever experienced" probably is not the highest intensity of valenced aspects of experience it is possible to experience — probably not even close. You could consider similar trades to get a sense for how much you intuitively value "intensity of experience," at least in your own case.
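As a back-of-the-envelope reading of that trade, and assuming purely for illustration that disvalue scales linearly with duration (the post does not claim this), indifference at those durations implies an intensity ratio on the order of 5 x 10^5:

```python
# Back-of-the-envelope implication of that trade, assuming purely for
# illustration that disvalue = intensity x duration with no discounting.
seconds_intense = 10
seconds_mild = 2 * 30 * 24 * 60 * 60        # ~2 months of barely-noticeable discomfort

implied_intensity_ratio = seconds_mild / seconds_intense
print(implied_intensity_ratio)              # ~518,400, i.e. roughly 5e5 : 1
```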
Moral weights of various species
(This section edited slightly on 2020-02-26.)
If we thought about all this more carefully and collected as much relevant empirical data as possible, what moral weights might we assign to different species?
Whereas my probabilities of moral patienthood for any animal as complex as a crab only range from 0.2 to 1, the plausible ranges of moral weight seem like they could be much larger. I don't feel like I'd be surprised if an omniscient being told me that my extrapolated values would assign pigs more moral weight than humans, and I don't feel like I'd be surprised if an omniscient being told me my extrapolated values would assign pigs .0001 moral weight (assuming they were moral patients at all).
To illustrate how this might work, below are some guesses at some "plausible ranges of moral weight" (80% prediction interval) for a variety of species that someone might come to, if they had intuitions like those explained below.
(But whenever you're tempted to multiply such numbers by something, remember two-envelope effects!)
What intuitions might lead to something like these ranges?
Under these intuitions, the low end of the ranges above could be explained by the possibility that intensity of conscious experience diminishes dramatically with brain complexity and flexibility, while the high end of the ranges above could be explained by the possibility concerning faster clock speeds for smaller animals, the possibility of lesser unity in non-human animals (which one might value at >1x for the same reason one might value a dually-conscious split-brain patient at ~2x), and the possibility for greater intensity of experience in simpler animals.
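A minimal sketch of how these intuitions could stack multiplicatively into ranges this wide; every factor value below is an illustrative placeholder (loosely echoing numbers mentioned in the post and comments), not an actual estimate:

```python
# Sketch of how the low and high ends of a range could arise multiplicatively.
# All factor values are illustrative placeholders, not estimates.

# Low end: intensity of experience falls off steeply with brain complexity.
low_end = 1e-4

# High end: faster clock speed x lesser unity x greater intensity, stacked.
clock_speed = 3.0   # subjective moments per objective moment, vs. a human
unity = 2.0         # lesser unity valued like a dually-conscious split-brain patient
intensity = 1.5     # somewhat more intense experience in a simpler animal
high_end = clock_speed * unity * intensity   # 9.0

print(low_end, high_end)   # a range spanning roughly five orders of magnitude
```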
Other writings on moral weight