I did some figuring and it looks like I came to the same conclusion.
I've only skimmed this and small portions of the links about the two-envelopes thing. As for the original mathematical exercise, it's kind of fun to construct a probability distribution where it's always advantageous to switch envelopes. But Wiki says:
Suppose E(B | A = a) > a for all a. It can be shown that this is possible for some probability distributions of X (the smaller amount of money in the two envelopes) only if E(X) = ∞.
Which seems probably true. And comparing infinities is always a dangerous game. Though you can have finite versions of the situation (e.g. 1/10th chance of each of "$1, $2", "$2, $4", ..., "$512, $1024") where switching envelopes is advantageous in all cases except one.
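To make the finite version concrete, here's a small Python sketch (my own illustration, using the amounts and probabilities above) that computes the expected value of the other envelope conditional on each amount you might be holding:

```python
# Finite two-envelopes: ten equally likely pairs ($1,$2), ($2,$4), ..., ($512,$1024).
# For each amount you might observe, compute the expected value of the other envelope.
from collections import defaultdict
from fractions import Fraction

pairs = [(2**k, 2**(k + 1)) for k in range(10)]

# Each (pair, which envelope you hold) combination is equally likely, so conditioning on
# the amount you hold just averages over the possible contents of the other envelope.
other_contents = defaultdict(list)
for small, large in pairs:
    other_contents[small].append(large)   # you hold the smaller envelope
    other_contents[large].append(small)   # you hold the larger envelope

for amount in sorted(other_contents):
    expected_other = Fraction(sum(other_contents[amount]), len(other_contents[amount]))
    decision = "switch" if expected_other > amount else "keep"
    print(f"holding ${amount}: E[other] = ${float(expected_other):.2f} -> {decision}")
# Switching comes out ahead for every observed amount except $1024.
```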
Anyway, onto the moral version from Tomasik's article. I tried stating it in terms of utility.
Suppose (helping) a human is worth 1 util. In the first scenario (to which we give probability 0.5), an elephant is worth 1/4 as much as a human, so 0.25 utils, so two elephants are worth 0.5 utils. In the second scenario (also probability 0.5), an elephant is worth the same as a human, so 1 util, and two elephants are worth 2 utils. Then the expected-value calculation for helping the human is: "E(h) = 0.5 * 1 + 0.5 * 1 = 1", while for the elephants it's "E(2e) = 0.5 * 0.5 + 0.5 * 2 = 1.25", and thus E(h) = 1 < E(2e) = 1.25, so helping the elephants is better.
On the other hand, if we decide that an elephant is worth 1 util, then our calculations become:
e = 1 u.
.5: h = 4 u, 2e = 2 u.
.5: h = 1 u, 2e = 2 u.
E(h) = 2.5 u
E(2e) = 2 u
-> prefer h.
This reproduces the "always advantageous to switch" problem.
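As a quick check, here's a short Python sketch (my own, with the same numbers as above) reproducing both calculations; the only thing that changes between them is which unit is held fixed across the two scenarios:

```python
# Two equally likely scenarios: an elephant is worth 1/4 of a human, or worth the same.
scenarios = [
    {"p": 0.5, "e_per_h": 0.25},  # elephant worth 1/4 as much as a human
    {"p": 0.5, "e_per_h": 1.0},   # elephant worth the same as a human
]

# Normalize by the human: h = 1 util in every scenario.
E_h = sum(s["p"] * 1.0 for s in scenarios)
E_2e = sum(s["p"] * 2 * s["e_per_h"] for s in scenarios)
print("human as the unit:    E(h) =", E_h, " E(2e) =", E_2e)   # 1.0 vs 1.25 -> prefer 2e

# Normalize by the elephant: e = 1 util in every scenario.
E_h = sum(s["p"] * (1.0 / s["e_per_h"]) for s in scenarios)
E_2e = sum(s["p"] * 2.0 for s in scenarios)
print("elephant as the unit: E(h) =", E_h, " E(2e) =", E_2e)   # 2.5 vs 2.0 -> prefer h
```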
The trouble is that our unit isn't consistent between the two scenarios. The mere information about the ratio between h and e doesn't fix an absolute value that can be compared across the two worlds; we could scale all values in one world up or down by an arbitrary factor, which can make the calculation go either way. To illustrate, let's assign absolute values. First, let's suppose that h is worth $100 in all worlds (you might imagine h = "hammer", e = "earbuds" or something):
.5: h = $100, 2e = $50
.5: h = $100, 2e = $200
"E(h) = $100, E(2e) = $125"
-> prefer 2e.
Next, let's imagine that h is worth $100 in the first world, but $1 in the second world:
.5: h = $100, 2e = $50
.5: h = $1, 2e = $2
"E(h) = $50.5, E(2e) = $26"
-> prefer h.
We see that giving h a much bigger value in one world effectively gives that world a much bigger weight in the expected-value calculation. The effect is similar to if you gave that world a much higher probability than the other.
And we see that Tomasik's original situation amounts to, the first time around, having "h = $100, 2e = $50 or $200", and, the second time, having "h = $50 or $200, 2e = $100".
So picking the right consistent cross-universal unit is important, and is the heart of the problem... Finally looking back at your post, I see that your first sentence makes the same point. :-)
Now, I'll remark: It could be that, in one world, everyone has much less moral worth—or their emotions are deadened or something—and therefore your calculations should care more about the other world. Just as, if picking option 1 in world A gets you +$500, whereas picking option 2 in world B gets you +$0.50, then you act as though you're in world A and don't care about world B, because A is more important, in all situations except where B is >1000x as likely as A.
It is possible that the value of human life or happiness or whatever should in fact be considered worth a lot more in certain worlds than others, and that this co-occurs with moral worth being determined by brain cell count rather than organism count (or vice versa). But whatever the cross-world valuation is, it must be explicitly stated, and hopefully justified.
Summary
When taking expected values, the results can differ radically based on which common units we fix across possibilities. If we normalize relative to the value of human welfare, then other animals will tend to be prioritized more than by normalizing by the value of animal welfare or by using other approaches to moral uncertainty.
How this work has changed my mind: I was originally very skeptical of intertheoretic comparisons of value/reasons in general, including across theories of consciousness and the scaling of welfare and moral weights between animals, because of the two envelopes problem (Tomasik, 2013-2018) and the apparent arbitrariness involved. This lasted until around December 2023, and some arguments here were originally going to be part of a piece strongly against such comparisons for cross-species moral weights, which I now respond to here along with positive arguments for comparisons.
Acknowledgements
I credit Derek Shiller and Adam Shriver for the idea of treating the problem like epistemic uncertainty relative to what we experience directly. I’d also like to thank Brian Tomasik, Derek Shiller and Bob Fischer for feedback. All errors are my own.
Background
On the allocation between the animal-inclusive and human-centric near-termist views, specifically, Karnofsky (2018) raised a problem:
We can define random variables to capture these statements more precisely via a formalization with expected values. Let H denote the (average or marginal) moral value per human life improved by some intervention, and let C denote the (average or marginal) moral value per chicken life improved by another intervention. Then,
Based on Karnofsky’s example, we could take C/H to be 1% with probability 50% and (approximately) 0 otherwise, and H/C to be 100 (100=1/(1%)) with probability 50% and astronomical (possibly infinite) otherwise. If C is never 0, then C/H and H/C are multiplicative inverses of one another this way, i.e. C/H∗H/C=1. However, E[C/H]=0.005, while E[H/C] is astronomical or infinite, and E[C/H]∗E[H/C]>1. In general, E[C/H]∗E[H/C]>1 as long as C/H is defined, non-negative and not constant.[2] The fact that these two expected values of ratios aren’t inverses of one another is why the two methods give different results for prioritization.
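A small numerical sketch of this asymmetry (my own; to keep E[H/C] finite, it substitutes a small nonzero value of 1/10,000 for the "approximately 0" branch, so the numbers differ slightly from Karnofsky's):

```python
# C/H is 1% with probability 50% and 0.0001 otherwise (a stand-in for "approximately 0").
ratios_C_over_H = [(0.5, 0.01), (0.5, 0.0001)]   # (probability, value of C/H)

E_C_over_H = sum(p * r for p, r in ratios_C_over_H)
E_H_over_C = sum(p * (1 / r) for p, r in ratios_C_over_H)

print(E_C_over_H)                # 0.00505
print(E_H_over_C)                # 5050.0
print(E_C_over_H * E_H_over_C)   # ~25.5 > 1, even though (C/H)*(H/C) = 1 in every state
```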
Rather than specific welfare improvements in particular, C and H could denote welfare ranges, i.e. the difference between the maximum welfare at a time and the minimum welfare at a time of the average chicken or average human, respectively. Or, they may be the “moral weight” of the average chicken or the average human, respectively, as multipliers by which to weigh measures of welfare. We may let H denote the moral value per unit of a human welfare improvement according to a measure of human welfare, like DALYs, QALYs, or measures of life satisfaction, and let C denote the moral value per unit of chicken welfare improvement according to a measure of chicken welfare.[3] See Fischer, 2022 and Rethink Priorities’ Moral Weight Project Sequence for further discussion of welfare ranges, capacities for welfare and moral weights.
This problem has been called the two envelopes problem, in analogy with the original two envelopes problem (Tomasik, 2013-2018, Tomasik et al., 2009-2014). I use Karnofsky (2018)’s framing because of its more explicit connection to effective altruist cause prioritization.
I make a case here that we should fix and normalize by the (or a) human moral weight, using something like comparison method A, with some caveats and adjustments.
Welfare in human-relative terms
The strengths of our reasons to reduce human suffering or satisfy human belief-like preferences, say, don’t typically seem to depend on our understanding of their empirical or descriptive nature. This is not how we actually do ethics. If we found out more about the nature of consciousness and suffering, which we define in human terms, we typically wouldn’t decide it mattered less (or more) than we thought before.[4] Finding out that pleasure is mediated not by dopamine or serotonin but by a separate system, or that humans only have around 86 billion neurons instead of 100 billion doesn’t change how important our own experiences directly seem to us. Nor does changing our confidence between the various theories of consciousness.
Instead, we directly value our experiences, not our knowledge of what exactly generates them. Water didn’t become more or less important to human life from finding out it was H2O.[5] The ultimate causes of why we care about something may depend on its precise empirical or descriptive nature, but the proximal reasons — for example, how suffering feels to us and how bad it feels to us, say — do not change with our understanding of its nature. One might say we know (some of) these reasons by direct experience.[6] My own suffering just directly seems bad to me,[7] and how bad it directly seems does not depend on my beliefs about theories of consciousness or about how many neurons we have.
And, in fact, on utilitarian views using subjective theories of welfare like hedonism, desire theories and preference views, how bad my suffering actually (directly) is for me on those theories plausibly should just be how bad my suffering (directly) seems to me.[8] In that case, uncertainty about the nature of these “seemings” or appearances and how they arise and their extent in other animals is just descriptive uncertainty, like uncertainty about the nature and prevalence of any other physical or biological phenomenon, like gravity or cancer.[9] This is not a problem of comparisons of reasons across moral theories or moral uncertainty. It’s a problem of comparisons of reasons across theories of the empirical or descriptive nature of the things to which we assign moral value. There is, however, still moral uncertainty in deciding between hedonism, desire theories, preference views and objective list theories, and between variants of each, among other things.
Despite later warning about two-envelopes effects in Muehlhauser, 2018, one of Muehlhauser (2017)’s illustrations of how he understands moral patienthood is based on his own direct experience of pain:
It’s the still poorly understood “whatever this is”, i.e. his direct experience, and things “like it” that are of fundamental moral importance and for which he’s looking in other animals. Conscious pain as characterized by specific theories is just designed to track “whatever this is” and things “like it”, but almost all theories will be wrong. The example also seems best interpreted as an illustration of comparison method A, weighing fish pain relative to his experience of pain from spraining an ankle.
The relevant moral reasons are or derive directly from these direct experiences or appearances, and the question is just when, where (what animals and other physical systems) and to what extent these same (kinds of) appearances and resulting reasons apply. Whatever this is that we’re doing, to what extent do others do it or something like it, too? All of our views and theories of the value of welfare should already be or should be made human-relative, because the direct moral reasons we have to apply all come from our own individual experiences and modest extensions, e.g. assuming our experiences are similar to other humans’. As we find out more about other animals and the nature of human welfare, our judgements about where other animals stand in relation to our concept and direct impressions of human welfare — the defining cases — can change.
So I claim that we have direct access to the grounds for the disvalue of human suffering and human moral value, i.e. the variable H in the previous section, and we understand the suffering and moral value of other beings, including the (dis)value in chickens as C above, relative to humans. Because of this, we can fix H and use comparison method A, at least across some theories, including at least separately across theories of the nature of unpleasantness, across theories of the nature of felt desires, and across theories of the nature of belief-like preferences.
On the other hand, it doesn’t make much sense for us to fix the moral value of chicken suffering or the chicken moral weight, because we (or you, the reader) only understand it in human-relative terms, and especially in reference to our (respectively, your) own experiences.[10]
And it could end up being the case — i.e. with nonzero probability — that chickens don’t matter at all, not even infinitesimally. They may totally lack the grounds to which we assign moral value, e.g. they may not be capable of suffering at all, even though I take it to be quite likely that they can suffer, or moral status could depend on more than suffering. Then, we aren’t even fixing the moral weight of a chicken at all, if it can be 0 with nonzero probability and nonzero with nonzero probability. And because of the possible division by 0 moral weight, the expected moral weights of humans and all other animals will be infinite or undefined.[11] It seems such a view wouldn’t be useful for guiding action.[12]
Similarly, we wouldn’t normalize by the moral weights of any other animals, artificial systems, plants or rocks.
We have the most direct access to (some) human moral reasons, can most reliably understand (some of) them and so typically theorize morally relative to (some of) them. How we handle uncertainty should reflect these facts.
Finding common ground
How intense or important suffering is could be quantified differently across theories, both empirical theories and moral theories. In some cases, there will be foundational metaphysical claims inherent to those theories that could ground comparisons between the theories. In many or even most important cases, there won’t be.
What common metaphysical facts could ground intertheoretic comparisons of value or reasons across theories of consciousness as different as Integrated Information Theory, Global Workspace Theory and Attention Schema Theory? Under their standard intended interpretations, they have radically different and mutually exclusive metaphysical foundations — or basic building blocks —, and each of these foundations is false, except possibly one. Similarly, there are very different and mutually exclusive proposals to quantify the empirical intensity of welfare and moral weights, like counting just-noticeable differences, functions of the number of relevant (firing) neurons or cognitive sophistication, direct subjective intrapersonal weighing, among others (e.g. Fischer, 2023 with model descriptions in the tabs of this sheet). How do the numbers of relevant neurons relate to the number of just-noticeable differences across all possible minds, not just humans? There’s nothing clearly inherent to these accounts that would ground intertheoretic comparisons between them, at least given our current understanding. But we can look outside the contents of the theories themselves to the common facts they’re designed to explain.
When ice seemed like it could have turned out to be something other than the solid phase of water, we would be comparing the options based on the common facts — the evidence or data — the different possibilities were supposed to explain. And then by finding out that ice is water, you learn that there is much more water in the world, because you would then also have to count all the ice on top of all the liquid water.[13] If your moral theory took water to be intrinsically good and more of it to be better, this would be good news (all else equal).
For moral weights across potential moral patients, the common facts our theories are designed to explain are those in human experiences, our direct impressions and intuitions, like how bad suffering feels or appears to be to us. It’s these common facts that can be used to ground intertheoretic comparisons of value or reasons, and it’s these common facts or similar ones for which we want to check in other beings or systems. So, we can hold the strengths of reasons from these common facts constant across theories, if and because they ground value directly on these common facts in the same way, e.g. the same hedonistic utilitarianism under different theories of (conscious) pleasure and unpleasantness, or the same preference utilitarianism under different theories of belief-like preferences. And in recognizing animal consciousness, like finding out that ice is water, you could come to see the same kind of empirical facts and therefore moral value in some other animals, finding more of it in the world.
Multiple possible reference points
However, things aren’t so simple as fixing the human moral weight across theories. We should be unsure about that, too. Perhaps a given instance of unpleasantness matters twice as much as another given belief-like preference, or perhaps it matters half as much, with 50% probability each. We get the two envelopes problem here, too. If we were to fix the value of the unpleasantness, then the belief-like preference would have an expected value of 50%*0.5 + 50%*2 = 1.25 times the value of the unpleasantness. If we were to fix the value of the belief-like preference, then the unpleasantness would have an expected value of 50%*0.5 + 50%*2 = 1.25 times the value of the belief-like preference.
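The same arithmetic as before, in a couple of lines (my own sketch):

```python
# The value of the belief-like preference relative to the unpleasantness is 0.5 or 2,
# with probability 50% each.
ratio = [(0.5, 0.5), (0.5, 2.0)]   # (probability, preference value / unpleasantness value)
print(sum(p * r for p, r in ratio))        # fix the unpleasantness: E = 1.25
print(sum(p * (1 / r) for p, r in ratio))  # fix the preference:    E = 1.25
```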
We’re uncertain about which theory of wellbeing is correct and how to weigh human unpleasantness vs human pleasure vs human felt desires vs human belief-like preferences vs human choices vs objective goods and objective bads (and between each). The relative strengths of these different corresponding reasons are not in general fixed across theories. Therefore, the strengths of our reasons can only be fixed for at most one of these at a time (if their relationships aren’t fixed). And the positive arguments for fixing any specific one and not the others seem likely to be weak, so it really is plausible that none should be fixed.
Similarly, we can also be uncertain about tradeoffs, strengths and intensities within the same type of welfare for a human, too, e.g. just degrees of unpleasantness, resulting in another two envelopes problem. For example, I’m uncertain about the relative intensities and moral disvalues of pains I’ve experienced.[14] In general, people may use multiple reference points with which they’re familiar, like multiple specific experiences or intensities, and be uncertain about how they relate to one another.
There could also be non-welfarist moral reasons to consider, like duties, rights, virtues, justifiability and reasonable complaints (under contractualism), special relationships, and specific instances of any of these. We can be uncertain about how they relate to each other and the various types of welfare, too.
So, what do we do? We could separately fix and normalize by each possible (typically human-based) reference point, i.e. a specific moral reason, and use intertheoretic comparisons relative to it, e.g. the expected value of belief-like preferences (cognitive desires) in us and other animals relative to the value of some particular (human) pleasure. I’ll elaborate here.
We pick a very specific reference point or moral reason R and fix its moral weight V_R as a common unit relative to which we measure everything else. V_R takes the role of H in the human-relative method A in the background section. We measure the moral weights of humans (or specific human welfare concerns) as H_R and that of chickens as C_R, and we do the same for everything else. And we also do all of this separately for every possible (typically human-based) reference point R.
For uncertainty between choices of reference points, e.g. between a human pleasure and a human belief-like preference, we would apply a different approach to moral uncertainty that does not depend on intertheoretic comparisons of value or reasons, e.g. a moral parliament.[15] Or, when we can fix (or bound or get a distribution on) the ratios between all pairs of reference points in a subset of them, we could take a weighted sum across the reference points (or subsets of them), as in maximizing expected choiceworthiness, and calculate the expected moral weights of chickens and humans (on that subset of reference points) as[16]
∑_R w_R E[C_R] and ∑_R w_R E[H_R], respectively. In either case, it’s essentially human-relative, if and because R is almost always a human reference point.
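As a rough sketch of the weighted-sum step, here is what the calculation could look like in Python; the reference points, weights w_R and expected moral weights below are hypothetical placeholders, not estimates from this piece:

```python
# Hypothetical reference points R with weights w_R and expected moral weights E[H_R], E[C_R]
# measured relative to each R.
reference_points = {
    #                                      (w_R, E[H_R], E[C_R])
    "a particular human pleasure":          (0.6, 1.0, 0.02),
    "a particular human belief-like pref.": (0.4, 1.2, 0.005),
}

E_C = sum(w * c for (w, h, c) in reference_points.values())
E_H = sum(w * h for (w, h, c) in reference_points.values())
print(E_C, E_H, E_C / E_H)   # weighted expected moral weights and their ratio
```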
There are some things we could say about E[C_R] vs E[H_R] with some constraints on the relationship between the distributions of H_R and C_R. Using the same numbers as Karnofsky (2018)’s and assuming
then[17]
E[C_R] ≥ 0.005 · E[H_R], like in Karnofsky (2018)’s illustration of the human-relative method A. In general, we multiply the probability ratio (50% here) by the value ratio (1/100 here) to get the ratio of expected moral weights (0.005 here). We can also upper bound E[C_R] with a multiple of E[H_R] by reversing the inequalities between the probabilities in 2 and 3.[18]
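As a sanity check on this kind of bound, here is a Monte Carlo sketch (my own) under one example construction I chose: with probability 50%, the chicken gets the same reason at exactly 1/100th of the human’s strength, and otherwise no weight at all. Under this particular construction the bound holds with equality.

```python
import random

random.seed(0)
N = 10**6
sum_H = sum_C = 0.0
for _ in range(N):
    h = random.lognormvariate(0, 1)              # any nonnegative distribution for H_R works here
    c = h / 100 if random.random() < 0.5 else 0  # same reason at 1/100th strength, w.p. 50%
    sum_H += h
    sum_C += c

E_H, E_C = sum_H / N, sum_C / N
print(E_C, 0.005 * E_H)   # E[C_R] ≈ 0.005 * E[H_R] here, matching the lower bound
```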
What can we say about the ratio of expected moral weights?
Would we end up with a ratio of expected moral weights between chickens and humans that’s relatively friendly to chickens? This will depend on the details and our credences.
Consider a lower bound for the chicken’s expected moral weight relative to a human’s. Say we fix some human reference point and corresponding moral reason.
As in the inequality from the previous section, we might think that whatever reason applies to a given human and with whatever strength, a chicken has at least 50% of the probability of having the same or a similar reason apply, but with strength only at least 1/100th of the human’s (relative to the reference point). That would give a ratio of 0.005. Or something similar with different numbers.
We might expect something like this because the central moral reasons from major moral theories seem to apply importantly to farmed chickens with probability not far lower than they do to humans.[19] Let’s consider several:
See also the articles by Animal Ethics on the status of nonhuman animals under various ethical theories and the weight of animal interests.
However, many of the comparisons here probably do in fact depend on comparisons across moral theories, e.g. Kant’s original animal-unfriendly position vs Regan and (perhaps) Korsgaard’s animal-friendly positions. The requirement of (sufficient) rationality for Kant’s reasons to apply could be an inherently moral claim, not a merely empirical one. If Regan and Korsgaard don’t require rationality for moral status, are they extending the same moral reasons Kant recognizes to other animals, or grounding different moral reasons? They might be the same intrinsically, if we see the restriction to rational beings as not changing the nature of the moral reasons. Perhaps the moral reasons come first, and Kant mistakenly inferred that they apply only to rational beings. Or, if they are different, are they similar enough that we can identify them anyway? On the other hand, could the kinds of reasons Regan and Korsgaard recognize as applying to other animals be far far weaker than Kant’s that apply to humans or incomparable to them? Could Kant’s apply to other animals directly with modest probability anyway?
Similar issues could arise between contractualist theories that protect nonrational (or not very rational) beings and those that only protect (relatively) rational beings. I leave these as open problems.
Objections
In this section, I describe and respond to some potential objections to the approach and rationale for intertheoretic comparisons of moral weights I’ve described.
Conscious subsystems
First, what we fix should be human welfare as standardly and simultaneously accessed for report. There could be multiple conscious (or otherwise intrinsically morally considerable) subsystems in a brain to worry about — whether inaccessible in general or not accessed at any particular time — effectively multiple moral patients with their own moral interests in each brain. Our basic moral intuitions about the value of human welfare and the common facts we’re trying to explain probably do not reflect any inaccessible conscious subsystems in our brains, and in general would plausibly only reflect conscious subsystems when they are actually accessed. So, we should normalize relative to what we actually access. It could then be that the number of such conscious subsystems scales in practice with the number of neurons in a brain, so that the average human would have many more of them in expectation, and so could have much greater expected moral weight than other animals with fewer neurons (Fischer, Shriver & St. Jules, 2023 (EA Forum post)).
In the most extreme case, we end up separately counting overlapping systems that differ only by a single neuron (Mathers, 2021) or even a single electron (Crummett, 2022), and the number of conscious subsystems may grow polynomially or even exponentially with the number of neurons or the number of particles, by considering all connected subsets of neurons and neural connections or “connected” subsets of particles.[21] Even a small probability on an aggressive scaling hypothesis could lead to large predictable expected differences in total moral weights between humans, and could give greater expected moral weight to the average whale with more neurons than the average human (List of animals by number of neurons - Wikipedia). With a small but large enough probability assigned to fast enough scaling with the number of neurons or particles, a single whale could have more expected moral weight than all living humans combined. That seems absurd.
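To give a feel for how fast “all connected subsets” can grow, here is a brute-force Python sketch (toy graphs only, not models of real brains) that counts connected induced subgraphs. A path graph (bounded degree) grows only quadratically, while a fully connected graph grows exponentially, in line with the bounds cited in the footnotes:

```python
from itertools import combinations

def is_connected(nodes, edges):
    """Check whether the graph on `nodes` with `edges` (restricted to nodes) is connected."""
    nodes = set(nodes)
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        stack.extend(w for (a, b) in edges for (x, w) in ((a, b), (b, a))
                     if x == v and w in nodes and w not in seen)
    return seen == nodes

def count_connected_subsets(n, edges):
    total = 0
    for k in range(1, n + 1):
        for subset in combinations(range(n), k):
            sub_edges = [(a, b) for (a, b) in edges if a in subset and b in subset]
            if is_connected(subset, sub_edges):
                total += 1
    return total

for n in (4, 8, 12):
    path = [(i, i + 1) for i in range(n - 1)]                       # degree at most 2
    complete = [(i, j) for i in range(n) for j in range(i + 1, n)]  # all-to-all connections
    print(n, count_connected_subsets(n, path), count_connected_subsets(n, complete))
# path: n*(n+1)/2 connected subsets (quadratic); complete graph: 2**n - 1 (exponential)
```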
In this case, how we decide to individuate and count conscious systems seems to be a matter of moral uncertainty. Empirically, I am pretty confident both that the system that is my whole brain is conscious and that the system that is my whole brain excluding any single neuron or electron is conscious. I just don’t think I should count these systems separately and add them up. And then, even if I should assign some non-negligible probability that I should count such systems separately and that the same moral reasons apply across views on counting conscious systems — this would be a genuine identification of moral reasons across different moral theories, not just identifying the same moral reasons across different empirical views — it seems far too fanatical if I prioritize humans (or whales) because of the tiny probability I assign to the number of conscious subsystems of a brain scaling aggressively with the number of neurons or electrons. I outline some other ways to individuate and count subsystems in this comment, and I would expect these to give a number of conscious subsystems scaling at most roughly proportionally in expectation with the number of neurons.
There could be ways to end up with conscious subsystems scaling with the number of neurons that are more empirically based, rather than dependent on moral hypotheses. However, this seems unlikely, because the apparently valuable functions realized in brains seem to occur late in processing, after substantial integration and high-level interpretation of stimuli (see this comment and Fischer, Shriver & St. Jules, 2023 (EA Forum post)). Still, even a small but non-negligible probability could make a difference, so the result will depend on your credences.
Unresolvable disagreements
Second, it could also be difficult for intelligent aliens and us, if both impartial, to agree on how to prioritize humans vs the aliens under uncertainty, if and because we’re using our own distinct standards to decide what matters and how much. Suppose the aliens have their own concept of a-suffering, which is similar to, but not necessarily identical to our concept of suffering. It may differ from human suffering in that some functions are missing, or additional functions are present, or the number of times they’re realized differ, or the relative or absolute magnitudes of (e.g. cognitive) effects differ. Or, if they haven’t gotten that far in their understanding of a-suffering, it could just be the fact that a-suffering feels different or might feel different from human suffering, so their still vague concept picks out something potentially different from ours. Or vice versa.
In the same way chickens matter relatively more on the human-relative view than chickens do on the chicken-relative view, as above from Karnofsky, 2018, humans and the aliens could agree on (almost) all of the facts and have the same probability distributions for the ratio of the moral weight of human suffering to the moral weight of a-suffering, and yet still disagree on expected moral weights and about how to treat each other. Humans could weigh humans and aliens relative to human suffering, while the aliens could weigh humans and aliens relative to a-suffering. In relative terms and for prioritization, the aliens would weigh us more than we weigh ourselves, but we’d weigh them more than they weigh themselves.
One might respond that this seems too agent-relative, and we should be able to agree on priorities if we agree on all the facts, and share priors and the same impartial utilitarian moral views. However, while consciousness remains unsolved, humans don't know what it's like to be the aliens or to a-suffer, and the aliens don't know what it's like to be us or suffer like us. We have access to different facts, and this is not a source of agent-relativity, or at least not an objectionable one. Furthermore, we are directly valuing our own experiences, human suffering, and the aliens are directly valuing their own, a-suffering, and if these differ enough, then we could also disagree about what matters intrinsically or how. This seems no more agent-relative than the disagreement between utilitarians that disagree just on whether hedonism or desire theory is true: a utilitarian grounding welfare based on human suffering and a utilitarian doing so based on a-suffering just disagree about the correct theory of wellbeing or how it scales.
Epistemic modesty about morality
Still, perhaps both we and the aliens should be more epistemically modest[22] about what matters intrinsically and how, and so give weight to the direct perspectives of the aliens. If we try to entertain and weigh all points of view, then we would need to make and agree on genuine intertheoretic comparisons of value, which seems hard to ground and justify, or else we’d use an approach that doesn’t depend on intertheoretic comparisons. This could bring us and the aliens closer to agreement about optimal resource allocation, and perhaps convergence under maximal epistemic modesty, assuming we also agree on how to weigh perspectives and an approach to normative uncertainty.
Doing this can take some care, because we’re uncertain about whether the aliens have any viewpoint at all for us to adopt, and similarly they could be uncertain about us having any such viewpoint. This could prevent full convergence.
On the other hand, chickens presumably don’t think at all about the moral value of human welfare in impartial terms, so there very probably is no such viewpoint to adopt on their behalf, or else only one that’s extremely partial, e.g. some chickens may care about some humans to which they are emotionally attached, and many chickens may fear or dislike humans. Chickens’ points of view therefore wouldn’t grant humans much or any moral weight at all, or may even grant us negative overall weight instead. However, the right response here may instead be against moral impartiality, not against humans in particular. Indeed, most humans seem to be fairly partial, too, and we might partially defer to them, too. Either way, this perspective doesn’t look like the chicken-relative comparison method B from Karnofsky, 2018 that grants humans astronomically more weight than chickens.
How might we get such a perspective? We might idealize: what would a chicken believe if they had the capacities and were impartial, while screening off the value from those extra capacities. Or, we might consider a hypothetical impartial human or other intelligent being whose capacities for suffering are like those of a chicken, whatever those may be. Rather than actual viewpoints for which we have specific evidence of their existence, we’re considering conceivable viewpoints.
I’ll say here that this seems pretty speculative and weird, so I have some reservations about this, but I’m not sure either way.
A plausibly stronger objection to epistemic modesty about moral (and generally normative) stances is that it can too thoroughly undermine whatever moral views you or I or anyone else hold, including the foundational beliefs of effective altruists or assumptions in the project of effective altruism, like impartiality and the importance of beneficence. I am strongly disinclined to practically abandon my own moral views this way. I think this is a more acceptable position than rejecting epistemic modesty about non-normative claims, especially for a moral antirealist, i.e. someone who rejects stance-independent moral facts. We may have no or only weak reasons for epistemic modesty about moral facts in particular.
On the other hand, rather than abandoning foundational beliefs, it may actually support them. It may capture impartiality in a fairly strong sense by weighing each individual’s normative stance(s). Any being who suffers finds their own suffering bad in some sense, and this stance is weighed. A typical parent cares a lot for their child, so the child gets extra weight through the normative stance of the parent. Some humans particularly object to exploitation and using others as means to ends, and this stance is weighed. Some humans believe it’s better for far more humans to exist, and this stance is weighed. Some humans believe it’s better for fewer humans to exist, and this stance is weighed. The result could look like a kind of impartial person-affecting preference utilitarianism, contractualism or Kantianism (see also Gloor, 2022),[23] but relatively animal-inclusive, because whether or not other animals meet some thresholds for rationality or agency, they could have their own perspectives on what matters, e.g. their suffering and its causes.
If normative stances across species, like even across humans, are often impossible to compare, then the implications for prioritization could be fundamentally indeterminate, at least very vague. Or, they could be dominated by those with the most fanatical or lexical stances, who prioritize infinite value at stake without trading it off against mere finite stakes. Or, we might normalize each individual's values (or utility function) by their own range or variance in value (Cotton-Barratt et al., 2020), and other animals could outweigh humans through their numbers in the near term.
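For illustration, here is a rough sketch (my own; the individuals, options and utilities are hypothetical) of the range-normalization idea mentioned above: each individual’s utilities over the options are rescaled to span [0, 1] before being summed, so numerous individuals with something at stake can add up, whatever the “true” scale of their welfare.

```python
options = ["fund the human intervention", "fund the chicken intervention"]

# Hypothetical (un-normalized) utilities of each individual over the two options.
utilities = {
    "human A":   [1.0, 0.2],
    "chicken 1": [0.0, 1.0],
    "chicken 2": [0.0, 1.0],
    "chicken 3": [0.0, 1.0],
}

def normalize_by_range(us):
    lo, hi = min(us), max(us)
    return [(u - lo) / (hi - lo) for u in us]   # assumes the individual isn't indifferent

totals = [0.0] * len(options)
for us in utilities.values():
    for i, u in enumerate(normalize_by_range(us)):
        totals[i] += u

print(dict(zip(options, totals)))   # the chickens' normalized stakes add up across individuals
```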
Other applications of the approach
What other intertheoretic comparisons of value could this epistemic approach apply to? I will consider:
First, realism vs illusionism about phenomenal consciousness. Illusionists deny the phenomenal nature of consciousness and the existence of qualia as “Introspectable qualitative properties of experience that are intrinsic, ineffable, and subjective” (Frankish, 2012, preprint), introduced by Lewis (1929, pp.121, 124-125). Realists accept the phenomenal nature of consciousness and/or qualia. Illusionists do not deny that consciousness exists.[24] In section 5.2, Kammerer (2019) argues that, if phenomenal consciousness would ground moral value if it existed, it would be an amazing coincidence for pain to be just as bad under (strong) illusionism, which denies the existence of phenomenal consciousness, as it is under realism, which accepts the existence of phenomenal consciousness. However, if you're already a moral antirealist or take an epistemic approach to intertheoretic comparisons, then it seems reasonable to hold the strengths of your reasons to be the same, but just acknowledge that you may have misjudged their source or nature. Rather than phenomenal properties as their source, it could be quasi-phenomenal properties, where “a quasi-phenomenal property is a non-phenomenal, physical property (perhaps a complex, gerrymandered one) that introspection typically misrepresents as phenomenal” (Frankish, 2017, p. 18), or even the beliefs, appearances or misrepresentations themselves. Frankish (2012, preprint) proposed a theory-neutral explanandum for consciousness:
These zero qualia could turn out to be phenomenal, under realism, or non-phenomenal and so quasi-phenomenal under illusionism (Frankish, 2012, preprint), but the judgements to be captured are the same, so it seems reasonable to treat the resulting reasons as the same. Or, we could use a less precise common ground: consciousness, whatever it is.
A similar approach could be taken with respect to uncertainty between metaethical positions, using our moral judgements or intuitions as the common facts. Again, we may be wrong about the nature of what they’re supposed to refer to or even the descriptive reality of these moral judgements and intuitions — e.g. whether they express propositions, as in cognitivism, or desires, emotions or other pro-attitudes and con-attitudes, as in non-cognitivism (van Roojen, 2023), and, under cognitivism, whether they are stance-independent or stance-dependent —, but we will still have them in any case. I’d judge torture very negatively regardless of my metaethical stance. Even more straightforwardly, for any specific moral realist stance, there’s a corresponding subjectivist stance that recognizes the exact same moral facts (and vice versa?), but just interprets them as stance-dependent rather than stance-independent. Any non-cognitivist pro-attitude or desire could be reinterpreted as expressing a belief (or appearance) that something is better.[25] This could allow us to at least match identical moral theories, e.g. the same specific classical utilitarianism, under the different metaethical interpretations.
Riedener (2019) proposes a similar and more general constructivist approach based on epistemic norms.[26] He illustrates with person-affecting views vs total utilitarianism, arguing for holding the strengths of reasons to benefit existing people the same between welfarist person-affecting views and total utilitarianism,[27] which would tend to favour total utilitarianism under moral uncertainty. However, if we’re comparing a Kantian person-affecting view and total utilitarianism, he argues that we may have massively misjudged our reasons other than for beneficence between the two views. So the comparison is more complex, and reasons for beneficence could be stronger under total utilitarianism, while our other reasons could be stronger under Kantian views, and we should balance epistemic norms and the particulars to decide these differences.
To be clear, I’m much less convinced of the applications in these cases, and there are important reasons for doubt:
In each of the above cases, one view takes as fundamental and central measures or consequences of what the other view takes as fundamental and central.[29] This will look like Goodhart’s law to those who insist it’s not these measures or consequences that matter but what is being measured or the causes. Those holding one of the pairs of views could complain that the others are gravely mistaken about what matters and why, so Riedener (2019)’s conservatism may not tell us much about how to weigh the views. The comparisons seem less reasonable, and we could end up with two envelopes problems again, fixing one theory’s fundamental grounds and evaluating both theories relative to it.
On the other hand, while not every pair of theories of consciousness or the value of welfare will agree on common facts to explain, many will. For example, realists about phenomenal consciousness will tend to agree with each other that it’s (specific) phenomenal properties themselves that their theories are designed to explain, so we could compare reasons across realist theories. Illusionists will tend to agree that it’s our beliefs (or appearances) about consciousness that are the common facts to explain, so we could compare reasons across illusionist theories. And theories of welfare and its value are designed to explain, among other things, why suffering is bad or seems bad. So, many reason comparisons can be grounded in practice, even if not all. And regardless of the reasons and whether they can be compared across all views, the common facts from which comparable reasons derive are based on human experience, so our moral views are justifiably human-relative.
Karnofsky (2018) wrote:
I assume he meant “human-centric view” instead of “human-inclusive view”, so I correct the quote with square brackets here.
E[1/X]=1/E[X] if and only if X is equal to a constant with probability 1, and E[1/X]>1/E[X] if X is nonnegative and not equal to a constant with probability 1 (with E[1/X] infinite if X = 0 with nonzero probability). This follows from Jensen's inequality, because f defined by f(x)=1/x is strictly convex on the positive reals.
I would either use per unit averages for chickens and humans, respectively, or assume here that the value scales in proportion (or at least linearly) with each unit of measured welfare for each of humans and chickens, separately.
However, some may believe objective moral value is threatened by illusionism about phenomenal consciousness, which denies that phenomenal consciousness exists. These positions do still recognize that consciousness exists, but they deny that it is phenomenal. We could just substitute an illusionist account of consciousness wherever phenomenal consciousness was used in our ethical theories, although some further revisions may be necessary to accommodate differences. For further discussion, see Kammerer, 2019, Kammerer, 2022 or a later section in this piece. The difference here is because some ethical theories directly value phenomenal consciousness specifically, and not (or less) consciousness in general.
Other examples could be free will, libertarian free will specifically or god(s) which may turn out not to exist, and so moral theories that tied some reasons specifically to them would lose those reasons.
If a moral theory only places value on things that actually exist in some form, while being more agnostic about their nature, then the value can follow the vague and revisable concepts of those things.
Except possibly for indirect and instrumental reasons. It’s useful to know water is H2O.
This could be cashed out in terms of acquaintance, as in knowledge by acquaintance (Hasan, 2019, Duncan, 2021, Knowles & Raleigh, 2019), or appearance, as in phenomenal conservatism (Huemer, 2013). Adam Shriver made a similar point in conversation.
This may be more illustrative than literal for me. Personally, it’s more that other people’s suffering seems directly and importantly bad to me, or indirectly and importantly bad through my emotional responses to their suffering.
However, which kind of “seeming” or appearance should be used can depend on the theory of wellbeing, i.e. unpleasantness under hedonism, cognitive desires or motivational salience under desire theories and preferences under preference theories. I concede later that we may need to separate by these very broad accounts of welfare (and perhaps more finely) rather than treat them all as generating the same moral reasons.
From conversation with multiple people, something like this seems to be the standard view.
Our sympathetic responses to the suffering of another individual — chicken, human or otherwise — don’t necessarily reliably track how bad it is for them from their own perspective, but is probably closer for other humans, because of greater similarity between humans (neurological, functional, cognitive, psychological, behavioural).
E[X/Y]=∞ (or undefined) if X>0, Y≥0, and Y=0 with nonzero probability, because we get X/0 with nonzero probability. E[X/Y] is undefined if X,Y≥0, and X=Y=0 with nonzero probability, because we get 0/0 with nonzero probability.
However, in principle, humans in general or each proposed type of wellbeing could not matter with nonzero probability, so we could get a similar problem normalizing by human welfare or moral weights.
There may be some ways to address the issue.
You could treat the 0 moral weight like an infinitesimal and do arithmetic with it, but I think this entirely denies the possibility that chickens don’t matter at all. This seems ad hoc and to have little or no independent justification.
You could take conditional expected values in the denominator (and numerator) first that gives a nonzero value, assuming Cromwell’s rule, before taking the ratio and expected value of the ratio. In other words, you take the expected value of a ratio of conditional expected values of moral weights. Then, in effect, you’re treating the conditional expected value of chicken moral weight as equal across some views. Most naturally, you would take the conditional expected values over descriptive uncertainty, conditional on each fixed normative stance — so that the resulting prescriptions would agree with each normative stance — and then take the expected value of the ratio across these normative stances/theories (over normative uncertainty).
If you had already measured all the liquid water directly and precisely, you wouldn’t expect any more or less liquid water from finding out ice is also water.
I even doubt that there is any precise fact of the matter for the ratio of their intensities or moral disvalue.
Approaches include Open Philanthropy’s worldview diversification approach (Karnofsky, 2018), variance voting (MacAskill et al., 2020, Ch4), moral parliaments (Newberry & Ord, 2021), a bargain-theoretic approach (Greaves & Cotton-Barratt, 2019), or the Property Rights Approach (Lloyd, 2022). For an overview of moral uncertainty, see MacAskill et al., 2020.
With multiple values for a given w_R, e.g. a distribution of values, we could get a distribution or set of expected moral weights for chickens and humans. To these, we could apply an approach to moral uncertainty that doesn’t depend on intertheoretic reason comparisons.
Let Q_H(q) and Q_C(q) be the quantile functions of H_R and C_R, respectively. Then, for p between 0 and 1,

Q_C(1 − 0.5p) = inf{y ∈ ℝ | 1 − 0.5p ≤ 1 − P[C_R ≥ y]}
= inf{x/100 | x ∈ ℝ, 0.5p ≥ P[C_R ≥ x/100]}
= (1/100) · inf{x ∈ ℝ | 0.5p ≥ P[C_R ≥ x/100]}
≥ (1/100) · inf{x ∈ ℝ | p ≥ P[H_R ≥ x]}
= (1/100) · inf{x ∈ ℝ | 1 − p ≤ 1 − P[H_R ≥ x]}
≥ (1/100) · Q_H(1 − p).

Then,

E[C_R] = ∫_0^1 Q_C(q) dq
= ∫_0^2 Q_C(1 − 0.5p) · 0.5 dp
≥ ∫_0^1 Q_C(1 − 0.5p) · 0.5 dp
≥ ∫_0^1 (1/100) · Q_H(1 − p) · 0.5 dp
= 0.005 · ∫_0^1 Q_H(1 − p) dp
= 0.005 · E[H_R].
P[C_R > 0] ≤ a and P[C_R ≥ b·x] ≤ a·P[H_R ≥ x] for all x > 0 together give E[C_R] ≤ a·b·E[H_R].
However, some major moral theories don’t weigh reasons by summation, aggregate at all or take expected values. The expected moral weights of chickens and humans may not be very relevant in those cases.
Carey and Fry (1995) showed that pigs generalize the discrimination between non-anxiety states and drug-induced anxiety to non-anxiety and anxiety in general, in this case by pressing one lever repeatedly with anxiety, and alternating between two levers without anxiety (the levers gave food rewards, but only if they pressed them according to the condition). Many more such experiments were performed on rats, as discussed in Sánchez-Suárez, 2016, summarized in Table 2 on pages 63 and 64 and discussed further across chapter 3. Rats could discriminate between the injection of the anxiety-inducing drug PTZ and saline injection, including at subconvulsive doses. Various experiments with rats and PTZ have effectively ruled out convulsions as the discriminant, further supporting that it’s the anxiety itself that they’re discriminating, because they could discriminate PTZ from control without generalizing between PTZ and non-anxiogenic drugs, and with the discrimination blocked by anxiolytics and not nonanxiolytic anticonvulsants. Rats further generalized between various pairs of anxiety(-like) states, like those induced by PTZ, drug withdrawal, predator exposure, ethanol hangover, “jet lag”, defeat by a rival male, high doses of stimulants like bemegride and cocaine, and movement restraint.
However, Mason and Lavery (2022) caution:
There are exponential (non-tight) upper bounds for the number of connected subgraphs of a graph, and hence connected neural subsystems of a brain (Pandey & Patra, 2021, Filmus, 2018). However, not any such connected subsystem would be conscious. Also, with bounded degree, i.e. a bounded number of connections/synapses per neuron in your set of brains under consideration, the number of connected subgraphs can be bounded above by a polynomial function of the number of neurons (Eppstein, 2013).
For a defense of epistemic modesty, see Lewis, 2017.
Aumann's agreement theorem, which supports convergence in beliefs between ideally rational Bayesians with common priors about events of common knowledge, may not be enough for convergence here. This is because our conscious experiences are largely private and not common knowledge. Even if they aren’t inherently private, without significant advances in theory or technology that would resolve remaining factual disagreements or far more introspection and far more detailed introspective reports than are practical, they’ll remain largely private in practice.
Or, our priors could differ, based on our distinct conscious experiences, which we use as references to understand moral patienthood and often moral value in general.
I’d only be inclined to weigh the actual or idealized intrinsic/terminal values of actual moral patients, not any possible or conceivable moral patients or perspectives. The latter also seems particularly ill-defined. How would we weigh possible or conceivable perspectives?
The term ‘illusionism’ seems prone to cause misunderstanding, and multiple illusionists have taken issue with the term, including Graziano (2016, ungated), Humphrey (2016) and Veit and Browning (2023, preprint).
See my previous piece discussing how desires and hedonic states may be understood as beliefs or appearances of normative reasons. Others have defended desire-as-belief, desire-as-perception and generally desire-as-guise or desire-as-appearance of normative reasons, the good or what one ought to do. See Schroeder, 2015, 1.3 for a short overview of different accounts of desire-as-guise of good, and Part I of Deonna (ed.) & Lauria (ed), 2017 for more recent work on and discussion of such accounts and alternatives. See also Archer, 2016, Archer, 2020 for some critiques, and Milona & Schroeder 2019 for support for desire-as-guise (or desire-as-appearance) of reasons. A literal interpretation of Roelofs (2022, ungated)’s “subjective reasons, reasons as they appear from its perspective” would be as desire-as-appearance of reasons.
Riedener, 2019 writes, where IRCs is short for intertheoretic reason-comparisons:
Riedener (2019) writes:
Rabinowicz and Österberg (1996) describe similar accounts as object versions of preference views, contrasting them with satisfaction versions, which are instead concerned with preference satisfaction per se. Also similar are actualist preference-affecting views (Bykvist, 2007) and conditional reasons (Frick, 2020).
Or in the case of illusionism vs realism about phenomenal consciousness on one interpretation of illusionism, the comparisons are grounded based on such measures or consequences for both, i.e. the (real or hypothetical) dispositions for phenomenality/qualia beliefs, but what matters are the quasi-phenomenal properties that lead to these beliefs, which are either actually phenomenal under realism or not under illusionism. On another interpretation of illusionism, it’s the beliefs themselves that matter, not quasi-phenomenal properties in general. For more on the distinction, see Frankish, 2021.