Some have argued that all interpersonal welfare comparisons should be possible, or take it as a strong mark against a theory if not all of them are possible under it. Others have argued against their possibility, e.g. Hausman (1995) for preference views. Here, I will illustrate an intermediate position: interpersonal welfare comparisons are vague, with tighter bounds on reasonable comparisons between beings whose welfare states are realized more similarly, and wider or no bounds between beings whose welfare states are realized more differently.
The obvious case is two completely identical, or at least functionally identical, brains (at the right level of abstraction for our functionalist theory). As long as we grant intrapersonal comparisons, we should get interpersonal comparisons between identical brains: we map the first brain's state(s) to the equivalent state(s) in the second and compare them within the second brain. Of course, this is not a very interesting case, and it seems directly useful only for artificial duplicates of minds.
Still, we can go further. Consider an experience E1 in brain B1 and an experience E2 in brain B2. If B1 and B2 differ only in that some of B2's unpleasantness-contributing neurons are less sensitive or removed, and B1 and B2 receive the same input signals that cause pain, then it seems likely to me that B1's painful experience E1 is at least as unpleasant as B2's E2, and possibly more so. We may be able to say roughly how much more unpleasant it is by comparing E2 in B2 directly to less intense states in B1, sandwiching E2 in unpleasantness between two states in B1:
$$\text{Unpleasantness}_{B_1}(E_1) \geq \text{Unpleasantness}_{B_2}(E_2) \geq \text{Unpleasantness}_{B_1}(E_1')$$

where E′1 is some less intense state in B1. Maybe going from E1 to E2 changes the unpleasantness by between -0.01 and 0, i.e. $\text{Unpleasantness}_{B_2}(E_2) = \text{Unpleasantness}_{B_1}(E_1) + \Delta$, where $-0.01 \leq \Delta \leq 0$. There may be no fact of the matter about the exact value of $\Delta$.
For small enough local differences between brains, we could make fairly precise comparisons.
I use unpleasantness to make the illustration more concrete, but it's plausible that other potential types of welfare could be used instead, like preferences. A slight difference in how some preferences are realized should typically result in a slight difference in the preferences themselves and in how we value them, but the extent of the difference in value could be vague and boundable only by fairly tight inequalities. We can use the same example, too: a slight difference in how unpleasant a pain is, through the same kinds of differences in neurons as above, typically results in a slight difference in preferences about that pain and so in preference-based value.
In general, for arbitrary brains B1 and B2 and respective experiences E1 and E2, we can ask whether there's a sequence of changes from E1 and B1 to E2 and B2, possibly passing through different hypothetical intermediate brains and states, that lets us compare E1 and E2 by combining the bounds and inequalities from each step along the sequence. Some changes could have opposite-sign effects on the realized welfare, but with only bounds rather than precise values, the bounds widen between brains farther apart in the sequence.
For example, a change with a range of +1 to +4 in additional unpleasantness and a change with a range of -3 to -1 could give a net change between -2=+1-3 and +3=+4-1. Adding one more change of between +1 and +4 and another of between -3 and -1 gives between -4 and +6. Adding another change of between +2 and +3 gives between -2 and +9. The gap between the bounds widens with each additional change.[1]
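Combining bounds across steps like this is just interval addition: lower bounds add to lower bounds, and upper bounds to upper bounds. Here is a minimal sketch in Python (the Bound class is hypothetical, purely for illustration) reproducing the arithmetic above:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Bound:
    """An interval [lo, hi] bounding the welfare effect of one change."""
    lo: float
    hi: float

    def __add__(self, other: "Bound") -> "Bound":
        # Interval addition: lower bounds and upper bounds add
        # separately, so the width (hi - lo) can only grow.
        return Bound(self.lo + other.lo, self.hi + other.hi)

# The sequence of changes from the example above.
changes = [Bound(+1, +4), Bound(-3, -1), Bound(+1, +4), Bound(-3, -1), Bound(+2, +3)]

total = Bound(0, 0)
for change in changes:
    total = total + change
    print(f"after {change}: net change in [{total.lo}, {total.hi}]")
# Last line printed: net change in [-2, 9]
```

The widths of the running totals go 3, 5, 8, 10, 11, matching the widening described in the text.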
The more changes, or the larger the changes, needed to get from one brain to another, the looser the bounds on the comparisons could become, the further they may extend in both the negative and positive directions overall,[2] and the less reasonable it seems to make such comparisons at all.
In principle, the gap between the bounds could sometimes shrink with additional changes. In the simplest case, if you make a change of between +1 and +3 in unpleasantness and then reverse it, which means adding a change of between -3 and -1, the two together amount to no net physical change and so should give 0 net change in unpleasantness, not between -2 and +2.
However, the changes could also aggregate to be definitely positive overall, or definitely negative overall, e.g. if every change's bounds share the same sign.
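Continuing the sketch above (same hypothetical Bound class), naive interval addition misses the dependence between a change and its reversal, while sign-definite aggregation falls out automatically:

```python
# Naive interval addition treats changes as independent, so a change
# and its exact reversal still widen the interval instead of cancelling:
change = Bound(+1, +3)
reversal = Bound(-3, -1)  # exactly undoes `change`
print(change + reversal)  # Bound(lo=-2, hi=2), though the true net change is 0

# But the sign of the aggregate can still be settled: if every change
# has a positive lower bound, the total is definitely positive.
same_sign = [Bound(+1, +4), Bound(+2, +3), Bound(+0.5, +1)]
total = Bound(0, 0)
for change in same_sign:
    total = total + change
print(total)  # Bound(lo=3.5, hi=8): definitely positive, though still vague
```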