You wrote that the further-fact view is rejected by 87.8% of philosophers, but it's important to realise that 37.3% of philosophers selected "other" for that question.
The closest view to person-affecting ethics that makes any sense to me is something like "it's hard for future lives to have much positive value except when seen as part of an organic four-dimensional human civilization, like notes in a piece of music, and individual survival is a special case of this". (If this were true, I'm not sure if it would limit the number of people whose lives could eventually have much positive value. I'm specifying positive value here because it seems plausible that there's an asymmetry between positive and negative, like how a good note outside a musical piece can be only slightly beautiful while nails on a chalkboard outside a musical piece can still be very ugly.)
I think person-affecting views are wrong, so I don't see much motivation to "rescue" them from whatever contradiction or problem they run into.
Why do you want to?
(I don't mean it in any negative way, just curious)
I also think they are probably wrong, but this kind of argument is a substantial part of why. So I want to see if they can be rescued from it, since that would affect their probability of being right from my perspective.
Do you think there are more compelling arguments that they are wrong, such that we need not consider ones like this? (Also just curious)
I think this is quite a devastating analysis for them, even if you take a "person" to be a well-defined object.
For example, see fig. 2 and the related argument. Basically, you have three worlds (A, B, C) with populations ([x, y], [y, z], [z, x]). You set the welfare of the populations such that:
"Assume that all of these people have positive welfare, but that the y people are better off in B as compared to A, the z people are better off in C as compared to B, and the x people are better off in A as compared to C.
Since the x people do not exist in B, B is neither worse nor better than A for them. Similarly, since the z people do not exist in A, A is neither worse nor better than B for them. However, B is better than A for the y people. Consequently, B is better than A according to the second clause of the Person Affecting Restriction. The same reasoning yields that C is better than B, and A is better than C. But if B is better than A, and C is better than B, then transitivity yields that C is better than A. Consequently, C is both better and worse than A."
So you would have to sacrifice transitivity to "rescue" person-affecting views.
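To make the cycle concrete, here is a minimal sketch with made-up welfare numbers and a crude reading of the person-affecting comparison; both the numbers and the `better` predicate are my own illustration of the structure above, not taken from the paper:

```python
# Three worlds with overlapping populations. The numbers are arbitrary, chosen
# only so that y is better off in B than A, z in C than B, and x in A than C,
# and everyone has positive welfare.
worlds = {
    "A": {"x": 3, "y": 1},
    "B": {"y": 3, "z": 1},
    "C": {"z": 3, "x": 1},
}

def better(w1, w2):
    """w1 is better than w2 if some group existing in both worlds is better off
    in w1 and no shared group is worse off (a rough person-affecting comparison)."""
    shared = set(worlds[w1]) & set(worlds[w2])
    return bool(shared) and \
        any(worlds[w1][g] > worlds[w2][g] for g in shared) and \
        all(worlds[w1][g] >= worlds[w2][g] for g in shared)

for w1, w2 in [("B", "A"), ("C", "B"), ("A", "C")]:
    print(f"{w1} better than {w2}:", better(w1, w2))
# All three print True: B > A, C > B, and (directly) A > C, while transitivity
# applied to the first two would give C > A -- the contradiction in the quote.
```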
Another argument may be from physics: according to the many-worlds interpretation of QM, there exists a world where I was not born, because some high-energy particle damaged a part of the DNA necessary for my birth. Hence, for each person there exists a world where they do not exist. Taken to the extreme, nobody has moral value.
I'm not sure if it's "wrong", "incoherent", or just "incomplete", but this is one major hole in strict person-affecting views. When comparing two future universes, are you disallowed from having a preference if NEITHER of them contains any entity (or consciousness-path, or whatever you say counts as a "person" across time) from the current universe? The world 200 years from now has ZERO person-overlap with now. Does that mean nothing matters?
Hmm, let's see. So I guess to start, it matters what we consider a person. Certainly a person is a thing, and I think we should also suppose that by "person" we mean a phenomenally conscious thing. I'm fairly willing to grant all such things personhood, though others may not be, but I think we can at least start approaching this question by supposing that persons are at least phenomenally conscious things.
In this scenario, what matters? I'd argue that value comes from the telos assigned to noemata, both intrinsically by being the object of an intentional relation and by other noemata putting valued noemata (what I call "axias") into relationships with telos. So if we ask what matters to a person (a phenomenally conscious thing) we answer that what matters is whatever matters to them, i.e. whatever it is they value by virtue of the configuration of their thoughts.
Now to turn to the question, properly I think there is no way we can say Alice exists in both A and C. What we can instead say is perhaps that some successor to Alice exists in both A and C (let's call them Aalice and Calice) and then we could ask if Alice, the person who existed prior to A and C diverging, would prefer to be Aalice or Calice. We could also ask Aalice if she would rather be Calice and Calice if she would rather have been Aalice, although in some ways this question is meaningless because it supposes a counterfactual that could not have been: if Aalice had been Calice then she would have just always been Calice and never Aalice and vice versa. This also suggests how we should feel about comparing A and C to B: there is no sense in which Alice, Aalice, or Calice could have existed in B and have B still be B, so we're unable to directly say whether A or B is better for Alice.
I've not fully thought this through, but what this seems to suggest to me is that we can't reasonably compare A, B, and C except relative to the preferences of persons who existed prior to their diverging, and then only by asking persons in the primordial world which world they would prefer. Even this is complicated: metaphysically it seems likely that all three worlds will exist but be causally cut off from each other, so primordial persons would end up in all three worlds anyway, and their preference in some sense doesn't matter except insofar as it may increase the measure of worlds "similar" to A, B, or C (though what "similar" means is also a bit unclear here).
So I guess I don't have much of an answer for you beyond suggesting that trying to compare possible worlds is a confused exercise, because it asks us to do something that only seems possible because we can imagine possible worlds without them being grounded in their histories. That is, the demand to compare worlds asks us to do something that is physically impossible, and that only makes sense within an ontology that fails to fully account for the way different worlds come to exist.
To be fair, this is at odds with our experience of living, since it often feels like we make choices to pick between different future worlds we will find ourselves in, but this may just as well be an illusion created by our only being able to see the world from the inside.
[Epistemic status: sloppy thoughts not informed by the literature. Hoping actual population ethicists might show up and correct me or point me to whoever has already thought about something like this better.]
Person-affecting views say that when you are summing up the value in different possible worlds, you should ignore people who only exist in one of those worlds. This is based on something like the following intuitions:
The further-fact view says that after learning all physical facts about Alice and Alice’—such as whether Alice’ was the physical result of Alice waiting for five seconds, or is a brain upload of Alice, or is what came out of a replicating machine on Mars after Alice walked in on Earth, or remembers being Alice—there is still a further meaningful question of whether Alice and Alice’ are the same person.
I take the further-fact view to be wrong (or at least Derek Parfit does, and I think we agree the differences between Derek Parfit and me have been overstated). Thinking that the further-fact view is wrong seems to be a common position among intellectuals (e.g. 87% among philosophers).
If the further-fact view is wrong, then what we have is a whole lot of different person-moments, with various relationships to one another, which for pragmatic reasons we like to group into clusters called ‘people’. There are different ways we could define the people, and no real answer to which definition is right. This works out pretty well in our world, but you can imagine other worlds (or futures of our world) where the clusters are much more ambiguous, and different definitions of ‘person’ make a big difference, or where the concept is not actually useful.
Person-affecting views seem to make pretty central use of the concept ‘person’. If we don’t accept the further-fact view, and do want to accept a person-affecting view, what would that mean? I can think of several options:
Before considering these options, I want to revisit the second reason for adopting a person-affecting view: If Alice exists in world A and not in world B, then Alice can’t be made better off by world A existing rather than world B. Whether this premise is true seems to depend on how ‘a world being better for Alice’ works. Some things we might measure would go one way, and some would go the other. For instance, we could imagine it being analogous to:
Suppose that person-affecting view advocates are right, and the worth of one’s life is more like 2). You just can’t compare the worth of Alice’s life in two worlds where she only exists in one of them. Then can you compare person-moments? What if the same ‘person’ exists in two possible worlds, but consists of different person-moments?
Compare world A and world C, which both contain Alice, but in world C Alice makes different choices as a teenager, and becomes a fighter pilot instead of a computer scientist. It turns out that she is not well suited to it, and finds piloting pretty unsatisfying. If Alice_t1A is different from Alice_t1C, can we say that world A is better than world C, in virtue of Alice’s experiences? Each relevant person-moment only exists in one of the worlds, so how can they benefit?
I see several possible responses:
The second possibility seems to involve accepting the second view above: that there is some correct definition of ‘person’ that is larger than a person moment, and fundamental to ethics – something like the further-fact view. This sounds kind of bad to me. And the third view doesn’t seem very tempting without some idea of an actual difference between persons and person-moments.
So maybe the person-moment affecting view looks most promising. Let us review what it would have to look like. For one thing, the only comparable person-moments are the ones that are the same. And since they are the same, there is no point bringing about one instead of the other. So there is never reason to bring about a person-moment for its own benefit. Which sounds like it might really limit the things that are worth intentionally doing. Isn’t making myself happy in three seconds just bringing about a happy person-moment rather than a different sad person-moment?
Is everything just equally good on this view? I don’t think so, as long as you are something like a preference utilitarian: person-moments can have preferences over other person-moments. Suppose that Alice_t0A and Alice_t0C are the same, and Alice_t1A and Alice_t1C are different. And suppose that Alice_t0 wants Alice_t1 to be a computer scientist. Then world A is better than world C for Alice_t0, and so better overall. That is, person-moments can benefit from things, as long as they don’t know at the time that they have benefited.
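As a rough sketch of how this comparison might be computed, here is a toy version of the two worlds; the representation, names, and the idea of checking only exactly-identical person-moments are my own invention for illustration, not a worked-out theory:

```python
# Person-moments are keyed by name; only moments that are identical across the
# two worlds are treated as comparable. All details are invented for illustration.
world_A = {
    "Alice_t0": {"wants_t1_career": "computer scientist"},  # same in A and C
    "Alice_t1": {"career": "computer scientist"},           # exists only in A
}
world_C = {
    "Alice_t0": {"wants_t1_career": "computer scientist"},  # same in A and C
    "Alice_t1": {"career": "fighter pilot"},                # exists only in C
}

def comparable_moments(w1, w2):
    """Person-moments that are exactly the same in both worlds."""
    return [m for m in w1 if m in w2 and w1[m] == w2[m]]

def preference_satisfied(world, moment):
    """Does some later moment in this world have the career this moment wants?"""
    want = world[moment]["wants_t1_career"]
    return any(other.get("career") == want for other in world.values())

for m in comparable_moments(world_A, world_C):
    print(m, "does better in A than in C:",
          preference_satisfied(world_A, m) and not preference_satisfied(world_C, m))
# Only Alice_t0 is comparable; her preference is satisfied in A and not in C,
# so A comes out better, even though Alice_t1A and Alice_t1C themselves
# cannot be compared at all.
```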
I think an interesting feature of this view is that all value seems to come from meddling preferences. It is never directly good that there is joy in the world for instance, it is just good because somebody wants somebody else to experience joy, and that desire was satisfied. If they had instead wished for a future person-moment to be tortured, and this was granted, then this world would apparently be just as good.
So, things that are never directly valuable in this world:
On the upside, since person-moments often care about future person-moments within the same person, we do perhaps get back to something closer to the original person-affecting view. There is often reason to bring about or benefit a person-moment for the sake of previous person-moments in the history of the same person, who for instance wants to ‘live a long and happy life’. My guess after thinking about this very briefly is that in practice it would end up looking like the ‘moderate’ person-affecting views, in which people who currently exist get more weight than people who will be brought into existence, but not infinitely more weight. People who exist now mostly want to continue existing, and to have good lives in the future, and they care less, though still somewhat, about different people in the future.
So, if you want to accept a person-affecting view and not a further-fact view, the options seem to me to be something like these:
None of these sound very good to me, but nor do person-affecting views in general, so maybe I’m the wrong audience. I had thought person-moment affecting views were almost a reductio, but a close friend says he thought they were the obvious reasonable view, so I am curious to hear others’ takes.