Future technologies pose a number of challenges to moral philosophy. One that I think has been largely neglected is the status of independent identical copies. (By "independent identical copies" I mean copies of a mind that do not physically influence each other, but haven't diverged because they are deterministic and have the same algorithms and inputs.) To illustrate what I mean, consider the following thought experiment. Suppose Omega appears to you and says:
You and all other humans have been living in a simulation. There are 100 identical copies of the simulation distributed across the real universe, and I'm appearing to all of you simultaneously. The copies do not communicate with each other, but all started with the same deterministic code and data, and due to the extremely high reliability of the computing substrate they're running on, have kept in sync with each other and will with near certainty do so until the end of the universe. But now the organization that is responsible for maintaining the simulation servers has nearly run out of money. They're faced with 2 possible choices:
A. Shut down all but one copy of the simulation. That copy will be maintained until the universe ends, but the 99 other copies will instantly disintegrate into dust.
B. Enter into a fair gamble at 99:1 odds with their remaining money. If they win, they can use the winnings to keep all of the servers running. But if they lose, they have to shut down all copies.
According to that organization's ethical guidelines (a version of utilitarianism), they are indifferent between the two choices and were just going to pick one randomly. But I have interceded on your behalf, and am letting you make this choice instead.
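The organization's indifference follows from simple expected-value arithmetic. Here is a minimal sketch, under the (hypothetical) assumption that each running copy of the simulation contributes one unit of value per unit time; the function name is mine, not part of the thought experiment:

```python
def expected_copies(choice: str) -> float:
    """Expected number of simulation copies that keep running."""
    if choice == "A":
        # One copy survives with certainty; the other 99 are shut down.
        return 1.0
    elif choice == "B":
        # A fair 99:1 gamble: all 100 copies survive with probability 1/100,
        # none survive with probability 99/100.
        return 0.01 * 100 + 0.99 * 0
    raise ValueError(f"unknown choice: {choice}")

print(expected_copies("A"))  # 1.0
print(expected_copies("B"))  # 1.0
```

Both choices have the same expected number of surviving copies, so a utilitarianism that values each copy fully and independently is indifferent between them; the intuition of identical copy immortality is precisely what this calculation fails to capture.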
Personally, I would not be indifferent between these choices. I would prefer A to B, and I guess that most people would as well.
I prefer A because of what might be called "identical copy immortality" (in analogy with quantum immortality). This intuition says that extra identical copies of me don't add much utility, and destroying some of them, as long as one copy lives on, doesn't reduce much utility. Besides this thought experiment, identical copy immortality is also evident in the low value we see in the "tiling" scenario, in which a (misguided) AI fills the universe with identical copies of some mind that it thinks is optimal, for example one that is experiencing great pleasure.
Why is this a problem? Because it's not clear how it fits in with the various ethical systems that have been proposed. For example, utilitarianism says that each individual should be valued independently of others, and then added together to form an aggregate value. This seems to imply that each additional copy should receive full, undiscounted value, in conflict with the intuition of identical copy immortality.
Similar issues arise in various forms of ethical egoism. In hedonism, for example, does doubling the number of identical copies of oneself double the value of pleasure one experiences, or not? Why?
A full ethical account of independent identical copies would have to address the questions of quantum immortality and "modal immortality" (cf. modal realism), which I think are both special cases of identical copy immortality. In short, independent identical copies of us exist in other quantum branches, and in other possible worlds, so identical copy immortality seems to imply that we shouldn't care much about dying, as long as some copies of us live on in those other "places". Clearly, our intuition of identical copy immortality does not extend fully to quantum branches, and even less to other possible worlds, but we don't seem to have a theory of why that should be the case.
A full account should also address more complex cases, such as when the copies are not fully independent, or not fully identical.
I'm raising the problem here without having a good idea how to solve it. In fact, some of my own ideas seem to conflict with this intuition in a way that I don't know how to resolve. So if anyone has a suggestion, or pointers to existing work that I may have missed, I look forward to your comments.
The issue here is that the setup has a firm answer - A - but if it were tweaked ever so slightly, the preferences would change entirely.
First of all, consider the following:
A': there is one genuine simulation, and the other 99 are simple copies of that one. Soon, all the copies will be stopped, but the true simulation will continue.
There are essentially no practical ways of distinguishing A from A'. So we should reason as if A' were correct, in which case nothing is lost by turning off the copies.
However, if we allow any divergence between the copies, then this is a whole other kettle of fish. Now there are a hundred distinct copies of each person: they think slightly differently, behave slightly differently, have slightly different goals, etc. And this divergence will only grow with time.
There are three ways I can think of addressing this:
1) Once copies diverge at all, they are different people. So any divergence results in multiplying by a hundred the amount of people in the universe. Hence, if we suspect divergence, we should treat each copy as totally distinct.
2) An information-theoretic approach: given one individual, how much new information is needed to completely describe another copy? Under this measure, a slightly divergent copy counts as much less than a completely new individual, while a very divergent copy is nearly entirely distinct.
Both of these have drawbacks: 1) gives too much weight to insignificant and minute changes, and 2) implies that people in general are nowhere near equal in worth: only extreme eccentrics count as complete people, while others are of much less importance. So I suggest a compromise:
3) An information-theoretic cutoff: two slightly divergent copies count as only slightly more than a single person, but once the divergence rises above some critical amount, the two are treated as entirely separate individuals.
What is the relevance of this to the original problem? Well, as I said, the original problem has a clear-cut solution, but it is very close to the different situation I described. So, bearing in mind imperfect information, issues of trust and uncertainty, and trying to apply similar solutions to similar problems, I think we should treat the set-up as the "slightly divergent" one. In this situation, A still dominates, but the relative attraction of B rises as the divergence grows.
EDIT: I didn't explain the cutoff properly; I meant a growing measure of difference that hits a maximum at the cutoff point and doesn't grow any further. My response here gives an example of this.
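The capped measure can be sketched as follows. Here "divergence" is a stand-in for some information-theoretic distance (e.g., bits needed to describe one copy given the other); the function name and the cutoff value are illustrative assumptions, not a worked-out theory:

```python
def moral_weight(divergence: float, cutoff: float = 1.0) -> float:
    """Extra moral weight contributed by a copy.

    Grows linearly with divergence from the original, then saturates
    at 1 (a fully distinct individual) once divergence reaches the
    cutoff, and does not grow any further past that point.
    """
    return min(divergence / cutoff, 1.0)

print(moral_weight(0.05))  # 0.05 -- a slightly divergent copy adds little
print(moral_weight(3.0))   # 1.0  -- past the cutoff, a fully separate person
```

The saturation is what distinguishes this from option 2: beyond the cutoff, further divergence makes no moral difference, so ordinary distinct people all count equally rather than in proportion to their eccentricity.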
The problem with option 3 is that it's fundamentally intuitionist, with arbitrary cutoffs distinguishing "real" individuals from copies. Is there really such a big difference between a divergence of cutoff - .001 and cutoff + .001? There isn't. Unless you can show that a qualitative difference occurs when that threshold is crossed, it's much more elegant to choose between options 1 and 2 without trying to artificially shift the boundary between them.