tl;dr: I present four axioms for anthropic reasoning under copying/deleting/merging, and show that they pick out a unique way of doing it: averaging non-indexical utility across copies, adding indexical utility, and making all copies mutually altruistic.
Some time ago, Eliezer constructed an anthropic trilemma, where standard theories of anthropic reasoning seemed to come into conflict with subjective anticipation. rwallace subsequently argued that subjective anticipation was not ontologically fundamental, so we should not expect it to work outside the narrow confines of everyday experience, and Wei illustrated some of the difficulties inherent in "copy-delete-merge" types of reasoning.
Wei also made the point that UDT shifts the difficulty in anthropic reasoning away from probability and onto the utility function, and ata argued that neither the probabilities nor the utility function are fundamental: it is the decisions that result from them that matter - after all, if two theories give the same behaviour in all cases, what grounds do we have for distinguishing them? I then noted that this argument could be extended to subjective anticipation: instead of talking about feelings of subjective anticipation, we can replace them with questions such as "would I give up a chocolate bar now for one of my copies to have two in these circumstances?"
I then made a post where I applied my current intuitions to the anthropic trilemma, and showed how this results in complete nonsense, despite the fact that I used a bona fide utility function. What we need are some sensible criteria by which to divide utility and probability between copies, and this post is an attempt to figure that out. The approach is similar to that of expected utility theory, where a quartet of natural axioms forces all decision processes into a single format.
The assumptions are:
- No intrinsic value in the number of copies
- No preference reversals
- All copies make the same personal indexical decisions
- No special status to any copy.
The first assumption states that though I may want to have different numbers of copies for various external reasons (multiple copies so as to be well backed up, or few copies to prevent any of them being kidnapped), I do not derive any intrinsic utility from having 1, 42 or 100 000 copies. The second one is the very natural requirement that there are no preference reversals: I would not pay anything today to have any of my future copies make a different decision, nor vice-versa. The third says that all my copies will make exactly the same decision as me in purely indexical situations ("Would Monsieur prefer a chocolate bar or else coffee right now, or maybe some dragon fruit in a few minutes? How about the other Monsieur?"). And the fourth claims that no copy gets a special intrinsic status (this does not mean that the copies cannot have special extrinsic status; for instance, one can prefer copies instantiated in flesh and blood to those on computer; but if one does, then downloading a computer copy into a flesh and blood body would instantly raise its status).
These assumptions are all very intuitive (though the third one is perhaps a bit strong), and they are enough to specify uniquely how utility should work across copying, deleting, and merging.
Now, I will not be looking here at quantum effects, nor at correlated decisions (where several copies make the same identical decision). I will assume throughout that all of my copies and I are expected utility maximisers, and that my utility decomposes into a non-indexical part about general conditions in the universe ("I'd like it if everyone in the world could have a healthy meal every day") and an indexical part pertaining to myself specifically ("I'd like a chocolate bar").
The copies need not be perfectly identical, and I will be using the SIA probabilities. Since each decision is a mixture of probability and utility, I can pick the probability theory I want, as long as I'm aware that those using different probability theories will have different utilities (but ultimately the same decisions). Hence I'm sticking with the SIA probabilities simply because I find them elegant and intuitive.
Then the results are:
- All copies will have the same non-indexical utility in all universes, irrespective of the number of copies.
Imagine that one of my copies is confronted with Omega saying: "currently, there is either a single copy of you, or n copies, with a probability p. I have chosen only one copy of you to say this to. If you can guess whether there are n copies or one in this universe, then I will (do something purely non-indexical)". The SIA odds state that the copy being talked to will put a probability p on there being n copies (the SIA boost from there being n copies is cancelled by the fact that only he is being talked to). From my current perspective, I would therefore want that copy to reason as if its non-indexical utility were the same as mine, irrespective of the number of copies. Therefore, by no preference reversals, it will have the same non-indexical utility as mine, in both possible universes.
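(To make that cancellation explicit, here is a minimal sketch in Python; the numbers and the function name are my own illustration, not anything from the argument itself.)

```python
# Sketch of the SIA cancellation in Omega's offer (illustrative only).
# Prior: probability p that the universe contains n copies, (1 - p) that it contains one.

def posterior_n_copies(p, n):
    """SIA posterior that this copy is in the n-copy world, given Omega addressed it.

    SIA weights each world by its number of copies; being the single copy Omega
    chose to address then divides that weight by the same number, so the two
    factors cancel and the posterior equals the prior p.
    """
    sia_n = n * p              # SIA weight of the n-copy world
    sia_one = 1 * (1 - p)      # SIA weight of the single-copy world
    addressed_n = sia_n / n    # chance this particular copy is the one addressed
    addressed_one = sia_one    # the lone copy is addressed for certain
    return addressed_n / (addressed_n + addressed_one)

assert abs(posterior_n_copies(0.3, 100) - 0.3) < 1e-12  # posterior equals prior, as claimed
```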
- All copies will have a personal indexical utility which is non-zero. Consequently, my current utility function has a positive term for my copies achieving their indexical goals.
This is simply because the copies will make the same pure indexical decisions as me, and must therefore have a term for this in their utility function. If they do so, then since utility is real-valued (and not non-standard-real-valued), they will in certain situations make a decision that increases their personal indexical utility and diminishes their (and hence my) non-indexical utility. By no preference reversals, I must approve of this decision, and hence my current utility must contain a term for my copy's indexical utility.
- All my copies (and myself) must have the same utility function, and hence all copies must care about the personal indexical utility of the other copies, equally to how much that copy cares about its own personal indexical utility.
It's already been established that all my copies have the same non-indexical utility. If the copies had different utilities for the remaining component, then one could be offered a deal that increased their own personal indexical utility and decreased that of another copy, and they would take this deal. We can squeeze the benefit side of this deal: offer them arbitrarily small increases to their own utility, in exchange for the same decrease in another copy's utility.
Since I care about each copy's personal indexical utility, at least to some extent, eventually such a deal will be to my disadvantage, once the increase gets small enough. Therefore I would want that copy to reject the deal. The only way of ensuring that they would do so is to make all copies (including myself) share the same utility function.
So, let's summarise where we are now. We've seen that all my copies share the same non-indexical utility. We've also established that they have a personal indexical utility that is the same as mine, and that they care about the other copies' personal indexical utilities exactly as much as those copies do themselves. So, strictly speaking, there are two components: the shared non-indexical utility, and a "shared indexical" utility, made up of some weighted sum of each copy's "personal indexical" utility.
We haven't assumed that the weighting is equal, nor what the weight is. Two intuitive ideas spring to mind: an equal average, and a total utility.
For an equal average, we assign each copy a personal indexical utility that is equal to what mine would be if there were not multiple copies, and the "shared indexical" utility is the average of these. If there were a hundred copies about, I would need to give them each a chocolate bar (or give a hundred chocolate bars to one of them) in order to get the same amount of utility as a single copy of me getting a single bar. This corresponds to the intuition "duplicate copies, doing the same thing, don't increase my utility".
For total utility, we assign each copy a personal indexical utility that is equal to what mine would be if there were not multiple copies, and the "shared indexical" utility is the total of these. If each of my hundred copies gets a chocolate bar, this is the same as if I had a single copy, and he got a hundred bars. This is a more intuitive position if we see the copies as individual people. I personally find this less intuitive; however:
- My copies' "shared indexical" utility (and hence mine) is the sum, not average, of what the individual copies would have if they were the only existent copy.
Imagine that there is one copy now, that there will be n extra copies made in ten minutes, which will all be deleted in twenty minutes. I am confronted with situations such as "do you want to make this advantageous deal now, or a slightly less/more advantageous deal in 10/20 minutes?" By "all copies make the same purely indexical decisions" I would want to delay if, and only if, that is what I would want to do if there were no extra copies made at all. This is only possible if my personal indexical utility is the same throughout the creation and destruction of the other copies. Since no copy is special, all my copies must have the same personal indexical utility, irrespective of the number of copies. So their "shared indexical" utility must be the sum of this.
Thus, given those initial axioms, there is only one consistent way of spreading utility across copies (given SIA probabilities): non-indexical utility must average, personal indexical utility must add, and all copies must share exactly the same utility function.
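To make that rule concrete, here is a minimal sketch (my own illustrative encoding, not anything from the argument above) of how any one copy would score an outcome under the "average non-indexical, sum indexical" prescription:

```python
# Sketch of the resulting rule: every copy shares this utility function.
# 'non_indexical' is the same value for every copy, however many copies exist
# (duplicating copies does not multiply it); the personal indexical values,
# one per copy alive in the outcome, are added up.

def copy_utility(non_indexical, personal_indexical_values):
    return non_indexical + sum(personal_indexical_values)

# One copy of me eating one chocolate bar...
single = copy_utility(non_indexical=0.0, personal_indexical_values=[1.0])
# ...versus a hundred copies each eating one bar: the indexical part sums.
hundred = copy_utility(non_indexical=0.0, personal_indexical_values=[1.0] * 100)
assert hundred == 100 * single
```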
In the next post, I'll apply this reasoning to the anthropic trilemma, and also show that there is still hope - of a sort - for the more intuitive "average" view.
If I understand this correctly, what you mean is that in a situation where I am given a choice between:
A) 1 bar of chocolate now,
B) 2 bars in ten minutes,
C) 3 bars in twenty minutes,
If 10 copies of me are made, but they are not "in on the deal" with me (they get no chocolate, no matter what I pick), then instead of giving B 2 utility, I should give it 0.18 utility (2 bars averaged over the 11 copies then in existence) and prefer A to B. You are right that this seems absurd, and that summing utility instead of averaging it fixes this problem.
However, in situations where the copies are "in on the deal," and do receive chocolate, the results also seem absurd. Imagine the same situation, except that if I pick B each copy will also get 2 bars of chocolate.
If the utilities of each copy are summed, then picking B will result in 22 utility, while picking C will result in 3. This would mean I would select B if 10 copies are made and C if no copies are made. This would also mean that I should be willing to pay 18 chocolate bars for the privilege of having 10 identical copies made who each eat a chocolate bar and are then deleted.
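(Spelling out the arithmetic behind those figures, here is a quick sketch using the numbers above; the helper functions are just my own shorthand.)

```python
# 11 copies exist when option B pays out (me plus the 10 copies made in ten
# minutes); only 1 copy exists when A or C pays out.

def averaged(bars_per_recipient, recipients, copies_alive):
    # 'Average' view: bars handed out, averaged over the copies then alive.
    return bars_per_recipient * recipients / copies_alive

def summed(bars_per_recipient, recipients):
    # 'Sum' view: every copy's bars count in full.
    return bars_per_recipient * recipients

# Copies not in on the deal: only I get the 2 bars.
assert round(averaged(2, recipients=1, copies_alive=11), 2) == 0.18  # B under averaging
# Copies in on the deal: all 11 of us get 2 bars each if I pick B.
assert summed(2, recipients=11) == 22                                # B under summing
assert summed(3, recipients=1) == 3                                  # C: copies already deleted
```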
This seems absurd to me, however. If given a choice between two chocolate bars, or one chocolate bar plus having a million copies created who each eat one chocolate bar and are then merged with me, I'll pick the two chocolate bars. It seems to me that any decision theory that claims you should be willing to pay to create exact duplicates of yourself who exist briefly, while having the exact same experiences as you, before being merged back with you, should be rejected.
There is no amount of money I would be willing to pay to create more copies who will have exactly the same experiences as me, provided the alternative is that no copies will be made at all if I don't pay. (I would be willing to pay to have an identical copy made if the alternative is that a copy who is tortured gets made if I don't pay, or something like that.)
Obviously I'm missing something.
Here's one possible thing that I might be missing: does this decision theory have anything to say about how many copies we should choose to make, if we have a choice, or does it only apply to situations where a copy is going to be made, whether we like it or not? If that's the case, then it might make sense to prefer B to C when copies are definitely going to be created, but take action to make sure that they are not created so that you are allowed to choose C.
In this view, having a copy made changes my utility function in a subtle way: it essentially doubles the strength of all my current preferences, among other things. So I should avoid having large numbers of copies made for the same reason Gandhi should avoid murder pills. This makes sense to me; I want to have backup copies of myself and other such things, but am leery of having a trillion copies a la Robin Hanson.
Other solutions might include modifying the average view in some fashion: for instance, using summative utilities for decisions affecting just you, and average ones for decisions affecting yourself and your copies. Or taking a timeless average view and dividing utility by the number of copies you will ever have, regardless of whether they exist at the moment or not. (This could potentially lead to creating suffering copies if copies that are suffering even more exist, but we can patch that by evaluating disutility and utility asymmetrically, so the first is summative and the second is averaged.)
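(For what it's worth, here is one possible reading of that asymmetric patch; this is purely my own guess at an encoding, just to make the proposal concrete.)

```python
# Hypothetical encoding of the asymmetric patch: disutility sums in full,
# while positive utility is averaged over every copy that will ever exist.

def patched_indexical_utility(personal_values, copies_ever):
    gains = sum(v for v in personal_values if v > 0) / copies_ever
    losses = sum(v for v in personal_values if v < 0)
    return gains + losses

# A suffering copy still counts in full, so creating it is never made 'free'
# just because even-worse-off copies exist elsewhere.
assert patched_indexical_utility([1.0, -5.0], copies_ever=10) == 0.1 - 5.0
```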