In previous posts, I revisited Eliezer's anthropic trilemma, approaching it with ata's perspective that the decisions made are the objects of fundamental interest, not the probabilities or processes that gave rise to them. I initially applied my naive intuitions to the problem, and got nonsense. I then constructed a small collection of reasonable-seeming assumptions, and showed they defined a single method of spreading utility functions across copies.

This post will apply that method to the anthropic trilemma, and thus give us the "right" decisions to make. I'll then try to interpret these decisions, and see what they tell us about subjective anticipation, probabilities and the impact of decisions. As in the original post, I will be using the chocolate bar as the unit of indexical utility, as it is a well-known fact that everyone's utility is linear in chocolate.

The details of the lottery-winning setup can be found either here or here; in brief, I buy a ticket in a million-to-one lottery, and if I win, a trillion copies of me are created to experience the win before being merged back into one. The decisions I must make are:

Would I give up a chocolate bar now for two to be given to one of the copies if I win the lottery? No, this loses me one utility and gains me only 2/million.

Would I give up a chocolate bar now for two to be given to every copy if I win the lottery? Yes, this loses me one utility and gains me 2*trillion/million = 2 million.

Would I give up one chocolate bar now, for two chocolate bars to the future merged me if I win the lottery? No, this gives me an expected utility of -1 + 2/million.

Now let it be after the lottery draw and after the possible duplication, but before I know whether I've won the lottery. Would I give up one chocolate bar now in exchange for two for me if I had won the lottery (assume this deal is offered to everyone)? The SIA odds say that I should: SIA puts odds of a million to one on my having won, so my expected gain is -1 + 2*million/(million + 1) ≈ 1, which is positive.
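Spelling out that calculation (my own sketch, using the setup's numbers of a million-to-one lottery and a trillion copies on a win): SIA weights each world by the number of copies of me it contains, so

```latex
P_{\mathrm{SIA}}(\text{won})
  = \frac{10^{-6}\cdot 10^{12}}{10^{-6}\cdot 10^{12} + (1-10^{-6})\cdot 1}
  = \frac{10^{6}}{10^{6}+1-10^{-6}}
  \approx 1 - 10^{-6},
```

and paying one bar for two-if-won has expected gain -1 + 2·P ≈ 1.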

Now assume that I have been told I've won the lottery, so I'm one of the trillion duplicates. Would I give up a chocolate bar now so that the future merged copy gets two? Yes: the utility gain is 2 - 1 = 1.
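The five decisions can be checked mechanically. Here is a minimal Python sketch (the constants and names are mine, not from the original posts) that recomputes each expected utility under the conventions used above:

```python
# Recompute the expected utility of each chocolate-bar decision under the
# conventions used in the text: SIA probabilities with individual impact.
ODDS = 10**6       # million-to-one lottery
COPIES = 10**12    # a trillion copies created on a win

p_win = 1 / ODDS   # objective chance of winning, before the draw

# Decision 1: pay 1 bar now; ONE copy gets 2 bars if I win.
d1 = -1 + p_win * 2                   # = -1 + 2/million < 0: refuse

# Decision 2: pay 1 bar now; EVERY copy gets 2 bars if I win.
d2 = -1 + p_win * 2 * COPIES          # = 2 million - 1 > 0: accept

# Decision 3: pay 1 bar now; the future merged me gets 2 bars if I win.
d3 = -1 + p_win * 2                   # same as decision 1: refuse

# Decision 4: after duplication, before learning the result.
# SIA weights each world by the number of copies of me it contains.
p_won_sia = p_win * COPIES / (p_win * COPIES + (1 - p_win))
d4 = -1 + p_won_sia * 2               # about 1 > 0: accept

# Decision 5: told I've won; pay 1 bar so the merged copy gets 2.
d5 = -1 + 2                           # = 1 > 0: accept

for name, eu in [("1", d1), ("2", d2), ("3", d3), ("4", d4), ("5", d5)]:
    print(f"decision {name}: {'accept' if eu > 0 else 'refuse'} (EU = {eu:.6g})")
```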

So those are the decisions; how should we interpret them? There are several ways of doing so. There are four things to keep in mind: probability, decision impact, utility function, and subjective anticipation.

  • SIA, individual impact, standard utility function

The way I've phrased the system uses the SIA probabilities, and an individual impact (I don't need to worry about what my other copies are deciding). From that perspective, when I agree to give up my chocolate bar for all my future potential copies, I don't anticipate winning the lottery, but I do anticipate this decision having a huge impact if I do win. So I'm taking the deal for expected-utility reasons, not out of anticipation: I don't anticipate winning the lottery, and similarly I don't anticipate having won the lottery at a later date.
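To make "low anticipation, huge impact" concrete, here is the expected-utility breakdown of the give-two-to-every-copy deal (my annotation, using the setup's numbers):

```latex
EU = -1 + \underbrace{10^{-6}}_{\text{chance of winning}}
        \times \underbrace{2\cdot 10^{12}}_{\text{bars across all copies}}
   = 2\cdot 10^{6} - 1 .
```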

However, once the lottery draw has happened, my SIA probabilities tell me I've probably won the lottery. And my behaviour shows that I anticipate continuing to have won it in the future (I pass my chocolate bar to my future unique copy). So on this view, there is a switch of subjective anticipation.
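In other words (a sketch of the switch, with the same numbers as above): before the draw, SIA agrees with the objective odds, but after the duplication it does not:

```latex
P_{\mathrm{SIA}}^{\text{before}}(\text{win}) = 10^{-6}
\qquad\longrightarrow\qquad
P_{\mathrm{SIA}}^{\text{after}}(\text{won}) \approx 1 - 10^{-6}.
```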

  • SSA, individual impact, scaled utility function

But I could instead be using SSA probabilities, and an individual impact. For this system to remain consistent, my utility function has to be scaled at each moment by the number of copies of me in existence. Here again, I don't anticipate winning the lottery.

Now, once the lottery draw has happened, I don't anticipate having won it. Rather, I value chocolate bars much more in the world where there are many copies (i.e. if I've won the lottery). So I will bet that I've won, because I value the return more in that case. The same goes for passing a bar on to the future merged copy: I value his utility much more if there are more copies around now.
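As a consistency check (my own sketch, not from the original posts): under SSA the probability of having won stays at the objective 10^-6, but scaling each world's utility by its copy count recovers the same bet:

```latex
EU = \underbrace{10^{-6}}_{P_{\mathrm{SSA}}(\text{won})}
     \cdot \underbrace{10^{12}}_{\text{utility scaling}} \cdot (2-1)
   \;+\; (1-10^{-6})\cdot 1 \cdot (-1)
   \;=\; 10^{6} - 1 + 10^{-6} \;>\; 0,
```

so I accept the bet, just as under SIA, even though I still don't believe I've won.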

Yes, SSA with individual impact is ugly and counter-intuitive; but it does at least preserve my feeling of subjective anticipation.

  • Objective probabilities, total impact, standard utility function

In this model, I see my decision as determining not only my own action, but also the actions of all the copies whose decisions are correlated with mine, and I use the objective probabilities of the world (neither SSA nor SIA). This is similar to the UDT perspective, and the utility function is the same as in the "SIA, individual impact" situation.

As usual, I don't anticipate winning the lottery. Once the draw has happened, I don't anticipate having won it, nor do I value chocolate more in one universe than in the other. Rather, I anticipate that my decision has much more impact if there are many copies: they will all get a chocolate bar if I decide to take the deal. So my feeling of subjective anticipation is preserved.
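The same arithmetic as in the SSA case works here, with the big factor reinterpreted (again my own sketch): the 10^12 now counts the copies my decision controls, rather than scaling their utility:

```latex
EU = 10^{-6}\cdot \underbrace{10^{12}}_{\text{copies my decision controls}}\cdot (2-1)
   \;+\; (1-10^{-6})\cdot(-1)
   \;=\; 10^{6} - 1 + 10^{-6} \;>\; 0 .
```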

In summary

These are just some of the ways to interpret my decisions; why not SSA with total impact, or objective probabilities with individual impact? These three, however, are enough to illustrate the interplay between probability, utility and impact conventions. And there is no experimental way of distinguishing between these conventions.

If we want to preserve the impression of subjective anticipation in a useful way, we should use "SSA, individual impact", "Objective probabilities, total impact", or a similar system. On grounds of elegance, I'm personally going for the last one.

1 comment

This is a nice tool to solve ethical problems in situations with copying. I think this does indeed solve Eliezer's original post. It's really a layer on top of the existing theory, though, so I feel like noting that it doesn't help with problems where the underlying theory breaks rather than is incomplete.