For illustrative purposes, imagine simple agents - AIs, or standard utility maximisers - who have to make decisions under anthropic uncertainty.
Specifically, let there be two worlds, W1 and W2, equally likely to exist. W1 contains one copy of the agent; W2 contains two copies. The agent has a single action available: the opportunity to create, once, either a box or a cross. The utility of doing so varies depending on which world the agent is in, as follows:
In W1: Utility(cross) = 2, Utility(box) = 5
In W2: Utility(cross) = 2, Utility(box) = 0
The agent has no further way of telling which world it is in.
- First model (aggregationist, non-indexical):
Each box or cross created will generate the utility defined above, and the utility is simply additive. Then if the agent decides to generate crosses, the expected utility is 0.5(2+(2+2))=3, while that of generating boxes is 0.5(5+(0+0))=2.5. Generating crosses is the way to go. (These expected utility calculations, for this model and the three that follow, are sketched in code after the list.)
- Second model (non-aggregationist, non-indexical):
The existence of a single box or cross will generate the utility defined above, but extra copies won't change anything. Then if the agent decides to generate crosses, the expected utility is 0.5(2+2)=2, while that of generating boxes is 0.5(5+0)=2.5. Generating boxes is the way to go.
- Third model (unlikely existence, non-aggregationist, non-indexical):
Here a simple change is made: the worlds do not contain agents but proto-agents, each of which has an independent one-in-a-million chance of becoming an agent. Writing M for a million, the probability of an agent existing in the first world is 1/M, while the probability of at least one agent existing in the second world is approximately 2/M. The expected utility of crosses is approximately 0.5((1/M)*2 + (2/M)*2) = 3/M, while that of boxes is approximately 0.5((1/M)*5 + (2/M)*0) = 2.5/M. Generating crosses is the way to go.
- Fourth model (indexical):
This is the first "hard" model from the title. Here the agent only derives utility from the box or cross it generated itself. And here, things get interesting.
There is no immediately obvious way of solving this situation, so I tried replacing it with a model that seems equivalent. Instead of giving the agent indexical preferences over its own shapes, I'll give it non-indexical aggregationist preferences (just as in the first model), and halve the utility of any shape in W2. This should give the same utility to all agents in all possible worlds as the indexical model does. Under the new model, the expected utility of crosses is 0.5(2+(1+1))=2, while that of boxes is 0.5(5+(0+0))=2.5. Boxes are the way to go.
- Fifth model (indexical, anticipated experience):
The fifth model is one where, after the agents in W2 have made their decision but before they implement it, one of them is randomly deleted, and the survivor creates two shapes. If the agents are non-indexical, then the problem is simply a version of the first model.
But now the agents are indexical. There are two ways of capturing this fact: either the agent can care about the fact that "I, myself, will have created a shape", or about the fact that "the thread of my future experience will contain an agent that will have created a shape". In the first case, the agent should consider that in W2 it only has a 50% chance of succeeding in its goal, but the weight of its goal is doubled: this is the fourth model again, hence: boxes.
In the second case, each agent should consider the surviving agent as the thread of its future experience. This is equivalent to the non-indexical first model, where only the number of shapes matters (since every future shape belongs to an agent that is in the current agent(s)' future thread of experience). Hence: crosses.
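To make the arithmetic easy to check, here is a minimal Python sketch of the timeless expected-utility calculations for the first four models (the fifth reduces to the fourth or the first, depending on which interpretation is taken above). The utilities and probabilities are taken from the setup; the function names and structure are mine.

```python
# Minimal sketch of the timeless expected-utility calculations above.
# Utilities and probabilities come from the setup; names are illustrative.

M = 10**6          # each proto-agent becomes an agent with probability 1/M (third model)
P_W1 = P_W2 = 0.5  # the two worlds are equally likely

# Per-copy utility of creating each shape in each world.
U = {"cross": {"W1": 2, "W2": 2}, "box": {"W1": 5, "W2": 0}}

def model_1(shape):
    """Aggregationist, non-indexical: every copy's shape adds its utility."""
    return P_W1 * U[shape]["W1"] + P_W2 * 2 * U[shape]["W2"]

def model_2(shape):
    """Non-aggregationist: only the existence of a single shape matters."""
    return P_W1 * U[shape]["W1"] + P_W2 * U[shape]["W2"]

def model_3(shape):
    """Unlikely existence: at least one agent exists with probability
    roughly 1/M in W1 and roughly 2/M in W2."""
    return P_W1 * (1 / M) * U[shape]["W1"] + P_W2 * (2 / M) * U[shape]["W2"]

def model_4(shape):
    """Indexical, via the 'equivalent' aggregationist model with W2 utilities halved."""
    return P_W1 * U[shape]["W1"] + P_W2 * 2 * (U[shape]["W2"] / 2)

for model in (model_1, model_2, model_3, model_4):
    print(model.__name__, {shape: model(shape) for shape in ("cross", "box")})
```

Running it reproduces the 3 vs 2.5, 2 vs 2.5, 3/M vs 2.5/M and 2 vs 2.5 figures above.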
I won't be analysing solutions to these problems yet, but will simply say that many approaches will work, such as SIA with a dictator's filter. However, though the calculations come out correct, the intuition behind this seems suspect in the fourth model, and one could achieve similar results without SIA at all (by giving the agent's decision the power to affect multiple agents' outcomes at once, for instance).
It should be noted that the fourth model seems to imply the Presumptuous Philosopher would be wrong to accept his bets. However, the third model seems to imply the truth of FNC (full non-indexical conditioning), which is very close to SIA - but time inconsistent. And there, the Presumptuous Philosopher would be right to accept his bets.
Confusion still persists in my mind, but I think it's moving towards a resolution.
I don't see any of your agents saying something like "I have P=1/3 of being in world 1, because there are 2 situations fitting my information in world 2 and 1 situation fitting my information in world 1." How would you classify a model that used that sort of anthropic reasoning?
I haven't classified that, because I haven't used that sort of anthropic reasoning. I was just trying to figure out what, from a timeless perspective, the "correct" decision must be.
Afterwards, I'll think of methods of reaching that decision. Though it seems that using your probability estimate (known as SIA) together with a "division of responsibility" method, we get the answers presented here.
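To illustrate that last claim, here is a hedged sketch of one possible reading, under my own assumption (not spelled out above) that "division of responsibility" means splitting the utility at stake in a world equally between the agents in it, while SIA weights each world by its number of agents. The model names and numbers match the post; the interpretation of the rule is mine.

```python
# Hedged sketch: SIA probabilities plus one possible reading of the
# "division of responsibility" rule (the reading is an assumption).

# SIA: weight each world by its number of agents (1 in W1, 2 in W2).
p_w1, p_w2 = 1 / 3, 2 / 3

# Utility at stake in (W1, W2) for each choice, under the first
# (aggregationist), second (non-aggregationist) and fourth (indexical) models.
stakes = {
    "first model":  {"cross": (2, 4), "box": (5, 0)},  # every copy's shape counts
    "second model": {"cross": (2, 2), "box": (5, 0)},  # only one shape counts
    "fourth model": {"cross": (2, 2), "box": (5, 0)},  # only my own shape counts
}

for model, table in stakes.items():
    for shape, (u1, u2) in table.items():
        # Division of responsibility: the W2 stake is split between its 2 agents.
        eu = p_w1 * u1 + p_w2 * (u2 / 2)
        print(model, shape, round(eu, 3))
```

Under this reading, 1/3*u1 + 2/3*(u2/2) = (2/3)*0.5*(u1+u2), i.e. exactly two-thirds of the timeless expected utility, so it always yields the same decisions as the timeless calculations above: crosses in the first model, boxes in the second and fourth.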