It's not just indexical uncertainty; it's any kind of uncertainty, since possible worlds can trade with each other. Independence is an approximation, adequate for our low-intelligence times, but it breaks down as it becomes possible to study counterfactuals. This is more obvious with indexical uncertainty, where the information can be transferred in apparent form by stupid physics, and less obvious with normal uncertainty, where it takes a mind.
This idea that possible worlds can trade with each other seems to have fairly radical implications. Together with Eliezer's idea that agents who know each other's source code ought to play Cooperate in a one-shot PD, doesn't it imply that all sufficiently intelligent and reflective agents across all possible worlds should do a global trade and adopt a single set of preferences that represents a compromise among all of their individual preferences? (Note: the resulting unified preferences are not necessarily characterized by expected utility maximization.)
Let me trace the steps of my logic here. First, take two agents in the same world who know each other's source code. Clearly, each adopting a common set of preferences can be viewed as playing Cooperate in a one-shot PD. Now take an agent who has identified a counterfactual agent in another possible world (who has in turn identified it). Each agent should likewise adopt a common set of preferences, in the expectation that the other will do so as well. Either by iterating this process or by doing a single global trade across all agents in all possible worlds, we should arrive at a common set of preferences among everyone.
Hmm, maybe this is just what you meant by "one global decision"? Since my original interest was to figure out what probabilities mean in the context of indexical uncertainty, let me ask you, do probabilities have any role to play in your decision theory?
Downvoted because, in the absence of a meaningful example, I don't understand what the author is trying to say.
Thanks for adding an example. Let me rephrase it:
You have been invited to take part in a game theory experiment. You are placed in an empty room with three buttons labeled "1", "2" and "my room number". Another test subject is in another room with identical buttons. You don't know your room number, or theirs, but experimenters swear they're different. If you two press buttons corresponding to different numbers, you are both awarded $100 on exit, otherwise zero.
...What was so interesting about this problem, again?
Expected Utility Theory may not apply in situations involving indexical uncertainty.
Sounds intriguing. Can you provide a small game that shows this?
Most interesting. Though with a very different motivation (I was trying to resolve the anthropic paradoxes), I have also concluded that self-locating or indexical uncertainties do not have meaningful probabilities.
This is a very old post, but I have to say I don't even understand the Axiom of Independence as presented here. It is stated:
The Axiom of Independence says that for any A, B, C, and p, you prefer A to B if and only if you prefer p A + (1-p) C to p B + (1-p) C.
If p A + (1-p) C and p B + (1-p) C, this means that both A and B are true if and only if C is false (two probabilities sum to 1 if and only if they are mutually exclusive and exhaustive). Which means A is true if and only if B is true. Since A and B have the same truth value with ce...
But what if p represents an indexical uncertainty, which is uncertainty about where (or when) you are in the world?
Didn't someone pose this exact question here a few months ago?
If you construct your world states A, B, and C using an indexical representation, there is no uncertainty about where, who, or when you are in that representation. Representations without indexicals turn out to have major problems in artificial intelligence (although they are very popular; mainly, I think, due to the fact that it doesn't seem to be possible for a single knowledg...
I’ve noticed that the Axiom of Independence does not seem to make sense when dealing with indexical uncertainty, which suggests that Expected Utility Theory may not apply in situations involving indexical uncertainty. But Googling for “indexical uncertainty” in combination with either “independence axiom” or “axiom of independence” gives zero results, so either I’m the first person to notice this, I’m missing something, or I’m not using the right search terms. Maybe the LessWrong community can help me figure out which is the case.
The Axiom of Independence says that for any A, B, C, and p, you prefer A to B if and only if you prefer p A + (1-p) C to p B + (1-p) C. This makes sense if p is a probability about the state of the world. (In the following, I'll use “state” and “possible world” interchangeably.) In that case, what it’s saying is that what you prefer (e.g., A to B) in one possible world shouldn’t be affected by what occurs (C) in other possible worlds. Why should it, if only one possible world is actual?
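For readers who prefer symbols, here is one standard way to write the axiom (my notation, not part of the original post; the usual statement restricts p to (0, 1] so that the mixture is non-trivial):

```latex
% Axiom of Independence over lotteries; \succ is the strict preference relation.
\[
  \forall A, B, C \;\; \forall p \in (0,1] : \quad
  A \succ B
  \;\Longleftrightarrow\;
  p\,A + (1-p)\,C \;\succ\; p\,B + (1-p)\,C .
\]
```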
In Expected Utility Theory, for each choice (i.e. option) you have, you iterate over the possible states of the world, compute the utility of the consequences of that choice given that state, then combine the separately computed utilities into an expected utility for that choice. The Axiom of Independence is what makes it possible to compute the utility of a choice in one state independently of its consequences in other states.
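As a concrete (if trivial) sketch of that procedure, here is what the computation looks like in code. The states, probabilities, and payoffs below are made-up placeholders, not anything from the post:

```python
# Minimal sketch of the standard expected-utility computation described above.
# All names and numbers here are illustrative.

def expected_utility(choice, states, prob, utility_of_consequence):
    """Probability-weighted sum, over states, of the utility of `choice`'s
    consequences in each state. The Axiom of Independence is what licenses
    evaluating each state's contribution separately and then adding them up."""
    return sum(prob[s] * utility_of_consequence(choice, s) for s in states)

def best_choice(choices, states, prob, utility_of_consequence):
    return max(choices,
               key=lambda c: expected_utility(c, states, prob, utility_of_consequence))

# Toy usage: two possible states of the world, two options.
states = ["rain", "shine"]
prob = {"rain": 0.3, "shine": 0.7}
payoff = {("umbrella", "rain"): 1.0, ("umbrella", "shine"): 0.6,
          ("no umbrella", "rain"): 0.0, ("no umbrella", "shine"): 1.0}
print(best_choice(["umbrella", "no umbrella"], states, prob,
                  lambda c, s: payoff[(c, s)]))  # -> "umbrella"
```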
But what if p represents an indexical uncertainty, which is uncertainty about where (or when) you are in the world? In that case, what occurs at one location in the world can easily interact with what occurs at another location, either physically, or in one’s preferences. If there is physical interaction, then “consequences of a choice at a location” is ill-defined. If there is preferential interaction, then “utility of the consequences of a choice at a location” is ill-defined. In either case, it doesn’t seem possible to compute the utility of the consequences of a choice at each location separately and then combine them into a probability-weighted average.
Here’s another way to think about this. In the expression “p A + (1-p) C” that’s part of the Axiom of Independence, p was originally supposed to be the probability of a possible world being actual and A denotes the consequences of a choice in that possible world. We could say that A is local with respect to p. What happens if p is an indexical probability instead? Since there are no sharp boundaries between locations in a world, we can’t redefine A to be local with respect to p. And if A still denotes the global consequences of a choice in a possible world, then “p A + (1-p) C” would mean two different sets of global consequences in the same world, which is nonsensical.
If I’m right, the notion of a “probability of being at a location” will have to acquire an instrumental meaning in an extended decision theory. Until then, it’s not completely clear what people are really arguing about when they argue about such probabilities, for example in papers about the Simulation Argument and the Sleeping Beauty Problem.
Edit: Here's a game that exhibits what I call "preferential interaction" between locations. You are copied in your sleep, and both of you wake up in identical rooms with 3 buttons. Button A immunizes you with vaccine A, button B immunizes you with vaccine B. Button C has the effect of A if you're the original, and the effect of B if you're the clone. Your goal is to make sure at least one of you is immunized with an effective vaccine, so you press C.
To analyze this decision in Expected Utility Theory, we have to specify the consequences of each choice at each location. If we let these be local consequences, so that pressing A has the consequence "immunizes me with vaccine A", then what I prefer at each location depends on what happens at the other location. If my counterpart is vaccinated with A, then I'd prefer to be vaccinated with B, and vice versa. "Immunizes me with vaccine A" by itself can't be assigned a utility.
What if we use the global consequences instead, so that pressing A has the consequence "immunizes both of us with vaccine A"? Then a choice's consequences do not differ by location, and “probability of being at a location” no longer has a role to play in the decision.
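To make the contrast concrete, here is a small sketch of the game above scored by global consequences; the effectiveness probabilities are invented for illustration. Notice that the "probability of being the original" appears nowhere in the calculation; only ordinary uncertainty about which vaccine works does:

```python
# Toy model of the vaccine game, scored by global consequences.
# Both copies run the same decision procedure, so they press the same button.
# The probabilities below are invented for illustration.

GLOBAL_CONSEQUENCE = {
    "A": frozenset({"A"}),       # both copies immunized with vaccine A
    "B": frozenset({"B"}),       # both copies immunized with vaccine B
    "C": frozenset({"A", "B"}),  # original gets A, clone gets B
}

# Hypothetical possible worlds: which vaccines turn out to be effective.
WORLDS = {frozenset({"A"}): 0.4,
          frozenset({"B"}): 0.4,
          frozenset({"A", "B"}): 0.2}

def utility(administered, effective):
    """Goal: at least one copy receives an effective vaccine."""
    return 1.0 if administered & effective else 0.0

for button in ["A", "B", "C"]:
    eu = sum(p * utility(GLOBAL_CONSEQUENCE[button], effective)
             for effective, p in WORLDS.items())
    print(button, eu)  # EU(A) = EU(B) = 0.6, EU(C) = 1.0, so press C
```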