I’ve noticed that the Axiom of Independence does not seem to make sense when dealing with indexical uncertainty, which suggests that Expected Utility Theory may not apply in situations involving indexical uncertainty. But Googling for “indexical uncertainty” in combination with either “independence axiom” or “axiom of independence” gives zero results, so either I’m the first person to notice this, I’m missing something, or I’m not using the right search terms. Maybe the LessWrong community can help me figure out which is the case.

The Axiom of Independence says that for any A, B, C, and any p > 0, you prefer A to B if and only if you prefer p A + (1-p) C to p B + (1-p) C. This makes sense if p is a probability about the state of the world. (In the following, I'll use “state” and “possible world” interchangeably.) In that case, what it’s saying is that what you prefer (e.g., A to B) in one possible world shouldn’t be affected by what occurs (C) in other possible worlds. Why should it, if only one possible world is actual?
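In symbols (this is just the standard statement of the axiom for preferences ≻ over lotteries, not anything specific to this post):

```latex
A \succ B \iff p\,A + (1-p)\,C \succ p\,B + (1-p)\,C
\qquad \text{for all } C \text{ and all } p \in (0,1].
```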

In Expected Utility Theory, for each choice (i.e. option) you have, you iterate over the possible states of the world, compute the utility of the consequences of that choice given that state, then combine the separately computed utilities into an expected utility for that choice. The Axiom of Independence is what makes it possible to compute the utility of a choice in one state independently of its consequences in other states.
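As a minimal sketch of that procedure (the function and variable names below are mine, purely for exposition):

```python
def expected_utility(choice, states, prob, utility):
    """Iterate over the possible states of the world, compute the utility of
    the choice's consequences in each state separately, and combine the
    results into a probability-weighted average."""
    return sum(prob[s] * utility(choice, s) for s in states)

def best_choice(choices, states, prob, utility):
    # Pick the option with the highest expected utility.
    return max(choices, key=lambda c: expected_utility(c, states, prob, utility))
```

The structural assumption licensed by Independence is that utility(choice, s) can be evaluated for each state s without looking at any other state.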

But what if p represents an indexical uncertainty, which is uncertainty about where (or when) you are in the world?  In that case, what occurs at one location in the world can easily interact with what occurs at another location, either physically, or in one’s preferences. If there is physical interaction, then “consequences of a choice at a location” is ill-defined. If there is preferential interaction, then “utility of the consequences of a choice at a location” is ill-defined. In either case, it doesn’t seem possible to compute the utility of the consequences of a choice at each location separately and then combine them into a probability-weighted average.

Here’s another way to think about this. In the expression “p A + (1-p) C” that’s part of the Axiom of Independence, p was originally supposed to be the probability of a possible world being actual and A denotes the consequences of a choice in that possible world. We could say that A is local with respect to p. What happens if p is an indexical probability instead? Since there are no sharp boundaries between locations in a world, we can’t redefine A to be local with respect to p. And if A still denotes the global consequences of a choice in a possible world, then “p A + (1-p) C” would mean two different sets of global consequences in the same world, which is nonsensical.

If I’m right, the notion of a “probability of being at a location” will have to acquire an instrumental meaning in an extended decision theory. Until then, it’s not completely clear what people are really arguing about when they argue about such probabilities, for example in papers about the Simulation Argument and the Sleeping Beauty Problem.

Edit: Here's a game that exhibits what I call "preferential interaction" between locations. You are copied in your sleep, and both of you wake up in identical rooms with 3 buttons. Button A immunizes you with vaccine A; button B immunizes you with vaccine B. Button C has the effect of A if you're the original, and the effect of B if you're the clone. You don't know which of the two vaccines is effective; your goal is to make sure at least one of you is immunized with an effective vaccine, so you press C.

To analyze this decision in Expected Utility Theory, we have to specify the consequences of each choice at each location. If we let these be local consequences, so that pressing A has the consequence "immunizes me with vaccine A", then what I prefer at each location depends on what happens at the other location. If my counterpart is vaccinated with A, then I'd prefer to be vaccinated with B, and vice versa. "immunizes me with vaccine A" by itself can't be assigned a utility.

What if we use the global consequences instead, so that pressing A has the consequence "immunizes both of us with vaccine A"? Then a choice's consequences do not differ by location, and “probability of being at a location” no longer has a role to play in the decision.
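To make the contrast concrete, here is a small sketch of the game under the global-consequences framing (the 50/50 chance that vaccine A rather than B is the effective one, and the 0/1 utilities, are illustrative assumptions of mine, not part of the original example):

```python
# Global consequences of each button, as (original's vaccine, clone's vaccine),
# given that both copies of me make the same choice.
GLOBAL = {
    "A": ("A", "A"),   # both of us immunized with vaccine A
    "B": ("B", "B"),   # both of us immunized with vaccine B
    "C": ("A", "B"),   # original gets A, clone gets B
}

def expected_utility(button, p_vaccine_A_works=0.5):
    """Utility 1 if at least one copy received the effective vaccine.
    Only ordinary uncertainty (which vaccine works) enters the calculation;
    the indexical probability of being the original never appears."""
    outcome = GLOBAL[button]
    u_if_A_works = 1.0 if "A" in outcome else 0.0
    u_if_B_works = 1.0 if "B" in outcome else 0.0
    return p_vaccine_A_works * u_if_A_works + (1 - p_vaccine_A_works) * u_if_B_works

for button in "ABC":
    print(button, expected_utility(button))   # A: 0.5, B: 0.5, C: 1.0
```

The indexical probability of being the original never enters the calculation, which is the point made above.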

Comments (some truncated):

It's not just indexical uncertainty, it's any kind of uncertainty, as possible worlds can trade with each other. Independence is an approximation, adequate for our low-intelligence times, but breaking down as it becomes possible to study counterfactuals. It's more obvious with indexical uncertainty, where the information can be transferred in apparent form by stupid physics, and less obvious with normal uncertainty, where it takes a mind.

This idea that possible worlds can trade with each other seems to have fairly radical implications. Together with Eliezer's idea that agents who know each other's source code ought to play cooperate in one-shot PD, doesn't it imply that all sufficiently intelligent and reflective agents across all possible worlds should do a global trade and adopt a single set of preferences that represents a compromise between all of their individual preferences? (Note: the resulting unified preferences are not necessarily characterized by expected utility maximization.)

Let me trace the steps of my logic here. First take 2 agents in the same world who know each other's source code. Clearly, each adopting a common set of preferences can be viewed as playing Cooperate in a one-shot PD. Now take an agent who has identified a counterfactual agent in another possible world (who has in turn identified it). Each agent should also adopt a common set of preferences, in the expectation that the other will do so as well. Either iterating this process, or by doing a single global trade across all agents in all possible worlds, we should arrive at a common set of preferences between everyone.

Hmm, maybe this is just what you meant by "one global decision"? Since my original interest was to figure out what probabilities mean in the context of indexical uncertainty, let me ask you, do probabilities have any role to play in your decision theory?

Vladimir_Nesov (15y, score 8):
Agents don't need to merge by changing anything in their individual preferences; merging is just a way of looking at the system, like in process algebra. Three agents can be considered as three separate agents cooperating with each other, or as two agents, one a merge of the first two of the original ones, or as one merged agent. All different perspectives on the same system, revealing its structure. The crucial relation in this picture is that the global cooperation must be a Pareto improvement over cooperations (merges) among any subset of the agents. This is a possible origin for the structure of fair cooperative strategy. More than that, if each agent that could otherwise be considered as individual is divided in this manner into a set of elementary preferences, and all of these elementary preferences are then dumped together in the global cooperation, this may provide all the detail the precise choice of the fair cooperative strategy might need. The "weights" come from the control that each of the elementary agents has over the world.
Wei Dai (15y, score 3):
Are you familiar with Cooperative Game Theory? I'm just learning it now, but it sounds very similar to what you're talking about, and maybe you can reuse some of its theory and math. (For some reason I've only paid attention to non-cooperative game theory until recently.) Here's a quote from page 356 of "Handbook of Game Theory with Economic Applications, Vol 1":
Vladimir_Nesov (15y, score 2):
I couldn't find anything that "clicked" with cooperation in PD. Above, I wasn't talking about a kind of Nash equilibrium protected from coalition deviations. The correlated strategy needs to be a Pareto improvement over possible coalition strategies run by subsets of the agents, but it doesn't need to be stable in any sense. It can be strictly dominated, for example, by either individual or coalition deviations.
Wei Dai (15y, score 8):
A core in Cooperative Game Theory doesn't have to be a Nash equilibrium. Take a PD game with payoffs (2,2) (-1,3) (3,-1) (0,0). In Cooperative Game Theory, (-1,3) and (3,-1) are not considered improvements that a player can make over (2,2) by acting for himself. Maybe one way to think about it is that there is an agreement phase, and an action phase, and the core is the set of agreements that no subset of players can improve upon by publicly going off (and forming their own agreement) during the agreement phase. Once an agreement is reached, there is no deviation allowed in the action phase. Again, I'm just learning Cooperative Game Theory, but that's my understanding and it seems to correspond exactly to your concept.
Vladimir_Nesov (15y, score 3):
Sounds interesting, thank you.
Will_Newsome (13y, score 1):
The following is an honest non-rhetorical question: Is it not misleading to use the word 'cooperation' as you seem to be using it here? Don't you still get 'cooperation' in this sense if the subsets of agents are not causally interacting with each other (say) but have still semi-Platonicly 'merged' via implicit logical interaction as compared to some wider context of decision algorithms that by logical necessity exhibit comparatively less merging? This sets up a situation where an agent can (even accidentally) engineer 'Pareto improvements' just by improving its decision algorithm (or more precisely replacing 'its' decision algorithm (everywhere 'it' is instantiated, of course...) with a new one that has the relevant properties of a new, possibly very different logical reference class). It's a total bastardization of the concept of trade but it seems to be enough to result in some acausal economy (er, that is, some positive-affect-laden mysterious timeless attractor simultaneously constructed and instantiated by timeful interaction) or 'global cooperation' as you put it, and yet despite all that timeless interaction there are many ways it could turn out that would not look to our flawed timeful minds like cooperation. I don't trust my intuitions about what 'cooperation' would look like at levels of organization or intelligence much different from my own, so I'm hesitant to use the word. (I realize this is 'debating definitions' but connotations matter a lot when everything is so fuzzily abstract and yet somewhat affect-laden, I think. And anyway I'm not sure I'm actually debating definitions because I might be missing an important property of Pareto improvements that makes their application to agents that are logical-property-shifting-over-time not only a useless analogy but a confused one.) This question is partially prompted by your post about the use of the word 'blackmail' as if it was technically clear and not just intuitively clear which interactions are bla
Vladimir_Nesov (13y, score 1):
Yes, it's better to just say that there is probably some acausal morally relevant interaction, wherein the agents work on their own goals. (I don't understand what you were saying about time/causality. I disagree with Nesov_2009's treatment of preference as magical substance inherent in parts of things.)
Vladimir_Nesov (15y, score 4):
It does, and I discussed that here. An interesting implication that I noticed a few weeks back is that a UFAI would want to cooperate with a counterfactual FAI, so we get a slice of the future even if we fail to build FAI, depending on how probable it was that we would be able to do that. A Paperclip maximizer might wipe out humanity, then catch up on its reflective consistency, look back, notice that there was a counterfactual future where a FAI is built, allot some of the collective preference to humanity, and restore it from the info remaining after the initial destruction (effectively constructing a FAI in the process). (I really should make a post on this. Some of the credit is due to Rolf Nelson for the UFAI deterrence idea.)
orthonormal (15y, score 3):
This seems fishy to me, given the vast space of possible preferences and the narrowness of the target. Assuming your idea of preference compromise as the convergent solution, what weighting might a reflective AI give to all of the other possible preference states, especially given the mutually exclusive nature of some preferences? If there's any Occam prior involved at all, something horrifically complicated like human moral value just isn't worth considering for Clippy.
Vladimir_Nesov (15y, score 4):
Preferences get considered (loosely) based on probabilities with which AGIs possessing them could've been launched. There supposedly is a nontrivial chance of getting to FAI, so it's a nontrivial portion of the Paperclipper's cooperation. FAI gets its share because of (justified efficacy of) our efforts for creating FAI, not because of being in some special metaphysical place, and even not because of the relation of its values to human origin, as humans in themselves claim no power.
Z_M_Davis (15y, score -1):
Mind projection fallacy? How are these probabilities calculated, and on what prior information? Even if the AI can look back on the past and properly say in some sense that there was a such-and-this a probability of some FAI project succeeding, can't it just the same look still further back and say there was such-and-that a probability of humanity never evolving in the first place? This just brings us back to the problem orthonormal mentions: our preferences are swamped by the vastness of the space of all possible counterfactual preferences.
Vladimir_Nesov (15y, score 4):
You don't care about counterfactual preferences; you only care about the bearers of these counterfactual preferences being willing to help you, in exchange for you helping them. It might well be that prior to the first AGI, the info about the world is too sparse or scrambled to coordinate with counterfactual AGIs, for our AGI to discern what's to be done for the others to improve the possible outcome for itself. Of those possibilities, most may remain averaged out to nothing specific. Only if the possibility of FAI is clear enough will the trade take form, and sharing common history until recently is a help in getting the clear info.
Wei Dai (15y, score 2):
I'd like to note a connection between Vladimir's idea, and Robin Hanson's moral philosophy, which also involves taking into account the wants of counterfactual agents. I'm also reminded of Eliezer's Three Worlds Collide story. If Vladimir's right, many more worlds (in the sense of possible worlds) will be colliding (i.e., compromising/cooperating). I look forward to seeing the technical details when they've been worked out.
Wei Dai (15y, score 1):
Ok, so I see that probability plays a role in determining one's "bargaining power", which makes sense. We still need a rule that outputs a compromise set of preferences when given a set of agents, their probabilities, individual preferences, and resources as input, right? Does the rule need to be uniquely fair or obvious, so that everyone can agree to it without discussion? Do you have a suggestion for what this rule should be? Edit: I see you've answered some of my questions already in the other reply. This is really interesting stuff!
cousin_it (15y, score 0):
I don't get Counterfactual Mugging at all. Dissolve the problem thus: exactly which observer-moment do we, as problem-solvers, get to optimize mathematically? Best algorithm we can encode before learning the toss result: precommit to be "trustworthy". Best algorithm we can encode after learning the toss result: keep the $100 and afterwards modify ourselves to be "trustworthy" - iff we expect similar encounters with Omega-like entities in the future with high enough expected utility. It's pretty obvious that more information about the world allows us to encode a better algorithm. Is there anything more to it?
Vladimir_Nesov (15y, score 1):
What's observer-moment (more technically, as used here)? What does it mean to be "trustworthy"? (To be a cooperator? To fool Omega of being a cooperator?) For keeping the $100: you are not the only source of info, you can't really modify yourself like that, being only a human, and it's specified that you don't expect other encounters of this sort. Whatever algorithm you can encode after you learn the toss result, you can encode before learning the toss result as well, by including it under the conditional clause, to be executed if the toss result matches the appropriate possibility. More than that, whatever you do after you encounter the new info can be considered the execution of that conditional algorithm, already running in your mind, even if no deliberative effort for choosing it was made. By establishing an explicit conditional algorithm you are only optimizing the algorithm that is already in place, using that same algorithm, so it could be done after learning the info as well as before (well, not quite, but it's unclear how significant is the effect of lack of reflective consistency when reconsidered under reflection).
cousin_it (15y, score 3):
Here's a precise definition of "observer-moment", "trustworthiness" and everything else you might care to want defined. But I will ask you for a favor in return... Mathematical formulation 1: Please enter a program that prints "0" or "1". If it prints "1" you lose $100, otherwise nothing happens. Mathematical formulation 2: Please enter a program that prints "0" or "1". If it prints "1" you gain $10000 or lose $100 with equal probability, otherwise nothing happens. Philosophical formulation by Vladimir Nesov, Eliezer Yudkowsky et al: we ought to find some program that optimizes the variables in case 1 and case 2 simultaneously. It must, must, must exist! For grand reasons related to philosophy and AI! Now the favor request: Vladimir, could you please go out of character just this once? Give me a mathematical formulation in the spirit of 1 and 2 that would show me that your and Eliezer's theories have any nontrivial application whatsoever.
Vladimir_Nesov (15y, score 0):
Vladimir, it's work in progress; if I could state everything clearly, I would've written it up. It also seems that what is already written here and there informally on this subject is sufficient to communicate the idea, at least as problem statement.
Wei Dai (15y, score 0):
Yes, that seems like an interesting way to think about your puzzle. Thanks for pointing out the connection. Have you considered what kind of decision theory would be needed to handle these violations of Independence?
Vladimir_Nesov (15y, score 8):
Whole strategies need to be considered instead of individual actions, so that there is only one global decision, with individual actions selected as components of the overall calculation of the better global strategy. Indexical uncertainty becomes a constraint on strategy that requires actions to be equal in indistinguishable situations. More generally, different actions of the same agent can be regarded as separate actions of separate agents sharing the same preferences, who cooperate, exchanging info through the history of agent's development that connects them (it). Even more generally, the same process should take care of cooperation of agents with different preferences. Even in that situation, the best global strategy will take into account (coordinate) all actions performed by all agents, including counterfactual ones (a benefit of reflective consistency enabling to perform calculations on the spot, not necessarily in advance). So, expected utility (or some other order to that effect) is compared for the global strategies involving not just the agent, but all cooperating agents, and then the agent just plays its part in the selected global strategy. If the agent has a lot of info about where it is (low indexical uncertainty), then it'll be able to perform a precisely targeted move within the global strategy, suited best for the place it's in. The counterfactual and other-time/other-place counterparts of the agent will perform different moves for different details of the situation. Uncertainty (of any kind) limits the ability of the agent to custom-make its moves, so it must choose a single move targeted at the larger area of the territory over which it's uncertain, instead of choosing different moves for each of its points, if it had the possible info discriminating among them.
PhilGoetz (15y, score -1):
I don't believe that possible worlds can trade with each other, and I don't see anything in Counterfactual Mugging to persuade me of that. Expectation maximization is based on a model in which you inhabit a world state, and you have a set (possibly infinite) of possible future world states, and a probability (or point on a probability distribution) attached to each one. If you have interactions between your possible future states, you're just not representing them correctly. The most you can say is that you are using some different model. You can't say there's a problem with the model, unless you demonstrate a situation your model can handle better than the standard model. To answer the counterfactual mugging: You keep your $100. Because the game is over. You can't gain money in another branch by giving up the $100. This is not a Newcomb-like situation. Please provide a counterargument if you vote this down.
Vladimir_Nesov (15y, score 3):
Consider two alternative possible worlds, forking from a common worldline with equal 50% probability. In one world, an agent A develops, and in another, an agent B. Agent A can either achieve U1 A-utilons or U2 B-utilons, U2>U1 (if A chooses to get U2 B-utilons, it produces 0 A-utilons). Agent B can either achieve U1 B-utilons, or U2 A-utilons. If each of them only thinks about itself, the outcome is U1 for A and U1 for B, that is not very much. If instead each of them optimizes the other-utility, both get U2. If this causes any troubles, shift the perspective to the point before the fork, and calculate expected utility for these strategies: first one has U1/2 in both A-utility and B-utility, while the second gives U2/2 utility for both, which is better. It's more efficient for them to produce utility for the other, which maps directly on the concept of trade. Counterfactual mugging explores exactly the same conceptual problems that you could get trying to accept the argument above. If you accept counterfactual mugging, you should accept the deal above as well. Of course, both agents must be capable of telling whether the other counterfactual agent is going to abide by the deal, which is Omega's powers in CM.
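A numerical sketch of the ex-ante comparison in the comment above (U1, U2 and the 50% fork are from the comment; the concrete values 1 and 3 are my own illustrative assumptions):

```python
p_fork = 0.5        # each fork is realized with 50% probability
U1, U2 = 1.0, 3.0   # U2 > U1, as in the comment

# "Selfish": each agent produces U1 of its own utility, in its own world.
selfish = {"A-utility": p_fork * U1, "B-utility": p_fork * U1}

# "Trade": each agent produces U2 of the *other* agent's utility instead.
trade = {"A-utility": p_fork * U2, "B-utility": p_fork * U2}

print(selfish)  # {'A-utility': 0.5, 'B-utility': 0.5}
print(trade)    # {'A-utility': 1.5, 'B-utility': 1.5} -- better on both counts, ex ante
```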
mwaser (13y, score 0):
Strategy one has U1/2 in both A-utility and B-utility with the additional property that the utility is in the correct fork where it can be used (i.e. it truly exists). Strategy two has U2/2 in both A-utilty and B-utility but the additional property that the utility produced is not going to be usable in the fork where it is produced (i.e. the actual utility is really U0/2 unless the utility can be traded for the opposite utility which is actually usable in the same fork). Assuming that there is no possibility of trade (since you describe no method by which it is possible): I don't see a requirement for trade existing in the counterfactual mugging problem so I accept it. Since the above deal requires the possibility of trade to actually gain USABLE utility (arguably the only nonzero kind assuming that [PersonalUse OR Trade = Usability]) and I don't see the possibility for trade, I am justified in rejecting the above deal despite accepting the counterfactual deal.
Vladimir_Nesov (13y, score 4):
Utility is not instrumental, not used for something else, utility is the (abstract) thing you try to maximize, caring of nothing else. It's the measure of success, all consequences taken into account (and is not itself "physical"). As such, it doesn't matter in what way (or "where") utility gets "produced". Knowing that might be useful for the purpose of computing utility, but not for the purpose of interpreting the resulting amount, since utility is the final interpretation of the situation, the only one that matters. Now, it might be that you consider events in the counterfactual worlds not valuable, but then it interrupts my argument a step earlier than you did, it makes incorrect the statement that A's actions can produce B-utility. It could be that A can't produce B-utility, but it can't be that A produces B-utility but it doesn't matter for B. Hence the second paragraph about counterfactual mugging: if you accept that events in the counterfactual world can confer value, then you should take this deal as well. And no matter whether you accept CM or not, if you consider the problem in advance, you want to precommit to counterfactual trade. And hence, it's a reflectively consistent thing to do to accept counterfactual trade later as well.
mwaser (13y, score 0):
Fair enough. I'm willing to rephrase my argument as A can't produce B utility because there is no B present in the world. Yes, I do want to pre-commit to a counter-factual trade in the mugging because that is the cost of obtaining access to an offer of high expected utility (see my real-world rephrasing here for a more intuitive example case). In the current world-splitting case, I see no utility for me since the opposing fork cannot produce it so there is no point to me pre-committing.
Vladimir_Nesov (13y, score 3):
Why do you believe that the counterfactual isn't valuable? You wrote: That B is not present in a given possible world is not in itself a valid reason to morally ignore that possible world (there could be valid reasons, but B's absence is not one of them for most preferences that are not specifically designed to make this condition hold, and for human-like morality in particular). For example, people clearly care about the (actual) world where they've died (not present): you won't trade a penny a day while you live for eternal torture to everyone after you die (while you should, if you don't care about the world where you are not present).
mwaser (13y, score 2):
We seem to have differing assumptions: My default is to assume that B utility cannot be produced in a different world UNLESS it is of utility in B's world to produce the utility in another world. One method by which this is possible is trade between the two worlds (which was the source of my initial response). Your assumption seems to be that B utility will always have value in a different world. My default assumption is explicitly overridden for the case where I feel good (have utility in the world where I am present) when I care about the world where I am not present. Your (assumed) blanket assumption has the counterexample that while I feel good when someone has sex with me in the world where I am present (alive), I do not feel good (I feel nothing -- and am currently repulsed by the thought = NEGATIVE utility) when someone has sex with me in the world where I am dead (not present). ACK. Wait a minute. I'm clearly confusing the action that produced B utility with B utility itself. Your problem formulation did explicitly include your assumption (which thereby makes it a premise). OK. I think I now accept your argument so far. I have a vague feeling that you've carried the argument to places where the premise/assumption isn't valid but that's obviously the subject for another post. (Interesting karma question. I've made a mistake. How interesting is that mistake to the community? In this case, I think that it was a non-obvious mistake (certainly for me without working it through ;-) that others have a reasonable probability of making on an interesting subject so it should be of interest. We'll see whether the karma results validate my understanding.)
Vladimir_Nesov (13y, score 1):
(Just to be sure, I expect this is exactly the point you've changed your mind about, so there is no need for me to argue.) Does not compute. Utility can't be "in given world" or "useful" or "useful from a given world". Utility is a measure of stuff, not stuff itself. Measure has no location. Not if we interpret "utility" as meaning "valuable stuff". It's not generally correct that the same stuff is equally valuable in all possible worlds. If in worlds of both agents A and B we can produce stuff X and Y, it might well be that producing X in world A has more B-utility than producing Y in world A, but producing X in world B has less B-utility than producing Y in world B. At the same time, a given amount of B-utility is equally valuable, no matter where the stuff measured so got produced.
mwaser (13y, score 0):
Yes. I agree fully with the above post.
Jordan (13y, score 0):
But can certainly be location dependent. Measure doesn't have to be translation invariant. Hyperbolic discounting, for instance.
PhilGoetz (15y, score -1):
You're presenting a standard PD, only distributed across possible worlds. Doesn't seem to be any difference between splitting into 2 possible worlds, and taking 2 prisoners into 2 different cells. So you would need to provide a solution, a mechanism for cooperation, that would also work for the PD. And you haven't. Don't know what you mean by "accept counterfactual mugging". Especially since I just said I don't agree with your interpretation of it. I believe the counterfactual mugging is also just a rephrasing of the PD. You should keep the $100 unless you would cooperate in a one-shot PD. We all know that rational agents would do better by cooperating, but that doesn't make it happen.
[anonymous] (15y, score 0):
That was the answer to the original edition of your question, which asked what counterfactual mugging has to do with the argument for trade between possible worlds. I presented more or less a direct reduction in the comment above.

Downvoted because, due to the lack of a meaningful example, I don't understand what the author is trying to say.

Wei Dai (15y, score 5):
Yes, I should have provided an example. Please see the edited post.

Thanks for adding an example. Let me rephrase it:

You have been invited to take part in a game theory experiment. You are placed in an empty room with three buttons labeled "1", "2" and "my room number". Another test subject is in another room with identical buttons. You don't know your room number, or theirs, but experimenters swear they're different. If you two press buttons corresponding to different numbers, you are both awarded $100 on exit, otherwise zero.

...What was so interesting about this problem, again?

Wei Dai (15y, score 5):
It seems that my communication attempt failed badly last time, so let me try again. The "standard" approach to indexicals is to treat indexical uncertainty the same as any other kind of uncertainty. You compute a probability of being at each location, and then maximize expected utility. I tried to point out in this post that because decisions made at each location can interact non-linearly, this doesn't work. You transformed my example into a game theory example, and the paradox disappeared, because game theory does take into account interactions between different players. Notice that in your game theory example, the computation that arrives at the solution looks nothing like an expected utility maximization involving probabilities of being at different locations. The probability of being at a location doesn't enter into the decision algorithm at all, so do such probabilities mean anything?
PhilGoetz (15y, score 1):
How does it not work? If you are at a different location, that's a different world state. You compute the utility for each world state separately. Problem solved. And to the folks who keep voting me down when I point out basically the same solution: State why you disagree. You've already taken 3 karma for me. Don't just keep taking karma for the same thing over and over without explaining why.
Vladimir_Nesov (15y, score 0):
If the same world contains two copies of you, you can be either copy within the same world.
PhilGoetz (15y, score 0):
The same world does not contain two copies of you. You are confused about the meaning of "you". Treat each of these two entities just the same way you treat every other agent in the world. If they are truly identical, it doesn't matter which one is "you".
cousin_it (15y, score 0):
Yes, they do. In this case you just got lucky and the probabilities factored out of the calculations. The general case where they don't necessarily factor out is called evolutionary game theory: indexical probabilities correspond to replicator frequencies, utility corresponds to fitness.
Wei Dai (15y, score 0):
I need to brush up on evolutionary game theory, but I don't see the correspondence between these two subjects yet. Can you take a standard puzzle involving indexical uncertainty, for example the Sleeping Beauty Problem, and show how to solve it using evolutionary game theory?
cousin_it (15y, score 0):
Hmm, I don't see any problem in that scenario. It doesn't even require game theory because the different branches don't interact. Whatever monetary rewards you assign to correct/incorrect answers, the problem will be easy to solve by simple expected utility maximization.
Vladimir_Nesov (15y, score -1):
Consider two players as two concurrent processes: each can make any of three decisions. If you consider their decisions separately, it's total of 9 options, and the state space that you construct to analyze them will contain 9 elements. Reasoning with uncertainty can then consider events on this state space, and preference is free to define prior+utility for the 9 elements in any way. But consider another way of treating this situation: instead of 9 elements in the state space, let's introduce only 6: 3 for the first player's decision and 3 for the second player's. Now, the joint decision of our players is represented not by one element of the state space as in the first case, but by a pair of elements, one from each triple. The options for choosing prior+utility, and hence preference, are more limited for this state space. In the first case, it's unclear what could the probability of being one of the players mean: each element of the state space corresponds to both players. In the second case, it's easy: just take the total measure of each triple. When the decisions are dependent, the second way of treating this situation can fail, and the expressive power of expected utility become insufficient to express resulting preference. There is an interesting extension to the question of whether indexical probability is always meaningful: is the probability of ordinary observations, even in a deterministic world, meaningful? I'm not sure it is. When you solve the decision problem, you consider preference over strategies, and a strategy includes the instructions for what to do given either observation. In the space of all possible strategies, each point considers all branches at each potential observation, just like in the example with triples of decisions above, where all 9 elements of the state space describe the decisions of both players. There doesn't seem to be a natural way to define probability of each of the possible observations at a given observation point, st
Wei Dai (15y, score 0):
In the case of probability of ordinary observations, I think you can assign probabilities if your preferences over possible strategies satisfy some conditions, the major one being what you prefer to happen in one branch has to be independent of what you prefer to happen in another branch, i.e., the Axiom of Independence. If we ignore counterfactual-mugging type considerations, do you see any problems with this? If so can you give an example?
Vladimir_Nesov (15y, score 0):
This is exactly the difference that allows to have the 6-element state space, as in the example with indexical uncertainty above, instead of more general 9-element state space. You place the possibilities in one branch by the side with the possibilities in the other branch, instead of considering all possible combinations of possibilities. It's easy to represent various situations for which you assign probability as alternatives, lying side by side in the state space: the alternatives in different possible worlds, or counterfactuals, as they never "interact", seem to be right to model by just considering as options, independently. The same for two physical systems that don't interact with each other: what's the difference between that and being in different possible worlds? - And a special case of this situation is indexical uncertainty. One condition for doing it without problem is independence. But independence isn't really true, it's approximation. It's trivial to set up the situations equivalent to counterfactual mugging, if the participants are computer programs that don't run very far. It's possible to prove things about where a program can go, and perform actions depending on the conclusion. What do you do then? I don't know yet, your comment brought the idea of meaninglessness of probability of ordinary observations just yesterday, before that I didn't notice this issue. Maybe I'll finally find a situation where prior+utility isn't an adequate way of representing preference, or maybe there is a good way of lifting probability of observations to probability of strategies.
Wei Dai (15y, score 0):
I guess it's not, unless you're already interested in figuring out the nature of indexical uncertainty. If you're not sure what's interesting about indexical uncertainty, take a look at http://www.simulation-argument.com/ and http://en.wikipedia.org/wiki/Doomsday_argument.
JGWeissman (15y, score 2):
I think what cousin_it was asking (and I would also like to know) is: what problem with the Axiom of Independence does the indexical uncertainty in your example (or cousin_it's rephrasing) illustrate?
Wei Dai (15y, score 1):
Let A = "I'm immunized with vaccine A", B = "I'm immunized with vaccine B", p = probability of being the original. The Axiom of Independence implies

p A + (1-p) A > p B + (1-p) A iff p A + (1-p) B > p B + (1-p) B.

To see this, substitute A for C in the axiom, and then substitute B for C. This statement says that what I prefer to happen at one location doesn't depend on what happens at another location, which is false in the example. In fact, the right side of the iff statement is true while the left side is false. Does this explanation help?
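Spelling out the two substitutions and how they chain together (just the step above restated in symbols, with ≻ for strict preference):

```latex
\begin{align*}
\text{(axiom with } C := A\text{):} \quad & A \succ B \iff p\,A + (1-p)\,A \succ p\,B + (1-p)\,A \\
\text{(axiom with } C := B\text{):} \quad & A \succ B \iff p\,A + (1-p)\,B \succ p\,B + (1-p)\,B \\
\text{(chaining the two):} \quad & p\,A + (1-p)\,A \succ p\,B + (1-p)\,A \iff p\,A + (1-p)\,B \succ p\,B + (1-p)\,B
\end{align*}
```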
JGWeissman (15y, score 3):
That is based on the unspoken assumption that you prefer A to B. You yourself explained that such a preference is nonsense: If an axiom or theorem has the form "If X then Y", you should demonstrate X before invoking the axiom or theorem.
cousin_it (15y, score 0):
Yes, I meant to ask exactly that.

Expected Utility Theory may not apply in situations involving indexical uncertainty.

Sounds intriguing. Can you provide some small game to show that?

Vladimir_Nesov (15y, score 1):
A = pressing a button gives you $100
B = pressing a button gives you $1000 (if it's still in the box)
C = pressing a button exterminates the $1000 from B, and this happens before B.
B > A, but p*B + (1-p)*C < p*A + (1-p)*C for (1-p) big enough, where p represents your uncertainty of whether you are pressing the button to get $1000/$100 or to exterminate $1000. The lotteries are dependent now, which obviously breaks the principle.
PhilGoetz (15y, score 0):
Where's the indexical uncertainty in this example? You're talking about temporal uncertainty. You aren't describing different world states. You're describing different timelines. C is temporally tangled with B. If you express your timelines in a suitable temporal logic, and then let us choose between timelines, I expect the problem will go away.
Vladimir_Nesov (15y, score 2):
If the worlds are 4D crystals of timelines, two agents placed at different times are not much different from two agents placed in the same time-slice.
PhilGoetz (15y, score -3):
You can't expect to use a model that's based on state transitions to work when you switch to a timeless model. If the worlds are 4D crystals, there is no time, no decisions, and no expectation; so what does "optimization" even mean? Pick the best single timeline? Whose problem does that solve?
PhilGoetz (15y, score 2):
I'd love to stay and discuss this with you, but since all you do is karma-slap me without explanation whenever I open my mouth - bye!
TheAncientGeek (9y, score 1):
There's still subjective uncertainty in 4D crystals.
conchis (15y, score 0):
It seems obvious from your initial description that B!>A here, so I don't quite see how this is supposed to break the principle. What am I missing?
Vladimir_Nesov (15y, score -1):
If you are choosing between A and B, the C option doesn't get implemented (equivalently: your prior states that's almost impossible, if you don't do that yourself), so the money is certainly there, $1000>$100.
conchis (15y, score 0):
Sorry, I'm still confused. Bear with me! If C is impossible in any case when you're choosing between A and B, then I would have thought that the value of C is 0. Whether or not it exterminates the $1000, you don't get the $1000 anyway, so why should you care? (Unless it can affect what happens in the A vs. B case, in which case B!>A.) ETA: But if u(C)=0, then pB+(1-p)C < pA+(1-p)C reduces to pB < pA, which is false for all (non-negative) values of p.
Vladimir_Nesov (15y, score 0):
I'm confused with your confusion... A, B and C are your actions, they happen depending on what you choose. A vs. B, in terms involving C, means setting p=1. That is, you implicitly choose to not press C and thus get the $1000 on B without problems.
JGWeissman (15y, score 2):
This example does not really illustrate the point, but I think I see where you are going. Suppose there is room with two buttons X, and Y. Pushing button X gives you $100 (Event A) with probability p, and does nothing (Event C) with probability 1-p, every time it is pushed. Pushing button Y gives you $150 (Event B) with the same probability p, and does nothing (Event C) with probability 1-p, provided that Event B has not yet occurred, otherwise it does nothing (Event C). So, now you get to play a game, where you enter the room and get to press either button X or Y, and then your memory is erased, you are reintroduced to the game, and you get to enter the room again (indistinguishable to you from entering the first time), and press either button X or Y. Because of indexical uncertainty, you have to make the same decision both times (unless you have a source of randomness). So, your expected return from pressing X is 2*p*$100 (the sum from two independent events with expected return p*$100), and your expected return from pressing Y is (1-(1-p)^2) * $150 (the payoff times the probability of not failing to get the payoff two times), which simplifies to (2*p - p^2) * $150. So, difference in the payoffs, P(Y) - P(X) = 2*p * $50 - (p^2) * $150 = $50 * p * (2 - 3*p). So Y is favored for values of p between 0 and 2/3, and X is favored for values of p between 2/3 and 1. But doesn't the Axiom of Independence say that Y should be favored for all values of p, because Event B is preferred to Event A? No, because pressing Y does not really give p*B + (1-p)*C. It gives q*p*B + (1-q*p)*C, where q is the probability that Event B has not already happened. Given that you press Y two times, and you do not know which time is which, q = (1 - .5 * p), that is, the probability that it is not the case that this is the second time (.5), and the B happened the first time (p). Now, if I had chosen different probabilities for the behaviors of the buttons, so that when factoring in the index
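A quick numerical check of the payoff comparison in the comment above (the $100/$150 payoffs and the two-press setup are taken from the comment; the script is only an illustrative sketch):

```python
from fractions import Fraction

def ev_press_X_both_times(p):
    # Two independent chances at the $100 payoff.
    return 2 * p * 100

def ev_press_Y_both_times(p):
    # The $150 payoff is paid at most once; it is missed only if both presses fail.
    return (1 - (1 - p) ** 2) * 150

for p in (Fraction(1, 10), Fraction(1, 2), Fraction(2, 3), Fraction(9, 10)):
    x, y = ev_press_X_both_times(p), ev_press_Y_both_times(p)
    better = "Y" if y > x else ("X" if x > y else "tie")
    print(f"p = {float(p):.2f}   X = {float(x):6.2f}   Y = {float(y):6.2f}   better: {better}")
# Y is favored for p < 2/3, X for p > 2/3, and they tie at p = 2/3 -- matching
# the comment's difference P(Y) - P(X) = 50 * p * (2 - 3p).
```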
cousin_it (15y, score 0):
Your analysis looks correct to me. But if Wei Dai indeed meant something like your example, why did he/she say "indexical uncertainty" instead of "amnesia"? Can anyone provide an example without amnesia - a game where each player gets instantiated only once - showing the same problems? Or do people that say "indexical uncertainty" always imply "amnesia"?
Vladimir_Nesov (15y, score 3):
Amnesia is a standard device for establishing scenarios with indexical uncertainty, to reassert the fact that your mind is in the same state in both situations (which is the essence of indexical uncertainty: a point on your map corresponds to multiple points on the territory, so whatever decision you make, it'll get implemented the same way in all those points of the territory; you can't differentiate between them, it's a pack deal).
wuwei (15y, score 0):
Since the indexical uncertainty in the example just comes down to not knowing whether you are going first or second, you can run the example with someone else rather than a past / future self with amnesia as long as you don't know whether you or the other person goes first.
JGWeissman (15y, score 0):
That's true, but that adds the complication of accounting for the probability that the other person presses Y, which of course would depend on the probability that person assigns for you to press Y, which starts an infinite recursion. There may be an interesting game here (which might illustrate another issue), but it distracts from the issue of how indexical uncertainty affects the Axiom of Independence. Though, we could construct the game so that you and the other person are explicitly cooperating (you both get money when either of you press the button), and you have a chance to discuss strategy before the game starts. In this case, the two strategies to consider would be one person presses X and the other presses Y (which dominates both pressing X), or both press Y. The form of the analysis is still the same, for low probabilities, both pressing Y is better (the probability of two payoffs is so low it is better to optimize single payoffs), and for higher probabilities, one pressing X and one pressing Y is better (to avoid giving up the second payoff). Of course the cutoff point would be different. And the Axiom of Independence would still not apply where the indexical uncertainty makes the probabilities in the game different despite the raw probabilities of the buttons being the same under different conditions.

Most interesting. Though with a very different motivation (I was trying to resolve the anthropic paradoxes), I have also concluded that self-locating uncertainties or indexical uncertainties do not have meaningful probabilities.

This is a very old post, but I have to say I don't even understand the Axiom of Independence as presented here. It is stated:

The Axiom of Independence says that for any A, B, C, and p, you prefer A to B if and only if you prefer p A + (1-p) C to p B + (1-p) C.

If p A + (1-p) C and p B + (1-p) C, this means that both A and B are true if and only if C is false (two probabilities sum to 1 if and only if they are mutually exclusive and exhaustive). Which means A is true if and only if B is true, i.e. . Since A and B have the same truth value with ce... (read more)

Wei Dai (22d, score 2):
I probably should have said this explicitly, but this post assumes prior understanding of Von Neumann–Morgenstern utility theorem and my description of the Axiom of Independence was only meant to remind someone of what the axiom is in case they forgot it, not meant to teach it to someone who never learned it. There's a post on LW that tries to explain the theorem and the various axioms it assumes, or you can try to find another resource to learn it from.
cubefox (22d, score 1):
Thanks. My second interpretation of the independence axiom seemed to be on track. The car example in the post you linked is formally analogous to your vaccine example. The mother is indifferent between giving the car to her son (A) or daughter (B) but prefers to throw a coin (C, such that C=0.5A+0.5B) to decide who gets it. Constructing it like this, according to the post, would contradict Independence. But the author argues that throwing the coin is not quite the same as simply 0.5A+0.5B, so independence isn't violated. This is similar to what I wrote, at the end, about your example above: Which would mean the example is compatible with the independence axiom. Maybe there is a different example which would show that rational indexical preferences may contradict Independence, but I struggle to think of one.
Wei Dai (22d, score 3):
Yeah, I think this makes sense, at least if we assume a decision theory like EDT, where pressing C gives me a lot of evidence that the other guy also presses C, so I can think of the consequences of pressing C as "50% chance I receive A and the other guy receives B, 50% chance I receive B and the other guy receives A" which is not a gamble between the consequences of pressing A (we both receive A) and the consequences of pressing B (we both receive B) so Independence isn't violated. I think at the time I wrote the post, I was uncritically assuming CDT (under the impression that it was the mainstream academic decision theory), and under CDT you're supposed to reason only about the causal consequences of your own decision, not how it correlates with the other guy's decision. In that case the consequences of pressing C would be "50% chance I receive A, 50% chance I receive B" and then strictly preferring it would violate Independence. (Unless I say that pressing C also has the consequence of C having been pressed, and I prefer that over A or B having been pressed, but that seems like too much of a hack to me, or violates the spirit/purpose of decision theory.) I hope this is correct and makes sense. (It's been so long since I wrote this post that I had to try to guess/infer why I wrote what I wrote.) If so, I think that part of the post (about "preferential interaction") is more of an argument against CDT than against Independence, but the other part (about physical interactions) still works?
cubefox (22d, score 1):
Hm, interesting point about causal decision theory. It seems to me even with CDT I should expect as (causal) consequence of pressing C a higher probability that we get different vaccines than if I had only randomized between button A and B. Because I can expect some probability that the other guy also presses C (which then means we both do). Which would at least increase the overall probability that we get different vaccines, even if I'm not certain that we both press C. Though I find this confusing to reason about. But anyway, this discussion of indexicals got me thinking of how to precisely express "actions" and "consequences" (outcomes?) in decision theory. And it seems that they should always trivially include an explicit or implicit indexical, not just in cases like the example above. Like for an action X, "I make X true", and for an outcome Y, "I'm in a world where Y is true". Something like that. Not sure how significant this is and whether there are counterexamples.
Wei Dai (22d, score 2):
Yeah, it's confusing to me too. Not sure how to think about this under CDT. I actually got rid of all indexicals in UDT, because I found them too hard to think about, which seemed great for a while, until it occurred to me that humans plausibly have indexical values and maybe it's not straightforward to translate them into non-indexical values. See also this comment where I talk about how UDT expresses actions and consequences. Note that "program-that-is-you" is not an indexical, it's a string that encodes your actual source code. This also makes UDT hard/impossible for humans to use, since we don't have access to our literal source code. See also UDT shows that decision theory is more puzzling than ever which talks about these problems and others.

But what if p represents an indexical uncertainty, which is uncertainty about where (or when) you are in the world?

Didn't someone pose this exact question here a few months ago?

If you construct your world states A, B, and C using an indexical representation, there is no uncertainty about where, who, or when you are in that representation. Representations without indexicals turn out to have major problems in artificial intelligence (although they are very popular; mainly, I think, due to the fact that it doesn't seem to be possible for a single knowledg... (read more)