Acausal trade is speculated to be possible across a multiverse, but why would any rational agent want to engage in it if the multiverse is deterministic? The reality measure occupied by each branch of the multiverse is pre-determined and causally independent of the others, so no matter what you "do" in your own branch, you cannot affect the reality measure of other branches. This means that even if your utility function cares about what happens in other branches, nothing you do can affect their fixed reality measure, even "acausally". This is just a consequence of making counterfactual scenarios "real".

For example, if two agents come to an acausal cooperation equilibrium, this does not reduce the pre-determined reality measure of counterfactual worlds where they didn't. Likewise, if your utility function is proportional to the number of paperclips that exist across the multiverse, then your total utility (the total number of paperclips) would be the same no matter what you "do". The only thing that can vary is how many paperclips you can experience within your own branch of the universe. It would therefore only be meaningful for a utility function to be focused on your own branch of the multiverse, since that is the only way to talk meaningfully about an expected utility that varies with your actions. As a result, MWI should make no difference whatsoever to a decision theory compared to a "single-universe" interpretation such as Copenhagen.
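To make the worry concrete, here is a minimal sketch (the branch structure, measures, and paperclip counts are invented for illustration): a utility function summed over a fixed multiverse outputs the same constant for every action, while a branch-local utility function still distinguishes between actions.

```python
# Hypothetical fixed multiverse: each action labels the branch the agent
# takes, and every branch has a pre-determined reality measure and
# paperclip count (numbers invented for illustration).
MULTIVERSE = {
    "make_paperclips": {"measure": 0.6, "paperclips": 100},
    "do_nothing":      {"measure": 0.4, "paperclips": 10},
}

def multiverse_utility(action):
    """Utility over the whole multiverse. It ignores `action` entirely,
    because every branch exists with its fixed measure regardless."""
    return sum(b["measure"] * b["paperclips"] for b in MULTIVERSE.values())

def branch_utility(action):
    """Utility restricted to the branch the agent finds itself in."""
    return MULTIVERSE[action]["paperclips"]

for action in MULTIVERSE:
    print(action, multiverse_utility(action), branch_utility(action))
# multiverse_utility returns the same constant (64.0) for both actions;
# only branch_utility varies with what the agent "does".
```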

Please let me know if my reasoning is correct, and if not, why.


3 Answers

Vladimir_Nesov


Counterfactuals relevant to decision making in this context are not other MWI branches, but other multiverse descriptions (partial models), with different amplitudes inside them. You are not affecting other branches from your branch; you are determining which multiverse takes place, depending on what your abstract decision-making computation does.

This computation (you) is not in itself located in a particular multiverse or on a particular branch; it's some sort of mathematical gadget. It can be considered (reasoned about, simulated) from many places, and is thereby embedded in them, including counterfactual places where, for purposes of decision making, it can't rule out the possibility of being embedded.

With acausal trade, the trading partners (agents) are these mathematical gadgets, not their instances in their respective worlds. Say there are agents A1 and A2, which have instances I1 and I2 in worlds W1 and W2. Then I1 is not an agent in this sense, and not a party to an acausal trade between them; it's merely a way for A1 to control W1 (in the usual causal sense). To facilitate acausal trade, A1 and A2 need to reason about each other, but at the end of the day the deal gets executed by I1 and I2 on behalf of their respective abstract masters.

This setup becomes more practical if we start with I1 and I2 (instead of A1 and A2) and formulate the common knowledge they have about each other as the abstract gadget A that facilitates coordination between them, with the part of its verdict that I1 would (commit in advance to) follow being A1 (by construction), and the part that I2 would follow being A2. This shared gadget A is an adjudicator between I1 and I2, and it doesn't need to be anywhere near as complicated as they are; it only gets to hold whatever common knowledge they happen to have about each other, even if that's very little. It's a shared idea they both follow (and know each other to be following, and so on), and thus coordinate on.
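A toy rendering of this adjudicator setup (the names, the common-knowledge flag, and the cooperate/defect verdict are all invented for illustration): both instances run the same shared computation A on their common knowledge and each executes its own part of the verdict, so they coordinate without any causal channel between their worlds.

```python
def adjudicator(common_knowledge):
    """The shared gadget A: a simple deterministic rule that both
    instances know, and know each other to know. Here it issues a
    cooperate verdict iff both parties are known to follow this
    very adjudicator."""
    if common_knowledge["both_follow_adjudicator"]:
        return {"I1": "cooperate", "I2": "cooperate"}
    return {"I1": "defect", "I2": "defect"}

COMMON = {"both_follow_adjudicator": True}

# Each instance, in its own world, computes the same verdict and carries
# out its own part of it; coordination comes from running the same
# computation, not from any causal channel between W1 and W2.
action_of_I1 = adjudicator(COMMON)["I1"]  # what I1 does in W1
action_of_I2 = adjudicator(COMMON)["I2"]  # what I2 does in W2
assert action_of_I1 == action_of_I2 == "cooperate"
```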

I see, thanks for this comment. But can humans be considered to possess an abstract decision-making computation? It seems that, due to quantum mechanics, it's impossible to predict a human's decision perfectly even given the complete initial conditions.

Slider


Yes, if you insist on causality, acausal trade does not make sense (it is in the name).

It might just confuse things more, but think of the two-boxing situation (Newcomb's problem) with the stipulation that a new party, "friend Freddy", gets $1000 if you one-box and $0 if you two-box.

It happens that Freddy's planet has already passed beyond the cosmological horizon and will never be in causal contact with you again. Omega did the rewarding before the divergence, while there was still contact (like setting up the boxes with bills you can eventually handle).

Why would you be disallowed from caring about Freddy's welfare?

To the extent the objection is merely about making possibilities real, it attacks a far more general phenomenon than acausal trade. By that logic there is no sense in picking up a cup, because the branches of picking-up and not-picking-up are both going to exist anyway. That is, ordinary causation is undermined in the same stroke: the ink is already dry, so no story can be motivated.

Here is an additional complication to get explicit acausal trade going:

Freddy also faces a two-boxing problem. Assume that one-boxing is the "smart move" in the single-player game. Omega has the additional rule that if you both two-box instead, there is a moderate amount in the boxes (less than the one-boxing payoff, but more than the ordinary two-boxing payoff). Even if you only care about your own money, you will care whether Freddy cooperates or defects (I am sorry, it is additionally a prisoner's dilemma). If you know about Freddy, Freddy knows about you, and you both know about the multiplayer rule, then acausal trading logic exposes you to less risk than any single-box strategy chosen in isolation.
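A toy payoff table for this variant (the comment leaves the mixed outcomes unspecified, so the dollar amounts are invented purely to exhibit the structure): each player's payoff depends on the other's choice, which is why each cares about the other even without causal contact.

```python
# (my choice, Freddy's choice) -> my payoff; Freddy's table is symmetric.
# All amounts are invented for illustration.
PAYOFF = {
    ("one", "one"): 1_000_000,  # coordinated one-boxing: full reward
    ("one", "two"): 0,          # mismatch: my big box turns out empty
    ("two", "one"): 1_000,      # lone two-boxing: only the small sure box
    ("two", "two"): 10_000,     # the multiplayer rule: a moderate amount
}

def my_payoff(me: str, freddy: str) -> int:
    """Payoff to me given both players' box choices."""
    return PAYOFF[(me, freddy)]

# Deciding in isolation leaves you exposed to the mismatch rows; if each
# player can model the other well enough to coordinate on ("one", "one"),
# acausal-trade reasoning removes that risk without any causal contact.
assert my_payoff("one", "one") > my_payoff("two", "two") > my_payoff("two", "one")
```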

Dagon


I'm pretty skeptical of acausal trade, so I don't think I'm the best one to answer this. But my understanding is that decision theories which engage in it do so because they want to be in the universe/multiverse that contains the increased utility.

Thanks for the reply! I thought the point of the MWI multiverse is that the wavefunction evolves deterministically according to the Schrödinger equation, so if the utility function takes into account what happens in other universes, it will just output a single fixed constant no matter what the agent experiences, since the amplitude of the universal wavefunction at any given time is fixed. I think the only way for utility functions to make sense is for the agent to care only about its own branch of the universe and its own possible future observer-moments. Whatever "happens" in the other branches, along with their reality measure, is predetermined.

Shmi
Yes, the universe in that model is indeed deterministic, which means that your wants have no effect on the future but are an artifact of you being an embedded agent. Compatibilism says that you will still act as if you have needs and wants... probably because all your actions are predetermined in every universe, anyway. There is no way to steer the future from its predetermined path, but you are compelled to act as if there were. This includes acausal trade and everything else.
sisyphus
But can that really be called acausal "trade"? It's simply the fact that in an infinite multiverse there will be causally independent agents who converge on the same computation. If I randomly think "if I do X, there will exist an agent who does Y, and we both benefit", and somewhere in the multiverse there is an agent who does Y in return for me doing X, can I really call that "trade" rather than a coincidence that necessarily has to occur? And if my actions are determined by a utility function, and my utility function extends to other universes/branches, then that utility function simply will not work, since no matter what action the agent takes, the total amount of utility in the multiverse is conserved. For a utility function to assign different amounts of expected utility to the agent's actions, it necessarily has to focus on the single world the agent is in, instead of caring about other branches of the multiverse. Shouldn't perfectly rational beings therefore care only about their own branch of the multiverse, since that's the only way to have justified actions?
JBlack
There are certainly hypothetical scenarios in which acausal trade is rationally justified: cases in which the rational actors can know whether the other actors perform some acausally-determined actions, depending upon the outcomes of their decision theories, even if they can't observe it. Any case simple enough to discuss is obviously ridiculously contrived, but the mode of reasoning is not ruled out in principle. My expectation is that such a mode of reasoning is overwhelmingly ruled out by practical constraints.
sisyphus
I understand the logic, but in a deterministic multiverse the expected utility of any action is the same, since the amplitude of the universal wavefunction is fixed at any given time. No action has any effect on the total utility generated by the multiverse.
JBlack
True acausal trade can only really work in toy problems, since the number of possible utility functions for agents across possible worlds almost certainly grows much faster with agent complexity than the agents' abilities to reason about all those possible worlds. Whether the multiverse is deterministic or not isn't really relevant.

Even in the toy-problem case, I think of it as more similar to the execution of a will than to trade. We carry out the allocation of resources of an agent that would have valued those allocations, despite them no longer existing in our causal universe. There are some elements relevant to acausal trade in this real-world phenomenon: the decedent can't know or meaningfully affect what the executors actually do, except via a decision structure that applies to both but is external to both (the law in this example, some decision theory in more general acausal trade). The executors, now, can't affect what the decedent did in the past, or change the decedent's actual utility in any way.

The will mainly serves the role of a partial utility function, which in this example is communicated, but in pure acausal trade many such functions must be inferred.
sisyphus
I think the fact that the multiverse is deterministic does play a role: if an agent's utility function covers the entire multiverse and the agent cares about the other branches, its decision theory would suffer paralysis, since every action has the same expected utility - the total amount of utility available in the multiverse, which is predetermined. Utility functions seem to make sense only when constrained to one branch, with the agent treating its branch as the sole universe; only then will different actions have different expected utilities.
Slider
You are not entitled to the assumption that the other parts of the multiverse remain constant and uncorrelated with what you do. The multiverse could be superdeterministic. Failing to take your own causes into account gives you a worldview in which there are two underdetermined events in the multiverse: the big bang and what you are about to do. The two pictures - everything heeding local causation, and everything being connected - cannot both hold. Assuming the former does make life a whole lot more practical, though.
the gears to ascension
I ... don't think your word bindings are right here, but I'm not quite sure how to make a better pointer to contrast to.