Suppose I see a 20% chance that 20% of the multiverse is covered by 100 deterministic universes such that each contains a civilization with a hypercomputer capable of simulating the rest of the multiverse.
Then they could see what everyone does. I, having mastered this cosmos, could simulate them and see what I can prove about how they will use their hypercomputer. If they have something to trade with those like me, they might deliberately write an AI to control their hypercomputer, in order to make it easy for those like me to prove things.

Suppose one alien race evolved to value diplomatic rituals and would like other universes to instantiate an ambassador of their race. Suppose "the portion of the multiverse that instantiates infinite happy humans" accounts for 20% of my utility function. Then I could construct an embassy for them so that they will spin off an "infinite happy humans" thread, increasing my utility by ~1/100 × 0.2 × 0.2 × 0.2.
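To make that arithmetic explicit, here is a sketch of the expected-utility gain, under the assumption (mine, for illustration) that the 20% multiverse share is split evenly across the 100 universes and exactly one of them hosts this alien race:

$$\Delta U \approx \underbrace{0.2}_{\Pr(\text{scenario})} \times \underbrace{0.2}_{\text{multiverse share}} \times \underbrace{1/100}_{\text{one universe}} \times \underbrace{0.2}_{\text{utility weight}} = 8 \times 10^{-5}$$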
Hey, have you noticed how quantum computers appear to let us make an exponential number of timelines interact? They do so in ways such that certain agents across the multiverse might find that, according to their evolved anthropic intuitions, universes with our lawset that construct the very largest quantum computers dominate their utility function. Convenient.
> If they have something to trade with those like me, they might deliberately write an AI to control their hypercomputer, in order to make it easy for those like me to prove things.
But this takes us back to where we started. What could we offer them that they can't make themselves, and what can they offer us that we can't do ourselves?
Dropping this paper here as what I know to be the canonical text on this subject.
https://longtermrisk.org/files/Multiverse-wide-Cooperation-via-Correlated-Decision-Making.pdf
I understand the basic concept of counterfactual trade, and I can understand some examples where it can make sense to trade acausally between two different histories of our familiar world. For example, in Embedded Agency, Scott G and Abram discuss a game where Alice receives a card, either a king or an ace, and can either reveal it to or hide it from Bob, who will guess a probability p that the card is an ace. Bob's incentives are such that he wants to assign maximum probability to the actual outcome, while Alice receives 100*(1 - p^2) points no matter what her card is, so she wants to do her best to make Bob think she has a king.

In this example, one might naively think that if Alice has a king, she should reveal her card, so that Bob will guess p = 0, earning Alice the maximum reward. However, under this policy, when Alice has an ace, Bob would be able to infer from Alice hiding the card that it is an ace, so he would guess p = 1, and Alice would receive 0 points.

If Alice instead follows the policy of never revealing her card, Bob is forced to always guess p = 0.5, earning Alice 100*(1 - 0.5^2) = 75 points every time. Since Alice prefers a guaranteed 75 points over a 50/50 shot at {0, 100} (an expected 50 points), she should actually hide her card even when she has a king. This is an example of counterfactual trade that makes sense to me.
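For concreteness, here is a minimal Python sketch that checks the payoff arithmetic for the two policies. The `alice_payoff` helper and the 50/50 deal of kings and aces are my assumptions for illustration, not code from Embedded Agency:

```python
from fractions import Fraction

def alice_payoff(p: Fraction) -> Fraction:
    """Alice's score when Bob announces probability p that the card is an ace."""
    return 100 * (1 - p**2)

# Policy 1: reveal a king, hide an ace.
# Bob sees a revealed king and announces p = 0; a hidden card lets him
# infer an ace, so he announces p = 1. Cards are dealt 50/50 by assumption.
reveal_king = (Fraction(1, 2) * alice_payoff(Fraction(0))
               + Fraction(1, 2) * alice_payoff(Fraction(1)))

# Policy 2: always hide.
# Hiding carries no information, so Bob announces his prior p = 1/2 either way.
always_hide = alice_payoff(Fraction(1, 2))

print(f"reveal-king policy: {reveal_king} expected points")    # 50
print(f"always-hide policy: {always_hide} guaranteed points")  # 75
```

Under these assumptions the guaranteed 75 beats the expected 50, matching the argument above.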
Similar logic applies to glomarization, and Newcomb's Dilemma also has an acausal-trade flavor to it, similar to some dynamics that appear in ordinary human interactions.
However, my question is:
Do you know of a concrete example where acausal trade with a universe completely different from ours, one with different laws of physics, actually makes sense?

I struggle to think of such an example: one where it would be harder for us to do something ourselves than for an acausally correlated alien species to do it, and where there would exist such a species that would want to reward us by doing it, in return for something more valuable that we can provide. Are there any such examples, or is it a valid conjecture that such acausal trades would never make sense?