1) Which analysis is correct?

Neither. The game ends in either (C, C) or (C, D), chosen at random, with the relative likelihood of the two outcomes depending on the payoff matrix.

Bob wants Alice to cooperate. Alice's only potential reason to cooperate is if Bob's cooperation is contingent on hers. Bob can easily make it contingent, by cooperating only if she cooperates. Bob will never cooperate when Alice defects, because he would gain nothing by doing so, either directly or by influencing her to cooperate. When Alice cooperates, Bob will cooperate, but not always: only with just enough probability that cooperating is marginally better for Alice than defecting. Alice, knowing this, will always cooperate.

This pattern of outcomes is unaffected by differences of scale between Alice's payoffs and Bob's. Even if Bob's move is to decide whether to give Alice twenty bucks, and Alice's move is to decide whether to give Bob a dollar or shoot him in the face, Alice always cooperates and Bob cooperates only (5+epsilon)% of the time: just enough for the expected value of his $20 to exceed the $1 that cooperating costs Alice.
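
Where the (5+epsilon)% comes from can be made concrete with a small sketch. The numbers below assume the only payoffs that matter are the twenty dollars Alice gains if Bob cooperates and the one dollar her own cooperation costs her; the code itself is just my illustration, not part of the original scenario.

```python
# Minimal sketch: Bob cooperates (when Alice does) with the smallest probability
# that still makes cooperation worth Alice's while.
# Assumed payoffs: Bob's cooperation is worth $20 to Alice; cooperating costs her $1.

value_to_alice_of_bobs_cooperation = 20.0
cost_to_alice_of_cooperating = 1.0

# Alice cooperates iff p * 20 - 1 > 0, i.e. p > 1/20 = 5%.
epsilon = 1e-6
p = cost_to_alice_of_cooperating / value_to_alice_of_bobs_cooperation + epsilon

expected_if_cooperate = p * value_to_alice_of_bobs_cooperation - cost_to_alice_of_cooperating
expected_if_defect = 0.0  # Bob never cooperates with a defector, so Alice gains nothing

print(f"Bob's cooperation probability: {p:.2%}")                  # ~5%
print(f"Alice's expected gain from cooperating: {expected_if_cooperate:+.6f}")
print(f"Alice's expected gain from defecting:   {expected_if_defect:+.6f}")
# The cooperate line is (barely) positive, so Alice always cooperates.
```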

It should be noted that "Alice and Bob are rational utility maximizers" is often conflated with "Alice and Bob know each other's thought processes well enough that they can predict each other's moves ahead of time", which is wrong. If we flip the scenario around so that Bob goes first, it ends in (D, D): Bob cannot predict Alice, so his cooperation cannot be made contingent on hers, and Alice, moving second, gains nothing by rewarding a move that has already been made.
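
For the flipped ordering, a tiny backward-induction sketch (using a standard prisoner's dilemma payoff table of my own choosing, since the original doesn't specify one) shows why the outcome is (D, D):

```python
# Flipped game: Bob moves first and cannot predict Alice; Alice observes his move.
# Assumed standard PD payoffs, (alice, bob), indexed by (alice_move, bob_move).
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def alice_best_response(bob_move):
    # Moving second, Alice gains nothing by rewarding a move already made,
    # so she simply maximizes her own payoff.
    return max("CD", key=lambda a: PAYOFFS[(a, bob_move)][0])

def bob_first_move():
    # Bob anticipates Alice's best response and picks his better option.
    return max("CD", key=lambda b: PAYOFFS[(alice_best_response(b), b)][1])

bob = bob_first_move()
alice = alice_best_response(bob)
print((alice, bob))  # -> ('D', 'D')
```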

2) Is this scenario just a theoretical curiosity that can never happen in real life because it is impossible to accurately predict the actions of any agent of any significant complexity, or is this a scenario that is relevant (or will become relevant) to practical decision making?

I'm inclined to believe acausal trade is possible, but not for human beings.

The scenario as described, with an all-knowing Alice somehow in the past of a clueless Bob yet unable to interact with him save for a Vizzini-from-The-Princess-Bride-style game of wits, and apparently no other relevant agents in the entire universe, is indeed a theoretical curiosity that can never happen in real life, as are most problems in game theory.

Where acausal trade becomes relevant to practical decision making is when the whole host of Alice-class superintelligences, none of which are in each other's light-cones at all due to distance or quantum branching, start acausally trading with EACH OTHER. This situation is rarely discussed though, because it quickly gets out of control in terms of complexity. Even if you try to start with a box containing just Alice, and she's playing some philosophied-up version of Deal or No Deal, if she's doing real acausal trade, she doesn't even have to play the game you want her to. Let's say she wants the moon painted green, and that's the prize in your dumb thought experiment.

MC: Our next contestant is Alice. So Alice, they say you're psychic. Tell me, what am I going to have for lunch tomorrow?

Alice: [stands there silently, imagining trillions of alternate situations in perfect molecular detail]

MC: Alice? Not a big talker, huh? Ok, well, there's three doors and three briefcases. You'll be given a bag with three colored marbles...

Alice: [one situation contains another superintelligent agent named Charlie, who's already on the moon, and is running a similar profusion of simulations]

MC: ...blah blah, then you'll be cloned 99 times and the 99 clones will wake up in BLUE rooms, now if you can guess what color room you're in...

Alice: [one of Charlie's simulations is of Alice, or at least someone enough like her that Alice's simulation of Charlie's simulation of some branches of her reasoning forms infinite loops. Thus entangled, Charlie's simulation mirrors Alice's reasoning to the extent that Alice can interact with Charlie through it, without interfering with the seamless physics of the situation]

MC: ...the THIRD box contains a pair of cat ears. Now, if you all pick the SAME box, you'll be...

Alice: [Charlie likewise can acausally interact with his simulated Alice through her sub-simulation of agents sufficiently similar to him. Alice wants the moon painted green, Charlie wants wigs to be eaten. Charlie agrees to paint the moon green if Alice eats the MC's wig.]

MC: ...I should note, breakfast in the infinite PURPLE hotel begins at 9:30, so— GAH! MY WIG!

Does this actually work? Charlie's existence is of negligible amplitude, but then, so is Alice's. Fill in a massive field of possibility space with these dimly-real, barely-acausally-trading agents, and you might have something pretty solid.

It's also not just smart people I care about, it's smart people who share my views.

What's wrong with stupid people who share your views? In a binary election, they could easily form more than half the electorate.

I'm honestly undecided this time around. My gut tells me the increased entertainment value of candidate A over candidate B outweighs their minor policy differences...