V_V comments on 2012 Less Wrong Census Survey: Call For Critiques/Questions - Less Wrong

Post author: Yvain 19 October 2012 01:12AM


Comment author: V_V 20 October 2012 02:39:26PM *  2 points [-]

Alice and Bob are playing a variation of a one-shot Prisoner's dilemma. In this version of the game, instead of choosing their actions simultaneously, Alice moves first, and then Bob moves after he knows Alice's move. However, Alice knows Bob's thought processes well enough that she can predict his move ahead of time. Both Alice and Bob are rational utility maximizers.

There are two possible ways Alice and Bob can reason:

a) Alice predicts that Bob, being a utility maximizer, will always play Defect no matter what she plays. Hence she also plays Defect in order to minimize her loss. Bob sees that Alice played Defect and plays Defect, since he can gain nothing by playing Cooperate. This results in the uncooperative outcome (D, D).

b) Alice reasons that the asymmetric outcomes (D, C) and (C, D) are impossible. (D, C) is impossible because, as stated above, once Alice has played Defect, Bob has nothing to gain by playing Cooperate. (C, D) is impossible because Alice can predict Bob's move, hence she will never play Cooperate if she predicts that Bob will play Defect. Therefore, only the symmetric outcomes (C, C) and (D, D) remain. Since Alice prefers (C, C) to (D, D), she plays Cooperate. Bob, at this point, is bound to also play Cooperate, because if he played Defect then Alice's prediction would be falsified, and this is inconsistent with the assumption that Alice can predict Bob's move. Therefore, the cooperative outcome (C, C), also known as acausal trade, results.
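
To make the difference between the two analyses concrete, here is a minimal sketch (my own illustration, not part of the original comment; the payoff numbers are the usual illustrative PD values, not anything specified above). Alice predicts Bob by simulating his response policy and then best-responds to the prediction; which analysis comes out "correct" reduces to which policy Bob actually implements.

```python
# Minimal sketch (illustration only): Alice predicts Bob by simulating his
# response policy, then plays whichever move gives her the better predicted
# outcome.
PAYOFFS = {  # (alice_move, bob_move) -> (alice_payoff, bob_payoff)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def alice(bob_policy):
    """Alice's move, given that she can predict Bob's response to each of her moves."""
    return max("CD", key=lambda move: PAYOFFS[(move, bob_policy(move))][0])

# Analysis (a): Bob defects unconditionally, so Alice defects too.
defect_bob = lambda alice_move: "D"

# Analysis (b): Bob's disposition is to mirror Alice's move, so Alice cooperates.
mirror_bob = lambda alice_move: alice_move

for label, bob in [("(a) unconditional defector", defect_bob),
                   ("(b) conditional cooperator", mirror_bob)]:
    a = alice(bob)
    print(label, "->", (a, bob(a)))   # (a) -> ('D', 'D');  (b) -> ('C', 'C')
```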

Questions:

1) Which analysis is correct?

2) Is this scenario just a theoretical curiosity that can never happen in real life because it is impossible to accurately predict the actions of any agent of any significant complexity, or is this a scenario that is relevant (or will become relevant) to practical decision making?

Comment author: ArisKatsaris 21 October 2012 01:55:16AM *  3 points [-]

Scenario (b) doesn't explain/analyse the situation the way I'd explain/analyse it. If Bob is able to precommit himself to play C if and only if Alice plays C, then Alice's mind-reading picks up Bob's precommitment, Alice plays C to ensure that Bob will also play C (otherwise Alice would lose out), Bob's precommitment is followed through, and the (C, C) reality becomes true.

If someone can plausibly precommit themselves -- via human concepts like honor, duty, or obligation, or via computer concepts like rewriting one's own source code -- and if they can signal this convincingly, then mutual cooperation becomes a possibility.

"Is this scenario just a theoretical curiosity that can never happen in real life because it is impossible to accurately predict the actions of any agent of any signficant complexity"

It's a scenario that is already a reality to some limited extent, though we use concepts like duty, honor, etc. It doesn't always work, mainly because we can't effectively signal the solidity of our precommitments, nor are we indeed always of such iron will that our precommitments are actually solid enough.

EDIT TO ADD: And isn't this concept pretty much what the whole Mutually Assured Destruction doctrine was built on?

Either way, this question is obviously bad for a survey -- it has to be answered with a small essay, not with a multiple-choice answer.

Comment author: Kindly 20 October 2012 08:21:41PM 2 points [-]

Analysis (b) can't possibly be right, because Alice's actions ought to depend on the actions of Bob. No amount of logical perfection can force Bob to play Cooperate, so Alice is effectively reasoning herself into a hole.

Analysis (a) is correct if, in fact, Bob is the sort of person that will always play Defect.

In fact, it's pretty clear what the optimal algorithm for Alice is: she should cooperate iff she predicts that Bob will cooperate in response. (Well, she should also defect if she predicts that Bob will cooperate in response to a defection, but that's stupid.)
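
A literal, hypothetical rendering of that rule (Bob's policy modelled, as in the sketch further up, as a function from Alice's move to his own):

```python
# Hypothetical sketch of the rule stated above: Alice cooperates iff Bob's
# predicted response to cooperation is cooperation -- except that if Bob would
# cooperate even after a defection, she just takes the free (D, C) payout.
def alice_rule(bob_policy):
    if bob_policy("D") == "C":   # Bob cooperates unconditionally
        return "D"
    return "C" if bob_policy("C") == "C" else "D"
```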

Bob is the only one whose actions could be expressed as an acausal trade. He wants Alice to predict that he will cooperate, because otherwise Alice will defect and they both end up with the (D, D) payouts. He can obtain this by being the sort of person who cooperates in response to cooperation; but this comes at the cost of missing out on his (C, D) payout. This is still worthwhile if Bob tends to play lots of one-shot prisoner's dilemmas with people who can see the future.

Comment author: NancyLebovitz 20 October 2012 03:03:44PM 2 points [-]

Did you mean to put this in the survey thread?

Comment author: V_V 20 October 2012 03:21:58PM *  6 points [-]

IMHO, it might be more interesting to assess the community opinion on topics such as this rather than ask things like how many people you shagged last month.

Comment author: Kindly 21 October 2012 02:37:02AM 2 points [-]

You can always write a discussion post and add a poll in the comments. I think that the survey should be limited to demographics and relatively simple questions.

Comment author: [deleted] 21 October 2012 10:15:58AM 3 points [-]

Now that I think about it, I would like the survey to ask how many people you shagged last month.

Comment author: V_V 21 October 2012 01:01:54PM 1 point [-]

Why?

Comment author: [deleted] 21 October 2012 02:01:14PM 1 point [-]

I'm curious both about the numbers (are people here more like Feynman or more like Tesla?) and whether it correlates with answers to other questions.

Comment author: Vladimir_Nesov 20 October 2012 04:27:42PM *  1 point [-]

In this version of the game, instead of choosing their actions simultaneously, Alice moves first, and then Bob moves after he knows Alice's move. However, Alice knows Bob's thought processes well enough that she can predict his move ahead of time.

This might lead to a contradiction: since Bob's action depends on Alice's action, and Alice is not always capable of predicting her own action, especially while deciding what it should be, it might be impossible for Alice to predict Bob's action, even if the dependence of Bob's action on Alice's action is simple, i.e. if Alice understands Bob's algorithm very well.

Comment author: V_V 20 October 2012 08:21:29PM 2 points [-]

Ok. Alice can predict Bob's move given Alice's move.

Comment author: faul_sname 21 October 2012 03:18:11AM 0 points [-]

Ok, but if Alice can decide what to predict Bob's move will be given her own move, that means Alice can control Bob's move.

Comment author: V_V 21 October 2012 01:24:26PM 2 points [-]

To some extent, yes. But it depends on what function Bob implements. If Bob always plays Defect, Alice has no way of making him play Cooperate (likewise, if he always plays Cooperate, Alice can't make him play Defect).

Comment author: faul_sname 21 October 2012 08:30:11PM 0 points [-]

Yes, but didn't we already establish that Bob would always defect, because he has nothing to gain by cooperating in either case, so he will defect no matter what Alice chooses? Or is Bob also a TDT-style agent?

Comment author: V_V 21 October 2012 10:22:34PM 1 point [-]

Bob is 'rational'. Interpret this according to the decision theory of your choice.

Comment author: wedrifid 21 October 2012 04:44:33AM *  1 point [-]

This might lead to a contradiction: since Bob's action depends on Alice's action, and Alice is not always capable of predicting her own action, especially while deciding what it should be, it might be impossible for Alice to predict Bob's action, even if the dependence of Bob's action on Alice's action is simple, i.e. if Alice understands Bob's algorithm very well.

The scenarios which result in a contradiction are not compatible with the verbal description of the problem. As such, we must conclude that the scenario is one of those which contain instances of the pair "Alice and Bob" for which it is possible for Alice to predict Bob's moves.

If a problem specified "Alice can predict Bob", and there are possible instances of those two for which prediction is possible, then an answer that happened to conclude "it is impossible for Alice to predict Bob's action" would just be wrong, because it responds to a problem incompatible with the specified one.

Comment author: wedrifid 21 October 2012 04:34:25AM 0 points [-]

Alice and Bob are playing a variation of a one-shot Prisoner's dilemma. In this version of the game, instead of choosing their actions simultaneously, Alice moves first, and then Bob moves after he knows Alice's move. However, Alice knows Bob's thought processes well enough that she can predict his move ahead of time.

We need information about what Bob believes about Alice's thought processes. I am going to answer as if you had appended "and Bob knows that Alice can do this." to the previous sentence so that I can give a useful answer. Without being given such information the problem would just be about allocating priors to Bob that represent his beliefs about Alice's thought process.

Both Alice and Bob are rational utility maximizers.

Be more specific. People embed various assumptions about what is 'rational' behind that phrase. If you mean "Alice and Bob are both Causal Decision Theorists attempting to maximise utility" then the answer is (D, D). If Bob acts 'rationally' in as much as he operates according to a reflexive decision theory of some sort (i.e. TDT or UDT) then the outcome is (C, C), regardless of which of the plausible decision theories Alice is assumed to be implementing (CDT, TDT, UDT, EDT).
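
A toy sketch of that contrast (my illustration only, reusing the usual PD payoff values; the two Bobs below are crude stand-ins for CDT and for a reflexive decision theory, not faithful implementations). The "CDT-ish" Bob best-responds to Alice's already-made move, which means the constant-Defect policy; the "reflexive" Bob evaluates each possible response policy by the outcome it induces given that Alice will predict it, and adopts the one he prefers.

```python
# Toy contrast between a 'CDT-ish' Bob and a 'reflexive' (TDT/UDT-flavoured)
# Bob.  Illustration only; payoff numbers are the usual PD values.
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def alice(bob_policy):
    # Alice predicts Bob's response policy and best-responds to it.
    return max("CD", key=lambda move: PAYOFFS[(move, bob_policy(move))][0])

def outcome(bob_policy):
    a = alice(bob_policy)
    return (a, bob_policy(a))

# CDT-ish Bob: once Alice has moved, Defect dominates, so he always defects.
cdt_bob = lambda alice_move: "D"

# Reflexive Bob: he picks whichever response policy leads to the outcome he
# prefers, taking into account that Alice will predict the policy he picks.
candidates = [lambda a: "D",                        # always defect
              lambda a: "C",                        # always cooperate
              lambda a: a,                          # mirror Alice
              lambda a: "D" if a == "C" else "C"]   # anti-mirror
reflexive_bob = max(candidates, key=lambda p: PAYOFFS[outcome(p)][1])

print("CDT-ish Bob:  ", outcome(cdt_bob))        # ('D', 'D')
print("reflexive Bob:", outcome(reflexive_bob))  # ('C', 'C')
```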

Comment author: V_V 21 October 2012 01:34:31PM 1 point [-]

if you had appended "and Bob knows that Alice can do this." to the previous sentence so that I can give a useful answer.

Yes.

Be more specific. People embed various assumptions about what is 'rational' behind that phrase.

The underspecification was intentional, so that people may answer according to their preferred decision theory.

Comment author: wedrifid 21 October 2012 01:44:53PM 0 points [-]

The underspecification was intentional, so that people may answer according to their preferred decision theory.

Ahh, survey question. In that case may I suggest leaving the 'rational' there but removing the 'utility maximisers'. I think that would get you the most reliable information of the kind you are trying to elicit from the responses. This is just because there are some whose "preferred decision theory" and specific use of terminology are such that they would say the 'rational' thing for Bob to do is to cooperate, but that the 'utility maximising' thing would be to defect. I expect you are more interested in the decision output than the ontology.

Comment author: HistoricalLing 21 October 2012 03:50:05AM *  0 points [-]

1) Which analysis is correct?

Neither. The game ends in either (C, C) or (C, D) at random, the relative likelihood of the outcomes depending on the payoff matrix.

Bob wants Alice to cooperate. Alice's only potential reason to cooperate is if Bob's cooperation is contingent on hers. Bob can easily make his cooperation contingent on hers, by only cooperating if she cooperates. Bob will never cooperate when Alice defects, because he would gain nothing by doing so, either directly or by influencing her to cooperate. Bob will cooperate when Alice cooperates, but not always, only to the extent that it's marginally better for Alice to cooperate than to defect. Alice, knowing this, will thus always cooperate.

This pattern of outcomes is unaffected by differences of scale between Alice's payoffs and Bob's. Even if Bob's move is to decide whether to give Alice twenty bucks, and Alice's move is to decide whether to give Bob a dollar or shoot him in the face, Alice always cooperates and Bob only cooperates (5+epsilon)% of the time.
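
A sketch of the arithmetic behind that figure, under the assumptions of the example (the twenty dollars is Alice's only gain from Bob's cooperation, the dollar her only cost of cooperating, shooting him has no further payoff consequence for her, and Bob never cooperates after a defection): if Bob cooperates with probability p whenever Alice cooperates, then Alice prefers cooperating exactly when

$$ 20p - 1 > 0 \quad\Longleftrightarrow\quad p > \tfrac{1}{20} = 5\%, $$

so Bob cooperates with probability 5% plus epsilon, just enough to keep cooperating marginally better for Alice than defecting.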

It should be noted that "Alice and Bob are rational utility maximizers" is often conflated with "Alice and Bob know each other's thought processes well enough that they can predict each other's moves ahead of time", which is wrong. If we flip the scenario around so that Bob goes first, it ends in (D, D).

2) Is this scenario just a theoretical curiosity that can never happen in real life because it is impossible to accurately predict the actions of any agent of any significant complexity, or is this a scenario that is relevant (or will become relevant) to practical decision making?

I'm inclined to believe acausal trade is possible, but not for human beings.

The scenario as described, with an all-knowing Alice somehow in the past of a clueless Bob yet unable to interact with him save for a Vizzini-from-The-Princess-Bride-style game of wits, and apparently no other relevant agents in the entire universe, is indeed a theoretical curiosity that can never happen in real life, as are most problems in game theory.

Where acausal trade becomes relevant to practical decision making is when the whole host of Alice-class superintelligences, none of which are in each other's light-cones at all due to distance or quantum branching, start acausally trading with EACH OTHER. This situation is rarely discussed though, because it quickly gets out of control in terms of complexity. Even if you try to start with a box containing just Alice, and she's playing some philosophied-up version of Deal or No Deal, if she's doing real acausal trade, she doesn't even have to play the game you want her to. Let's say she wants the moon painted green, and that's the prize in your dumb thought experiment.

MC: Our next contestant is Alice. So Alice, they say you're psychic. Tell me, what am I going to have for lunch tomorrow?

Alice: [stands there silently, imagining trillions of alternate situations in perfect molecular detail]

MC: Alice? Not a big talker, huh? Ok, well, there's three doors and three briefcases. You'll be given a bag with three colored marbles...

Alice: [one situation contains another superintelligent agent named Charlie, who's already on the moon, and is running a similar profusion of simulations]

MC: ...blah blah, then you'll be cloned 99 times and the 99 clones will wake up in BLUE rooms, now if you can guess what color room you're in...

Alice: [one of Charlie's simulations is of Alice, or at least someone enough like her that Alice's simulation of Charlie's simulation of some branches of her reasoning forms infinite loops. Thus entangled, Charlie's simulation mirrors Alice's reasoning to the extent that Alice can interact with Charlie through it, without interfering with the seamless physics of the situation]

MC: ...the THIRD box contains a pair of cat ears. Now, if you all pick the SAME box, you'll be...

Alice: [Charlie likewise can acausally interact with his simulated Alice through her sub-simulation of agents sufficiently similar to him. Alice wants the moon painted green, Charlie wants wigs to be eaten. Charlie agrees to paint the moon green if Alice eats the MC's wig.]

MC: ...I should note, breakfast in the infinite PURPLE hotel begins at 9:30, so— GAH! MY WIG!

Does this actually work? Charlie's existence is of negligible amplitude, but then, so is Alice's. Fill in a massive field of possibility space with these dimly-real, barely-acausally-trading agents, and you might have something pretty solid.