Eliezer's "arbitrary" strategy has the nice property that it gives both players more expected utility than the Nash equilibrium. Of course there are other strategies with this property, and indeed multiple strategies that are not themselves dominated in this way. It isn't clear how ideally rational players would select among these strategies, or which one they would choose, but they should choose one of them.
Why not "P1: C, P2: Y", which maximizes the sum of the two utilities, and is the optimal precommitment under the Rawlsian veil-of-ignorance prior?
If we multiply player 2's utility function by 100, that shouldn't change anything, because an affine transformation of a utility function represents the same preferences. But then "P1: B, P2: Y" would maximize the sum. Adding values from different utility functions is a meaningless operation.
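A quick sketch of this point, using made-up payoff pairs (these are illustrative only, not the actual payoffs of the game under discussion): rescaling one player's utility function leaves that player's preferences untouched but changes which outcome maximizes the utility sum.

```python
# Hypothetical payoff pairs (u1, u2) -- illustrative only, not the
# actual payoffs of the game under discussion.
outcomes = {
    "P1: C, P2: Y": (6, 1),
    "P1: B, P2: Y": (1, 2),
    "P1: A":        (3, 0),
}

def sum_maximizer(outcomes, scale2=1):
    # Rescale player 2's utility by `scale2` (a positive affine
    # transformation, which preserves player 2's preferences) and pick
    # the outcome with the largest utility sum.
    return max(outcomes, key=lambda o: outcomes[o][0] + scale2 * outcomes[o][1])

print(sum_maximizer(outcomes))              # "P1: C, P2: Y" wins the sum
print(sum_maximizer(outcomes, scale2=100))  # now "P1: B, P2: Y" wins
```

Neither player's ranking over outcomes changed, yet the "sum of utilities" criterion picks a different winner, which is the sense in which the sum is meaningless.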
The reason player 1 would choose B is not that B directly has a higher payout, but that including B in a mixed strategy gives player 2 an incentive to include Y in its own mixed strategy, increasing the expected payoff of C for player 1. The fact that A dominates B is irrelevant. The fact that A has better expected utility than the subgame with B and C indicates that player 1 not choosing A is somehow irrational, but that doesn't give player 2 a useful way to exploit this irrationality. (And in order for this to make sense for player 1, player 1 would need a way to counter-exploit player 2's exploit, and player 2 would need to try its exploit despite this possibility.)
The definition you linked to doesn't say anything about entering the subgame not giving the players information, so no, I would not agree with that.
I would agree that if it gave player 2 useful information, that should influence the analysis of the subgame.
(I also don't care very much whether we call this object within the game, namely how the strategies play out given that player 1 doesn't choose A, a "subgame". I did not intend that technical definition when I used the term, but when you objected I checked carefully and it did seem to match. I thought that maybe there was a good motivation for the definition, so a failure to fit could have indicated a problem with my argument.)
I also disagree that player 1 not picking A provides useful information to player 2.
I'm sorry but "subgame" has a very specific definition in game theory which you are not being consistent with.
I just explained in detail how the subgame I described meets the definition you linked to. If you are going to disagree, you should be pointing to some aspect of the definition I am not meeting.
Also, intuitively when you are in a subgame you can ignore everything outside of the subgame, playing as if it didn't exist. But when Player 2 moves he can't ignore A because the fact that Player 1 could have picked A but did not provides insight into whether Player 1 picked B or C.
If it is somehow the case that giving player 2 info about player 1 is advantageous for player 1, then player 2 should just ignore the info, and everything still plays out as in my analysis. If it is advantageous for player 2, then it just strengthens the case that player 1 should choose A.
I am a game theorist.
I still think you are making a mistake, and should pay more attention to the object level discussion.
To see that it is indeed a subgame:
Represent the whole game with a tree whose root node represents player 1 choosing whether to play A (leading to a leaf node), or to enter the subgame at node S. Node S is the root of the subgame, representing player 1's choice to play B or C, leading to nodes representing player 2's choice to play X or Y in those respective cases, each leading to leaf nodes.
Node S is the only node in its information set. The subgame contains all the descendants of S. The subgame contains all nodes in the same information set as any node in the subgame. It meets the criteria.
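The check above can be made mechanical. Here is a minimal sketch of the tree just described, with made-up node labels; the assertions verify the two criteria from the definition: S is alone in its information set, and no information set is split across the subgame boundary (player 2's two decision nodes share an information set because player 2 does not observe whether player 1 picked B or C).

```python
# parent -> children; nodes absent from the dict are leaves
tree = {
    "root": ["A_leaf", "S"],   # player 1: play A, or enter the subgame at S
    "S":    ["B", "C"],        # player 1: B or C
    "B":    ["BX", "BY"],      # player 2: X or Y (after B)
    "C":    ["CX", "CY"],      # player 2: X or Y (after C)
}

# Information sets for decision nodes: S is alone in its set;
# player 2 cannot distinguish having reached B from having reached C.
info_set = {"root": 0, "S": 1, "B": 2, "C": 2}

def descendants(node):
    """Return the node together with all its descendants."""
    out = {node}
    for child in tree.get(node, []):
        out |= descendants(child)
    return out

sub = descendants("S")  # candidate subgame: S and everything below it

# Criterion 1: S is the only node in its information set.
assert [n for n in info_set if info_set[n] == info_set["S"]] == ["S"]

# Criterion 2: every information set lies entirely inside or entirely
# outside the subgame; none is split by the boundary.
for iset in set(info_set.values()):
    members = {n for n in info_set if info_set[n] == iset}
    assert members <= sub or not (members & sub)
```

Both assertions pass for this tree, which is the sense in which the object rooted at S meets the linked definition.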
There is no uncertainty that screws up my argument. The whole point of talking about the subgame was to stop thinking about the possibility that player 1 chose A, because that had been observed not to happen. (Of course, I also argue that player 2 should be interested in logically causing player 1 not to have chosen A, but that gets beyond classical game theory.)
Classical game theory says that player 1 should choose A for expected utility 3, as this is better than the subgame of choosing between B and C, where the best player 1 can do against a classically rational player 2 is to play B with probability 1/3 and C with probability 2/3 (and player 2 plays X with probability 2/3 and Y with probability 1/3), for an expected value of 2.
But there are Pareto improvements available. Player 1's classically optimal strategy gives player 1 expected utility 3 and player 2 expected utility 0. But suppose instead player 1 plays C, and player 2 plays X with probability 1/3 and Y with probability 2/3. Then the expected utility for player 1 is 4 and for player 2 it is 1/3. Of course, a classically rational player 2 would want to play X with greater probability, to increase its own expected utility at the expense of player 1. It would want to increase the probability beyond 1/2, which is the break-even point for player 1, but then player 1 would rather just play A.
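The arithmetic here can be checked. Player 1's payoffs at (C, X) and (C, Y) are not stated outright, but two of the expected-utility claims above pin them down as a 2x2 linear system; a sketch (solving exactly with fractions, and assuming only the numbers stated in the discussion):

```python
from fractions import Fraction as F

# Two claims from the discussion about player 1's expected utility
# from playing C, as P2 varies the probability p of playing X:
#   p = 1/3:  (1/3)*cx + (2/3)*cy = 4   (the proposed Pareto improvement)
#   p = 1/2:  (1/2)*cx + (1/2)*cy = 3   (the break-even point with A)
# Solve for cx = u1(C,X) and cy = u1(C,Y) by Cramer's rule.
def solve_2x2(a11, a12, b1, a21, a22, b2):
    det = a11 * a22 - a12 * a21
    return (b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det

cx, cy = solve_2x2(F(1, 3), F(2, 3), F(4), F(1, 2), F(1, 2), F(3))
print(cx, cy)  # player 1's implied payoffs at (C, X) and (C, Y)

def eu_C(p_x):
    """Player 1's expected utility from C when P2 plays X with prob p_x."""
    return p_x * cx + (1 - p_x) * cy

assert eu_C(F(1, 3)) == 4  # the Pareto-improving mix
assert eu_C(F(1, 2)) == 3  # break-even with A's sure payoff of 3
assert eu_C(F(2, 3)) == 2  # P2's equilibrium mix: the subgame value of 2
```

The implied payoffs also make the last assertion come out right: against player 2's equilibrium mix (X with probability 2/3), playing C yields exactly the subgame value of 2 quoted earlier, consistent with player 1's indifference in the mixed equilibrium. Player 2's own payoffs in the C column are underdetermined by the stated numbers, so they are left out.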
So, what would two TDT/UDT players do in this game? Would they manage to find a point on the Pareto frontier, and if so, which point?
If you have trouble confronting people, you make a poor admin.
Can we please act like we actually know something about practical instrumental rationality, given how human brains work, and not punish people for openly noticing their weaknesses?
You could have more constructively said something like "Thank you for taking on these responsibilities even though it sometimes makes you uncomfortable. I wonder if anyone else who is more comfortable with that would be willing to help out."
"Rationality" seems to give different answer to the same problem posed with different affine transformations of the players' utility functions.