Suppose you wake up as a paperclip maximizer. Omega says "I calculated the millionth digit of pi, and it's odd. If it had been even, I would have made the universe capable of producing either 10^20 paperclips or 10^10 staples, and given control of it to a staples maximizer. But since it was odd, I made the universe capable of producing 10^10 paperclips or 10^20 staples, and gave you control." You double check Omega's pi computation and your internal calculator gives the same answer.
Then a staples maximizer comes to you and says, "You should give me control of the universe, because before you knew the millionth digit of pi, you would have wanted to pre-commit to a deal where each of us would give the other control of the universe, since that gives you 1/2 probability of 10^20 paperclips instead of 1/2 probability of 10^10 paperclips."
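To make the staples maximizer's argument concrete, here is a minimal sketch of the expected-value calculation it is appealing to, from the paperclip maximizer's perspective; the probabilities and payoffs are the ones from the setup above, and the function names are just for illustration.

```python
# Expected paperclips before the parity of the millionth digit of pi is known.

def expected_paperclips_with_deal():
    # Under the swap deal: in the branch where you're given control (your
    # universe can make only 10^10 paperclips), you hand it over and get 0;
    # in the other branch, the staples maximizer hands you a universe that
    # can make 10^20 paperclips.
    return 0.5 * 0 + 0.5 * 10**20

def expected_paperclips_without_deal():
    # Without the deal: you make 10^10 paperclips in the branch where you're
    # given control, and get nothing in the branch where the staples
    # maximizer keeps control.
    return 0.5 * 10**10 + 0.5 * 0

print(expected_paperclips_with_deal())     # 5e+19
print(expected_paperclips_without_deal())  # 5e+09 (prints as 5000000000.0)
```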
Is the staples maximizer right? If so, the general principle seems to be that we should act as if we had pre-committed to a deal we would have made in ignorance of logical facts we actually possess. But how far are we supposed to push this? What deal would you have made if you didn't know that the first digit of pi was odd, or if you didn't know that 1+1=2?
On the other hand, suppose the staples maximizer is wrong. Does that mean you also shouldn't have agreed to exchange control of the universe before you knew the millionth digit of pi?
To make this more relevant to real life, consider two humans negotiating over the goal system of an AI they're jointly building. They have a lot of ignorance about the relevant logical facts, like how smart/powerful the AI will turn out to be and how efficient it will be in implementing each of their goals. They could negotiate a solution now in the form of a weighted average of their utility functions, but the weights they choose now will likely turn out to be "wrong" in full view of the relevant logical facts (e.g., the actual shape of the utility-possibility frontier). Or they could program their utility functions into the AI separately, and let the AI determine the weights later using some formal bargaining solution when it has more knowledge about the relevant logical facts. Which is the right thing to do? Or should they follow the staples maximizer's reasoning and bargain under the pretense that they know even less than they actually do?
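As a toy illustration of the difference between the two options, here is a sketch with an invented utility-possibility frontier, using Nash bargaining to stand in for "some formal bargaining solution":

```python
# Toy illustration only: the frontier below is made up, and Nash bargaining
# (maximize the product of gains over a disagreement point) stands in for
# whatever formal bargaining solution the AI would use.

# Feasible outcomes as (utility to human A, utility to human B), i.e. the
# utility-possibility frontier that is only revealed once the relevant
# logical facts are known. Here the AI turned out far better at A's goals.
frontier = [(100.0, 0.0), (90.0, 2.0), (60.0, 5.0), (20.0, 8.0), (0.0, 10.0)]

def fixed_weights_outcome(w_a, w_b):
    # Option 1: weights negotiated now; the AI later maximizes the fixed
    # weighted average w_a * u_a + w_b * u_b over whatever frontier obtains.
    return max(frontier, key=lambda u: w_a * u[0] + w_b * u[1])

def nash_bargaining_outcome(disagreement=(0.0, 0.0)):
    # Option 2: the utility functions go in separately; the AI later
    # maximizes the Nash product over the realized frontier.
    d_a, d_b = disagreement
    return max(frontier, key=lambda u: (u[0] - d_a) * (u[1] - d_b))

print(fixed_weights_outcome(0.5, 0.5))  # (100.0, 0.0): B gets nothing
print(nash_bargaining_outcome())        # (60.0, 5.0)
```

With the weights fixed at 50/50 in advance, B ends up with nothing in this example because those weights turn out "wrong" for the frontier that actually obtains, while the bargaining solution is only applied once the frontier is known.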
Other Related Posts: "Counterfactual Mugging and Logical Uncertainty"; "If you don't know the name of the game, just tell me what I mean to you"
Edit: Nope, I changed my mind back.
You've succeeded in convincing me that I'm confused about this problem, and don't know how to make decisions in problems like this.
There are two types of players in this game: those that win the logical lottery and those that lose (here, the paperclip maximizer is a winner and the staples maximizer is a loser). A winner can either cooperate or defect against its loser opponent, with cooperation giving the winner 0 and the loser 10^20, and defection giving the winner 10^10 and the loser 0.
If a player doesn't know whether it's a loser or a winner, coordinating on cooperation with its opponent has higher expected utility than coordinating on defection, with mixed strategies presenting options for bargaining (the best coordinated strategy for a given player is to defect while the opponent cooperates). Thus, we have a full-fledged Prisoner's Dilemma.
On the other hand, obtaining information about your identity (loser or winner) transforms the problem into one where you seemingly have only the choice between 0 and 10^10 (if you're a winner), or always 0 with no ability to bargain for more (if you're a loser). Thus, it looks like knowledge of a fact turns the problem into one of lower expected utility, irrespective of what the fact turns out to be, and takes away the incentives that would've made the higher win (10^20) possible. This doesn't sound right; there should be a way of making the 10^20 accessible.
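To spell out those payoffs, here is a sketch using the numbers above; a "strategy" is just what you would do in the branch where you turn out to be the winner, since the loser has no move.

```python
# Payoffs as described above: the winner chooses to cooperate (hand over
# control) or defect (keep it); the loser has no move. Each player is the
# winner with probability 1/2 before the digit of pi / its identity is known.

WINNER_DEFECTS = 10**10        # winner keeps control: 10^10 of its own good
WINNER_COOPERATES = 0          # winner hands over control: 0 for itself
LOSER_VS_COOPERATOR = 10**20   # opponent handed you the 10^20 universe
LOSER_VS_DEFECTOR = 0

def ex_ante_utility(my_strategy, opponent_strategy):
    # Expected utility before learning whether you're the winner or the loser.
    as_winner = WINNER_DEFECTS if my_strategy == "defect" else WINNER_COOPERATES
    as_loser = (LOSER_VS_COOPERATOR if opponent_strategy == "cooperate"
                else LOSER_VS_DEFECTOR)
    return 0.5 * as_winner + 0.5 * as_loser

print(ex_ante_utility("cooperate", "cooperate"))  # 5e+19: coordinated cooperation
print(ex_ante_utility("defect", "defect"))        # 5e+09: coordinated defection
print(ex_ante_utility("defect", "cooperate"))     # ~5.0005e+19: best for you
print(ex_ante_utility("cooperate", "defect"))     # 0.0: worst for you

# After the identity is revealed, the winner's choice looks like 10^10
# (defect) versus 0 (cooperate), and the loser has no choice at all; the
# 5e+19 branch of the ex-ante calculation seems to disappear.
```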
It's as if an instance of the problem involves not two but four agents that should coordinate: a possible winner/loser pair, and a corresponding impossible pair. The impossible pair has the bizarre property that they know themselves to be impossible, like the self-defeating theory PA+NOT(Con(PA)) (except that we're talking about agent-provability rather than provability), which doesn't make them unable to reason. These four agents could form a coordinated decision, where the coordinated decision problem is obtained by throwing away the knowledge that isn't common to all four agents, in particular the digit of pi and the winner/loser identity. After the decision is made, each plugs its particular information back in.