That's fair enough; the "logical prior" is admittedly a big assumption, although it's hard to justify anything other than a 50/50 split between the two possibilities.
However, the LCM (logical counterfactual mugging) only takes place in one of the two possible worlds (the real one); the other never happens. Either way you're calculating the digit of pi in this world; it's just that in one of the two possibilities (which, as far as you know, are equally likely) you are the subject of logical counterfactual surgery by Omega. Assuming that's the case, surely calculating the digit of pi isn't going to help?
From the point of view of your decision-making algorithm, which doesn't know which of the two inputs it's actually receiving (counterfactual vs. real), it outputs "GIVE". Moreover, it chooses "GIVE" not merely for counterfactual utilons, but for real ones.
Of course, we've assumed that the logical counterfactual surgery Omega does is a coherent concept to begin with. The whole point of Omega is that Omega gets the benefit of the doubt, but in this situation it's still worth asking whether the concept makes sense.
In particular, maybe it's possible to make a logical counterfactual surgery detector that is robust even against Omega. If you can do that, then you win regardless of which way the logical coin came up. I don't think trying to calculate the relevant digit of pi is good enough, though.
Here's an idea for a "logical counterfactual surgery detector":
Run a sandboxed version of your proof engine that attempts to maximally entangle that digit of pi with other logical facts. For example, it might prove that "if the 10000th decimal digit of pi is 8, then ⊥".
If you detect that the sandboxed proof engine undergoes a logical explosion, then GIVE. Otherwise, REFUSE.
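Here's a toy sketch of that detector interface, with one big caveat: a real detector would need a full proof engine entangling the digit with many other logical facts, whereas in this degenerate version the only "entangled fact" is the digit's direct computation (via Machin's formula in integer arithmetic) — exactly the approach the parent comment doubts is strong enough against Omega. All names here are my own, and it only illustrates the GIVE/REFUSE decision rule:

```python
# Toy "logical counterfactual surgery detector".
# The sandboxed "proof engine" is just a recomputation of the digit; a
# believed digit that contradicts it plays the role of proving
# "if the digit is d, then ⊥" (a logical explosion).

def arccot(x, unity):
    """arccot(x) scaled by `unity`, via its Taylor series in integer arithmetic."""
    total = power = unity // x
    n, sign = 3, -1
    while True:
        power //= x * x
        term = power // n
        if term == 0:
            break
        total += sign * term
        sign, n = -sign, n + 2
    return total

def pi_digits(n):
    """Return pi as the digit string '3141...' with n decimal digits (Machin's formula)."""
    unity = 10 ** (n + 10)                                  # 10 guard digits
    pi = 4 * (4 * arccot(5, unity) - arccot(239, unity))    # pi = 16 arctan(1/5) - 4 arctan(1/239)
    return str(pi // 10 ** 10)

def detect_surgery(position, believed_digit):
    """GIVE iff believing `believed_digit` explodes against the sandbox computation."""
    sandbox_digit = int(pi_digits(position + 5)[position])  # index 0 is the leading '3'
    return "GIVE" if believed_digit != sandbox_digit else "REFUSE"
```

For the actual problem you would call `detect_surgery(10000, ...)`, which this implementation can handle, just more slowly.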
This post was inspired by Benja's SUDT post. I'm going to describe another simplified model of UDT which is equivalent to Benja's proposal, and is based on standard game theory concepts as described in this Wikipedia article.
First, let's define what a "single-player extensive-form game with chance moves and imperfect information" is. It's a finite rooted tree whose leaves are labeled with utilities, and whose internal nodes are either chance nodes (each with a fixed probability distribution over its children) or decision nodes (each with a set of available actions leading to its children). The decision nodes are partitioned into information sets; the player cannot tell apart nodes within the same information set, so a strategy must assign the same action (or distribution over actions) to all of them. The player's goal is to choose the strategy that maximizes expected utility.
Now let's try using that to solve some UDT problems:
Absent-Minded Driver is the simplest case, since it's already discussed in the literature as a game of the above form. It's strange that not everyone agrees that the strategy maximizing expected utility in this game is indeed the best one, but let's skip that and move on.
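To make the Absent-Minded Driver concrete, here is the calculation with what I believe are the standard Piccione–Rubinstein payoffs (exit at the first intersection: 0; exit at the second: 4; never exit: 1). Both decision nodes sit in one information set, so a behavioral strategy is a single probability p of continuing:

```python
# Absent-Minded Driver, standard payoffs assumed:
#   exit first intersection -> 0, exit second -> 4, continue past both -> 1.
# One information set covers both nodes, so the strategy is one number p.

def expected_utility(p):
    # EU(p) = (1-p)*0 + p*(1-p)*4 + p*p*1 = 4p - 3p^2
    return p * (1 - p) * 4 + p * p * 1

# Grid search over p; the planning-optimal strategy is p = 2/3 with EU = 4/3.
best_p = max((i / 1000 for i in range(1001)), key=expected_utility)
print(best_p, expected_utility(best_p))   # p ~ 2/3, EU ~ 1.3333
```

Note that the optimum here is a properly mixed strategy, which is why the model needs behavioral strategies rather than just pure ones.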
Psy-Kosh's non-anthropic problem is more tricky, because it has multiple players. We will model it as a single-player game anyway, putting the decision nodes of the different players in sequence and grouping them together into information sets in the natural way. The resulting game tree is complicated, but the solution is the same as UDT's. As a bonus, we see that our model does not need any kind of anthropic probabilities, because it doesn't specify or use the probabilities of individual nodes within an information set.
Wei Dai's coordination problem is similar to the previous one, but with multiple players choosing different actions based on different information. If we use the same trick of folding all players into one, and group the decision nodes into information sets in the natural way, we get the right solution again. It's nice to see that our model automatically solves problems that require Wei's "explicit optimization of global strategy".
Counterfactual Mugging is even more tricky, because writing it as an extensive-form game requires a decision node for Omega's simulation of the player. Some people are okay with that, and our model gives the right solution. But others feel that it leads to confusing questions about the nature of observation. For example, what if Omega used a logical coin, and the player could actually check which way the coin came up by doing a long calculation? Paying up is probably the right decision, but our model here doesn't have enough detail.
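With the payoffs usually quoted for Counterfactual Mugging (pay $100 on tails; receive $10,000 on heads iff Omega predicts you would have paid), the updateless evaluation our model performs is just an expected value over the coin, taken over whole strategies rather than over post-observation choices:

```python
# Counterfactual Mugging with the usual payoffs (assumed: $100 / $10,000).
# The extensive-form model scores an entire strategy before the coin is seen.

def eu(strategy_pays):
    p_heads = 0.5
    heads = 10_000 if strategy_pays else 0   # Omega rewards predicted payers
    tails = -100 if strategy_pays else 0     # the real request costs $100
    return p_heads * heads + (1 - p_heads) * tails

print(eu(True), eu(False))   # 4950.0 vs 0.0: paying wins ex ante
```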
Finally, Agent Simulates Predictor is the kind of problem that cannot be captured by our model at all, because logical uncertainty is the whole point of ASP.
It's instructive to see the difference between the kind of UDT problems that fit our model and those that require something more. Also, it would be easy to implement the model as a computer program, and solve some UDT problems automatically. (Though the exercise wouldn't have much scientific value, because extensive-form games are a well known idea.) In this way it's a little similar to Patrick's work on modal agents, which made certain problems solvable on the computer by using modal logic instead of enumerating proofs. Now I wonder whether other problems that involve logical uncertainty could also be solved by some simplified model?
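The implementation mentioned above can be sketched in a few lines; the data layout and names here are my own invention, not anything from the post. A game is a tree of chance, decision, and leaf nodes, with decision nodes tagged by information-set labels, and the solver simply enumerates pure strategies (one action per information set):

```python
from itertools import product

# Minimal single-player extensive-form game solver (sketch, my own encoding).
# A node is one of:
#   ("chance", [(prob, child), ...])
#   ("decision", info_set_label, {action: child, ...})
#   ("leaf", utility)

def expected_utility(node, strategy):
    kind = node[0]
    if kind == "leaf":
        return node[1]
    if kind == "chance":
        return sum(p * expected_utility(child, strategy) for p, child in node[1])
    _, info_set, children = node
    return expected_utility(children[strategy[info_set]], strategy)

def info_sets(node, acc=None):
    """Collect {info_set_label: available_actions} from the whole tree."""
    acc = {} if acc is None else acc
    kind = node[0]
    if kind == "chance":
        for _, child in node[1]:
            info_sets(child, acc)
    elif kind == "decision":
        _, label, children = node
        acc[label] = sorted(children)      # same actions at every node in the set
        for child in children.values():
            info_sets(child, acc)
    return acc

def solve(root):
    """Enumerate pure strategies and return the best one with its utility."""
    sets = info_sets(root)
    labels = list(sets)
    best = max(
        (dict(zip(labels, choice)) for choice in product(*(sets[l] for l in labels))),
        key=lambda s: expected_utility(root, s),
    )
    return best, expected_utility(root, best)

# Counterfactual Mugging as such a game: Omega's simulation and the real
# request share the information set "pay?".
cm = ("chance", [
    (0.5, ("decision", "pay?", {"give": ("leaf", 10_000), "refuse": ("leaf", 0)})),  # heads: simulation
    (0.5, ("decision", "pay?", {"give": ("leaf", -100), "refuse": ("leaf", 0)})),    # tails: real request
])
print(solve(cm))   # the "give" strategy, with expected utility 4950.0
```

One limitation worth flagging: pure-strategy enumeration misses mixed optima like the Absent-Minded Driver's p = 2/3, so a faithful implementation would optimize over behavioral strategies instead.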