Followup to: Counterfactual Mugging.
Let's see what happens with Counterfactual Mugging if we replace the uncertainty about an external fact, how a coin lands, with logical uncertainty, for example about the n-th digit in the decimal expansion of pi.
The original thought experiment is as follows:
Omega appears and says that it has just tossed a fair coin, and given that the coin came up tails, it decided to ask you to give it $100. Whatever you do in this situation, nothing else will happen differently in reality as a result. Naturally you don't want to give up your $100. But Omega also tells you that if the coin came up heads instead of tails, it'd give you $10000, but only if you'd agree to give it $100 if the coin came up tails.
Let's change "coin came up tails" to "10000-th digit of pi is even", and correspondingly for heads. This gives Logical Counterfactual Mugging:
Omega appears and says that it has just found out that the 10000th decimal digit of pi is 8, and given that it is even, it decided to ask you to give it $100. Whatever you do in this situation, nothing else will happen differently in reality as a result. Naturally you don't want to give up your $100. But Omega also tells you that if the 10000th digit of pi had turned out to be odd instead, it'd give you $10000, but only if you'd agree to give it $100 given that the 10000th digit is even.
This form of Counterfactual Mugging may be instructive, as it slaughters the following false intuition, or equivalently the following conceptualization of "could": "the coin could land either way, but a logical truth couldn't be either way".
For the following, let's shift the perspective to Omega, and consider the problem about the 10001st digit, which is 5 (odd). It's easy to imagine that, given that the 10001st digit of pi is in fact 5, and you decided to give away the $100 only if the digit is odd, Omega's prediction of your actions will still be that you'd give away the $100 (because the digit is in fact odd). A direct prediction of your actions can't include the part where you observe that the digit is even, because the digit is odd.
But Omega doesn't compute what you'll do in reality; it computes what you would do if the 10001st digit of pi were even (which it isn't). If you decline to give away the $100 when the digit is even, Omega's simulation of the counterfactual where the digit is even will say that you wouldn't oblige, and so you won't get the $10000 in reality, where the digit is odd.
Imagine it constructively this way: you have the code of a procedure, Pi(n), that computes the n-th digit of pi once it's run. If your strategy is
if(Is_Odd(Pi(n))) then Give("$100");
then, given that n==10001, Pi(10001)==5, and Is_Odd(5)==true, the program outputs "$100". But Omega tests the output of the code after performing surgery on it, replacing Is_Odd(Pi(n)) with false instead of the true to which it normally evaluates. Thus it'll be testing the code
if(false) then Give("$100");
This counterfactual case doesn't give away $100, and so Omega decides that you won't get the $10000.
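Here is a minimal sketch of these two evaluations in Python; the names pi_digit and agent, and the toy digit table, are my own stand-ins for Pi(n) and your strategy:

def pi_digit(n):
    # Toy stand-in for Pi(n); assumes digit 10000 is 8 and digit 10001 is 5, as in the post.
    return {10000: 8, 10001: 5}[n]

def agent(digit_is_odd):
    # The (wrong) strategy: give the $100 only if the digit is odd.
    return "$100" if digit_is_odd else "nothing"

n = 10001

# Direct prediction of your behavior: the digit really is 5 (odd),
# so this strategy outputs "$100".
direct = agent(pi_digit(n) % 2 == 1)

# What Omega actually evaluates: the same code after the surgery,
# with the whole condition Is_Odd(Pi(n)) overwritten by false.
counterfactual = agent(False)

print(direct)          # "$100"
print(counterfactual)  # "nothing", so Omega withholds the $10000

Only the second evaluation matters for Omega's decision, which is why the strategy above loses the $10000.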
For the original problem, when you consider what would happen if the coin fell differently, you are basically performing the same surgery, replacing the knowledge about the state of the coin in your state of mind. If you use the (wrong) strategy
if(Coin=="heads") then Give("$100")
and the coin comes up "heads", so that Omega is deciding whether to give you $10000, then Coin=="heads", but Omega is evaluating the modified algorithm where Coin is replaced by "tails":
if("tails"=="heads") then Give("$100")
Another way of intuitively thinking about Logical CM is to consider the index of the digit (here, 10000 or 10001) to be a random variable. Then the choice of the number n (the value of the random variable) in Omega's question is perfectly analogous to the outcome of a coin toss.
With a random index instead of "direct" mathematical uncertainty, the above evaluation of the counterfactual replaces n with (say) 10000 (so that Is_Odd(Pi(10000))==false), instead of directly replacing Is_Odd(Pi(n)) with false:
if(Is_Odd(Pi(10000))) then Give("$100");
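Continuing the same Python sketch (toy digit table assumed), the surgery now substitutes a different value for n and lets Pi run, instead of overwriting the condition itself:

def pi_digit(n):
    # Toy stand-in for Pi(n), as before.
    return {10000: 8, 10001: 5}[n]

def agent(n):
    # Give the $100 only if the n-th digit of pi is odd.
    return "$100" if pi_digit(n) % 2 == 1 else "nothing"

direct = agent(10001)          # digit 5 is odd  -> "$100"
counterfactual = agent(10000)  # digit 8 is even -> "nothing", so no $10000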
The difference is that with the coin or the random digit index, the parameter is explicit and atomic (Coin and n, respectively), while with the oddness of the n-th digit, the parameter Is_Odd(Pi(n)) isn't atomic. How can it be detected in the code (in the mind), when it could be written in obfuscated assembly and not even appear as an explicit subexpression of the program? By its connection to the sense of the problem statement itself: when you talk about what you'll do if the n-th digit of pi is even or odd, or about what Omega will do if you do or don't give away the $100 in each case, you are talking about exactly your Is_Odd(Pi(n)), or about something from which this code will be constructed. The meaning of the procedure Pi(n) depends on the meaning of the problem, and through this dependency counterfactual surgery can reach down and change the details of the algorithm to answer the counterfactual query posed by the problem.
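A toy illustration of that last point (my own construction, in the same Python sketch style): both agents below implement the same strategy, but only the first contains the oddness test as an explicit subexpression, so identifying the target of the surgery in the second requires knowing what the code means rather than scanning its syntax.

def pi_digit(n):
    return {10000: 8, 10001: 5}[n]

def agent_explicit(n):
    return "$100" if pi_digit(n) % 2 == 1 else "nothing"

def agent_obfuscated(n):
    # Same behavior, but the oddness test is hidden in an arithmetic trick.
    return ["nothing", "$100"][pow(pi_digit(n), 1, 2)]

assert agent_explicit(10001) == agent_obfuscated(10001) == "$100"
assert agent_explicit(10000) == agent_obfuscated(10000) == "nothing"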
I think this case is essentially the same as the original one, and this similarity is the topic of the post.
It looks like in the original case (and so in this one) you should give the $100 if you are an AI running human preference, and most likely if you are a human too, unless human preference gets "updated" (corrupted) by the reflectively inconsistent human brain, so that once you learn about the new fact, the new preference says that you shouldn't give the $100, because the probability of the alternative dropped through the floor (in your representation).
Where is the best place to read an explanation of why giving the $100 is what you "should" do? (Or could someone please summarize the rationale?)