Previous: An Informal Conjecture on Proof Length and Logical Counterfactuals
This is a simplified and more complete presentation of my previous counterexample to Scott Garrabrant's conjecture on logical counterfactuals. I present an example of two statements ϕ and ψ such that PA⊢ϕ→ψ and PA⊬ϕ→¬ψ, but ψ is not "really" a counterfactual consequence of ϕ, in an intuitive, informal sense. I also argue, based on the proof of ϕ→ψ, that we should trust our intuition that ψ is not a real counterfactual consequence, rather than believing that our intuitions are being stretched too far and misleading us.
Consider the following universe and agent.
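In pseudocode (a sketch rather than a quotation: the provability tests are oracle calls, not computable functions, and the exact formulation is an assumption, though the behavior is the one the argument below relies on):

    def U():
        if A() == 1:
            if PA ⊢ ⊥:      # PA is inconsistent
                return 0
            else:           # PA is consistent
                return 10
        else:
            return 5

    def A():
        # Take action 1 exactly when PA proves that action 1 yields utility 10.
        if PA ⊢ "A() = 1 → U() = 10":
            return 1
        else:
            return 2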
Note that this agent reasons very similarly to modal UDT. It is simpler, but that's because it's clear that action 2 will lead to utility 5, so if the agent cannot get utility 10 by taking action 1, there is no reason for it to continue its proof search.
Consider the statements ϕ≡(A()=1) and ψ≡(U()=0). We first show PA⊢ϕ→ψ. Work in PA and suppose for contradiction that ϕ∧¬ψ, i.e. A()=1∧U()≠0. If PA were inconsistent, the agent would get 0 utility, so it must be the case that PA is consistent. Also, since A()=1, we know by looking at the agent's code that the proof search succeeded, so we have □┌A()=1→U()=10┐. PA knows that it can prove this, so it knows that the agent takes action 1, i.e. we have □┌A()=1┐. Putting this all together, consistency tells us that ¬□┌A()=1→U()=0┐. This is logically equivalent to ◊┌A()=1∧U()≠0┐. Thus, stepping back to the metalanguage, we see that the theory PA+(A()=1∧U()≠0) asserts its own consistency, so it is inconsistent. (Gödel's second incompleteness theorem is used here, but the same argument can be carried out using Löb's theorem.)
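In outline, the derivation inside PA under the assumption A()=1∧U()≠0 is:

1. ¬□⊥, since if PA were inconsistent the universe would assign utility 0, contradicting U()≠0.
2. □┌A()=1→U()=10┐, since A()=1 means the agent's proof search succeeded.
3. □┌A()=1┐, since from 2, formalized Σ1-completeness gives □┌□┌A()=1→U()=10┐┐, and PA proves that the agent takes action 1 whenever its proof search succeeds.
4. ¬□┌A()=1→U()=0┐, since otherwise 2 and 3 would give □┌U()=10┐ and □┌U()=0┐, hence □⊥, contradicting 1; and this is just ◊┌A()=1∧U()≠0┐.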
Since PA+(A()=1∧U()≠0) is inconsistent, PA proves ¬(A()=1∧U()≠0), i.e. A()=1→U()=0. (By soundness of PA, we can see at this point that the agent takes action 2, but this is unnecessary for the present argument.) It remains only to show that PA⊬A()=1→U()≠0. Working in PA under the assumption □⊥, the agent's proof search trivially succeeds, so the agent takes action 1 and receives utility 0; that is, PA proves □⊥→(A()=1∧U()=0). Since PA does not prove ¬□⊥, the theory PA+□⊥ is consistent and therefore has a model, and in that model of PA we have A()=1 and U()=0, establishing the result.
We now have both PA⊢A()=1→U()=0 and PA⊬A()=1→U()≠0, so Garrabrant's conjecture claims that in the counterfactual world where the agent takes action 1, it receives utility 0. By the structure of U, in the A()=1 case, we also have PA⊢A()=1→□⊥ and PA⊬A()=1→¬□⊥, so Garrabrant's conjecture similarly claims that the inconsistency of PA holds in this world. Both of these seem intuitively wrong; we would expect that the agent receives utility 10 in this world, and that PA is still consistent. Even if we do not expect these things very strongly --- for example, we may think that our beliefs about this counterfactual world are best modeled by a probability distribution that places weight on both U()=0 and U()=10 --- it is surprising that a notion of counterfactual would be certain that the less intuitive option, U()=0, holds in this counterfactual world.
We can obtain further evidence that our intuition is correct by examining the structure of the argument that PA⊢A()=1→U()=0. Central to this argument is a step where we reason from the assumption that A()=1 to the conclusion that this happened because the agent's proof search succeeded, and thus that □┌A()=1→U()=10┐ holds. This reasoning is causally backwards; we regard the proof search as the cause of the agent's action and we reason backward from the effect to the cause. This is valid logically, but it is not what we mean by the counterfactual world where A()=1. We can draw an analogy to graph surgery on Bayesian networks. There, in order to perform causal reasoning, we sever the links connecting a node to its causal parents, and only allow the counterfactual to change our probability distribution through its causal children. This is a different kind of reasoning, and this example shows that we do not yet have a good analogue of it in a logical context.

We can also construct an example where PA proves ϕ→ψ with a short proof and also proves ϕ→¬ψ, but any such proof is much longer. We only need to put a bound on the proof length in A's proof search. Then, the argument that PA+(A()=1∧U()≠0) proves its own consistency still works, and is rather short; its length does not grow appreciably as the proof length bound increases. However, there cannot be a proof of A()=1→U()=10 within A's proof length bound, since if the agent found one it would immediately take action 1, and then, by the soundness of PA, it would receive utility 10, contradicting the short proof that A()=1→U()=0. By the structure of U, a proof of ϕ→¬ψ yields a proof of A()=1→U()=10 with only a little extra work, so the same applies to ϕ→¬ψ. In this case PA can still prove ϕ→¬ψ simply by running the agent and observing that its bounded proof search fails, but this argument shows that any such proof must be long.
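As a rough sketch of that modification (the name N for the bound and the phrasing of the bounded search are illustrative, not quoted), only the agent changes; the universe stays as before:

    def A():
        # As before, but the search now ranges only over PA-proofs of length
        # at most N, for some fixed large bound N, so it is a finite computation.
        if PA ⊢ "A() = 1 → U() = 10" by some proof of length ≤ N:
            return 1
        else:
            return 2

Proving ϕ→¬ψ by running the agent then amounts to checking every candidate proof of length at most N, which is why that route yields only a long proof.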