During the May 4-6 MIRI workshop on decision theory, I made the following conjecture: If φ and ψ are sentences such that the length of the shortest proof of φ → ψ is much shorter than the length of the shortest proof of ¬φ, then ψ is a legitimate counterfactual consequence of φ.
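Schematically (my shorthand, not notation from the workshop), writing |π(θ)| for the length of the shortest proof of a sentence θ, the conjecture can be displayed as:

% Schematic form of the conjecture; the proof system is deliberately left unspecified.
\[
  \lvert \pi(\varphi \to \psi) \rvert \;\ll\; \lvert \pi(\neg\varphi) \rvert
  \quad\Longrightarrow\quad
  \text{``}\psi\text{ is a legitimate counterfactual consequence of }\varphi\text{''}.
\]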

You may be thinking that this conjecture seems like it cannot be correct. This is the intuition that most of us shared until we started looking for (and failing to find) good counterexamples. After many failed attempts, we started referring to it as "The Trolljecture." Multiple people from the workshop are currently working on writing up posts related to it. However, the conjecture is not very formal. The purpose of this post is just to give some background for those posts and to discuss why we are keeping the conjecture in this informal state.

There are three ways in which this conjecture is not formal. I do not see a way to make any of these formal without disrupting the spirit of the conjecture.

  1. What do we mean by "legitimate counterfactual consequence"?

  2. What do we mean by "much shorter"?

  3. What do we mean by "length of the shortest proof"?

I will address these issues one at a time. The term "legitimate counterfactual consequence" does not have a formal definition. If it did, many open problems in decision theory and logical uncertainty would be solved. Instead, it is a pointer to a concept that humans seem to have some intuitions about. The best I can do is give some examples of legitimate counterfactual consequences and hope that we are talking about the same cluster of things.

Regardless of whether or not P=NP, "P=NP" is a legitimate counterfactual consequence of "3SAT∈P," while "P≠NP" is NOT a legitimate counterfactual consequence of "3SAT∈P."

In Newcomb's problem, "The box is empty" is a legitimate counterfactual consequence of "I two-box." This is true regardless of whether or not I am a two-boxer.

Note that just because φ ⊢ ψ, it does not necessarily follow that ψ is a legitimate counterfactual consequence of φ.

In this post, the notation φ □→ ψ is used for "ψ is a legitimate counterfactual consequence of φ." They mean the same thing.

The term "much shorter" probably could have been defined but most ways to define it seem arbitrary, and the exact definition does not matter much. There are some ways to define it which feel less arbitrary. We could just say that there exists a computable function for which the conjecture is true if we define much shorter by " is much shorter than anything greater than ." We could even put restrictions on , like requiring it to be linear. The reason we do not try to formally define "much shorter" this way is that it would stop us from being able to in theory point at one pair of sentences to demonstrate a counterexample. For now, you can think of "much shorter" to mean "half," and we can reevaluate if a counterexample is found.

The term "length of the shortest proof" is not defined, because I have not specified a formal proof system. This is on purpose. I do not want to allow a counterexample where is not provable in PA, but has a really short proof in a stronger proof system, and that short proof is exactly why we believe is not a legitimate counterfactual consequence of .

It is worth pointing out that this conjecture is not a proposed definition of legitimate counterfactual consequences; it only provides a sufficient condition for something to be a legitimate counterfactual consequence. Hopefully much more on this will be posted here by various people in the near future.

Comments (7)

I (just yesterday) found a counterexample to this. The universe is a 5-and-10 variant that uses the unprovability of consistency:

def U():
  # Action 2 pays 10 if PA is consistent and 0 if it is not;
  # action 1 always pays 5. (The consistency test here is a logical
  # condition, not something this pseudocode could actually evaluate.)
  if A() == 2:
    if PA is consistent:
      return 10
    else:
      return 0
  else:
    return 5

The agent can be taken to be modal UDT, using PA as its theory. (The example will still work for other theories extending PA; we just need the universe's theory to include the agent's. Also, to simplify some later arguments, we suppose that the agent uses the chicken rule, and that it checks action 1 first, then action 2.) Since the agent cannot prove the consistency of its theory, it will not be able to prove A()=2 → U()=10, so the first implication which it can prove is A()=1 → U()=5. Thus, it will end up taking action 1.
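For concreteness, here is a minimal sketch (mine, not the workshop's formal definition) of the proof-search order being described. The parameter provable stands in for an oracle answering "does the agent's theory prove this sentence?"; it is not computable, so this is schematic only:

def modal_udt(provable):
  # "provable" is a stand-in oracle for provability in the agent's theory.
  actions = [1, 2]        # action 1 is checked first, then action 2
  utilities = [10, 5, 0]  # utilities considered from best to worst

  # Chicken rule: if the theory proves we don't take some action, take it.
  for a in actions:
    if provable("A() != %d" % a):
      return a

  # Main search: take the first action a for which the theory proves
  # "A() = a -> U() = u", iterating over utilities in decreasing order.
  for u in utilities:
    for a in actions:
      if provable("A() == %d -> U() == %d" % (a, u)):
        return a

  return actions[-1]  # default if no proof is found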

Now, we work in PA and try to show A()=2 → U()=0. If PA is inconsistent (we have to account for this case since we are working in PA), then A()=2 → U()=0 follows straightforwardly from the definition of the universe. Next, we consider the case that PA is consistent and work through the agent's decision. PA can't prove A()≠1, since we used the chicken rule, so since the sentence A()=1 → U()=5 is easily provable, the sentence A()=1 → U()=10 (i.e. the first sentence that the agent checks for proofs of) must be unprovable.

The next sentence the agent checks is A()=2 → U()=10. If the agent finds a proof of this, then it takes action 2. Otherwise, it moves on to the sentence A()=1 → U()=5, which is easily provable as mentioned above, and it takes action 1. Hence, the agent takes action 2 iff it can prove A()=2 → U()=10, i.e. A()=2 ↔ □(A()=2 → U()=10), which by the definition of the universe is the same as A()=2 ↔ □(A()=2 → Con(PA)). Löb's theorem tells us that □Con(PA) ↔ □(□Con(PA) → Con(PA)), so, by the uniqueness of fixed points, it follows that A()=2 ↔ □Con(PA). Then, we get A()=2 → □Con(PA), and so A()=2 → ¬Con(PA) by Gödel's second incompleteness theorem. Thus, if the agent takes action 2, then PA is inconsistent, so A()=2 → U()=0 as desired.
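For readers who want the Löb/fixed-point step spelled out, here is one way to check it (my reconstruction, in the provability logic GL, with □ for PA-provability and Con for Con(PA) = ¬□⊥); it uses only formalized Löb's theorem and a propositional tautology:

% \Box\,\mathrm{Con} is a fixed point of  p \leftrightarrow \Box(p \to \mathrm{Con}):
\[
  \Box(\Box\,\mathrm{Con} \to \mathrm{Con}) \;\to\; \Box\,\mathrm{Con}
  \qquad \text{(formalized L\"ob's theorem)}
\]
\[
  \Box\,\mathrm{Con} \;\to\; \Box(\Box\,\mathrm{Con} \to \mathrm{Con})
  \qquad \text{(box the tautology } \mathrm{Con} \to (\Box\,\mathrm{Con} \to \mathrm{Con})\text{)}
\]
% Since A()=2 satisfies the same equation  p \leftrightarrow \Box(p \to \mathrm{Con}),
% the uniqueness of fixed points in GL gives  A()=2 \leftrightarrow \Box\,\mathrm{Con}.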

This tells us that PA proves A()=2 → U()=0. Also, if PA proved A()≠2, then the agent would take action 2, by the chicken rule, so PA does not prove A()≠2 (we saw above that the agent takes action 1). Since PA does not prove A()≠2 at all, the shortest proof of A()=2 → U()=0 is much shorter than the shortest proof of A()≠2 for any definition of "much shorter". (One can object here that there is no shortest proof, but (a) it seems natural to define the "length of the shortest proof" to be infinite if there is no proof, and (b) it is probably straightforward but tedious to modify the agent and universe so that there is a proof of A()≠2, but it is very long.)
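To line this up with the conjecture's template explicitly (my summary of the above, not an addition to the argument):

\[
  \varphi = \text{``}A()=2\text{''}, \qquad \psi = \text{``}U()=0\text{''},
\]
\[
  \mathrm{PA} \vdash \varphi \to \psi \ \text{(by the argument above)},
  \qquad
  \mathrm{PA} \nvdash \neg\varphi \ \text{(by the chicken rule)},
\]
% so the hypothesis of the conjecture holds under any reading of "much shorter".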

However, it is clear that U()=0 is not a legitimate counterfactual consequence of A()=2. Informally, if the agent chose action 2, it would have received utility 10, since PA is consistent. Thus, we have a counterexample.

One issue we discussed during the workshop is whether counterfactuals should be defined with respect to a state of knowledge. We may want to say here that we, who know a lot, are in a state of knowledge with respect to which A()=2 would counterfactually result in U()=10, but that someone who reasons in PA is in a state of knowledge w.r.t. which it would result in U()=0. One way to think about this is that we know that PA is obviously consistent, irrespective of how the agent acts, whereas PA does not know that it is consistent, allowing an agent using PA to think of itself as counterfactually controlling PA's consistency. Indeed, this is roughly how the argument above proceeds.

I'm not sure that this is a good way of thinking about this, though. The agent goes through some weird steps, most notably a rather opaque application of the fixed point theorem, so I don't have a good feel for why it is reasoning this way. I want to unwrap that argument before I can say whether it's doing something that, on an intuitive level, constitutes legitimate counterfactual reasoning.

More worryingly, the perspective of counterfactuals as being defined w.r.t. states of knowledge seems to be at odds with PA believing a wrong counterfactual here. It would make sense for PA not to have enough information to make any statement about the counterfactual consequences of A()=2, but that's not what's happening if we think of PA's counterfactuals as obeying this conjecture; instead, PA postulates a causal mechanism by which the agent controls the consistency of PA, which we didn't expect to be there at all. Maybe it would all make sense if I had a deeper understanding of the proof I gave, but right now it is very odd.

(This is rather long; perhaps it should be a post? Would anyone prefer that I clean up a few things and make this a post? I'll also expand on the issue I mention at the end when I have more time to think about it.)

> Next, we consider the case that PA is consistent and work through the agent's decision. PA can't prove A()≠1, since we used the chicken rule, so since the sentence A()=1 → U()=5 is easily provable, the sentence A()=1 → U()=10 (i.e. the first sentence that the agent checks for proofs of) must be unprovable.

It seems like this argument needs soundness of PA, not just consistency of PA. Do you see a way to prove in PA that if PA proves A()≠1, then PA is inconsistent?

[edited to add:] However, your idea reminds me of my post on the odd counterfactuals of playing chicken, and I think the example I gave there makes your idea go through:

The scenario is that you get 10 if you take action 1 and it's not provable that you don't take action 1; you get 5 if you take action 2; and you get 0 if you take action 1 and it's provable that you don't. Clearly you should take action 1, but I prove that modal UDT actually takes action 2. To do so, I show that PA proves A()=1 → □(A()≠1). (Then, from the outside, A()=2 follows by soundness of PA.)
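In the same style as the universe program above, that scenario could be sketched as follows (my transcription of the verbal description; provable is again an uncomputable PA-provability oracle, so this is schematic only):

def U_chicken(provable):
  # "provable" is a stand-in oracle for PA-provability (not computable).
  if A() == 1:
    if provable("A() != 1"):
      return 0   # action 1 while PA proves you don't take it
    else:
      return 10  # action 1 while PA does not prove you don't take it
  else:
    return 5     # action 2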

This seems to make your argument go through if we can also show that PA doesn't prove A()≠1. But if it did, then modal UDT would take action 1, because this check comes first in its proof search, contradiction.

Thus, PA proves A()=1 → U()=0 (because this follows from A()=1 → □(A()≠1)), and also PA doesn't prove A()≠1. As in your argument, then, the trolljecture implies that we should think "if the agent takes action 1, it gets utility 0" is a good counterfactual, and we don't think that's true.

Still interested in whether you can make your argument go through in your case as well, especially if you can use the chicken step in a way I'm not seeing yet. Like Patrick, I'd encourage you to develop this into a post.

The argument that I had in mind was that if □(A()≠1), then A()=1 by the chicken rule, so also □(A()=1), since PA knows how the chicken rule works. This gives us □(A()≠1) → (□(A()=1) ∧ □(A()≠1)), so PA can prove that if PA proves A()≠1, then PA is inconsistent. I'll include this argument in my post, since you're right that this was too big a jump.
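Written out as a chain (my rendering of the argument just described, with □ for PA-provability, reasoning inside PA):

\[
  \Box(A() \neq 1) \;\to\; A()=1 \;\to\; \Box(A()=1)
  \qquad \text{(chicken rule; PA can verify the agent's computation)}
\]
\[
  \Box(A() \neq 1) \;\to\; \bigl(\Box(A()=1) \wedge \Box(A() \neq 1)\bigr) \;\to\; \neg\mathrm{Con}(\mathrm{PA}).
\]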

Edit: We also need to use this argument to show that the modal UDT agent gets to the part where it iterates over utilities, rather than taking an action at the chicken rule step. I didn't mention this explicitly, since I felt like I had seen it before often enough, but now I realize it is nontrivial enough to point out.

Nice! Yes, I encourage you to develop this into a post.

I can't see the grandparent, so posting here:

It occurs to me that maybe we could regard the agent as consistently reasoning, "If I choose of my own free will to output 2, that thereby causes Peano Arithmetic to be inconsistent, causing me to get 0 points."

I mostly don't buy this, but it slightly defends the legitness of the counterfactual.


Nice. I think this maps more precisely in the case where you ask questions about yourself. In this case, you can (in some cases) be assured that you will find no proof about what action you will take, but you can still prove things about logical consequences of your action. So the difference between the "shorter" proof and the "longer" proof is that you can't actually find the longer proof.

My intuition about logical counterfactuals in general is that it seems easier to just solve agent-simulates-predictor type problems directly than to solve logical counterfactuals, since at least there's a kind of win condition for ASP-type problems. So the statements about proof length here might map to facts about whether a bounded predictor can predict things about you. In ASP we want "the bounded predictor predicts that the agent does X" to be considered a logical counterfactual consequence of "the agent does X" from the agent's perspective; this would require there to be a short proof of "the agent does X → the bounded predictor predicts that the agent does X". But we don't get this automatically: this would require the agent to use a reasoning system that the predictor can reason about somehow. I might write another post on intuitions about ASP-type problems.

Neat. Okay, here's my sketch of a counterexample (or more precisely, a way in which this may violate my imagined desiderata for counterfactuals). A statement that leads to a short proof of ⊥ does not seem like the counterfactual ancestor of every single statement that does not lead to a short proof of ⊥.

E.g. "5263=3126" doesn't seem like it counterfactually implies that there exists a division of the plane that requires at least 5 colors to fill in without having identical colors touch. It seem much more like it counterfactually implies "5163=3063" (i.e. the converse of the trolljecture seems really false).

But I dunno, maybe that's just my aesthetics about proofs that go through ⊥. Presumably "legitimate counterfactual consequence" could be cashed out as some sort of pragmatic statement about a decision-making algorithm?