I introduce and formally derive the Principle of Satisfying Foreknowledge (PSF) using Causal Decision Theory (CDT). This principle states that if an agent has knowledge of its future, then that foreknowledge must depict a future that is at least satisfying, in the sense that any deviation would lead to lower expected utility.
Formally, let:
• $A$ be the set of possible actions.
• $O$ be the set of possible outcomes.
• $u : A \times O \to \mathbb{R}$ be the agent's utility function.
• $P(a \to o)$ be the probability of outcome $o$ given that the agent causally intervenes to choose action $a$.
The agent maximizes expected utility:
$$EU(a) = \sum_{o \in O} P(a \to o)\, u(a, o).$$
Assume an agent receives foreknowledge that it will take action $a^*$ and that this will result in outcome $o^*$. That is, the agent knows:
$$P(a^* \to o^*) = 1.$$
Because this is certain, any attempt to deviate from $a^*$ would contradict the assumption that the agent correctly foresees its own action. But a rational agent would be willing to deviate whenever some alternative offered at least as much expected utility, so the foreknowledge can only be certainly correct if every alternative is strictly worse. Thus it must be true that, for all alternative actions $a' \neq a^*$:
$$EU(a') < EU(a^*).$$
This means that the action and outcome foreseen must be strictly satisfying—no alternative action can even match it in expected utility. Thus, we obtain the Principle of Satisfying Foreknowledge (PSF):
Any knowledge of an agent's future must depict a future for which no alternative action yields equal or higher expected utility.
This does not imply that a∗ is the globally optimal action, only that deviation from it would lower expected utility, making it rationally self-fulfilling.
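To make the definitions above concrete, here is a minimal sketch in Python of the expected-utility calculation and the PSF check. The dictionary-based representation and all names are my own illustrative choices, not part of the formal statement; the scenarios below reuse these two functions.

```python
# Minimal sketch of EU(a) and the PSF check (illustrative names and data layout).

def expected_utility(action, probs, utility):
    """EU(a) = sum over outcomes o of P(a -> o) * u(a, o).

    probs[action] maps each outcome o to the causal probability P(a -> o);
    utility(a, o) is the agent's utility function u.
    """
    return sum(p * utility(action, o) for o, p in probs[action].items())

def satisfies_psf(foreseen_action, probs, utility):
    """PSF: every alternative action has strictly lower expected utility
    than the foreseen action, under the agent's post-foreknowledge beliefs."""
    eu_star = expected_utility(foreseen_action, probs, utility)
    return all(
        expected_utility(a, probs, utility) < eu_star
        for a in probs
        if a != foreseen_action
    )
```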
Three Scenarios:
Scenario 1: Two Empty Boxes and One $100 Box
Suppose the agent knows that among three boxes, two are empty and one contains $100, but does not know which box holds the money. Furthermore, suppose that the agent receives foreknowledge that it will open box 1 and find it empty:
$$P(\text{open box 1} \to \text{empty}) = 1.$$
In this case the agent has an incentive to open a different box: conditional on the foreknowledge that box 1 is empty, the $100 must be in box 2 or box 3 with probability 1/2 each, so
$$EU(\text{open a different box}) = \$50,$$
which is greater than the foreknown utility of $0. This incentive to deviate contradicts the assumption of foreknowledge and violates the PSF; therefore, such foreknowledge is impossible. In this scenario, the only possible foreknowledge is of opening the box with the $100.
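As a quick numerical check, reusing expected_utility and satisfies_psf from the sketch above (outcomes encoded as dollar amounts):

```python
# Scenario 1: the agent foresees opening box 1 and finding it empty.
utility = lambda action, outcome: outcome   # outcomes are dollar amounts

probs = {
    "open box 1": {0: 1.0},            # foreseen: empty, $0 for certain
    "open box 2": {100: 0.5, 0: 0.5},  # $100 or empty, 50/50 given box 1 is empty
    "open box 3": {100: 0.5, 0: 0.5},
}

print(expected_utility("open box 2", probs, utility))  # 50.0
print(satisfies_psf("open box 1", probs, utility))     # False: the PSF is violated
```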
Scenario 2: A Bomb, an Empty Box, and a $100 Box
Now suppose the agent knows that one box contains a bomb (which will cause catastrophic loss), one box is empty, and one box contains $100. In this case, rather than receiving foreknowledge of the optimal outcome (finding the $100), the agent again learns that it will open box 1 and find it empty. Even though this is not the optimal outcome, it is still satisfying because
$$EU(\text{open box 1}) > EU(\text{open a different box})$$
due to the risk of opening the box with the bomb: conditional on box 1 being empty, the other two boxes hold the bomb and the $100 with equal probability, and the catastrophic loss outweighs the possible gain. Therefore, this foreknowledge is allowed by the PSF.
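The same check for this scenario, reusing the functions and utility from the earlier sketches, under the assumption (mine, purely for illustration) that triggering the bomb is worth -$1000 to the agent:

```python
# Scenario 2: box 1 is empty (foreseen); the bomb and the $100 occupy boxes 2 and 3.
# The -1000 figure is an assumed stand-in for "catastrophic loss".
probs_bomb = {
    "open box 1": {0: 1.0},
    "open box 2": {100: 0.5, -1000: 0.5},
    "open box 3": {100: 0.5, -1000: 0.5},
}

print(expected_utility("open box 2", probs_bomb, utility))  # -450.0
print(satisfies_psf("open box 1", probs_bomb, utility))     # True: allowed by the PSF
```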
Scenario 3: All Empty Boxes, and the Agent is Mistaken
Suppose the agent believes that two boxes are empty and one contains $100, as in Scenario 1, but in reality all three boxes are empty. Foreknowledge of opening an empty box is impossible here for the same reason it was impossible in Scenario 1: by the agent's beliefs, deviating to another box would have higher expected utility. But since opening an empty box is the only outcome reality can deliver, no foreknowledge is possible at all in this scenario.
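One way to see this, continuing the earlier sketch: check each candidate piece of foreknowledge ("the agent will open box i and find it empty") against the agent's Scenario 1 beliefs. Every candidate fails the PSF, and since every box really is empty, no other foreknowledge is available either.

```python
# Scenario 3: every box is really empty, so the only candidate foreknowledge is
# "open box i and find it empty"; test each against the agent's (mistaken) beliefs.
for i in (1, 2, 3):
    beliefs = {f"open box {j}": ({0: 1.0} if j == i else {100: 0.5, 0: 0.5})
               for j in (1, 2, 3)}
    print(f"box {i}:", satisfies_psf(f"open box {i}", beliefs, utility))  # False each time
```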
So foreknowledge can sometimes guarantee the optimal outcome, but in general it only guarantees a satisfying one, and sometimes foreknowledge is not possible at all because the agent's beliefs are misaligned with reality.
Does the PSF only hold under CDT? No, it generally holds when considering other decision theories. Can it be extended to situations where foreknowledge is less than certain? I don't know, and I welcome any feedback on doing so. Finally, what use is this? I'm not sure it has any practical use beyond helping one write a story about precognition or time travel, but I could be wrong.