I wrote this post in the course of working through Vladimir Slepnev's A model of UDT with a halting oracle. This post contains some of the ideas of Slepnev's post, with all the proofs written out. The main formal difference is that while Slepnev's post is about programs with access to a halting oracle, the "decision agents" in this post are formulas in Peano arithmetic. They are generally uncomputable and do not reason under uncertainty.
These ideas are due to Vladimir Slepnev and Vladimir Nesov. (Please let me know if I should credit anyone else.) I'm pretty sure none of this is original material on my part. It is possible that I have misinterpreted Slepnev's post or introduced errors.
We are going to define a world function U, a 2-ary function1 that outputs an ordered pair of payoff values. There are functions U_1 and U_2 such that U_1(a_1, a_2) is the first coordinate of U(a_1, a_2) and U_2(a_1, a_2) is the second coordinate, for any actions a_1 and a_2. In fact U_i(a_1, a_2) is a function in the three variables i, a_1, and a_2.
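As a toy illustration (not from the original post, and with payoff numbers chosen by me), here is what such a world function looks like when written as an ordinary program rather than as an arithmetic formula; the returned pair is (U_1, U_2).

```python
# Toy illustration only: a world function for a symmetric Prisoner's Dilemma,
# returning the ordered pair (U_1, U_2) of payoffs. The payoff numbers are
# my own choice; in the post, U is an arithmetic formula, not a program.

C, D = "C", "D"

PAYOFFS = {
    (C, C): (2, 2),
    (C, D): (0, 3),
    (D, C): (3, 0),
    (D, D): (1, 1),
}

def world(a1, a2):
    """U(a1, a2) = (U_1(a1, a2), U_2(a1, a2))."""
    return PAYOFFS[(a1, a2)]

print(world(C, D))  # (0, 3): player 1 cooperated, player 2 defected
```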
We are also going to define an agent function A(u, i) that outputs the symbol C or D. The argument u is supposed to be the Gödel number of the world function, and i is some sort of indexical information telling the agent which player it is.
We want to define our agent such that

A(u, i) =
  D, if □(A(ū, ī) = C);
  otherwise C, if □(A(ū, ī) = D);
  otherwise C, if choosing C is provably optimal for player i in the world coded by u;
  otherwise D.
(⌜U⌝ denotes the Gödel number of U. □φ means that φ is provable in Peano arithmetic. ū represents the numeral for u. I don't care what value A(u, i) has when u isn't the Gödel number of an appropriate 2-ary function.)
There is some circularity in this tentative definition, because a formula standing for A appears in the definition of A itself. We get around this by using diagonalization. We'll describe how this works just this once: First define the function F(a, u, i) as follows: it behaves exactly like the definition above, except that every occurrence of A is replaced by the 2-ary function whose Gödel number is a. This function can be defined by a formula. Then the diagonal lemma gives us a formula defining a function A such that A(u, i) = F(⌜A⌝, u, i) for all u and i.
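For reference, here is the version of the diagonal lemma being invoked, with parameters; the statement is standard, though the letter names θ and ψ are my own.

```latex
\documentclass{article}
\usepackage{amssymb}
\begin{document}
% Diagonal lemma with parameters: for every formula theta(v, x_1, ..., x_n)
% there is a formula psi(x_1, ..., x_n) such that PA proves psi equivalent
% to theta with psi's own Goedel number substituted for v.
\[
\mathrm{PA} \vdash \psi(x_1,\dots,x_n) \leftrightarrow
\theta\bigl(\ulcorner \psi \urcorner, x_1,\dots,x_n\bigr).
\]
\end{document}
```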
This is our (somewhat) rational decision agent. If it can prove it will do one thing, it does another; this is what Slepnev calls "playing chicken with the universe". If it can prove that C is an optimal strategy, it chooses C; and otherwise it chooses D.
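Here is a rough computable stand-in for this decision rule, written as a sketch rather than as the actual arithmetic formula. The oracle argument `provable` and the helper names `my_action_is` and `c_is_optimal` are assumptions of mine: true provability in PA is not decidable by any program.

```python
# A computable stand-in (not the post's arithmetic formula) for the agent's
# four-line decision rule. The oracle `provable` is supplied by the caller;
# in the post it is provability in Peano arithmetic.

C, D = "C", "D"

def agent(provable, my_action_is, c_is_optimal):
    """Mirror the case definition of A described above.

    provable(stmt)   -- stand-in for "PA proves stmt"
    my_action_is(a)  -- the statement "this agent's output is a"
    c_is_optimal     -- the statement "choosing C is optimal"
    """
    # "Playing chicken with the universe": if the agent can prove what it
    # will do, it does the opposite.
    if provable(my_action_is(C)):
        return D
    if provable(my_action_is(D)):
        return C
    # Third line: choose C if C is provably optimal.
    if provable(c_is_optimal):
        return C
    # Fourth line: otherwise choose D.
    return D

# With an oracle that proves nothing, the agent falls through to D.
print(agent(lambda s: False, lambda a: f"output = {a}", "C is optimal"))  # D
```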
First, a lemma about the causes and consequences of playing chicken:
Lemma 1. For any u and i:

(1) If □(A(ū, ī) = C), then Peano arithmetic is inconsistent.
(2) It is provable in Peano arithmetic that □(A(ū, ī) = C) implies □(A(ū, ī) = D → φ), for any formula φ.
(3) If □(A(ū, ī) = D), then Peano arithmetic is inconsistent.
(4) It is provable in Peano arithmetic that □(A(ū, ī) = D) implies □(A(ū, ī) = C → φ), for any formula φ.

(□ is a binary-valued function such that □(⌜φ⌝) = 1 is true exactly when there is a proof of φ in Peano arithmetic. For brevity we write □φ instead. Con(PA + Con(PA)) is the proposition that Peano arithmetic, plus the axiom that Peano arithmetic is consistent, is a consistent theory.)
Proof. (1) By definition of A, Peano arithmetic proves that □(A(ū, ī) = C) implies A(ū, ī) = D. So if there really is a proof of A(ū, ī) = C, then Peano arithmetic can verify that proof, hence proves □(A(ū, ī) = C), hence proves A(ū, ī) = D; together with the assumed proof of A(ū, ī) = C, this makes Peano arithmetic inconsistent.

(2) By the principle of explosion, under the hypotheses □(A(ū, ī) = C) and A(ū, ī) = D one can derive C = D, a contradiction, and hence any formula φ; this reasoning can be carried out inside Peano arithmetic.

(3) By the definition of A, the argument of (1) goes through with C and D interchanged.

(4) Likewise for the argument of (2).
If we assume consistency of PA + Con(PA) (which entails consistency of PA), then parts (1) and (3) of Lemma 1 tell us that for any u and i, □(A(ū, ī) = C) is false and □(A(ū, ī) = D) is false. So the agent never actually plays chicken.
Now let's see how our agent fares on a straightforward decision problem:
Proposition 2. Let U be a world function in which player 1's payoff depends only on player 1's own action, and suppose the payoff for choosing C is different from the payoff for choosing D. Assume consistency of PA + Con(PA). Then A(⌜U⌝, 1) = C if and only if choosing C yields the larger payoff.
Proof. If we assume consistency of PA + Con(PA), then Lemma 1 tells us that the agent doesn't play chicken. So the agent will choose C if and only if it determines that choosing C is optimal.

We have □(A(⌜U⌝, 1) = C → player 1 receives the C-payoff) and □(A(⌜U⌝, 1) = D → player 1 receives the D-payoff).

Suppose the C-payoff is larger. Then clearly it is provable that choosing C is optimal. So A(⌜U⌝, 1) = C.

As for the converse: We have □(A(⌜U⌝, 1) = C → player 1 receives the C-payoff). If also □(A(⌜U⌝, 1) = C → player 1 receives x) and x is different from the C-payoff, then □(A(⌜U⌝, 1) ≠ C), that is, □(A(⌜U⌝, 1) = D). By Lemma 1(3) and consistency of PA + Con(PA), this cannot happen. So the C-payoff is the only value that provably results from choosing C.

Similarly, we have that the D-payoff is the only value that provably results from choosing D. So if the D-payoff is larger, it is not provable that choosing C is optimal. So the agent doesn't decide that C is optimal, and A(⌜U⌝, 1) = D.
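To make the behavior in Proposition 2 concrete, here is a toy run of a computable stand-in for the agent on a one-player problem. The `stub_oracle` below simply hard-codes which statements it "proves" (the optimality of C exactly when C's payoff is larger, and never the agent's own output), so it only illustrates the decision rule, not provability in PA; the payoff numbers are mine.

```python
# Toy illustration of Proposition 2 with a hard-coded stub oracle.

C, D = "C", "D"

def agent(provable, my_action_is, c_is_optimal):
    if provable(my_action_is(C)):      # chicken clause
        return D
    if provable(my_action_is(D)):      # chicken clause
        return C
    if provable(c_is_optimal):         # C is provably optimal
        return C
    return D                           # otherwise

def stub_oracle(payoff_c, payoff_d):
    """'Proves' that C is optimal exactly when payoff_c > payoff_d, and never
    proves what the agent will output (so the chicken clauses never fire)."""
    return lambda stmt: stmt == "C is optimal" and payoff_c > payoff_d

my_action_is = lambda a: f"output = {a}"

# C's payoff is larger: the agent chooses C.
print(agent(stub_oracle(10, 5), my_action_is, "C is optimal"))  # C
# D's payoff is larger: the optimality of C is not "provable", so D.
print(agent(stub_oracle(5, 10), my_action_is, "C is optimal"))  # D
```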
Now let's see how A fares on a symmetric Prisoner's Dilemma with itself:
Proposition 3. Let U be the payoff matrix of a symmetric Prisoner's Dilemma, with both players played by the agent A. Then, assuming consistency of PA + Con(PA), we have A(⌜U⌝, 1) = A(⌜U⌝, 2) = C.
Proof.
(This proof uses Löb's theorem, and that makes it confusing. Vladimir Slepnev points out that Löb's theorem is not really necessary here; a simpler proof appears in the comments.)
Looking at the definition of A, we see that, as long as neither player plays chicken, player i cooperates if and only if cooperating is provably optimal for player i. By Lemma 1, (1) and (3), player 1 does not play chicken. Similarly, neither does player 2. So it suffices to show that cooperating is provably optimal for both players. Applying Lemma 1(2) and (4), Peano arithmetic proves that if the statement expressing the optimality of mutual cooperation is provable, then it is true. By Löb's theorem, Peano arithmetic therefore proves that statement outright. By the definition of A, its provability is exactly what the third line of each player's case definition requires. So, assuming consistency of PA + Con(PA), we conclude that A(⌜U⌝, 1) = A(⌜U⌝, 2) = C.
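For reference, the form of Löb's theorem invoked in the proof above; the statement is standard, with P standing for the relevant arithmetic sentence.

```latex
\documentclass{article}
\usepackage{amssymb}
\begin{document}
% Loeb's theorem: if PA proves that provability of P implies P,
% then PA proves P itself.
\[
\text{If }\ \mathrm{PA} \vdash (\Box P \rightarrow P),
\ \text{ then }\ \mathrm{PA} \vdash P .
\]
\end{document}
```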
The definition of A treats the choices C and D differently; so it is worth checking that A behaves correctly in the Prisoner's Dilemma when the effects of C and D are switched:
Proposition 4. Let U be the payoff matrix of Proposition 3 with the roles of C and D interchanged. Then, assuming consistency of PA + Con(PA), we have A(⌜U⌝, 1) = A(⌜U⌝, 2) = D.
A proof appears in the comments.
There are a number of questions one can explore with this formalism: What is the correct generalization of A that can choose between n actions, and not just two? How about infinitely many actions? What about theories other than Peano arithmetic? How do we accommodate payoffs that are real numbers? How do we make agents that can reason under uncertainty? How do we make agents that are computable algorithms rather than arithmetic formulas? How does A fare on a Prisoner's Dilemma with an asymmetric payoff matrix? In a two-person game where the payoff to player 1 is independent of the behavior of player 2, can player 2 deduce the behavior of player 1? What happens when we replace the third line of the definition of A with the corresponding condition for D? What is a (good) definition of "decision problem"? Is there a theorem that says that our agent is, in a certain sense, optimal?
1Every n-ary function F in this article is defined by a formula φ with n + 1 free variables such that Peano arithmetic proves that for all x_1, …, x_n there is exactly one y with φ(x_1, …, x_n, y), and F(x_1, …, x_n) = y holds exactly when φ(x_1, …, x_n, y) does. By a standard abuse of notation, when the name of a function like F appears in a formula of arithmetic, what we really mean is the formula φ that defines it.
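As a small illustration of this convention (the names F and φ_F here are placeholders of mine):

```latex
\documentclass{article}
\usepackage{amssymb}
\begin{document}
% The footnote's convention: an atomic statement about the function F
% abbreviates an instance of its defining formula phi_F, which PA proves
% to be total and single-valued.
\[
F(x_1,\dots,x_n) = y
\quad\text{abbreviates}\quad
\varphi_F(x_1,\dots,x_n,y),
\qquad
\mathrm{PA} \vdash \forall \vec{x}\, \exists !\, y\; \varphi_F(\vec{x}, y).
\]
\end{document}
```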
Yes, you're right. Looking at the agent function, the relevant rule seems to be defined for the sole purpose of allowing the agent to cooperate in the event that cooperation is provably better than defecting. Taking this out of context, it allows the agent to choose one of the actions it can take if it is provably better than the other. It seems like the simple fix is just to add this: a line that outputs D when defecting is provably better than cooperating.
If you alter the agent definition by replacing the third line with this and the fourth with C, you have an agent that is like the original one but can be used to prove Conjecture 4 but not Proposition 3, right? So if you instead add this line to the current agent definition and simply leave D as the last line, then if neither the old third line nor this new one holds, we simply pick D, which is neither provably better nor provably worse than C, so that's fine. It seems that we can also prove Conjecture 4 by using the new line in lieu of the old one, which should allow us to use a proof that is essentially symmetric to the proof of Proposition 3.
Does that seem like a reasonable fix?
So you modify the agent so that line 3 says "cooperating is provably better than defecting" and line 4 says "defecting is provably better than cooperating". But line 3 comes before line 4, so in proving Conjecture 4 you'd still have to show that the condition in line 3 does not obtain. Or you could prove that line 3 and line 4 can't both obtain; I haven't figured out exactly how to do this yet.