@orthonormal: Thanks for pointing me to these posts during our dinner last Thursday.
I haven't checked the proofs in detail, and I didn't read all previous posts in the series. I'm a bit confused about a couple of things, and I'd be happy if someone could enlighten me.
1) What is the purpose of the i-loops?
In your definition of FairBot and others, you check for all i=1,...,N whether some sentence is provable in PA+i. However, if a sentence is provable in PA+i, then it is also provable in PA+(i+1). For this reason, I don't understand why you loop over i=1,...,N instead of just checking provability in PA+N directly.
2) FairBot and the search procedure can be simplified
The following holds for all agents X and Y:
Search(X,Y) fails if and only if, for all j,k in {C,D}, the sentence "j = X(Y) and k = Y(X)" is not provable in PA+N.
If Search(X,Y) does not fail, then Search(X,Y) outputs ( X(Y) , Y(X) ).
This means you could define the search procedure simply as follows:
Search'(X,Y) := ( X(Y) , Y(X) )
This procedure Search' has the same output as Search whenever Search does not fail, and Search' is meaningful on strictly more inputs. In particular, Search' halts and outputs the correct answer if and only if X halts on input Y and Y halts on input X.
Then you can redefine Search(X,Y) equivalently as follows:
If "X(Y) halts and Y(X) halts" is provable in PA+N then return ( X(Y) , Y(X) );
Otherwise, return False.
Formulated this way, it becomes apparent that you are trying to find proofs for the fact that certain Turing machines halt on certain inputs; the actual output is not so important and can be computed outside the proof system.
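To make this concrete, here is a minimal Python sketch of the reformulation. Agents are modeled as callables returning "C" or "D", and provable_both_halt is a hypothetical placeholder for the PA+N proof search (which, of course, cannot be implemented in general):

def provable_both_halt(X, Y):
    # Hypothetical placeholder: does PA+N prove "X(Y) halts and Y(X) halts"?
    # Not implementable in general; it only fixes the interface of this sketch.
    raise NotImplementedError

def search_prime(X, Y):
    # Search'(X,Y) := (X(Y), Y(X)); halts iff X halts on Y and Y halts on X.
    return (X(Y), Y(X))

def search(X, Y):
    # Reformulated Search: the proof system is used only to certify halting;
    # the actual outputs are computed outside of it.
    if provable_both_halt(X, Y):
        return search_prime(X, Y)
    return False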
Similarly, FairBot(X) can be simplified as follows:
If "X(FairBot) halts" is provable in PA+N, then return X(FairBot).
Otherwise return D.
This nicely matches the intuition that we try to understand our opponent in interaction with us: if we succeed, we can predict what they will do and match their action; if we fail to understand them, we are clueless and have to defect.
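In the same style, a minimal Python sketch of this simplified FairBot (again with a hypothetical, uncomputable provability stub):

def provable_halts_against_fairbot(X):
    # Hypothetical placeholder: does PA+N prove "X(FairBot) halts"?
    raise NotImplementedError

def fair_bot(X):
    # If we can prove that X halts against us, simulate it and match its
    # action; otherwise we are clueless and defect.
    if provable_halts_against_fairbot(X):
        return X(fair_bot)
    return "D"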
3) On a related note, what is the utility if we don't halt?
Maybe your concern is that we should halt even if we play against the Turing machine that never halts? To me it seems that a prisoner who can't decide what to do effectively outputs "Cooperate": the guards will only wait a finite amount of time, and from a prisoner that does not halt they never extract an explicit D.
In the model in which not halting is the same as cooperating, the trivial bot below is optimally fair:
TrivialFairBot(X):
return X(TrivialFairBot). (i.e., simply simulate your opponent and match their action)
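As a Python sketch, just to make the self-reference explicit:

def trivial_fair_bot(X):
    # Simulate the opponent against ourselves and match their action.
    return X(trivial_fair_bot)

# Note that trivial_fair_bot(trivial_fair_bot) recurses forever (in Python,
# until a RecursionError); under "not halting = Cooperate" this is mutual C.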
Based on the fact that you're doing something more complicated, I assume that you want to forbid not halting; let's say we model this such that every non-halting player gets a utility of -∞. Then it is a bit weird that we have to halt while our opponent is allowed to run into an infinite loop: it seems to give malicious opponents an unfair advantage if they're allowed to threaten to go into an infinite loop.
4) Why allow these PA oracles?
I don't understand why you forbid your agents to run into an infinite loop, but at the same time allow them to have access to uncomputable PA+N oracles. Or are you only looking at proofs of a certain bounded length? If so, what's the motivation for that?
5) How do you define "regrets" formally?
I haven't checked this, but it seems that, by diagonalization arguments, every possible agent will lose against infinitely many opponents. It further seems that, for any given agent X, you can always find an opponent Y such that adding a hard-wired rule "if your opponent is this particular agent Y, then do C/D" (whichever is better) improves the performance of X (note that this can't be done for all Y, since Y could first check whether X has a hard-wired rule like that). In particular, I can imagine that there is an infinite sequence X_1, X_2, ... of agents such that X_{i+1} performs at least as well as X_i on all inputs, and performs better on at least one input. This would mean that there is no "Pareto-optimum", since there's always some input on which our performance can be improved.
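The hard-wiring construction I have in mind is the following sketch (in a real tournament, the check would compare source code rather than object identity):

def add_hardwired_rule(X, Y, better_action):
    # Build X' that behaves exactly like X, except against the fixed
    # opponent Y, where it plays the hand-picked better_action ("C" or "D").
    def X_prime(opponent):
        if opponent is Y:
            return better_action
        return X(opponent)
    return X_prime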
6) What's the motivation for studying a decision theory that is inherently inefficient, almost uncomputable, and easily exploitable by some opponents?
If we're trying to model a real decision theory, the two agents arguably shouldn't have access to a lot of time. Would the following resource-bounded version make sense? For some function t(n), the machines are forced to halt and produce their output within t(n) computation steps, where n = |X| + |Y| is the length of the binary encoding of the machines. In this post, you're getting decision theories that are at least exponential in n. I'd be interested to see whether we can achieve something when t(n) is a polynomial.
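As a crude Python sketch of such a referee (counting interpreter trace events instead of Turing machine steps, and treating an exhausted budget as not halting):

import sys

def run_with_budget(X, Y, t):
    # Run X against Y for at most t "steps" (here: Python trace events);
    # exceeding the budget t(n) is treated the same as not halting.
    steps = 0
    def tracer(frame, event, arg):
        nonlocal steps
        steps += 1
        if steps > t:
            raise TimeoutError("step budget exhausted")
        return tracer
    sys.settrace(tracer)
    try:
        return X(Y)
    except TimeoutError:
        return None
    finally:
        sys.settrace(None)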
The time-bounded setting has some advantages: I think it'd model the problem more realistically, and it prevents an arms race of power since no player is allowed to take more than t(n) time. On the other hand, things get more difficult (and more interesting), since you can inspect your opponent's source code but don't have enough time to fully simulate your opponent.