There is a problem with the proof here and I have to think about whether I can fix it. Thanks to vi21maobk9vp for pointing it out! I have posted a new and hopefully correct proof attempt. Thanks again to vi21maobk9vp!
In his talk on open problems in Friendly AI, Eliezer's first question is how, given Löb's theorem, an AI can replace itself with a better expected utility maximizer that believes in as much mathematics as the original AI. I know exactly one trick for that sort of problem, so I decided to try that on a toy variant. To my surprise, it more or less just worked. Therefore:
Professor Quirrell proposes a game. You start with a score of one. Professor Quirrell moves first, by choosing a computer program and showing you its source code. You then have three options: Take your winnings; double down; or self-destruct.
If you take your winnings, the game ends, and your score is converted to Quirrell points.
If you self-destruct, the game ends, your score is lost, you'll be sent to bed without dinner, you'll lose 150 House points, Rita Skeeter will write a feature alleging that you're a Death Eater, and Professor Quirrell will publicly critique your performance. You are advised not to pick this option.
If you double down, your score doubles, and you advance to the next round. Professor Quirrell again moves first by choosing a computer program. Then, it's your turn—except that this time, you don't get to choose your move yourself: instead, it'll be chosen by Professor Quirrell's program from the previous round.
Professor Quirrell will endeavor to present an educational sequence of programs.
*
The idea is that Quirrell will make you self-destruct if he possibly can, so you must only accept programs that don't self-destruct, that accept only programs that don't self-destruct, that accept only programs that only accept—etc. That's supposed to capture one essential aspect of Eliezer's problem, namely how to make sure that a proposed rewrite doesn't destroy your values, while ignoring the complications due to a different aspect, namely comparing the expected values before and after the rewrite. In Quirrell's game, there are safe and unsafe rewrites, and you should always double down on a safe one and take your winnings when presented with an unsafe one. Let's look at some interesting programs that we could recognize as safe. [And to deal with the possibility of an infinite sequence of double-downs, let's stipulate a small but positive chance each round that Quirrell will end the game and pay you even if your program chose to double down. ETA: Luke A. Somers points out that this provision isn't necessary.]
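To make the round structure concrete, here is a minimal Python sketch of the game loop. This is my own illustration, not part of Quirrell's rules: `play`, `you`, and `quirrell_programs` are hypothetical names, a program is modeled as a function from the next program to one of the three moves, and the finiteness of the list stands in for the chance that Quirrell ends the game and pays out anyway.

    def play(you, quirrell_programs):
        score = 1
        controller = you        # in the first round, you choose your move yourself
        for p in quirrell_programs:
            move = controller(p)
            if move == "take winnings":
                return score    # game ends; score converted to Quirrell points
            if move == "self-destruct":
                return None     # score lost (plus assorted humiliations)
            score *= 2          # "double down": score doubles, next round begins,
            controller = p      # and p chooses the move from here on
        return score            # Quirrell ends the game and pays out anyway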
Let PA(0) := Peano Arithmetic, and PA(n+1) := PA(n) + for all formulas 'C': "if PA(n) proves 'C', then C". Define AI(n) by
    def AI(n)(p):
        look at the first 3^^^3 theorems of PA(n)
        if one of them says "p is safe":
            double down
        else:
            take winnings
If the current round is controlled by AI(7) and Quirrell hands it AI(6) as input, then AI(7) will double down: PA(7) will conclude that AI(6) will only double down on programs that PA(6) proves to be safe, which implies that they are safe. But if Quirrell hands AI(7) to itself, this reasoning doesn't go through, because PA(7) cannot use "PA(7) proves p is safe" to conclude "p is safe". So even if Quirrell is nice enough to choose AI(n)'s in a decreasing sequence, our windfall will end after a constant number of rounds, because (*sinister drumroll*) AI(0) has run out of math to believe in, so to speak. That's precisely the problem Eliezer explains in his talk.
By quining, we could write an AIQ which will recognize itself as safe, so if Quirrell chooses AIQ over and over again, we'll be doubling down for longer and longer times. But that insight won't help us with writing interesting self-modifying AIs. [ETA: Wei Dai has managed to push the limits of using quining into interesting territory.] Is there something that can use non-specialized reasoning, like the AI(n), to recognize an indefinitely long sequence of variants of itself as safe?
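For concreteness, here is one way an AIQ could actually be written in Python, using the standard quine trick. This is my own sketch: programs are passed around as source strings (as in the game, where Quirrell shows you source code), and the reconstructed own_source is exactly the two definitions below, so that bare two-definition program is the "self" that AIQ recognizes.

    TEMPLATE = 'TEMPLATE = %r\n\ndef AIQ(p):\n    own_source = TEMPLATE %% (TEMPLATE,)\n    return "double down" if p == own_source else "take winnings"\n'

    def AIQ(p):
        own_source = TEMPLATE % (TEMPLATE,)
        return "double down" if p == own_source else "take winnings"

Feeding AIQ its own text returns "double down"; anything else returns "take winnings". Since it never self-destructs and never accepts anything but itself, it is trivially safe.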
*
Define "p is safe for n steps" (p a program) to mean that there is no m, 0 < m <= n, and sequence of programs p=p0, p1, ..., pm such that (a) for 0 < i < m, pi-1 will double down when fed pi, and (b) pm-1 will self-destruct when fed pm. Clearly this can be formalized in the language of Peano Arithmetic.
Now let's extend the language of PA by a constant symbol K, and define PAK := Peano Arithmetic (actually, the natural extension of PA to this new language) + for all formulas 'C' of the base language: "if PA(K) proves 'C', then C". Finally, define AIK by
    def AIK(p):
        look at the first 3^^^3 theorems of PAK
        if one of them says "p is safe for K steps":
            double down
        else:
            take winnings
I claim that AIK is safe, and furthermore, AIK(AIK) will double down, and so will AIK(p) where p is some trivial modification of AIK like changing 3^^^3 to 4^^^^4.
*
Fair warning: now comes the technical section of this post.
Proof sketch. Let inst(n,x) be the meta-level function that takes a sentence or proof x in the extended language and substitutes the numeral n (i.e., the unary number in the language of PA that encodes n) for all occurrences of the constant symbol K. Note that if x is a valid proof in PAK, then inst(n,x) will be a valid proof in PA(n+1). Of course, what statement it is a proof of will depend on n. In particular, if x proves "p is safe for K steps", then inst(n,x) is a PA(n+1) proof of "p is safe for n steps". Since this argument works for all n, p will in fact be safe [and we can formalize this in PAω, if we want to]. This shows that AIK is safe, as claimed.
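As a toy illustration of inst (my own sketch; real substitution operates on the parse trees of formulas and proofs, not on raw strings, so the naive replace below is only a stand-in):

    def numeral(n):
        # the unary numeral S(S(...S(0)...)) with n occurrences of S
        return "S(" * n + "0" + ")" * n

    def inst(n, x):
        # crudely substitute the numeral for n for each occurrence of K
        return x.replace("K", numeral(n))

For example, inst(2, "p is safe for K steps") yields "p is safe for S(S(0)) steps".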
Formalizing the above argument in Peano Arithmetic, and writing instPA(n,x) for the object-level encoding of the meta-level function, we can prove: "For all extended-language sentences 'C', if PAK proves 'C', then for all n, PA(n+1) proves instPA(n,'C')." By substituting K-1 for n, it follows that for all extended-language sentences 'C', PAK proves "If PAK proves 'C', and K>0, then PA(K) proves instPA(K-1,'C')". Now, for 'D' := inst(K-1,'C'), PA can prove that "instPA(K-1,'C') = 'D'"; thus, by virtue of its extra axioms, PAK concludes: "If PAK proves 'C', and K>0, then D".
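In other words, writing □_PAK for the arithmetized provability predicate of PAK (my notation), the schema we obtain is roughly

$$\mathrm{PA}_K \;\vdash\; \Box_{\mathrm{PA}_K}(\ulcorner C \urcorner) \,\wedge\, K > 0 \;\rightarrow\; C[K := K-1],$$

where C[K := K-1] denotes the result of replacing K by K-1 in C, i.e. the D just defined.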
What we must show is that PAK proves "AIK is safe for K steps" through some argument that doesn't actually involve simulating AIK(AIK) (because in that case, we wouldn't expect to find the proof among the first 3^^^3 theorems). First, note that PA already ("quickly") proves "If AIK(p) doubles down, then PAK proves 'p is safe for K steps'". In PAK, it follows that "If AIK(p) doubles down, and K>0, then inst(K-1, 'p is safe for K steps')". But this is just a meta-level expression for the sentence "If AIK(p) doubles down, and K>0, then p is safe for K-1 steps"—and at that point, the reflective magic is over, and we only need to establish by "ordinary" PA reasoning that if AIK will only accept programs that are safe for K-1 steps, then AIK is safe for K steps.
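In the notation of the earlier sketch (DD and Safe as above, together with the directly verifiable fact that AIK itself never self-destructs), that last step is:

$$\mathrm{PA} \;\vdash\; \forall p\,\bigl(\mathrm{DD}(\mathrm{AI}_K, p) \rightarrow \mathrm{Safe}_{K-1}(p)\bigr) \;\rightarrow\; \mathrm{Safe}_K(\mathrm{AI}_K)$$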
Q.E.D.
Note that this argument does not depend on the number of theorems AIK looks at, because it only depends on the fact that if AIK(p) does double down, then there is some PAK proof of p's (K-1)-step safety.
*
The real question is whether this is just a hack or can tell us something about how to approach a solution to the real thing. It could very well be that this is one of the cheap tricks Eliezer and Marcello tried that don't solve the core problem (as Eliezer explains in his talk). Certainly the proof seems to be more tangled with the rounds structure of Quirrell's game than I find elegant. Also, I'm not at all sure that the key proof idea noted in the previous paragraph still works when we go from Quirrell's game to expected utility maximization.
However, as I said at the beginning of this post, I know exactly one trick for dealing with problems of this sort—by which I mean, trying to do something that seems impossible due to a diagonalization proof. It's well-appreciated that you can usually avoid diagonalization by passing from a single collection of things to a hierarchy of things (we can avoid Cantor by concluding that there are multiple infinite cardinalities; Gödel and Löb, by the PA(n) hierarchy; Turing, by a hierarchy of halting oracles; Tarski, by a hierarchy of truth predicates; and so on). It's less well appreciated, but I think true (though fuzzy), that many fields manage to circumvent the effects of diagonalization a bit further by considering objects that in some sense live on multiple levels of the hierarchy at the same time. I'd call that "the parametric polymorphism trick", perhaps.
In this post, we met PAK, which can be interpreted as PA(n), for any n. I haven't tried it in detail, but something very similar should be possible for Tarski's truth predicates. A sufficiently powerful total programming language cannot have a self-interpreter, but you should be able to have a constant symbol K, a single interpreter code file, and a hierarchy of semantics such that according to the K=n+1 semantics, the interpreter implements the K=n semantics. In Church's simple theory of types, you can't apply a function to itself, but in polymorphic variants, you can instantiate (id : α -> α) to (id : (α -> α) -> (α -> α)), so that id(id) makes sense. Set-theoretic models of the untyped lambda calculus need to deal with the fact that if a set S is isomorphic to the function space (S -> T), then S is either empty or has only one element; the usual solution would take me a bit more than one sentence to explain, but it's always struck me as related to what happens in the polymorphic type theories. Looking a bit farther afield, if you're a set theory platonist, if α<β<γ and if the von Neumann levels Vα, Vβ and Vγ are all models of ZFC, then the ZFC proof that "if there is a set model of ZFC, then ZFC is consistent" can be interpreted in Vβ, where it applies to Vα, and it can also be interpreted in Vγ, where it applies to both Vα and Vβ. And so on. It may be that there's a good solution to the AI rewrite problem and it has nothing to do with this type of trick at all, but it seems at least worthwhile to look in that direction.
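The id(id) example can even be written down in Python's optional type annotations, which permit exactly this kind of instantiation. A minimal illustration (my own; `ident` is just a name for id, and a checker such as mypy accepts the program):

    from typing import TypeVar

    A = TypeVar("A")

    def ident(x: A) -> A:
        return x

    # A is instantiated at the type of ident itself, so the
    # self-application typechecks and is again a polymorphic identity.
    same = ident(ident)
    assert same(42) == 42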
[Actually, come to think of it, I do know a second trick for circumventing diagonalization, exemplified by passing from total to partial recursive functions and from two- to three-valued logics, but in logic, that one usually makes the resulting systems too weak to be interesting.]
*
Three more points in closing. First, sorry for the informality of the proof sketch! It would be very much appreciated if people would go through it and point out unclear things/problems/corrections. Also, I'm hoping to make a post at some point that gives a more intuitive explanation for why this works.
Second, provability in PAK in some sense says that a sentence is provable in PA(n), for all n. In particular, PAK is conservative over PA(1), since if PAK proves a sentence C in the base language, then PA(1) proves inst(1,C), which is just C; in some sense, this makes PAK rather weak. If we don't want this, we could make a variant where provability only implies that there is some m such that the sentence is provable in PA(n) for all n>m. To do this, we'd use a trick usually employed to show that there are non-standard models of the natural numbers: add to PAK the axioms "K>1", "K>2", "K>3", and so on. This is consistent by the Compactness Theorem, because any concrete proof can only use a finite number of these axioms. But in this extended system, we can prove anything that we can prove in any PA(n).

Third, I chose the particular definition of PAK because it made my particular proof simpler to write. Looking only at the definitions, I would find it more natural to make PAK conservative over PA(0) by using "if K>0 and PA(K-1) proves 'C', then C".
As I noted in the comments, I think I have a new and better proof, and I'm working on a careful writeup. However, that's going to take a while, so for now I'm posting the introduction, which gives an intuitive overview, because I think this may help in the meantime.
== How to cheat Löb's Theorem: my second try ==
In his open problems talk, Eliezer explains how Löb's theorem prevents you from having a consistent proof system P with an axiom schema asserting that anything P proves is actually true, and asks how we can then "build an AI that could completely rewrite itself, without decreasing the amount of trust it had in math every time it executed that self-rewrite" (18:46).
Yesterday, I posted about an attempt to apply a general trick for avoiding diagonalization problems to a minimal toy version of this problem. Unfortunately, my "proof" was broken. I think I've fixed the problem and made the proof much simpler and more intuitive. (Please check me.) To avoid confusion, note that what I'm proving is slightly different from (but related to) what I did in the previous post.

I'll be more explicit about quoting/unquoting, using notation I'll introduce in a moment. (It has already prevented me, today, from making the same mistake a third time.) However, to sustain you through the schlep of preliminaries, I thought I'd start with an informal summary.
*
Löb's theorem shows that it's inconsistent to assume BAD := Peano Arithmetic + for each statement C, the axiom "if 'C' is provable in BAD, then C". I will extend the language of PA with a constant symbol K, and consider the proof system PPT := Peano Arithmetic in the extended language + for each statement C of the extended language, the axiom "if 'C' is provable in PPT, and K>0, then D", where D is the statement obtained from C by replacing each occurrence of "K" by "(K-1)". [PPT is short for parametric polymorphism trick, version 0.2.]
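In symbols (my notation; Prov_PPT is the arithmetized provability predicate of PPT), the axiom schema is, for each sentence C of the extended language,

$$\mathrm{Prov}_{\mathrm{PPT}}(\ulcorner C \urcorner) \,\wedge\, K > 0 \;\rightarrow\; C[K := K-1],$$

where C[K := K-1] is the D described above.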
I won't just show that PPT is consistent: I will show that it's true (i.e., sound), from which consistency follows. Of course, to do so I will have to say what my strange constant K is supposed to mean. I will show that for any natural number n, all axioms of PPT are true in the standard model of PA if K is interpreted to mean n.
What's the use? In my toy version of the AI rewrite problem, the AI will accept a rewrite proposed by its unreliable heuristics module if PPT proves that after executing "K" rewrites this way, the AI's values will still be preserved. By the above, this implies that any finite number of rewrites will be safe — exactly what you would not have if you used a hierarchy of proof systems, where PA(n+1) knows that things proven in PA(n) are true, as Eliezer explains in his talk. I will show that my AI will be able to accept itself as a safe rewrite, which seems to indicate that PPT is not entirely powerless in the direction we want. (I'm not saying that this somehow solves the whole problem, though, just that it suggests this might be a place worth looking in.)
My soundness proof proceeds by induction. For K=0 (by which I mean that K is interpreted as the number 0), the assertion is trivial, since the axioms of PPT have no content in this case: their hypothesis K>0 is false. Suppose PPT is sound for K=n, and let C be a statement in the extended language that's provable in PPT. By the induction hypothesis, this implies that in the K=n interpretation, C is true. Therefore, for K=(n+1), D is true, where D is the sentence obtained from C by substituting (K-1) for K. This proves the truth of the axiom "If 'C' is provable in PPT, and K>0, then D."
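Schematically (my notation: ⊨_n means truth in the standard model with K interpreted as n, and IH is the induction hypothesis that PPT is sound for K=n):

$$\mathrm{PPT} \vdash C \;\overset{\mathrm{IH}}{\Longrightarrow}\; {\models_n} C \;\Longleftrightarrow\; {\models_{n+1}} C[K := K-1],$$

which is exactly what is needed to verify each axiom under the K=n+1 interpretation.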
Now I'm again unsure whether I can show this. Argh.
ETA: The argument I want to be making is the one from the third paragraph in the proof sketch. The problem is the quantification over p -- I need to show in PPT that "for all p, if PPT proves 'p is safe for K steps', then p is safe for K-1 steps". But the axioms of PPT (v0.2) don't allow for parameters (free variables) in the statement C.