There is a problem with the proof here, and I have to think about whether I can fix it. Thanks to vi21maobk9vp for pointing this out! [ETA: I have posted a new and hopefully correct proof attempt. Thanks again to vi21maobk9vp!]
In his talk on open problems in Friendly AI, Eliezer's first question is how, given Löb's theorem, an AI can replace itself with a better expected utility maximizer that believes in as much mathematics as the original AI. I know exactly one trick for that sort of problem, so I decided to try that on a toy variant. To my surprise, it more or less just worked. Therefore:
Professor Quirrell proposes a game. You start with a score of one. Professor Quirrell moves first, by choosing a computer program and showing you its source code. You then have three options: Take your winnings; double down; or self-destruct.
If you take your winnings, the game ends, and your score is converted to Quirrell points.
If you self-destruct, the game ends, your score is lost, you'll be sent to bed without dinner, you'll lose 150 House points, Rita Skeeter will write a feature alleging that you're a Death Eater, and Professor Quirrell will publicly critique your performance. You are advised not to pick this option.
If you double down, your score doubles, and you advance to the next round. Professor Quirrell again moves first by choosing a computer program. Then, it's your turn—except that this time, you don't get to choose your move yourself: instead, it'll be chosen by Professor Quirrell's program from the previous round.
Professor Quirrell will endeavor to present an educational sequence of programs.
*
The idea is that Quirrell will make you self-destruct if he possibly can, so you must only accept programs that don't self-destruct, that accept only programs that don't self-destruct, that accept only programs that only accept—etc. That's supposed to capture one essential aspect of Eliezer's problem, namely how to make sure that a proposed rewrite doesn't destroy your values, while ignoring the complications due to a different aspect, namely comparing the expected values before and after the rewrite. In Quirrell's game, there are safe and unsafe rewrites, and you should always double down on a safe one and take your winnings when presented with an unsafe one. Let's look at some interesting programs that we could recognize as safe. [And to deal with the possibility of an infinite sequence of double-downs, let's stipulate a small but positive chance each round that Quirrell will end the game and pay you even if your program chose to double down. ETA: Luke A. Somers points out that this provision isn't necessary.]
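For concreteness, here is a minimal Python sketch of the round structure. The toy programs and the pre-chosen finite list of Quirrell's moves are purely illustrative (in the real game, Quirrell chooses each program adaptively), and my_first_move stands in for your own decision in round one.

import random

# Illustrative stand-ins for programs Quirrell might present.
def always_take(p):
    return "take winnings"

def always_double(p):
    return "double down"

def play(quirrell_programs, my_first_move, end_chance=0.01):
    """Simulate one play of Quirrell's game.

    quirrell_programs: the programs Quirrell presents, round by round.
    my_first_move:     decides round one (standing in for your own choice).
    end_chance:        the small chance each round that Quirrell ends the
                       game and pays out even after a double-down.
    """
    score = 1
    decider = my_first_move
    for program in quirrell_programs:
        move = decider(program)
        if move == "take winnings":
            return score
        if move == "self-destruct":
            return 0  # plus assorted humiliations
        # "double down": the score doubles, and *this round's program*
        # will choose the move in the next round.
        score *= 2
        decider = program
        if random.random() < end_chance:
            return score
    return score

# Example: double down twice, then the round-two program (always_take)
# controls round three and takes the winnings; prints 4.
print(play([always_double, always_take, always_double],
           my_first_move=always_double, end_chance=0.0))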
Let PA(0) := Peano Arithmetic, and PA(n+1) := PA(n) + for all formulas 'C': "if PA(n) proves 'C', then C". Define AI(n) by
def AI(n)(p):
    Look at the first 3^^^3 theorems of PA(n).
    if (one of them says "p is safe"):
        double down
    else:
        take winnings
If the current round is controlled by AI(7) and Quirrell hands it AI(6) as input, then AI(7) will double down: PA(7) will conclude that AI(6) will only double down on programs that PA(6) proves to be safe, which implies that they are safe. But if Quirrell hands AI(7) to itself, this reasoning doesn't go through, because PA(7) cannot use "PA(7) proves p is safe" to conclude "p is safe". So even if Quirrell is nice enough to choose AI(n)'s in a decreasing sequence, our windfall will end after a constant number of rounds, because (*sinister drumroll*) AI(0) has run out of math to believe in, so to speak. That's precisely the problem Eliezer explains in his talk.
By quining, we could write an AIQ which will recognize itself as safe, so if Quirrell chooses AIQ over and over again, we'll be doubling down for longer and longer times. But that insight won't help us with writing interesting self-modifying AIs. [ETA: Wei Dai has managed to push the limits of using quining into interesting territory.] Is there something that can use non-specialized reasoning, like the AI(n), to recognize an indefinitely long sequence of variants of itself as safe?
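Here is a toy Python stand-in for such an AIQ. Comparing source text with inspect.getsource is only a cheap substitute for genuine quining (which would embed its own source directly), but it shows the shape of the idea: AIQ doubles down exactly on bit-for-bit copies of itself, and on nothing else.

import inspect

def AIQ(p):
    # Double down only on an exact copy of ourselves, by comparing source
    # text. (A cheap stand-in for quining: a real quine would carry its own
    # source around rather than calling inspect at runtime.)
    if inspect.getsource(p) == inspect.getsource(AIQ):
        return "double down"
    return "take winnings"

def something_else(p):
    return "self-destruct"

print(AIQ(AIQ))             # double down
print(AIQ(something_else))  # take winnings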
*
Define "p is safe for n steps" (p a program) to mean that there is no m, 0 < m <= n, and sequence of programs p=p0, p1, ..., pm such that (a) for 0 < i < m, pi-1 will double down when fed pi, and (b) pm-1 will self-destruct when fed pm. Clearly this can be formalized in the language of Peano Arithmetic.
Now let's extend the language of PA by a constant symbol K, and define PAK := Peano Arithmetic (actually, the natural extension of PA to this new language) + for all formulas 'C' of the base language: "if PA(K) proves 'C', then C". Finally, define AIK by
def AIK(p):
    Look at the first 3^^^3 theorems of PAK.
    if (one of them says "p is safe for K steps"):
        double down
    else:
        take winnings
I claim that AIK is safe, and furthermore, AIK(AIK) will double down, and so will AIK(p) where p is some trivial modification of AIK like changing 3^^^3 to 4^^^^4.
*
Fair warning: now comes the technical section of this post.
Proof sketch. Let inst(n,x) be the meta-level function that takes a sentence or proof x in the extended language and substitutes the numeral n (i.e., the unary number in the language of PA that encodes n) for all occurrences of the constant symbol K. Note that if x is a valid proof in PAK, then inst(n,x) will be a valid proof in PA(n+1). Of course, what statement it is a proof of will depend on n. In particular, if x proves "p is safe for K steps", then inst(n,x) is a PA(n+1) proof of "p is safe for n steps". Since this argument works for all n, p will in fact be safe [and we can formalize this in PAω, if we want to]. This shows that AIK is safe, as claimed.
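To make the substitution step concrete, here is a small Python sketch of inst, naively treating sentences as strings in which the constant symbol appears as the token "K" (a real implementation would of course substitute on syntax trees rather than text; the helper numeral is mine):

def numeral(n):
    # The unary numeral for n in the language of PA: S(S(...S(0)...)).
    term = "0"
    for _ in range(n):
        term = "S(" + term + ")"
    return term

def inst(n, x):
    # Substitute the numeral for n for every occurrence of the constant
    # symbol K in the sentence (or proof) x of the extended language.
    return x.replace("K", numeral(n))

print(inst(2, "p is safe for K steps"))  # p is safe for S(S(0)) steps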
Formalizing the above argument in Peano Arithmetic, and writing instPA(n,x) for the object-level encoding of the meta-level function, we can prove: "For all extended-language sentences 'C', if PAK proves 'C', then for all n, PA(n+1) proves instPA(n,'C')." By substituting K-1 for n, it follows that for all extended-language sentences 'C', PAK proves "If PAK proves 'C', and K>0, then PA(K) proves instPA(K-1,'C')". Now, for 'D' := inst(K-1,'C'), PA can prove that "instPA(K-1,'C') = 'D'"; thus, by virtue of its extra axioms, PAK concludes: "If PAK proves 'C', and K>0, then inst(K-1,C)".
What we must show is that PAK proves "AIK is safe for K steps" through some argument that doesn't actually involve simulating AIK(AIK) (because in that case, we wouldn't expect to find the proof among the first 3^^^3 theorems). First, note that PA already ("quickly") proves "If AIK(p) doubles down, then PAK proves 'p is safe for K steps'". In PAK, it follows that "If AIK(p) doubles down, and K>0, then inst(K-1, 'p is safe for K steps')". But this is just a meta-level expression for the sentence "If AIK(p) doubles down, and K>0, then p is safe for K-1 steps"—and at that point, the reflective magic is over, and we only need to establish by "ordinary" PA reasoning that if AIK will only accept programs that are safe for K-1 steps, then AIK is safe for K steps.
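Schematically, the chain of implications in the two preceding paragraphs can be summarized as follows (my notation: writing \Box_{\mathrm{PA}_K} for "PAK proves", and suppressing the quantifiers over p and 'C'):

\begin{align*}
\mathrm{PA}   &\vdash\ \mathrm{AI}_K(p)\ \text{doubles down} \;\rightarrow\; \Box_{\mathrm{PA}_K}\,\text{``}p\text{ is safe for }K\text{ steps''} \\
\mathrm{PA}_K &\vdash\ \Box_{\mathrm{PA}_K}\,\text{``}C\text{''} \wedge K>0 \;\rightarrow\; \mathrm{inst}(K-1,C) \\
\mathrm{PA}_K &\vdash\ \mathrm{AI}_K(p)\ \text{doubles down} \wedge K>0 \;\rightarrow\; p\ \text{is safe for }K-1\ \text{steps} \\
\mathrm{PA}_K &\vdash\ \mathrm{AI}_K\ \text{is safe for }K\ \text{steps}
\end{align*}

Here the third line is the second applied to 'C' = "p is safe for K steps", and the last line follows by the "ordinary" PA reasoning just described.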
Q.E.D.
Note that this argument does not depend on the number of theorems AIK looks at, because it only depends on the fact that if AIK(p) does double down, then there is some PAK proof of p's (K-1)-step safety.
*
The real question is whether this is just a hack or can tell us something about how to approach a solution to the real thing. It could very well be that this is one of the cheap tricks Eliezer and Marcello tried that don't solve the core problem (as Eliezer explains in his talk). Certainly the proof seems to be more tangled with the rounds structure of Quirrell's game than I find elegant. Also, I'm not at all sure that the key proof idea noted in the previous paragraph still works when we go from Quirrell's game to expected utility maximization.
However, as I said at the beginning of this post, I know exactly one trick for dealing with problems of this sort—by which I mean, trying to do something that seems impossible due to a diagonalization proof. It's well-appreciated that you can usually avoid diagonalization by passing from a single collection of things to a hierarchy of things (we can avoid Cantor by concluding that there are multiple infinite cardinalities; Gödel and Löb, by the PA(n) hierarchy; Turing, by a hierarchy of halting oracles; Tarski, by a hierarchy of truth predicates; and so on). It's less well appreciated, but I think true (though fuzzy), that many fields manage to circumvent the effects of diagonalization a bit further by considering objects that in some sense live on multiple levels of the hierarchy at the same time. I'd call that "the parametric polymorphism trick", perhaps.
In this post, we met PAK, which can be interpreted as PA(n), for any n. I haven't tried it in detail, but something very similar should be possible for Tarski's truth predicates. A sufficiently powerful total programming language cannot have a self-interpreter, but you should be able to have a constant symbol K, a single interpreter code file, and a hierarchy of semantics such that according to the K=n+1 semantics, the interpreter implements the K=n semantics. In Church's simple theory of types, you can't apply a function to itself, but in polymorphic variants, you can instantiate (id : α -> α) to (id : (α -> α) -> (α -> α)), so that id(id) makes sense. Set-theoretic models of the untyped lambda calculus need to deal with the fact that if a set S is isomorphic to the function space (S -> T), then S is either empty or has only one element; the usual solution would take me a bit more than one sentence to explain, but it's always struck me as related to what happens in the polymorphic type theories. Looking a bit farther afield, if you're a set theory platonist, if α<β<γ and if the von Neumann levels Vα, Vβ and Vγ are all models of ZFC, then the ZFC proof that "if there is a set model of ZFC, then ZFC is consistent" can be interpreted in Vβ, where it applies to Vα, and it can also be interpreted in Vγ, where it applies to both Vα and Vβ. And so on. It may be that there's a good solution to the AI rewrite problem and it has nothing to do with this type of trick at all, but it seems at least worthwhile to look in that direction.
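As a tiny illustration of the polymorphic-id point, here is the move in Python's type-hint notation (purely illustrative; the annotations are checked by an external tool such as mypy, not at runtime):

from typing import Callable, TypeVar

A = TypeVar("A")

def ident(x: A) -> A:
    return x

# Instantiate (ident : A -> A) at A = (int -> int); at that instance,
# applying ident to ident itself is well-typed, which is exactly the move
# that plain simple type theory forbids.
ident_at_int_to_int: Callable[[Callable[[int], int]], Callable[[int], int]] = ident

same_ident = ident_at_int_to_int(ident)
assert same_ident(41) == 41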
[Actually, come to think of it, I do know a second trick for circumventing diagonalization, exemplified by passing from total to partial recursive functions and from two- to three-valued logics, but in logic, that one usually makes the resulting systems too weak to be interesting.]
*
Three more points in closing. First, sorry for the informality of the proof sketch! It would be very much appreciated if people would go through it and point out unclear things/problems/corrections. Also, I'm hoping to make a post at some point that gives a more intuitive explanation for why this works.
Second, provability in PAK in some sense says that a sentence is provable in PA(n), for all n. In particular, PAK is conservative over PA(1), since if PAK proves a sentence C in the base language, then PA(1) proves inst(0,C), which is just C; in some sense, this makes PAK rather weak. If we don't want this, we could make a variant where provability says that there is some m such that the sentence is provable in PA(n) for all n>m. To do this, we'd use a trick usually employed to show that there are non-standard models of the natural numbers: add to PAK the axioms "K>1", "K>2", "K>3", and so on. [ETA: This is consistent by the Compactness Theorem, because any concrete proof can only use a finite number of these axioms. But in this extended system, we can prove anything that we can prove in any PA(n).]
Third, I chose the particular definition of PAK because it made my particular proof simpler to write. Looking only at the definitions, I would find it more natural to make PAK conservative over PA(0) by using "if K>0 and PA(K-1) proves 'C', then C".
Right, so we want an AI design that does the right thing "naturally", without a special case for "modify self". It seems to me that UDT may already be such a design, if we had a solution to logical uncertainty (what I called a "math intuition module") to plug into it. When I wrote about logical uncertainty being important for AGI, I had in mind the problem of not being able to prove P!=NP and hence not being able to prove whether EU(search for polynomial solution to 3-SAT) < EU(not search for polynomial solution to 3-SAT). But if we had an AI that could handle that kind of decision problem, why wouldn't it also be able to design and build another AI that uses a proof system that it can't prove to be sound? Is there a reason to think that "AI reflection" isn't just a special case of "logical uncertainty"?
(Of course UDT would probably build successors/copies of itself that also use "math intuition modules" rather than "proof systems", but we can imagine forcing a UDT-AI to choose between a selection of other AIs that are based on different proof systems, in which case it would choose based on an analysis of the trade-off between power and risk of inconsistency/unsoundness, much as a human would make the choice.)
I can't visualize how saying that a result is "uncertain" could make the Löbian issue go away - do you have a concrete visualization for this, and if so, can you sketch it, even if very briefly?
Possibly relevant: No Turing machine can assign finite probability to the halting sequence.