Followup to: Decision Theories: A Semi-Formal Analysis, Part III
UPDATE: As it turns out, rumors of Masquerade's demise seem to have been greatly exaggerated. See this post for details and proofs!
I had the chance, over the summer, to discuss the decision theory outlined in my April post with a bunch of relevantly awesome people. The sad part is, there turned out to be a fatal flaw once we tried to formalize it properly. I'm laying it out here, not with much hope that there's a fix, but because sometimes false starts can be productive for others.
Since it's not appropriate to call this decision theory TDT, I'm going to use a name suggested in one of these sessions and call it "Masquerade", which might be an intuition pump for how it operates. So let's first define some simple agents called "masks", and then define the "Masquerade" agent.
Say that our agent has actions a_1, ..., a_n, and the agent it's facing in this round has actions b_1, ..., b_m. Then for any triple (b_i, a_j, a_k), we can define a simple agent Mask_ijk which takes in its opponent's source code and outputs an action:
def Mask_ijk(opp_src):
    look for proof that Opp(Mask_ijk) = b_i
    if one is found, then output a_j
    otherwise, output a_k
(This is slightly less general than what I outlined in my post, but it'll do for our purposes. Note that there's no need for a_j and a_k to be distinct, so constant strategies fall under this umbrella as well.)
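For concreteness, here is a minimal Python sketch of the mask template. The proof search is abstracted into a prove callback standing in for a bounded search (say, through all proofs of length at most N in some fixed formal system); make_mask, never_prove, and the statement strings are illustrative names of my own, not part of the scheme itself.

# Illustrative sketch only: `prove` stands in for a bounded proof search,
# which is assumed rather than implemented here.
def make_mask(b_i, a_j, a_k, prove, name="Mask"):
    def mask(opp_src):
        # Look for a proof that the opponent plays b_i against this mask.
        if prove(f"{opp_src}({name}) = {b_i}"):
            return a_j  # proof found
        return a_k      # no proof found
    mask.__name__ = name
    return mask

# A stub prover that never finds a proof, just so the sketch runs as written.
def never_prove(statement):
    return False

# Constant strategies are masks with a_j = a_k:
cooperate_bot = make_mask("C", "C", "C", never_prove, name="CooperateBot")
defect_bot = make_mask("C", "D", "D", never_prove, name="DefectBot")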
A key example of such an agent is what we might call FairBot: on a Prisoner's Dilemma, FairBot tries to prove that the other agent cooperates against FairBot, and if it finds such a proof, then it immediately cooperates. If FairBot fails to find such a proof, then it defects. (An important point is that if FairBot plays against itself and both have sufficiently strong deductive capacities, then a short proof of one's cooperation gives a slightly longer proof of the other's cooperation, and thus in the right circumstances we have mutual cooperation via Löb's Theorem.)
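In the mask notation above, FairBot is Mask_ijk with (b_i, a_j, a_k) = (C, C, D). To spell out the Löbian step in compressed form (a sketch, writing $\Box$ for provability in the shared proof system, $C_1$ for "FairBot_1 cooperates against FairBot_2" and $C_2$ for the converse, and assuming for simplicity that the proof searches are idealized rather than length-bounded): each FairBot's code gives

$$\vdash \Box C_2 \to C_1 \qquad \text{and} \qquad \vdash \Box C_1 \to C_2,$$

so

$$\vdash \Box(C_1 \wedge C_2) \to (C_1 \wedge C_2),$$

and Löb's Theorem then yields $\vdash C_1 \wedge C_2$, i.e. mutual cooperation. The length-bounded version works the same way, via the "slightly longer proof" point above.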
The agent Masquerade tries to do better than any individual mask (note that FairBot foolishly cooperates against CooperateBot when it could trivially do better by defecting). My original formulation can be qualitatively described as trying on different masks, seeing which one fares the best, and then running a "sanity check" to see if the other agent treats Masquerade the same way it treats that mask. The pseudocode looked like this:
def Masquerade(opp_src):
    for each (i,j,k), look for proofs of the form "Mask_ijk gets utility u against Opp"
    choose the (i,j,k) corresponding to the largest such u found
    look for proof that Opp(Masquerade) = Opp(Mask_ijk)
    if one is found, then output the same thing as Mask_ijk(Opp)
    otherwise, output a default action
(The default should be something safe like a Nash equilibrium strategy, of course.)
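Here is the same thing in the illustrative Python style of the mask sketch above; prove and best_utility_proof are again assumed stand-ins for bounded proof searches, and the names are mine.

# Illustrative sketch only; reuses the assumed `prove` stub from the mask sketch.
def masquerade(opp_src, masks, prove, best_utility_proof, default_action="D"):
    # Step 1: try on each mask and keep the one with the best provable payoff.
    best_mask, best_u = None, float("-inf")
    for mask in masks:
        u = best_utility_proof(mask, opp_src)  # provable utility of this mask vs. Opp, or None
        if u is not None and u > best_u:
            best_mask, best_u = mask, u
    # Step 2: sanity check -- does Opp treat Masquerade the same way it treats that mask?
    if best_mask is not None and prove(
            f"{opp_src}(Masquerade) = {opp_src}({best_mask.__name__})"):
        return best_mask(opp_src)
    # Otherwise fall back to the safe default (e.g. a Nash equilibrium action).
    return default_action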
Intuitively, when Masquerade plays the Prisoner's Dilemma against FairBot, Masquerade finds that the best utility against FairBot is achieved by some mask that cooperates; then Masquerade's sanity check is trying to prove that FairBot(Masquerade) = C just as FairBot is trying to prove that Masquerade(FairBot) = C, and the whole Löbian circus goes round again. Furthermore, it's intuitive that when Masquerade plays against another Masquerade, the first one notices the proof of the above and finds that the best utility against the other Masquerade is achieved by FairBot; thus both pass to the sanity-check stage trying to imitate FairBot, both seek to prove that the other cooperates against them, and both find the Löbian proof.
So what's wrong with this intuitive reasoning?
Problem: A deductive system can't count on its own consistency!
Let's re-examine the argument that Masquerade cooperates with FairBot. In order to set up the Löbian circle, FairBot needs to be able to prove that Masquerade selects a mask that cooperates with FairBot (like CooperateBot or FairBot). There are nice proofs that each of those masks attains the mutual-cooperation payoff against FairBot, but we also need to be sure that some other mask won't get the very highest (I defect, you cooperate) payoff against FairBot. Now you and I can see that this must be true, because FairBot simply can't be exploited that way. But crucially, FairBot can't deduce its own inexploitability without thereby becoming exploitable (for the same Gödelian reason that a formal system can't prove its own consistency unless it is actually inconsistent)!
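To unpack that Gödelian reason (a rough sketch; the exact statement depends on how the proof searches are formalized, and I'll assume they run in PA and that the opposing agent always halts): if a halting opponent X in fact defects against FairBot, then "X(FairBot) = D" is a true statement about a terminating computation, which PA proves; so a PA-proof of "X(FairBot) = C" would make PA inconsistent. That is,

$$\mathrm{Con}(\mathrm{PA}) \;\to\; \text{no halting opponent gets the (D, C) payoff against FairBot,}$$

and by Gödel's second incompleteness theorem, PA cannot prove $\mathrm{Con}(\mathrm{PA})$. So FairBot's inexploitability is exactly the kind of fact that the same deductive system FairBot and Masquerade are using cannot establish from the inside.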
Now, the caveats to this are important: if FairBot's deductive process is sufficiently stronger than the deductive process that's trying to exploit it (for example, FairBot might have an oracle that can answer questions about Masquerade's oracle, or FairBot might look for proofs up to length 2N while Masquerade only looks up to length N), then it can prove (by exhaustion if nothing else) that Masquerade will select a cooperative mask after all. But since Masquerade needs to reason about Masquerade at this level, this approach goes nowhere. (At first, I thought that having a weaker oracle for Masquerade's search through masks, and a stronger oracle both for each mask and for Masquerade's sanity-check, would solve this. But that doesn't get off the ground: the agent thus defined attains mutual cooperation with FairBot, but not with itself, because the weaker oracle can't prove that it attains mutual cooperation with FairBot.)
Another caveat is the following: FairBot may not be able to rule out the provability of some statement we know is false, but (given a large enough deductive capacity) it can prove that a certain result is the first of its kind in a given ordering of proofs. So if our agents act immediately on the first proof they find, then we could make a version of Masquerade work... as long as each search does find a proof, and as long as that fact is provable by the same deduction system. But there's an issue with this: two masks paired against each other won't necessarily have provable outcomes!
Let's consider the following mask agent, which we'll call AntiFairBot: it searches for a proof that its opponent cooperates against it, and it defects if it finds one; if it doesn't find such a proof, then it cooperates. This is not a particularly sensible agent, but it has one interesting property: if you pit AntiFairBot against FairBot, and the two of them use equivalent oracles, then it takes an oracle stronger than either to deduce what the two of them will do! Thus, Masquerade can't be sure that AntiFairBot won't get the highest payoff against FairBot (which of course it won't) unless it uses a stronger deduction system for the search through masks than FairBot uses for its proof search (which would mean that FairBot won't be able to tell which mask Masquerade picks).
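In the mask notation, AntiFairBot is just the triple (b_i, a_j, a_k) = (C, D, C); in the illustrative Python sketch above (with its assumed stubs), that would read:

# AntiFairBot: defect if the opponent provably cooperates against it, else cooperate.
anti_fair_bot = make_mask("C", "D", "C", never_prove, name="AntiFairBot")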
I tried to fix this by iterating over only some of the masks; after all, there's no realistic opponent against whom AntiFairBot is superior to both FairBot and DefectBot. Unfortunately, at this point I realized two things: in order to play successfully against a reasonable range of opponents on the Prisoner's Dilemma, Masquerade needs to be able to imitate at least both FairBot and DefectBot; and FairBot cannot prove that FairBot defects against DefectBot. (There are variants of FairBot that can do so, e.g. a variant that searches both for proofs of cooperation and proofs of defection and plays symmetrically if it finds one, but such a variant is no longer guaranteed to cooperate against itself!)
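For concreteness, here is one way to write that variant in the same illustrative style as before; the interleaved search order and the names are my own choices, and the only point is that the Löbian argument for self-cooperation no longer goes through for it.

# Illustrative sketch of the "symmetric" FairBot variant: enumerate proofs in a
# fixed order and mirror the opponent's action in the first proof that settles
# it either way; defect if the bounded search finds nothing.
# `proofs_in_order` and `proves(p, statement)` are assumed stand-ins.
def symmetric_fair_bot(opp_src, proofs_in_order, proves):
    for p in proofs_in_order:
        if proves(p, f"{opp_src}(SymmetricFairBot) = C"):
            return "C"  # mirror proved cooperation
        if proves(p, f"{opp_src}(SymmetricFairBot) = D"):
            return "D"  # mirror proved defection
    return "D"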
If there are any problems with this reasoning, or an obvious fix that I've missed, please bring it to my attention; but otherwise, I've decided that my approach has failed drastically enough that it's time to do what Eliezer calls "halt, melt, and catch fire". The fact that Löbian cooperation works is enough to keep me optimistic about formalizing this side of decision theory in general, but the ideas I was using seem insufficient to succeed. (Some variant of "playing chicken with my deductive system" might be a crucial component.)
Many thanks to all of the excellent people who gave their time and attention to this idea, both on and offline, especially Eliezer, Vladimir Slepnev, Nisan, Paul Christiano, Critch, Alex Altair, Misha Barasz, and Vladimir Nesov. Special kudos to Vladimir Slepnev, whose gut intuition on the problem with this idea was immediate and correct.
Assuming the existence of an interpreter that can run the opponent's code (which you seem to be assuming as well), there's not a lot of logic here to formalize.
The dragons only arise when you have two non-trivial processes operating against each other. The issue with your formalized logic arises when it recurses; when the two agents are playing an "I know what you're thinking." "I know that you know what I'm thinking." "I know that you know that I know what you're thinking." game. As long as one of the two agents uses a trivialized process, you can avoid this situation.
By trivialized process, I mean non-self referential.
In formalized pseudocode, where f(x, y) returns "Cooperate" or "Defect", x is the code to be tested (your opponent's code), and y is the code to test with:
y: Cooperate();
main: if (f(x, y) == Cooperate) { Cooperate(); } else { Defect(); }
That would be the whole of the simple reflective agent; the vindictive agent would look thus:
boolean hasFailed=false;
y: Cooperate();
main: if (f(x, y) == Cooperate && !hasFailed) { Cooperate(); } else { Defect(); }
onHasCooperatedWithDefector: hasFailed = true;
It's relatively simple to see that it cooperates against a CooperateBot, which it uses as its signal; agents which cooperate with CooperateBot are considered "Trustworthy." Because it cooperates with CooperateBot, it will cooperate with itself.
Now, you may say that it is throwing away free points by cooperating with CooperateBot. This is true in some sense, but in another, it is avoiding extensive computation by using this methodology, and making its behavior predictable; these lost points are the cost paid for these advantages. If computation costs utility, the relative cheapness of this behavior is important.
The penalty it pays for this behavior is that it cannot simultaneously guarantee that its opponent will defect against a DefectBot; that would introduce an inconsistency. This -appears- to be the issue with your original agent. To potentially resolve that issue, a somewhat better mechanism might be comparing behavior against the next iteration of moves made by a Tit For Tat agent against a cooperative and a defecting bot, which would reward vindictive but trustworthy agents. (A necessary but not sufficient element to avoid inconsistency is that the agent being compared against is not reflective at all - that is, doesn't examine its opponent's source code.) However, I am having trouble proving out that that behavior wouldn't also be inconsistent.
ETA: Why do I claim that self-referential (non-trivial) agents aren't consistent, when opposing each other? For the following reason: Agent A runs f(x,this) on Agent B; agent B runs f(x,this) on Agent A. This becomes f(f(f(f(f(f(f(..., this) - that is, it's infinite recursion. Masquerade solves this by trying out a variety of different "masks" - which makes the reflective agent a subset of Masquerade, using only a single mask. The issue is that Masquerade is designed to deal with other reflective agents; the two agents would mutually defect if Masquerade defects against CooperateBot in the original formulation, which is to say, the addition of masks can make Masquerade -less effective- depending upon the composition of its opponent. Whatever signal different reflective agents use, Masquerade is likely to break, given a sufficiently large number of masks.
Indeed, I do say this. I'm looking to formalize an agent that does the obviously correct things against CooperateBot, DefectBot, FairBot, and itself, without being exploitable by any opponent (i.e. the opponent cannot do better than a certain Nash equilibrium unless the agent does better too). Anything weaker than that simply doesn't interest me.
One reason to care about performance against CooperateBot is that playing correctly against constant strategies is equivalen...