
Robust Cooperation in the Prisoner's Dilemma

68 orthonormal 07 June 2013 08:30AM

I'm proud to announce the preprint of Robust Cooperation in the Prisoner's Dilemma: Program Equilibrium via Provability Logic, a joint paper with Mihaly Barasz, Paul Christiano, Benja Fallenstein, Marcello Herreshoff, Patrick LaVictoire (me), and Eliezer Yudkowsky.

This paper was one of three projects to come out of the 2nd MIRI Workshop on Probability and Reflection in April 2013, and had its genesis in ideas about formalizations of decision theory that have appeared on LessWrong. (At the end of this post, I'll include links for further reading.)

Below, I'll briefly outline the problem we considered, the results we proved, and the (many) open questions that remain. Thanks in advance for your thoughts and suggestions!

Background: Writing programs to play the PD with source code swap

(If you're not familiar with the Prisoner's Dilemma, see here.)

The paper concerns the following setup, which has come up in academic research on game theory: say that you have the chance to write a computer program X, which takes in one input and returns either Cooperate or Defect. This program will face off against some other computer program Y, but with a twist: X will receive the source code of Y as input, and Y will receive the source code of X as input. And you will be given your program's winnings, so you should think carefully about what sort of program you'd write!

Of course, you could simply write a program that defects regardless of its input; we call this program DefectBot, and call the program that cooperates on all inputs CooperateBot. But with the wealth of information afforded by the setup, you might wonder if there's some program that might be able to achieve mutual cooperation in situations where DefectBot achieves mutual defection, without thereby risking a sucker's payoff. (Douglas Hofstadter would call this a perfect opportunity for superrationality...)

Previously known: CliqueBot and FairBot

And indeed, there's a way to do this that's been known since at least the 1980s. You can write a computer program that knows its own source code, compares it to the input, and returns C if and only if the two are identical (and D otherwise). Thus it achieves mutual cooperation in one important case where it intuitively ought to: when playing against itself! We call this program CliqueBot, since it cooperates only with the "clique" of agents identical to itself.
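Here's a minimal sketch of CliqueBot in Python-flavored pseudocode (a toy illustration, not code from the paper); MY_SOURCE is assumed to hold a quine-style copy of the program's own text:

def CliqueBot(opponent_source):
    # Cooperate only on an exact textual match with our own source code;
    # MY_SOURCE stands for a string constant containing this very program.
    if opponent_source == MY_SOURCE:
        return C
    return D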

There's one particularly irksome issue with CliqueBot, and that's the fragility of its cooperation. If two people write functionally analogous but syntactically different versions of it, those programs will defect against one another! This problem can be patched somewhat, but not fully fixed. Moreover, mutual cooperation might be the best strategy against some agents that are not even functionally identical, and extending this approach requires you to explicitly delineate the list of programs that you're willing to cooperate with. Is there a more flexible and robust kind of program you could write instead?

As it turns out, there is: in a 2010 post on LessWrong, cousin_it introduced an algorithm that we now call FairBot. Given the source code of Y, FairBot searches for a proof (of less than some large fixed length) that Y returns C when given the source code of FairBot, and then returns C if and only if it discovers such a proof (otherwise it returns D). Clearly, if our proof system is consistent, FairBot only cooperates when that cooperation will be mutual. But the really fascinating thing is what happens when you play two versions of FairBot against each other. Intuitively, it seems that either mutual cooperation or mutual defection would be stable outcomes, but it turns out that if their limits on proof lengths are sufficiently high, they will achieve mutual cooperation!

The proof that they mutually cooperate follows from a bounded version of Löb's Theorem from mathematical logic. (If you're not familiar with this result, you might enjoy Eliezer's Cartoon Guide to Löb's Theorem, which is a correct formal proof written in much more intuitive notation.) Essentially, the asymmetry comes from the fact that both programs are searching for the same outcome, so that a short proof that one of them cooperates leads to a short proof that the other cooperates, and vice versa. (The opposite is not true, because the formal system can't know it won't find a contradiction. This is a subtle but essential feature of mathematical logic!)
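As a sanity check on those claims, here's a small runnable toy model (an illustration of the idea, not code from the paper). It replaces genuine bounded proof search with a cruder staged rule, "a statement counts as provable at stage n exactly when it held at every earlier stage", but for these simple agents it reproduces the FairBot results discussed here: mutual cooperation with itself, defection against DefectBot, and (as the next section laments) cooperation with CooperateBot.

def fairbot(opp_trace):
    # Cooperate at stage n iff the opponent cooperated at every stage below n;
    # all([]) is True, which plays the role of the vacuous base case.
    return all(opp_trace)

def defectbot(opp_trace):
    return False  # never cooperates

def cooperatebot(opp_trace):
    return True   # always cooperates

def play(agent_x, agent_y, stages=10):
    # Evaluate both agents stage by stage; for these agents the values
    # stabilize quickly, and the last stage gives the outcome.
    x_trace, y_trace = [], []
    for _ in range(stages):
        x_now = agent_x(y_trace)
        y_now = agent_y(x_trace)
        x_trace.append(x_now)
        y_trace.append(y_now)
    return tuple('C' if c else 'D' for c in (x_trace[-1], y_trace[-1]))

print(play(fairbot, fairbot))       # ('C', 'C'): the Löbian mutual cooperation
print(play(fairbot, defectbot))     # ('D', 'D')
print(play(fairbot, cooperatebot))  # ('C', 'C')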

Generalization: Modal Agents

Unfortunately, FairBot isn't what I'd consider an ideal program to write: it happily cooperates with CooperateBot, when it could do better by defecting. This is problematic because in real life, the world isn't separated into agents and non-agents, and any natural phenomenon that doesn't predict your actions can be thought of as a CooperateBot (or a DefectBot). You don't want your agent to be making concessions to rocks that happened not to fall on it. (There's an important caveat: some things have utility functions that you care about, but don't have sufficient ability to predicate their actions on yours. In that case, though, it wouldn't be a true Prisoner's Dilemma if your values actually prefer the outcome (C,C) to (D,C).)

However, FairBot belongs to a promising class of algorithms: those that decide on their action by looking for short proofs of logical statements that concern their opponent's actions. In fact, there's a really convenient mathematical structure that's analogous to the class of such algorithms: the modal logic of provability (known as GL, for Gödel-Löb).

So that's the subject of this preprint: what can we achieve in decision theory by considering agents defined by formulas of provability logic?


Compromise: Send Meta Discussions to the Unofficial LessWrong Subreddit

-4 orthonormal 23 April 2013 01:37AM

After a recent comment thread degenerated into an argument about trolling, moderation, and meta discussions, I came to the following conclusions:

  1. Meta conversations are annoying to stumble across; I'd rather not see them unless I think they're important, and I think other people mostly feel the same way. Moreover, moderators can't easily ignore those conversations when they encounter them, because they're usually attacks on the moderators themselves; and people can't simply avoid encountering them on a regular basis without avoiding LW altogether. This is a perfect recipe for a flamewar taking over Top Comments even when most people don't care that much.
  2. Officially banning all meta conversations, however, is a bad precedent, and I don't want LW to do that.

Ideally, Less Wrong would implement a separate "META" area (so that people can read the regular area for all the object-level discussions, and then sally into the meta area only when they're ready). After talking to Luke (who also wants this), though, it seems clear that nobody is able to implement it very soon. So as a stopgap measure, I'm personally going to start doing the following, and I hope you join me:

Whenever a conversation starts getting bitterly meta in a thread that's not originally about a LW site meta issue, I'm going to tell people to start a thread on the LW Uncensored Reddit Thread instead. Then I'm going to downvote anyone who continues the meta war on the original thread.

I know it's annoying to send people somewhere that has a different login system, but as far as I can tell it's the best fix we currently have. Since some meta conversations are important, I'm not going to punish people for linking to meta thread discussions that they think are significant; the relevant place for those links is usually the Open Thread. I don't want LessWrong to be a community devoted to arguing about the mechanics of LessWrong, so that's my suggestion.

Thoughts? (And yes, this thread is obviously open to meta discussion. I'm hopefully doing something constructive about the problem, instead of just complaining about it, though.)

EDIT: Changed the link to the uncensored thread more specifically, at Luke's request; originally I linked to the general LW subreddit, which is more heavily moderated.

Welcome to Less Wrong! (5th thread, March 2013)

27 orthonormal 01 April 2013 04:19PM
If you've recently joined the Less Wrong community, please leave a comment here and introduce yourself. We'd love to know who you are, what you're doing, what you value, how you came to identify as a rationalist or how you found us. You can skip right to that if you like; the rest of this post consists of a few things you might find helpful. More can be found at the FAQ.

(This is the fifth incarnation of the welcome thread; once a post gets over 500 comments, it stops showing them all by default, so we make a new one. Besides, a new post is a good perennial way to encourage newcomers and lurkers to introduce themselves.)


Robin Hanson's Cryonics Hour

24 orthonormal 29 March 2013 05:20PM

I'm writing to recommend something awesome to anyone who's recently signed up for cryonics (and to the future self of anyone who's about to do so). Robin Hanson has a longstanding offer that anyone who's newly signed up for cryonics can have an hour's discussion with him on any topic, and I took him up on that last week.

I expected to have a fascinating and wide-ranging discussion on various facets of futurism. My expectations were exceeded. Even if you've been reading Overcoming Bias for a long time, talking with Robin is an order of magnitude more stimulating/persuasive/informative than reading OB or even watching him debate someone else, and I'm now reconsidering my thinking on a number of topics as a result.

So if you've recently signed up, email Robin; and if you're intending to sign up, let this be one more incentive to quit procrastinating!

Relevant links:

The LessWrong Wiki article on cryonics is a good place to start if you have a bunch of questions about the topic.

If you want to argue about whether signing up for cryonics is a good idea, two good and relatively recent threads on that subject are under the posts on A survey of anti-cryonics writing and More Cryonics Probability Estimates.

And if you are cryocrastinating (you've decided that you should sign up for cryonics, but you haven't yet), here's a LW thread about taking the first step.

Does My Vote Matter?

18 orthonormal 05 November 2012 01:23AM

(Cross-posted from elsewhere; I thought some readers here might like it too. Please excuse the unusually informal style.)

Short answer: Actually, yes!

Slightly longer answer: Yes, if the election is close. You'll never get to know that your vote was decisive, but one vote can substantially change the odds on Election Day nonetheless. Even if the election is a foregone conclusion (or if you don't care about the major candidates), the same reasoning applies to third parties: there are thresholds that really matter to them, and if they reach those now, they have a significantly better chance in the next election. And finally, local elections matter in the long run just as state or national elections do. So, in most cases, voting is rational if you care about the outcome.

Full answer: Welcome! This is a nonpartisan analysis, written by a math PhD, of when and how a single vote matters in a large election. I've got a table of contents below; please skip to whatever section interests you most. And feel free to share this!

Sections:

1. How can a single vote matter in a huge election?

2. What if I know it's not going to be close?

3. Do local elections matter?

In what follows, I'm going to be assuming an American-style voting system (first-past-the-post, for you voting-system buffs), but most of what I say carries over to other voting systems found around the world.

1. How can a single vote matter in a huge election?

To answer this, let's imagine a different voting system. In the land of Erewhon, voters cast their ballots for president just as they do here; but instead of decreeing that the candidate with the most votes is the winner, each vote is turned into a lottery ball, and one is chosen at random to determine the next president.

While this system has its drawbacks (they get fringe candidates elected every so often, and they've had to outlaw write-in campaigns to prevent every voter from simply voting themselves in), the citizens of Erewhon agree on its main advantage: every one of them knows that their vote counts, that they increase the chance of their candidate winning by just that much.

I'm going to argue that if you would bother to vote if an election were held in Erewhon, then you should also vote, for the same reason, if the same election were held the normal way and it looked like the outcome might be close. That is, your effect on the outcome is about the same as it would be in Erewhon whenever the pre-election polls fall within the margin of error (about ±3% for each candidate, or a margin of less than 6% between two candidates, for an electorate of millions). And if the polls are nearly tied, then voting in our system might have the same effect as voting hundreds or even thousands of times in Erewhon's system!

This seems counterintuitive, because we imagine that the votes of everyone else are "locked in" somehow, and that we're only deciding whether to add ours to the pile, in which case the only way that it could matter is in the event that it makes or breaks an exact tie. And not only are those exceedingly rare (the largest such example I could find was a recent Massachusetts election for state representative, in which 13,500 ballots were cast), but if the initial count for a massive election did show a margin of one or zero, we would be headed for an interminable recount. (See, for instance, the 2008 Minnesota Senate election, which was decided by about 300 votes; the extensive recount delayed the winner's inauguration by about six months.) What this messes with, though, isn't your chance of helping decide the election, but your chance of knowing that you helped decide the election.

That is, in modern elections, there's no perfectly sharp boundary between "Candidate A wins" and "Candidate B wins", but a gigantic muddle; and if A is narrowly ahead, then every additional vote for A represents a greater chance that the recount will eventually end, or end earlier, or not be contested at all; while every vote for candidate B means a greater chance that there will be a recount, or that it will go on longer, or that it might turn out victorious for B after all. You'd never know that your vote, which changed the lead from 412 to 413, was the straw that broke the proverbial camel's back and led the other candidate to concede, but at some point that's what happens.

And even more significantly, we can't consider everyone else's votes as "locked in". If you had the ability to re-play Election Day over and over, the chaotic dynamics of everyday life would affect the vote totals. Someone makes it through a light just before it turns red, and so she ends up at the next light behind a bumper sticker that infuriates her, and so she remembers to vote. Or someone else bumps into an old friend, and starts a conversation, and then he realizes he doesn't have enough of his lunch break left to get to the polling place anyhow. It's the butterfly effect in action, and when multiplied by millions of people it leads to fluctuations in the hundreds or thousands. (The size of those fluctuations can be estimated with the Central Limit Theorem.) And what that means is that the margin isn't a fixed number pending your decision to vote; it's a mix of different possibilities, and your decision changes the odds as surely as adding a lottery ball to the Erewhon election does.
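To put rough numbers on that picture (these figures are illustrative, not taken from any real election): model the final margin as approximately normal, and see how much one extra vote moves the probability of winning.

from math import erf, sqrt

def win_probability(expected_margin, stdev):
    # P(margin > 0) when the margin, in votes, is modeled as a normal variable.
    return 0.5 * (1 + erf(expected_margin / (stdev * sqrt(2))))

# Hypothetical election with about a million voters and day-of
# fluctuations on the order of a few thousand votes.
stdev = 3000.0
for expected_margin in [0, 1000, 10000]:
    shift = win_probability(expected_margin + 1, stdev) - win_probability(expected_margin, stdev)
    print(expected_margin, round(shift * 100, 5), "percentage points")

With these made-up numbers, one vote in a dead heat shifts the win probability by roughly a hundredth of a percentage point, comparable to dropping a hundred or so extra lottery balls into Erewhon's urn; widen the expected margin to ten thousand votes and the shift all but vanishes.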

Of course, if one candidate is polling 10 points ahead of the other, then those fluctuations will be irrelevant. If the margin is closer, then it just might have an effect; remember that polls have several sources of error in them, so you can't be exactly sure of the margin before voting actually happens. And in those cases where the actual vote turns out to be extremely close, like Minnesota in 2008, your choice to vote swings those odds just as if you'd poured in thousands of Erewhonian lottery balls with your candidate's name on them. (That is, it changes those odds by a fraction of a percent, but the ability to nudge the odds by that much in an electorate of millions is pretty impressive!)

2. What if I know it's not going to be close?

If the difference between the top candidates is outside the combined margin of error (7 points or more in a big election), then it's true that your vote won't affect the outcome of the current election, but it can matter greatly for the next one. This especially holds when there's a third party you prefer to the current major parties, but there are other relevant reasons to vote even if that's not the case.

First, about third parties: the conventional wisdom that they can't win is an outright falsehood. We tend to forget that Ross Perot nearly became President: he held a six-point lead in June 1992 over both Bush and Clinton, and despite his candidacy crashing and burning later on (for reasons more personal than systemic), he still won nearly 20% of the popular vote in the end. In addition, there are two current Senators who were elected as independents.

For a third party that's not polling well enough yet, votes now matter for future elections, and there are several significant thresholds. If they get rounded up to 1% on Election Day, then they get mentioned in election coverage. (Next, getting rounded up to 2% sounds much more impressive than getting rounded down to 1%.) If they get to 5%, then they can qualify for FEC matching funds, and double their ability to reach voters next time. And 10% is a significant number for media exposure (as well as invitations to the main debates). Higher than 20%, and we're talking about a viable candidate; there's a runaway dynamic in three-candidate races where as soon as the third party looks viable, any voters from the other parties who prefer the third party will suddenly switch (now that they're no longer worried about supporting an obvious loser), and suddenly the third party becomes the front-runner. (This kind of behavior is common in game theory in what's known as a coordination problem.)

Even if there's not a third party you prefer to the main ones, the margin of victory now helps determine what sort of candidates the parties select next time. If one party wins an election by more than 10% (this round number has a psychological hold on people, so it's what usually counts as a "landslide victory"), then next time that party is more likely to nominate a less moderate candidate, while the defeated party is likely to nominate a more moderate candidate. (Also, it goes without saying that you should be voting in primaries as well!)

All of these thresholds can hinge just as easily on a few hundred votes, and so your vote can matter greatly. You just need to care about what happens two, or four, or six years from now.

3. Do local elections matter?

Most certainly. First, your vote affects the odds more strongly in a local election. (And, of course, there can be margins of zero or one votes, without recounts, in local elections!) Secondly, there are a number of important issues decided at the local level, both directly (tax and bond issues) and through representatives (especially local school boards). And finally, many big names in politics start out in local elections: the current President began as an Illinois state senator, and the current Vice President began as a city councilman.

It's more difficult to do research at the local level. Project VoteSmart may have data on some of the candidates (including the voting record of incumbents, public statements, and response to a questionnaire about their positions). Otherwise, your local newspaper may be the best bet (short of going to campaign events yourself). But a little bit of research can go a long way here.

(Addendum: If I'd been composing this for the Less Wrong crowd, I'd have also noted that the decisions of people similar to you should be correlated with yours, which adds another multiplier to the effectiveness of voting. I might like a bumper sticker that says "I'm a timeless decision theorist, therefore I vote!")

(Second addendum: I'm mindful of the dangers of posting this within a few days of a major election. I've done my very best to keep from mindkilling language; I hope you do the same in the comments.)

Decision Theories, Part 3.75: Hang On, I Think This Works After All

23 orthonormal 06 September 2012 04:23PM

Followup to: Decision Theories, Part 3.5: Halt, Melt and Catch Fire, Decision Theories: A Semi-Formal Analysis, Part III

The thing about dead mathematical proofs is that it's practically always worth looting the corpse; sometimes you even find there's still a pulse! That appears to be the case with my recent post lamenting the demise of the TDT-like "Masquerade" algorithm. I think I've got a rigorous proof this time, but I'd like other opinions before I declare that the rumors of Masquerade's demise were greatly exaggerated...

To recap quickly, I've been trying to construct an algorithm that, given the payoff matrix of a game and the source code of its opponent, does some deductions and then outputs a move. I want this algorithm to do the commonsense right things (defect against both DefectBot and CooperateBot, and mutually cooperate against both FairBot and itself), and I want it to do so for simple and general reasons (that is, no gerrymandering of actions against particular opponents, and in particular no fair trying to "recognize itself", since there can be variants of any algorithm that are functionally identical but not provably so within either's powers of deduction). I'd also like it to be "un-exploitable" in a certain sense: it has a default move (which is one of its Nash equilibria), and no opponent can profit against the algorithm by forcing it below that default payoff. If the opponent does as well or better in expected value than it would by playing that Nash equilibrium, then so too does my algorithm.

The revised Masquerade algorithm does indeed have these properties.

In essence, there are two emendations that I needed: firstly, since some possible pairs of masks (like FairBot and AntiFairBot) can't knowingly settle on a fixed point, there's no way to determine what they do without a deductive capacity that strictly exceeds the weaker of them. That's a bad feature to have, so we'll just have to exclude potentially troublemaking masks from Masquerade's analysis. (In the special case of Prisoner's Dilemma I know that including DefectBot and FairBot will suffice; I've got what looks like a good solution in general, as well.)

The second emendation is that FairBot needs to alternate between trying harder to prove its opponent cooperates, and trying harder to prove its opponent defects. (There needs to be an asymmetry, like cooperation proofs going first, to guarantee that when FairBot plays against itself, it finds a Löbian proof of mutual cooperation rather than one of mutual defection.) The reason for this is so that when agents reason about masks, they should be able to find a proof of the mask's action without needing to exceed that mask's powers of deduction. Otherwise we get that arms race again.

This escalation of proof attempts can be represented in terms of proof limits (since there exists a constant C such that for N sufficiently large, a proof that "there are no proofs of X of length less than N" either exists with length less than C^N or not at all), but the simplest way to do this is with the formalism of PA+N. That is, PA is Peano Arithmetic; PA+1 is the formal system with the axioms of Peano Arithmetic and an extra axiom asserting that PA is consistent (that is, if PA proves X, then PA does not prove not-X); PA+2 has those axioms and an extra one asserting that PA+1 is consistent, and so on. (Note that none of these formal systems can prove its own consistency, and for good reason!) In every use, we'll assume that N is a fixed number (anything greater than 4 will work).
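In symbols, the hierarchy just described is (a restatement, nothing new):

$$\mathrm{PA}+0 = \mathrm{PA}, \qquad \mathrm{PA}+(n+1) = (\mathrm{PA}+n) + \mathrm{Con}(\mathrm{PA}+n),$$

where Con(S) is the arithmetical sentence asserting that S proves no contradiction.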

New And Improved Masks

So without further ado, let's define our masks for the Prisoner's Dilemma:

def DefectBot(X):
    return D

def FairBot(X):
    # Escalate through PA, PA+1, ..., PA+(N-1), checking for a proof of
    # cooperation before a proof of defection at each level.
    for i in range(N):
        if "X(FairBot)=C" is provable in PA+i:
            return C
        if "X(FairBot)=D" is provable in PA+i:
            return D
    return D

Lemma 1: For any X, "DefectBot(X)=D" is provable in PA. (Obvious.)

Lemma 2: "FairBot(FairBot)=C" is provable in PA.

Proof: If we label the FairBots separately (after all, they might be logically equivalent but not know they are), we can quickly show that "if PA proves FairBot1(FairBot2)=C, then FairBot2(FairBot1)=C" is provable in PA (and the same with the subscripts reversed). This makes what we might term a Löbian cycle, which will come up a few more times in this post: if we have a formal system S, and "if S proves A, then B" and "if S proves B, then A" are theorems of S, then S indeed proves both A and B. (Proof reserved for a comment; hat tip to Eliezer and co.)
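For reference (the post defers the proof to a comment), here is the standard derivation of that Löbian-cycle fact, writing $\Box$ for provability in S:

\begin{align*}
S &\vdash \Box A \to B && \text{(first hypothesis)} \\
S &\vdash \Box\Box A \to \Box B && \text{(necessitation and distribution)} \\
S &\vdash \Box A \to \Box\Box A && \text{(provable $\Sigma_1$-completeness)} \\
S &\vdash \Box A \to \Box B && \text{(chaining the two lines above)} \\
S &\vdash \Box A \to A && \text{(second hypothesis: } S \vdash \Box B \to A \text{)} \\
S &\vdash A && \text{(Löb's Theorem)} \\
S &\vdash \Box A,\ \text{hence}\ S \vdash B && \text{(necessitation, then the first hypothesis)}
\end{align*}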

Lemma 3: "FairBot(DefectBot)=D" is provable in PA+1.

Proof: Note that this does not work in PA, even though PA can prove "DefectBot(FairBot)=D". Why not? Because PA can't prove that PA doesn't find a proof of "DefectBot(FairBot)=C" before it can look for a proof of defection! However, PA+1 can handle this just fine, because it proves "if PA proves DB(FB)=D, then PA does not prove DB(FB)=C".

Note the weird fact that it takes a stronger formal system to prove a result that's much more intuitive from outside the system!

New and Improved Masquerade

Let's split up the long definition of Masquerade with a helper function.

def Search(X,Y):
    for i in range(N):
        for (j,k) in [(C,C), (C,D), (D,C), (D,D)]:
            if "Y(X)=j and X(Y)=k" is provable in PA+i:
                return (j,k)
    return False

This function, given two agents, simply searches (using increasingly strong formal systems) for a proof of what they do against each other; once it finds one, it halts and returns that result, and if it doesn't find one in time, it announces its failure. We will incorporate this into the definition of Masquerade. (The other function call here is to the payoff matrix, to retrieve the utility U(j,k) of a certain pair of moves.)

def Masquerade(X):
    utility = -∞
    my_move = null
    their_move = null
    for Y in [DefectBot, FairBot]:
        if Search(X,Y) != False:
            (j,k) = Search(X,Y)
            if U(j,k) > utility:
                my_move = j
                their_move = k
                utility = U(j,k)
    # Sanity check: only deviate from the default D if the opponent provably
    # treats Masquerade the same way it treats the best mask found above.
    if utility > U(D,D):
        for i in range(N):
            if "X(Masquerade) = their_move" is provable in PA+i:
                return my_move
    return D

A few things to note: first, this agent actually halts and outputs an action against any opponent... but for it to provably do so in a system below PA+N, Search(X,Y) has to find proofs quickly. This is the reason that my prior attempt didn't work: it had to wait at one point for the old FairBot to run out of time/power before concluding what it did, and that made it impossible for the old FairBot to know what the old Masquerade did. But with the new and improved agents, we get to ground in a fixed number of steps.

For brevity, I'll label DefectBot, FairBot, and Masquerade as DB, FB, and M, respectively.

Lemma 4: "Search(DB,DB)=(D,D)" is provable in PA+1. (Follows from Lemma 1; note that it needs to use PA+1 in order to rule out finding proofs of other action-pairs.)

Lemma 5: "Search(FB,DB)=Search(DB,FB)=(D,D)" is provable in PA+2. (Follows from Lemma 3.)

Lemma 6: "Search(FB,FB)=(C,C)" is provable in PA. (Follows from Lemma 2; since (C,C) is the first one tried, we don't even need to go up to PA+1.)

Lemma 7: "Masquerade(DB)=D" is provable in PA+3.

Proof: Lemmas 4 and 5, plus the fact that PA+3 knows the consistency of PA+2. There is no sanity-check step, since utility=U(D,D) here.

Lemma 8: "Masquerade(FB)=C" and "FB(Masquerade)=C" are provable in PA+3.

Proof: Lemmas 5 and 6, and the consistency of PA+2, imply that when Masquerade arrives at the sanity-check stage, it has the variables set as utility=U(C,C), my_move=C and their_move=C. Thus PA+3 can prove that "if 'FB(M)=C' is provable in PA+3, then M(FB)=C". And of course, "if 'M(FB)=C' is provable in PA+3, then FB(M)=C" is provable in PA+3, since again PA+3 can prove that PA through PA+2 won't have found proofs of contrary conclusions before it gets around to trying to find cooperation in PA+3. Therefore we have the desired Löbian cycle!

Theorem: "Masquerade(Masquerade)=C" is provable in PA+4.

Proof: Lemmas 7 and 8, and the consistency of PA+3, allow PA+4 to prove that when each Masquerade arrives at the sanity-check stage, it has set utility=U(C,C), my_move=C and their_move=C. Thus we achieve the Löbian cycle, and find proofs of mutual cooperation!

Awesome! So, what next?

Well, assuming that I haven't made a mistake in one of my proofs, I'm going to run the same proof for my generalization: given a payoff matrix in general, Masquerade enumerates all of the constant strategies and all of the "mutually beneficial deals" of the FairBot form (that is, masks that hold out the "stick" of a particular Nash equilibrium and the "carrot" of another spot on the payoff matrix which is superior to the "stick" for both players). Then it alternates (at escalating PA+n levels) between trying to prove the various good deals that the opponent could agree to. There are interesting complexities here (and an idea of what bargaining problems might involve).

Secondly, I want to see if there's a good way of stating the general problem that Masquerade solves, something better than "it agrees with commonsense decision theory". The analogy here (and I know it's a fatuous one, but bear with me) is that I've come up with a Universal Turing Machine but not yet the Church-Turing Thesis. And that's unsatisfying.

But before anything else... I want to be really sure that I haven't made a critical error somewhere, especially given my false start (and false halt) in the past. So if you spot a lacuna, let me know!

Decision Theories, Part 3.5: Halt, Melt and Catch Fire

31 orthonormal 26 August 2012 10:40PM

Followup to: Decision Theories: A Semi-Formal Analysis, Part III

UPDATE: As it turns out, rumors of Masquerade's demise seem to have been greatly exaggerated. See this post for details and proofs!

I had the chance, over the summer, to discuss the decision theory outlined in my April post with a bunch of relevantly awesome people. The sad part is, there turned out to be a fatal flaw once we tried to formalize it properly. I'm laying it out here, not with much hope that there's a fix, but because sometimes false starts can be productive for others.

Since it's not appropriate to call this decision theory TDT, I'm going to use a name suggested in one of these sessions and call it "Masquerade", which might be an intuition pump for how it operates. So let's first define some simple agents called "masks", and then define the "Masquerade" agent.

Say that our agent has actions a_1, ..., a_n, and the agent it's facing in this round has actions b_1, ..., b_m. Then for any triple (b_i, a_j, a_k), we can define a simple agent Mask_ijk which takes in its opponent's source code and outputs an action:

def Mask_ijk(opp_src):
    look for a proof that Opp(Mask_ijk) = b_i
    if one is found, then output a_j
    otherwise, output a_k

(This is slightly less general than what I outlined in my post, but it'll do for our purposes. Note that there's no need for a_j and a_k to be distinct, so constant strategies fall under this umbrella as well.)

A key example of such an agent is what we might call FairBot: on a Prisoner's Dilemma, FairBot tries to prove that the other agent cooperates against FairBot, and if it finds such a proof, then it immediately cooperates. If FairBot fails to find such a proof, then it defects. (An important point is that if FairBot plays against itself and both have sufficiently strong deductive capacities, then a short proof of one's cooperation gives a slightly longer proof of the other's cooperation, and thus in the right circumstances we have mutual cooperation via Löb's Theorem.)

The agent Masquerade tries to do better than any individual mask (note that FairBot foolishly cooperates against CooperateBot when it could trivially do better by defecting). My original formulation can be qualitatively described as trying on different masks, seeing which one fares the best, and then running a "sanity check" to see if the other agent treats Masquerade the same way it treats that mask. The pseudocode looked like this:

def Masquerade(opp_src):
    for each (i,j,k), look for proofs of the form "Mask_ijk gets utility u against Opp"
    choose (i,j,k) corresponding to the largest such u found
    look for a proof that Opp(Masquerade) = Opp(Mask_ijk)
    if one is found, then output the same thing as Mask_ijk(Opp)
    otherwise, output a default action

(The default should be something safe like a Nash equilibrium strategy, of course.)

Intuitively, when Masquerade plays the Prisoner's Dilemma against FairBot, Masquerade finds that the best utility against FairBot is achieved by some mask that cooperates, and then Masquerade's sanity-check is trying to prove that FairBot(Masquerade) = C as FairBot is trying to prove that Masquerade(FairBot) = C, and the whole Löbian circus goes round again. Furthermore, it's intuitive that when Masquerade plays against another Masquerade, the first one notices the proof of the above, and finds that the best utility against the other Masquerade is achieved by FairBot; thus both pass to the sanity-check stage trying to imitate FairBot, each seeks to prove that the other cooperates against it, and both find the Löbian proof.

So what's wrong with this intuitive reasoning?

Problem: A deductive system can't count on its own consistency!

Let's re-examine the argument that Masquerade cooperates with FairBot. In order to set up the Löbian circle, FairBot needs to be able to prove that Masquerade selects a mask that cooperates with FairBot (like CooperateBot or FairBot). There are nice proofs that each of those masks attains the mutual-cooperation payoff against FairBot, but we also need to be sure that some other mask won't get the very highest (I defect, you cooperate) payoff against FairBot. Now you and I can see that this must be true, because FairBot simply can't be exploited that way. But crucially, FairBot can't deduce its own inexploitability without thereby becoming exploitable (for the same Gödelian reason that a formal system can't prove its own consistency unless it is actually inconsistent)!

Now, the caveats to this are important: if FairBot's deductive process is sufficiently stronger than the deductive process that's trying to exploit it (for example, FairBot might have an oracle that can answer questions about Masquerade's oracle, or FairBot might look for proofs up to length 2N while Masquerade only looks up to length N), then it can prove (by exhaustion if nothing else) that Masquerade will select a cooperative mask after all. But since Masquerade needs to reason about Masquerade at this level, this approach goes nowhere. (At first, I thought that having a weaker oracle for Masquerade's search through masks, and a stronger oracle both for each mask and for Masquerade's sanity-check, would solve this. But that doesn't get off the ground: the agent thus defined attains mutual cooperation with FairBot, but not with itself, because the weaker oracle can't prove that it attains mutual cooperation with FairBot.)

Another caveat is the following: FairBot may not be able to rule out the provability of some statement we know is false, but (given a large enough deductive capacity) it can prove that a certain result is the first of its kind in a given ordering of proofs. So if our agents act immediately on the first proof they find, then we could make a version of Masquerade work... as long as each search does find a proof, and as long as that fact is provable by the same deduction system. But there's an issue with this: two masks paired against each other won't necessarily have provable outcomes!

Let's consider the following mask agent, which we'll call AntiFairBot: it searches for a proof that its opponent cooperates against it, and it defects if it finds one; if it doesn't find such a proof, then it cooperates. This may not be a very optimal agent, but it has one interesting property: if you pit AntiFairBot against FairBot, and the two of them use equivalent oracles, then it takes an oracle stronger than either to deduce what the two of them will do! Thus, Masquerade can't be sure that AntiFairBot won't get the highest payoff against FairBot (which of course it won't) unless it uses a stronger deduction system for the search through masks than FairBot uses for its proof search (which would mean that FairBot won't be able to tell what mask Masquerade picks).

I tried to fix this by iterating over only some of the masks; after all, there's no realistic opponent against whom AntiFairBot is superior to both FairBot and DefectBot. Unfortunately, at this point I realized two things: in order to play successfully against a reasonable range of opponents on the Prisoner's Dilemma, Masquerade needs to be able to imitate at least both FairBot and DefectBot; and FairBot cannot prove that FairBot defects against DefectBot. (There are variants of FairBot that can do so, e.g. a variant that searches both for proofs of cooperation and proofs of defection and mirrors whichever it finds first, but such a variant is no longer guaranteed to cooperate against itself!)

If there are any problems with this reasoning, or an obvious fix that I've missed, please bring it to my attention; but otherwise, I've decided that my approach has failed drastically enough that it's time to do what Eliezer calls "halt, melt, and catch fire". The fact that Löbian cooperation works is enough to keep me optimistic about formalizing this side of decision theory in general, but the ideas I was using seem insufficient to succeed. (Some variant of "playing chicken with my deductive system" might be a crucial component.)

Many thanks to all of the excellent people who gave their time and attention to this idea, both on and offline, especially Eliezer, Vladimir Slepnev, Nisan, Paul Christiano, Critch, Alex Altair, Misha Barasz, and Vladimir Nesov. Special kudos to Vladimir Slepnev, whose gut intuition on the problem with this idea was immediate and correct.

Posts I'd Like To Write (Includes Poll)

14 orthonormal 26 May 2012 09:25PM

Summary: There are a bunch of posts I want to write; I'd like your help prioritizing them, and if you feel like writing one of them, that would be awesome too!

I haven't been writing up as many of my ideas for Less Wrong as I'd like; I have excuses, but so does everyone. So I'm listing out my backlog, both for my own motivation and for feedback/help. At the end, there's a link to a poll on which ones you'd like to see. Comments would also be helpful, and if you're interested in writing up one of the ideas from the third section yourself, say so!

(The idea was inspired by lukeprog's request for post-writing help, and I think someone else did this a while ago as well.)

Posts I'm Going To Write (Barring Disaster)

These are posts that I currently have unfinished drafts of.

Decision Theories: A Semi-Formal Analysis, Part IV and Part V: Part IV concerns bargaining problems and introduces the tactic of playing chicken with the inference process; Part V discusses the benefits of UDT and perhaps wraps up the sequence. Part IV has been delayed by more than a month, partly by real life, and partly because bargaining problems are really difficult and the approach I was trying turned out not to work. I believe I have a fix now, but that's no guarantee; if it turns out to be flawed, then Part IV will mainly consist of "bargaining problems are hard, you guys".

Posts I Really Want To Write

These are posts that I feel I've already put substantial original work into, but I haven't written a draft. If anyone else wants to write on the topic, I'd welcome that, but I'd probably still write up my views on it later (unless the other post covers all the bases that I'd wanted to discuss, most of which aren't obvious from the capsule descriptions below).

An Error Theory of Qualia: My sequence last summer didn't turn out as well as I'd hoped, but I still think it's the right approach to a physically reductionist account of qualia (and that mere bullet-biting isn't going to suffice), so I'd like to try again and see if I can find ways to simplify and test my theory. (In essence, I'm proposing that what we experience as qualia are something akin to error messages, caused when we try and consciously introspect on something that introspection can't usefully break down. It's rather like the modern understanding of déjà vu.)

Weak Solutions in Metaethics: I've been mulling over a certain approach to metaethics, which differs from Eliezer's sequence and lukeprog's sequence (although the conclusions may turn out to be close). In mathematics, there's a concept of a weak solution to a differential equation: a function that has the most important properties but isn't actually differentiable enough times to "count" in the original formulation. Sometimes these weak solutions can lead to "genuine" solutions, and other times it turns out that the weak solution is all you really need. The analogy is that there are a bunch of conditions humans want our ethical theories to satisfy (things like consistency, comprehensivity, universality, objectivity, and practical approximability), and that something which demonstrably had all these properties would be a "strong" solution. But the failure of moral philosophers to find a strong solution doesn't have to spell doom for metaethics; we can focus instead on the question of what sorts of weak solutions we can establish.

Posts I'd Really Love To See

And then we get to ideas that I'd like to write Less Wrong posts on, but that I haven't really developed beyond the kernels below. If any of these strike your fancy, you have my atheist's blessing to flesh them out. (Let me know in the comments if you want to publicly commit to doing so.)

Living with Rationality: Several people in real life criticize Less Wrong-style rationality on the grounds that "you couldn't really benefit by living your life by Bayesian utility maximization, you have to go with intuition instead". I think that's a strawman attack, but none of the defenses on Less Wrong seem to answer this directly. What I'd like to see described is how it works to actually improve one's life via rationality (which I've seen in my own life), and how it differs from the Straw Vulcan stereotype of decisionmaking. (That is, I usually apply conscious deliberation on the level of choosing habits rather than individual acts; I don't take out a calculator when deciding who to sit next to on a bus; I leave room for the kind of uncertainty described as "my conscious model of the situation is vastly incomplete", etc.)

An Explanation of the Born Probabilities in MWI: This topic might be even better suited to an actual physicist than to a know-it-all mathematician, but I don't see why the Born probabilities should be regarded as mysterious at all within the Many-Worlds interpretation. The universe is naturally defined as a Hilbert space, and the evolution of the wavefunction has a basic L^2 conservation law. If you're going to ask "how big" a chunk of the wavefunction is (which is the right way to compute the relative probabilities of being an observer that sees such-and-such), the only sane answer is going to be the L^2 norm (i.e. the Born probabilities).

Are Mutual Funds To Blame For Stock Bubbles? My opinion about the incentives behind the financial crisis, in a nutshell: Financial institutions caused the latest crash by speculating in ways that were good for their quarterly returns but involved themselves in way too much risk. The executives were incentivized to act in that short-sighted way because the investors wanted short-term returns and were willing to turn a blind eye to that kind of risk. But that's a crazy preference for most investors (I expect it had seriously negative value), so why weren't investors smarter (i.e. why didn't they flee from any company that wasn't clearly prioritizing longer-term expected value)? Well, there's one large chunk of investors with precisely those incentives: the 20% of the stock market that's composed of mutual funds. I'd like to test this theory and think about realistic ways to apply it to public policy. (It goes without saying that I think Less Wrong readers should, at minimum, invest in index funds rather than mutual funds.)

Strategies for Trustworthiness with the Singularity: I want to develop this comment into an article. Generally speaking, the usual methods of making the principal-agent problem work out aren't available; the possible payoffs are too enormous when we're discussing rapidly accelerating technological progress. I'm wondering if there's any way of setting up a Singularity-affecting organization so that it will be transparent to the organization's backers that the organization is doing precisely what it claims. I'd like to know in general, but there's also an obvious application; I think highly of the idealism of SIAI's people, but trusting people on their signaled idealism in the face of large incentives turns out to backfire in politics pretty regularly, so I'd like a better structure than that if possible.

On Adding Up To Normality: People have a strange block about certain concepts, like the existence of a deity or of contracausal free will, where it seems to them that the instant they stopped believing in it, everything else in their life would fall apart or be robbed of meaning, or they'd suddenly incur an obligation that horrifies them (like raw hedonism or total fatalism). That instinct is like being on an airplane, having someone explain to you that your current understanding of aerodynamic lift is wrong, and then suddenly becoming terrified that the plane will plummet out of the sky now that there's no longer the kind of lift you expected. (That is, it's a fascinating example of the Mind Projection Fallacy.) So I want a general elucidation of Egan's Law to point people to.

The Subtle Difference Between Meta-Uncertainty and Uncertainty: If you're discussing a single toss of a coin, then you should treat it the same (for decision purposes) whether you know that it's a coin designed to land heads 3/4 of the time, or whether you know there's a 50% chance it's a fair coin and a 50% chance it's a two-headed coin. Metauncertainty and uncertainty are indistinguishable in that sense. Where they differ is in how you update on new evidence, or how you'd make bets about three upcoming flips taken together, etc. This is a worthwhile topic that seems to confuse the hell out of newcomers to Bayesianism.
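A quick numerical sketch of that distinction, using the two coin setups described above (my own worked example, not from the planned post):

# Case 1: a coin known to land heads 3/4 of the time.
# Case 2: 50% chance of a fair coin, 50% chance of a two-headed coin.

p_heads_case1 = 0.75
p_heads_case2 = 0.5 * 0.5 + 0.5 * 1.0                 # also 0.75: same single-flip bet

p_three_heads_case1 = 0.75 ** 3                       # about 0.42
p_three_heads_case2 = 0.5 * 0.5 ** 3 + 0.5 * 1.0      # 0.5625: a different three-flip bet

# After seeing one head, Case 1 is unchanged, but Case 2 updates
# toward "two-headed" and predicts the next head more strongly.
posterior_two_headed = (0.5 * 1.0) / (0.5 * 1.0 + 0.5 * 0.5)                  # 2/3
p_next_head_case2 = posterior_two_headed + (1 - posterior_two_headed) * 0.5   # about 0.83

print(p_heads_case1, p_heads_case2)
print(p_three_heads_case1, p_three_heads_case2)
print(p_next_head_case2)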

(Originally, this was a link to a poll on these post ideas)

Thanks for your feedback!

UPDATE:

Thanks to everyone who gave me feedback; results are in this comment!

Timeless physics breaks T-Rex's mind [LINK]

22 orthonormal 23 April 2012 07:16PM

From Dinosaur Comics, with a nice shout-out to Eliezer's Timeless Physics post! (Look in the newspost below the comic.)

I can't wait until Ryan North gets around to Newcomb's Problem...

Decision Theories: A Semi-Formal Analysis, Part III

23 orthonormal 14 April 2012 07:34PM

Or: Formalizing Timeless Decision Theory

Previously:

0. Decision Theories: A Less Wrong Primer
1. The Problem with Naive Decision Theory
2. Causal Decision Theory and Substitution

WARNING: The main result of this post, as it's written here, is flawed. I at first thought it was a fatal flaw, but later found a fix. I'm going to try and repair this post, either by including the tricky bits, or by handwaving and pointing you to the actual proofs if you're curious. Carry on!

Summary of Post: Have you ever wanted to know how (and whether) Timeless Decision Theory works? Using the framework from the last two posts, this post shows you explicitly how TDT can be implemented in the context of our tournament, what it does, how it strictly beats CDT on fair problems, and a bit about why this is a Big Deal. But you're seriously going to want to read the previous posts in the sequence before this one.

We've reached the frontier of decision theories, and we're ready at last to write algorithms that achieve mutual cooperation in Prisoner's Dilemma (without risk of being defected on, and without giving up the ability to defect against players who always cooperate)! After two substantial preparatory posts, it feels like it's been a long time, hasn't it?

But look at me, here, talking when there's Science to do...

