WrongBot comments on Rationality Quotes: November 2010 - Less Wrong

5 [deleted] 02 November 2010 08:41PM

Comment author: WrongBot 09 November 2010 09:24:37PM *  1 point [-]

So far as I understand the situation, the SIAI is working on decision theory because they want to be able to create an AI that can be guaranteed not to modify its own decision function.

There are circumstances where CDT agents will self-modify to use a different decision theory (e.g. Parfit's Hitchhiker). If this happens (they believe), it will present a risk of goal-distortion, which is unFriendly.

Put another way: the objective isn't to get two AIs to cooperate, the objective is to make it so that an AI won't need to alter its decision function in order to cooperate with another AI. (Or any other theoretical bargaining partner.)

Does that make any sense? As a disclaimer, I definitely do not understand the issues here as well as the SIAI folks working on them.

Comment author: orthonormal 09 November 2010 09:43:10PM 1 point [-]

I don't think that's quite right- a sufficiently smart Friendly CDT agent could self-modify into a TDT (or higher decision theory) agent without compromising Friendliness (albeit with the ugly hack of remaining CDT with respect to consequences that happened causally before the change).

As far as I understand SIAI, the idea is that decision theory is the basis of their proposed AI architecture, and they think it's more promising than other AGI approaches and better suited to Friendliness content.

Comment author: Perplexed 09 November 2010 09:50:46PM 0 points [-]

I don't think that's quite right- a sufficiently smart Friendly CDT agent could self-modify into a TDT (or higher decision theory) agent without compromising Friendliness (albeit with the ugly hack of remaining CDT with respect to consequences that happened causally before the change).

That sounds intriguing also. Again, a reference to something written by someone who understands it better might be helpful so as to make some sense of it.

Comment author: cousin_it 09 November 2010 11:48:12PM *  1 point [-]

Maybe it would be helpful to you to think of self-modifications and alternative decision theories as unrestricted precommitment. If you had the ability to irrevocably precommit to following any decision rule in the future, which rule would you choose? Surely it wouldn't be pure CDT, because you can tractably identify situations where CDT loses.

Comment author: Perplexed 10 November 2010 12:34:45AM *  1 point [-]

you can tractably identify situations where CDT loses.

"Tractably" is a word that I find a bit unexpected in this context. What do you mean by it?

"Situations where CDT loses." Are we talking about real-world-ish situations here? Situations in which causality applies? Situations in which the agents are free rather than being agents whose decisions have already been made for them by a programmer at some time in the past? What kind of situations do you have in mind?

And what do you mean by "loses"? Loses to who or what? Loses to agents that can foresee their opponent's plays? Agents that have access to information channels not available to the CDT agent? Just what information channels are allowed? Why those, and not others?

ETA: And that "Surely it wouldn't be CDT ... because you can identify ..." construction simply begs for completion with "Surely it would be <my candidate> ... because you can't identify ...". Do you have a candidate? Do you have a proof of "you can't identify situations where it loses". If not, what grounds do you have for criticizing?

Comment author: [deleted] 10 November 2010 02:31:30AM 0 points [-]

CDT still loses to TDT in Newcomb's problem if Omega can predict your actions with better than 50.05% accuracy. You can't get out of this by claiming that Omega has access to unrealistic information channels, because that level of accuracy seems fairly realistic to me.
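The 50.05% figure falls out of the standard Newcomb payoffs (assumed here to be $1,000,000 in the opaque box and $1,000 in the transparent one); a quick check:

```python
# Expected values of one-boxing vs two-boxing in Newcomb's problem,
# assuming the standard payoffs. p is Omega's prediction accuracy.
M, K = 1_000_000, 1_000

def one_box_ev(p):
    # Omega predicted correctly with probability p: opaque box is full.
    return p * M

def two_box_ev(p):
    # Omega predicted correctly with probability p: opaque box is empty.
    return (1 - p) * M + K

# Break-even accuracy: p*M = (1-p)*M + K  =>  p = (M + K) / (2*M)
threshold = (M + K) / (2 * M)
print(threshold)  # 0.5005
```

Above that threshold one-boxing has the higher expected value, which is why the exact dollar amounts barely matter: any sizeable ratio between the two prizes puts the break-even point just above a coin flip.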

Comment author: WrongBot 10 November 2010 02:12:08AM 0 points [-]

Situations in which the agents are free rather than being agents whose decisions have already been made for them by a programmer at some time in the past?

Free from what? Causality? This sounds distressingly like you are relying on some notion of "free will".

(Apologies if I'm misreading you.)

Comment author: Perplexed 10 November 2010 02:53:07AM 0 points [-]

I am relying on a notion of free will.

I understand that every normative decision theory adopts the assumption (convenient fiction if you prefer) that the agent being advised is acting of "his own free will". Otherwise, why bother advising?

Being a compatibilist, as I understand Holy Scripture (i.e. The Sequences) instructs me to be, I see no incompatibility between this "fiction" of free will and the similar fiction of determinism. They model reality at different levels.

For certain purposes, it is convenient to model myself and other "free agents" as totally free in our decisions, but not completely free in carrying out those decisions. For example, my free will ego may decide to quit smoking, but my determined id has some probability of overruling that decision.

Comment author: WrongBot 10 November 2010 03:12:05AM 2 points [-]

Why the distinction between agents which are free and agents which have had their decisions made for them by a programmer, then? Are you talking about cases in which specific circumstances have hard-coded behavioral responses? Every decision every agent makes is ultimately made for it by the agent's programmer; I suppose I'm wondering where you draw the line.

As a side note, I feel very uncomfortable seeing the sequences referred to as inviolable scripture, even in jest. In my head, it just screams "oh my god how could anyone ever be doing it this wrong arghhhhhh."

I'm still trying to figure out what I think of that reaction, and do not mention it as a criticism. I think.

Comment author: Perplexed 10 November 2010 04:29:38AM 1 point [-]

Why the distinction between agents which are free and agents which have had their decisions made for them by a programmer, then? Are you talking about cases in which specific circumstances have hard-coded behavioral responses? Every decision every agent makes is ultimately made for it by the agent's programmer; I suppose I'm wondering where you draw the line.

I make the distinction because the distinction is important. The programmer makes decisions at one point in time, with his own goals and/or utility functions, and his own knowledge of the world. The agent makes decisions at a different point in time, based on different values and different knowledge of the world. A decision theory which advises the programmer is not superior to a decision theory which advises the agent. Those two decision theories are playing different games.

Comment author: nshepperd 10 November 2010 03:14:28AM *  1 point [-]

"Totally free" sounds like too free. You're not free to actually decide at time T to "decide X at time T+1" and then actually decide Y at time T+1, since that is against the laws of physics.

It's my understanding that what goes through your head when you actually decide X at time T+1 is (approximately) what we call TDT. Or you can stick to CDT and not be able to make decisions for your future self.

Comment author: Perplexed 10 November 2010 04:03:34AM 0 points [-]

I upvoted this because it seems to contain a grain of truth, but I'm nervous that someone before me had downvoted it. I don't know whether that was because it actually is just completely wrong about what TDT is all about, or because you went a bit over the top with "against the laws of physics".

Comment author: cousin_it 10 November 2010 12:38:25AM *  0 points [-]

Situations where CDT loses are precisely those situations where credible precommitment helps you, and inability to credibly precommit hurts you. There's no shortage of those in game theory.

Comment author: Perplexed 10 November 2010 12:54:53AM *  1 point [-]

Ok, those are indeed a reasonable class of decisions to consider. Now, you say that CDT loses. Ok, loses to what? And presumably you don't mean loses to opponents of your preferred decision theory. You mean loses in the sense of doing less well in the same situation. Now, presumably that means that both CDT and your candidate are playing against the same game opponent, right?

I think you see where I am going here, though I can spell it out if you wish. In claiming the superiority of the other decision theory you are changing the game in an unfair way by opening a communication channel that didn't exist in the original game statement and which CDT has no way to make use of.

Comment author: cousin_it 10 November 2010 01:04:52AM 1 point [-]

Well, yeah, kind of, that's one way to look at it. Reformulate the question like this: what would CDT do if that communication channel were available? What general precommitment for future situations would CDT adopt and publish? That's the question TDT people are trying to solve.

Comment author: Perplexed 10 November 2010 01:18:11AM 1 point [-]

what would CDT do if that communication channel were available?

The simplest answer that moves this conversation forward would be "It would pretend to be a TDT agent that keeps its commitments, whenever that act of deception is beneficial to it. It would keep accurate statistics on how often agents claiming to be TDT agents actually are TDT agents, and adjust its priors accordingly."

Now it is your turn to explain why this strategy violates the rules, whereas your invention of a deception-free channel did not.

Comment author: orthonormal 09 November 2010 11:18:59PM *  0 points [-]

I'm going to have to refer you to Eliezer's TDT document for that. (If you're OK with starting in medias res, the first mention of this is on pages 22-23, though there it's just specialized to Newcomb's Dilemmas; see pages 50-52 for an example of the limits of this hack. Elsewhere he's argued for the more general nature of the hack.)

Comment author: Perplexed 10 November 2010 12:00:48AM 2 points [-]

Ok thanks.

I'm coming to realize just how much of this stuff derives from Eliezer's insistence on reflective consistency of a decision theory. Given any decision theory, Eliezer will find an Omega to overthrow it.

But doesn't a diagonal argument show that no decision theory can be reflectively consistent over all test data presented by a malicious Omega? Just as there is no enumeration of the reals, isn't there a game which can make any specified rational agent regret its rationality? Omega holds all the cards. He can always make you regret your choice of decision theory.

Comment author: jimrandomh 10 November 2010 12:34:41AM 2 points [-]

Just as there is no enumeration of the reals, isn't there a game which can make any specified rational agent regret its rationality? Omega holds all the cards. He can always make you regret your choice of decision theory.

No. We can ensure that no such problem exists if we assume that (1) only the output decisions are used, not any internals; and (2) every decision is made with access to the full problem statement.

Comment author: bentarm 10 November 2010 01:33:53AM 1 point [-]

I'm not entirely sure what "every decision is made with full access to the problem statement" means, but I can't see how it can possibly get around the diagonalisation argument. Basically, Omega just says "I simulated your decision on problem A, on which your algorithm outputs something different from algorithm X, and give you a shiny black Ferrari iff you made the same decision as algorithm X".

As cousin_it pointed out last time I brought this up, Caspian made this argument in response to the very first post on the Counterfactual Mugging. I've yet to see anyone point out a flaw in it as an existence proof.

As far as I can see the only premise needed for this diagonalisation to work is that your decision theory doesn't agree with algorithm X on all possible decisions, so just make algorithm X "whatever happens, recite the Bible backwards 17 times".
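The construction above can be sketched directly (the problems, agents, and prizes below are illustrative placeholders, not anything from the thread):

```python
# Sketch of the diagonalization: No-mega fixes a reference algorithm X
# and rewards an agent iff the agent's output on problem A matches X's
# output. Any agent that ever disagrees with X can be made to lose by
# choosing A to be a point of disagreement.
def algorithm_x(problem):
    # The arbitrary fixed algorithm: same output whatever the problem.
    return "recite the Bible backwards 17 times"

def nomega_payout(agent, problem_a):
    return "ferrari" if agent(problem_a) == algorithm_x(problem_a) else "nothing"

def sensible_agent(problem):
    # Any agent that actually responds to the problem at hand...
    return "take the $1,000,000" if problem == "free money" else "decline"

# ...loses on a problem where it disagrees with X:
print(nomega_payout(sensible_agent, "free money"))  # nothing
print(nomega_payout(algorithm_x, "free money"))     # ferrari
```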

Comment author: jimrandomh 10 November 2010 01:37:51AM 2 points [-]

I'm not entirely sure what "every decision is made with full access to the problem statement" means, but I can't see how it can possibly get around the diagonalisation argument. Basically, Omega just says "I simulated your decision on problem A, on which your algorithm outputs something different from algorithm X, and give you a shiny black Ferrari iff you made the same decision as algorithm X".

In that case, your answer to problem A is being used in a context other than problem A. That other context is the real problem statement, and you didn't have it when you chose your answer to A, so it violates the assumption.

Comment author: Sniffnoy 10 November 2010 01:37:27AM 2 points [-]

Yeah, that definitely violates the "every decision is made with full access to the problem statement" condition. The outcome depends on your decision on problem A, but when making your decision on problem A you have no knowledge that your decision will also be used for this purpose.

Comment author: bentarm 10 November 2010 01:58:43AM *  1 point [-]

I don't see how this is useful. Let's take a concrete example, let's have decision problem A, Omega offers you the choice of $1,000,000, or being slapped in the face with a wet fish. Which would you like your decision theory to choose?

Now, No-mega can simulate you, say, 10 minutes before you find out who he is, and give you 3^^^3 utilons iff you chose the fish-slapping. So your algorithm has to include some sort of prior on the existence of "fish-slapping"-No-megas.

My algorithm "always get slapped in the face with a wet fish where that's an option", does better than any sensible algorithm on this particular problem, and I don't see how this problem is noticeably less realistic than any others.

In other words, I guess I might be willing to believe that you can get around diagonalisation by posing some stringent limits on what sort of all-powerful Omegas you allow (can anyone point me to a proof of that?) but I don't see how it's interesting.

Comment author: jimrandomh 10 November 2010 02:09:04AM 2 points [-]

Now, No-mega can simulate you, say, 10 minutes before you find out who he is, and give you 3^^^3 utilons iff you chose the fish-slapping. So your algorithm has to include some sort of prior on the existence of "fish-slapping" No-megas.

Actually, no, the probability of fish-slapping No-megas is part of the input given to the decision theory, not part of the decision theory itself. And since every decision theory problem statement comes with an implied claim that it contains all relevant information (a completely unavoidable simplifying assumption), this probability is set to zero.

Decision theory is not about determining what sorts of problems are plausible, it's about getting from a fully-specified problem description to an optimal answer. Your diagonalization argument requires that the problem not be fully specified in the first place.

Comment author: NihilCredo 10 November 2010 02:55:57AM 0 points [-]

"I simulated your decision on problem A, on which your algorithm outputs something different from algorithm X, and give you a shiny black Ferrari iff you made the same decision as algorithm X"

This is a no-choice scenario. If you say that the Bible-reciter is the one that will "win" here, you are using the verb "to win" with a different meaning from the one used when we say that a particular agent "wins" by making the choice that leads to the best outcome.

Comment author: NihilCredo 10 November 2010 12:46:56AM *  1 point [-]

But doesn't a diagonal argument show that no decision theory can be reflectively consistent over all test data presented by a malicious Omega?

With the strong disclaimer that I have no background in decision theory beyond casually reading LW...

I don't think so. The point of simulation (Omega) problems, to me, doesn't seem to be to judo your intelligence against yourself; rather, it is to "throw your DT off the scent", building weird connections between events (weird, but still vaguely possible, at least for AIs), that a particular DT isn't capable of spotting and taking into account.

My human, real-life decision theory can be summarised as "look at as many possible end-result worlds as I can, and at what actions will bring them into being; evaluate how much I like each of them; then figure out which actions are most efficient at leading to the best worlds". But that doesn't exactly fly when you're programming a computer, you need something that can be fully formalised, and that is where those strange Omega scenarios are useful, because your code must get it right "on autopilot", it cannot improvise a smarter approach on the spot - the formula is on paper, and if it can't solve a given problem, but another one can, it means that there is room for improvement.

In short, DT problems are just clever software debugging.
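The informal procedure described above ("look at possible end-result worlds, evaluate them, pick the actions that lead to the best ones") is just an expected-utility maximizer; a minimal sketch, where the actions, worlds, probabilities, and utilities are illustrative placeholders:

```python
# A plain expected-utility maximizer: enumerate candidate actions,
# weigh the worlds each one leads to, pick the action with the
# highest expected utility.
def best_action(actions, outcomes, utility):
    """actions: list of names; outcomes: maps action -> list of
    (probability, world) pairs; utility: maps world -> float."""
    def expected_utility(action):
        return sum(p * utility(world) for p, world in outcomes[action])
    return max(actions, key=expected_utility)

outcomes = {
    "swerve":   [(1.0, "both alive")],
    "straight": [(0.5, "win"), (0.5, "crash")],
}
utility = {"both alive": 0, "win": 1, "crash": -10}.get
print(best_action(["swerve", "straight"], outcomes, utility))  # swerve
```

The Omega scenarios stress-test exactly the part this sketch hand-waves: where the `outcomes` probabilities come from when your own decision procedure is among the things being predicted.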

Comment author: Perplexed 10 November 2010 01:08:01AM 2 points [-]

I agreed with everything you said after "I don't think so". So I am left confused as to why you don't think so.

You analogize DT problems as test data used to determine whether we should accept or reject a decision theory. I am claiming that our requirements (i.e. "reflective consistency") are so unrealistic that we will always be able to find test data forcing us to reject. Why do you not think so?

Comment author: Vladimir_Nesov 10 November 2010 01:18:27AM 0 points [-]

Decision theories should usually be seen as normative, not descriptive. How "realistic" something is, is not very important, especially for thought experiments. Decision theory cashes out where you find a situation that can indeed be analyzed with it, and where you'll secure a better outcome by following theory's advice. For example, noticing acausal control has advantages in many real-world situations (Parfit's Hitchhiker variants). Eliezer's TDT paper discusses this towards the end of Part I.

Comment author: Perplexed 10 November 2010 02:06:31AM 0 points [-]

I believe you misinterpreted my "unrealistic requirements". A better choice of words would have been "unachievably stringent requirements". I wasn't complaining that Omega and the like are unrealistic. At least not here.

The version I have of Eliezer's TDT paper doesn't have a "Part I". It is dated "September 2010" and has 112 pages. Is there a better version available?

I don't understand your other comments. Or, perhaps more accurately, I don't understand what they were in response to.

Comment author: Vladimir_Nesov 14 November 2010 10:48:00AM *  0 points [-]

The version I have of Eliezer's TDT paper doesn't have a "Part I". It is dated "September 2010" and has 112 pages. Is there a better version available?

"Part I" is chapters 1-9. (This concept is referred to in the paper itself.)

Comment author: NihilCredo 10 November 2010 01:19:17AM *  0 points [-]

Because I suspect that there are only so many functionally different types of connections between events (at the very least, I see no hint that there must be infinitely many) and once you've found them all you will have the possibility of writing a DT that can't be led to corner itself into suboptimal outcomes due to blind spots.

Comment author: Perplexed 10 November 2010 01:36:00AM 1 point [-]

at the very least, I see no hint that there must be infinite ones

Am I correct in interpreting this as "infinitely many of them"? If so, I am curious as to what you mean by "functionally different types of connections between events". Could you provide an example of some "types of connections between events"? Functionally different ones to be sure.

Presumably, the relevance must be your belief that decision theories differ in just how many of these different kinds of connections they handle correctly. Could you illustrate this by pointing out how the decision theory of your choice handles some types of connections, and why you have confidence that it does so correctly?

Comment author: NihilCredo 10 November 2010 02:17:51AM *  0 points [-]

Am I correct in interpreting this as "infinitely many of them"?

Oops, yes. Fixed.

If so, I am curious as to what you mean by "functionally different types of connections between events". Could you provide an example of some "types of connections between events"? Functionally different ones to be sure.

CDT can 'see' the classical, everyday causal connections that are marked in formulas with the symbol ">" (and I'd have to spend several hours reading at least the Stanford Encyclopaedia before I could give you a confident definition of that), but it cannot 'see' the connection in Newcomb's problem between the agent's choice of boxes and the content of the opaque box (sometimes called 'retrocausality').

Presumably, the relevance must be your belief that decision theories differ in just how many of these different kinds of connections they handle correctly. Could you illustrate this by pointing out how the decision theory of your choice handles some types of connections, and why you have confidence that it does so correctly?

I don't have a favourite formal decision theory, because I am not sufficiently familiar with the underlying math and with the literature of discriminating scenarios to pick a horse. If you're talking about the human decision "theory" of mine I described above, it doesn't explicitly do that; the key hand-waving passage is "figure out which actions are most efficient at leading to the best worlds", meaning I'll use whatever knowledge I currently possess to estimate how big is the set of Everett branches where I do X and get A, compared to the set of those where I do X and get B. (For example, six months ago I hadn't heard of the concept of acausal connections and didn't account for them at all while plotting the likelihoods of possible futures, whereas now I do - at least technically; in practice, I think that between human agents they are a negligible factor. For another example, suppose that some years from now I became convinced that the complexity of human minds, and the variability between different ones, were much greater than I previously thought; then, given the formulation of Newcomb's problem where Omega isn't explicitly defined as a perfect simulator and all we know is that it has had a 100% success rate so far, I would suitably increase my estimation of the chances of Omega screwing up and making two-boxing profitable.)

Comment author: Perplexed 09 November 2010 09:48:24PM 0 points [-]

There are circumstances where CDT agents will self-modify to use a different decision theory (e.g. Parfit's Hitchhiker).

Does that make any sense?

Not to me. But a reference might repair that deficiency on my part.

Comment author: WrongBot 09 November 2010 10:01:02PM 0 points [-]

See Eliezer's posts on Newcomb's Problem and regret of rationality and TDT problems he can't solve.

(Incidentally, I found those reference in about 30 seconds, starting from the LW Wiki page on Parfit's Hitchhiker.)

Comment author: Perplexed 09 November 2010 11:30:56PM *  -1 points [-]

Ah! Thank you. I see now. The circumstance in which a CDT agent will self modify to use a different decision theory are that:

  • The agent was programmed by Eliezer Yudkowsky and hence is just looking for an excuse to self-modify.
  • The agent is provided with a prior leading it to be open to the possibility of omniscient, yet perverse agents bearing boxes full of money.
  • The agent is supplied with (presumably faked) empirical data leading it to believe that all such omniscient agents reward one-boxers.
  • Since the agent seeks reflective equilibrium (because programmed by aforesaid Yudkowsky), and since it knows that CDT requires two boxing, and since it has no reason to doubt that causality is important in this world, it makes exactly the change to its decision theory that seems appropriate. It continues to use CDT except on Newcomb problems, where it one boxes. That is, it self-modifies to use a different decision theory, which we can call CDTEONPWIOB.

Well, ok, though I wouldn't have said that these are cases where CDT agents do something weird. These are cases where EYDT agents do something weird.

I apologize if it seems that the target of my sarcasm is you WrongBot. It is not.

EY has deluded himself into thinking that reflective consistency is some kind of gold standard of cognitive stability. And then he uses reflective consistency as a lever by which completely fictitious data can uproot the fundamental algorithms of rationality. Which would be fine, except that he has apparently convinced a lot of smart people here that he knows what he is talking about. Even though he has published nothing on the topic. Even though other smart people like Robin tell him that he is trying to solve an already solved problem.

I would say more but ...


Comment author: Sniffnoy 10 November 2010 01:04:21AM 2 points [-]

Reflective consistency is not a "gold standard". It is a basic requirement. It should be easy to come up with terrible, perverse decision theories that are reflectively consistent (EY does so, sort of, in his TDT outline, though it's not exactly serious / thorough). The point is not that reflective consistency is a sign you're on the right track, but that a lack of it is a sign that something is really wrong, that your decision theory is perverse. If using your decision theory causes you to abandon that same decision theory, it can't have been a very good decision theory.

Consider it as being something like monotonicity in a voting system; it's a weak requirement for weeding out things that are clearly bad. (Well, perhaps not everyone would agree IRV is "clearly bad", but... it isn't even monotonic!) It just happens that in this case evidently nobody noticed before that this would be a good condition to satisfy and hence didn't try. :)

Comment author: timtyler 11 November 2010 10:45:20PM 0 points [-]

I am not sure that decision theory is an "already solved" problem. There's the issue of what happens when agents can self-modify - and so wirehead themselves. I am pretty sure that is an unresolved "grand challenge" problem.

Comment author: WrongBot 10 November 2010 03:21:12AM 0 points [-]

TDT gets better outcomes than CDT when faced with Newcomb's Problem, Parfit's Hitchhiker, and the True Prisoner's Dilemma.

When does CDT outperform TDT? If the answer is "never", as it currently seems to be, why wouldn't a CDT agent self-modify to use TDT?

Comment author: Perplexed 10 November 2010 03:57:00AM 0 points [-]

why wouldn't a CDT agent self-modify to use TDT?

Because it can't find a write-up that explains how to use it?

Perhaps you can answer the questions that I asked here: What play does TDT make in the game of Chicken? Can you point me to a description of TDT that would allow me to answer that question for myself?

Comment author: WrongBot 10 November 2010 04:26:29AM *  1 point [-]

Suppose I'm an agent implementing TDT. My decision in Chicken depends on how much I know about my opponent.

  • If I know my opponent implements the same decision procedure I do (because I have access to its source code, say), and my opponent has this knowledge about me, I swerve. In this case, my opponent and I are in symmetrical positions and its choice is fully determined by mine; my choice is between payoffs of (0,0) and (-10,-10).
  • Else, I act identically to a CDT agent.

As Eliezer says here, the one-sentence version of TDT is "Choose as though controlling the logical output of the abstract computation you implement, including the output of all other instantiations and simulations of that computation."
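The symmetric case in the first bullet can be sketched as follows, assuming both players provably run the same deterministic procedure, so only the diagonal outcomes are reachable:

```python
# Chicken payoffs as given above: C = swerve, D = drive straight.
payoff = {("C", "C"): (0, 0), ("C", "D"): (-1, 1),
          ("D", "C"): (1, -1), ("D", "D"): (-10, -10)}

def tdt_symmetric_choice():
    # My move and my twin's move are the same computation, so the
    # choice is only between the symmetric outcomes (a, a).
    return max(["C", "D"], key=lambda a: payoff[(a, a)][0])

print(tdt_symmetric_choice())  # C  (swerve)
```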

Comment author: Sniffnoy 10 November 2010 04:51:40AM *  1 point [-]
  • If I know my opponent implements the same decision procedure I do (because I have access to its source code, say), and my opponent has this knowledge about me, I swerve. In this case, my opponent and I are in symmetrical positions and its choice is fully determined by mine; my choice is between payoffs of (0,0) and (-10,-10).

I'm not sure this is right. Isn't there a correlated equilibrium that does better?

Comment author: WrongBot 10 November 2010 05:21:29AM *  1 point [-]

I think we're looking at different payoff matrices. I was using the formulation of Chicken that rewards

# |    C    |    D
C | +0, +0  | -1, +1
D | +1, -1  | -10, -10

which doesn't have a correlated equilibrium that beats (C,C).
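This can be checked numerically (a sampling sketch, not a proof): in this matrix the two players' payoffs sum to 0 in every cell except (D,D), where the sum is -20, so no distribution over outcomes can give both players more than the (C,C) payoff of 0.

```python
import random

# WrongBot's Chicken matrix from above.
payoff = {("C", "C"): (0, 0), ("C", "D"): (-1, 1),
          ("D", "C"): (1, -1), ("D", "D"): (-10, -10)}
cells = list(payoff)

def expected(dist):
    # Expected payoff for each player under a distribution over cells.
    ev1 = sum(dist[c] * payoff[c][0] for c in cells)
    ev2 = sum(dist[c] * payoff[c][1] for c in cells)
    return ev1, ev2

random.seed(0)
for _ in range(10_000):
    weights = [random.random() for _ in cells]
    total = sum(weights)
    dist = dict(zip(cells, (w / total for w in weights)))
    ev1, ev2 = expected(dist)
    assert not (ev1 > 0 and ev2 > 0)  # nothing beats (C,C) for both
print("no sampled distribution beats (C,C) for both players")
```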

Using the payoff matrix Perplexed posted here, there is indeed a correlated equilibrium, which I believe the TDT agents would arrive at (given a source of randomness). My bad for not specifying the exact game I was talking about.

Comment author: Sniffnoy 10 November 2010 07:12:26AM 0 points [-]

...and, this is what I get for not actually checking things before I post them.

Comment author: Perplexed 10 November 2010 06:12:30AM 0 points [-]

Two questions:

1. Why do you believe the TDT agents would find the correlated equilibrium? Your previous statement and Eliezer quote suggested that a pair of TDT agents would always play symmetrically in a symmetric game. No "spontaneous symmetry breaking".
2. Even without a shared random source, there is a Nash mixed equilibrium that is also better than symmetric cooperation. Do you believe TDT would play that if there were no shared random input?

Comment author: Sniffnoy 10 November 2010 08:15:29PM 0 points [-]

Gah, wait. I feel dumb. Why would TDT find correlated equilibria? I think I had the "correlated equilibrium" concept confused. A correlated equilibrium would require a public random source, which two TDTers won't have.

Comment author: steven0461 10 November 2010 08:23:29PM 2 points [-]

Digits of pi are kind of like a public random source.

Comment author: Perplexed 10 November 2010 06:12:37AM 0 points [-]

So TDT is different from CDT only in cases where the game is perfectly symmetric? If you are playing a game that is roughly the symmetric PD, except that one guy's payoffs are shifted by a tiny +epsilon, then they should both defect?

Comment author: WrongBot 10 November 2010 08:26:24AM 0 points [-]

TDT is different from CDT whenever one needs to consider the interaction of multiple decisions made using the same TDT-based decision procedure. This applies both to competitions between agents, as in the case of Chicken, and to cases where an agent needs to make credible precommitments, as in Newcomb's Problem.

In the case of an almost-symmetric PD, the TDT agents should still cooperate. To change that, you'd have to make the PD asymmetrical enough that the agents were no longer evaluating their options in the same way. If a change is small enough that a CDT agent wouldn't change its strategy, TDT agents would also ignore it.

This doesn't strike me as the world's greatest explanation, but I can't think of a better way to formulate it. Please let me know if there's something that's still unclear.

Comment author: Perplexed 10 November 2010 03:56:20PM 0 points [-]

If a change is small enough that a CDT agent wouldn't change its strategy, TDT agents would also ignore it.

This strikes me as a bit bizarre. You test whether a warped PD is still close enough to symmetric by asking whether a CDT agent still defects in order to decide whether a TDT agent should still cooperate? Are you sure you are not just making up these rules as you go?

Please let me know if there's something that's still unclear.

Much is unclear and very little seems to be coherently written down. What amazes me is that there is so much confidence given to something no one can explain clearly. So far, the only stable thing in your description of TDT is that it is better than CDT.

Comment author: Perplexed 10 November 2010 04:51:45AM *  0 points [-]

Thank you. I hope you realize that you have provided an example of a game in which CDT does better than TDT. For example, in the game with the payoff matrix shown below, there is a mixed strategy Nash equilibrium which is better than the symmetric cooperative result.

# |   C   |   D
C | 3, 3  | 2, 7
D | 7, 2  | 0, 0

Comment author: WrongBot 10 November 2010 05:25:33AM 0 points [-]

Looks like we're talking about different versions of Chicken. Please see my reply to Sniffnoy.