Wei_Dai comments on Towards a New Decision Theory - Less Wrong

50 Post author: Wei_Dai 13 August 2009 05:31AM

Comment author: Wei_Dai 15 August 2009 11:14:57AM *  4 points [-]

I'm still quite confused, but I'll report my current thoughts in case someone can help me out. Suppose we take it as an axiom that an AI's decision algorithm shouldn't need to contain any hacks to handle exceptional situations. Then the following "exceptionless" decision algorithm seems to pop out immediately: do what my creator would want me to do. In other words, upon receiving input X, S computes the following: suppose S's creator had enough time and computing power to create a giant lookup table that contains an optimal output for every input S might encounter, what would the entry for X be? Return that as the output.

This algorithm correctly solves Counterfactual Mugging, since S's creator would want it to output "give $100", since "give $100" would have maximized the creator's expected utility at the time of coding S. It also solves the problem posed by Omega in the parent comment. It seems to be reflectively consistent. But what is the relationship between this "exceptionless" algorithm and the timeless/updateless decision algorithm?
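
The "exceptionless" rule and the ex ante calculation behind it can be sketched as follows. This is only an illustrative toy, not anything from the comment itself: `CreatorModel`, `ideal_table_entry`, and the specific dollar amounts (the standard counterfactual-mugging payoffs of $10,000 on the counterfactual heads branch and -$100 for paying on tails) are all invented for the sketch.

```python
# Illustrative sketch only: S defers every input to the entry its creator
# would have put in an idealized lookup table. All names are invented.

class CreatorModel:
    """The creator's decision-relevant state, frozen at coding time."""
    def __init__(self, possible_outputs, expected_utility):
        self.possible_outputs = possible_outputs   # input -> list of outputs
        self.expected_utility = expected_utility   # (input, output) -> float

def ideal_table_entry(creator, x):
    # The entry the creator, given unlimited time and computing power,
    # would have written for input x.
    return max(creator.possible_outputs(x),
               key=lambda out: creator.expected_utility(x, out))

def S(creator, x):
    # One uniform rule; no special-case hacks for exceptional situations.
    return ideal_table_entry(creator, x)

# Counterfactual Mugging from the creator's ex ante standpoint, using the
# standard (assumed) payoffs: $10,000 if the coin lands heads and Omega
# predicts S would pay; -$100 for actually paying on tails.
def eu(x, out):
    pays = (out == "give $100")
    return 0.5 * (10_000 if pays else 0) + 0.5 * (-100 if pays else 0)

creator = CreatorModel(lambda x: ["give $100", "give $0"], eu)
```

With these payoffs, `eu` of the paying policy is 0.5 * 10000 - 0.5 * 100 = 4950, versus 0 for refusing, so `S(creator, "omega says tails")` returns `"give $100"`, as described above.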

Comment author: Vladimir_Nesov 15 August 2009 01:56:57PM *  25 points [-]

There are two parts to AGI: consequentialist reasoning and preference.

Humans have feeble consequentialist abilities, but can use computers to implement huge calculations, if the problem statement can be entered into the computer. For example, you can program the material and mechanical laws into an engineering application, enter a building plan, and have the computer predict what's going to happen to the building, or what parameters should be used in the construction so that the outcome is as required. That's power outside the human mind, directed by the correct laws and targeted at a formally specified problem.

When you consider AGI in isolation, it's like an engineering application with a random building plan: it can powerfully produce a solution, but not a solution to the problem you need solved. Nonetheless, this part is essential once you do have the ability to specify the problem. And that's the AI's algorithm, one aspect of which is decision-making. It's separate from the problem statement, which comes from human nature.

For an engineering program, you can say that the computer is basically doing what a person would do given a crazy amount of time and machine patience. But that's because a person can know both the problem statement and the laws of inference formally, which is how it was programmed into the computer in the first place.

With human preference, the problem statement isn't known explicitly to people. People can use their preference, but can't state this whole object explicitly. A moral machine would need to work with preference, but human programmers can't enter it, and neither can they do what a machine could do given a formal problem statement, because humans can't know this problem statement: it's too big. It could exist in a computer explicitly, but it can't be entered there by programmers.

So, here is the dilemma: the problem statement (preference) resides in the structure of the human mind, but the strong power of inference doesn't, while the strong power of inference (potentially) exists in computers outside human minds, where the problem statement can't be manually transmitted. Creating FAI requires these components to meet in the same system, but that can't be done the way other kinds of programming are done.

Something to think about.

Comment author: andreas 15 August 2009 04:40:58PM 3 points [-]

This is the clearest statement of the problem of FAI that I have read to date.

Comment author: Steve_Rayhawk 15 August 2009 03:41:24PM *  5 points [-]

This algorithm . . . seems to be reflectively consistent. But what is the relationship between this "exceptionless" algorithm and the timeless/updateless decision algorithm?

Suppose that, before S's creator R started coding, Omega started an open game of counterfactual mugging with R, and that R doesn't know this, but S does. According to S's inputs, Omega's coin came up tails, so Omega is waiting for $100.

Does S output "give $0"? If Omega had started the game of counterfactual mugging after S was coded, then S would output "give $100".

Suppose that S also knows that R would have coded S with the same source code, even if Omega's coin had come up heads. Would S's output change? Should S's output change (should R have coded S so that this would change S's output)? How should S decide, from its inputs, which R is the creator with the expected utility S's outputs should be optimal for? Is it the R in the world where Omega's coin came up heads, or the R in the world where Omega's coin came up tails?

If there is not an inconsistency in S's decision algorithm or S's definition of R, is there an inconsistency in R's decision algorithm or R's own self-definition?

Comment author: Wei_Dai 16 August 2009 09:34:44AM 1 point [-]

I'm having trouble understanding this. You're saying that Omega flipped the coin before R started coding, but R doesn't know that, or the result of the coin flip, right? Then his P(a counterfactual mugging is ongoing) is very low, and P(heads | a counterfactual mugging is ongoing) = P(tails | a counterfactual mugging is ongoing) = 1/2. Right?

In that case, his expected utility at the time of coding is maximized by S outputting "give $100" upon encountering Omega. It seems entirely straightforward, and I don't see what the problem is...

Comment author: Steve_Rayhawk 19 August 2009 09:05:42AM *  4 points [-]

. . . do what my creator would want me to do. In other words, upon receiving input X, S computes the following: suppose S's creator had enough time and computing power to create a giant lookup table that contains an optimal output for every input S might encounter, what would the entry for X be? Return that as the output.

I don't know how to define what R "would want" or would think was "optimal".

What lookup table would R create? If R is a causal decision theorist, R might think: "If I were being counterfactually mugged and Omega's coin had come up heads, Omega would have already made its prediction about whether S would output 'give $100' on the input 'tails'. So, if I program S with the rule 'give $100 if tails', that won't cause Omega to give me $10000. And if the coin came up tails, that rule would lose me $100. So I will program S with the rule 'give $0 if tails'."

R's expected utility at the time of coding may be maximized by the rule "give $100 if tails", but R makes decisions by the conditional expected utilities given each of Omega's possible past predictions, weighted by R's prior beliefs about those predictions. R's conditional expected utilities are both maximized by the decision to program S to output "give $0".
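
The dominance argument described here can be put in a few lines of hypothetical code. The function name and the probability parameter (R's prior that Omega has already predicted "S pays") are invented for illustration; the payoffs are the ones used in this thread.

```python
# Sketch of R as a causal decision theorist: Omega's prediction is already
# fixed, so R evaluates each candidate rule conditional on that prediction
# rather than as something the rule can influence. Names are illustrative.

def cdt_eu_of_rule(rule_pays_on_tails, p_predicted_pay):
    # Heads branch: the $10,000 depends only on Omega's fixed prediction,
    # which R's choice of rule cannot cause.
    heads = 10_000 * p_predicted_pay
    # Tails branch: paying simply costs $100.
    tails = -100 if rule_pays_on_tails else 0
    return 0.5 * heads + 0.5 * tails

# The prediction term is identical for both rules, so "give $0 if tails"
# dominates under every prior R might hold about the prediction:
assert all(cdt_eu_of_rule(False, p) > cdt_eu_of_rule(True, p)
           for p in (0.0, 0.25, 0.5, 1.0))
```

This is why R's conditional expected utilities both favor programming S to output "give $0", even though R's unconditional ex ante expected utility favors "give $100 if tails".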

Comment author: Wei_Dai 19 August 2009 10:19:14AM 1 point [-]

[I deleted my earlier reply, because I was still confused about your questions.]

If, according to R's decision theory, the most preferred choice involves programming S to output "give $0", then that is what S would do.

It might be easier to think of the ideal S as consisting of a giant lookup table created by R itself given infinite time and computing power. An actual S would try to approximate this ideal to the best of its abilities.

How should S decide, from its inputs, which R is the creator with the expected utility S's outputs should be optimal for? Is it the R in the world where Omega's coin came up heads, or the R in the world where Omega's coin came up tails?

R would encode its own decision theory, prior, utility function, and memory at the time of coding into S, and have S optimize for that R.

Comment author: Steve_Rayhawk 19 August 2009 11:27:11AM *  5 points [-]

Sorry. I wasn't trying to ask my questions as questions about how R would make decisions. I was asking questions to try to answer your question about the relationship between exceptionless and timeless decision-making, by pointing out dimensions of a map of ways for R to make decisions. For some of those ways, S would be "timeful" around R's beliefs or time of coding, and for some of those ways S would be less timeful.

I have an intuition that there is a version of reflective consistency which requires R to code S so that, if R was created by another agent Q, S would make decisions using Q's beliefs even if Q's beliefs were different from R's beliefs (or at least the beliefs that a Bayesian updater would have had in R's position), and even when S or R had uncertainty about which agent Q was. But I don't know how to formulate that intuition to something that could be proven true or false. (But ultimately, S has to be a creator of its own successor states, and S should use the same theory to describe its relation to its past selves as to describe its relation to R or Q. S's decisions should be invariant to the labeling or unlabeling of its past selves as "creators". These sequential creations are all part of the same computational process.)

Comment author: timtyler 16 August 2009 01:15:44PM *  4 points [-]

"Do what my creator would want me to do"?

We could call that "pass the buck" decision theory ;-)

Comment author: Wei_Dai 16 August 2009 06:29:38PM 2 points [-]

But what is the relationship between this "exceptionless" algorithm and the timeless/updateless decision algorithm?

Here's my conjecture: An AI using the Exceptionless Decision Theory (XDT) is equivalent to one using TDT if its creator was running TDT at the time of coding. If the creator was running CDT, then it is not equivalent to TDT, but it is reflectively consistent, one-boxes in Newcomb, and plays defect in one-shot PD.

And in case it wasn't clear, in XDT, the AI computes the giant lookup table its creator would have chosen using the creator's own decision theory.

Comment author: Vladimir_Nesov 16 August 2009 06:35:54PM *  2 points [-]

The AI's creator was running BRAINS, not a decision theory. I don't see how "what the AI's creator was running" can be a meaningful consideration in a discussion of what constitutes a good AI design. Beware the naturalistic fallacy.

Comment author: Wei_Dai 16 August 2009 06:39:10PM *  1 point [-]

One AI can create another AI, right? Does my conjecture make sense if the creator is an AI running some decision theory? If so, we can extend XDT to work with human creators, by having some procedure to approximate the human using a selection of possible DTs, priors, and utility functions. Remember that the goal in XDT is to minimize the probability that the creator would want to add an exception on top of the basic decision algorithm of the AI. If the approximation is close enough, then this probability is minimal.

ETA: I do not claim this is good AI design, merely trying to explore the implications of different ideas.

Comment author: Vladimir_Nesov 16 August 2009 07:26:37PM *  5 points [-]

The problem of finding the right decision theory is a problem of Friendliness, but for a different reason than finding a powerful inference algorithm fit for an AGI is a problem of Friendliness.

"Incompleteness" of decision theory, such as what we see in CDT, seems to correspond to an inability of the AI to embody certain aspects of preference; in other words, the algorithm lacks expressive power in its preference parameter. Each time an agent makes a mistake, you can reinterpret that as the agent just preferring it this way in this particular case. Whatever preference you "feed" to an AI with a wrong decision theory, the AI is going to distort by misinterpreting it, losing some of its aspects. Furthermore, the lack of reflective consistency effectively means that the AI continues to distort its preference as it goes along. At the same time, it can still be powerful in consequentialist reasoning, as formidable as a complete AGI, implementing the distorted version of preference that it can embody.

The resulting process can be interpreted as an AI running "ultimate" decision theory, but with a preference not in perfect fit with what it should've been. If at any stage you have a singleton that owns the game but has a distorted preference, whether due to incorrect procedure for getting the preference instantiated, or incorrect interpretation of preference, such as a mistaken decision theory as we see here, there is no returning to better preference.

More generally, what "could" be done, what the AI "could" become, is a concept related to free will: a consideration of what happens to a system in isolation, not a system one with reality. You consider a system from the outside and see what happens to it if you perform this or that operation on it; this is what it means that you could do one operation or the other, or that events could unfold this way or the other. When you have a singleton, on the other hand, there is no external point of view on it, and so there is no possibility for change. The singleton is the new law of physics, a strategy proven true [*].

So, if you say that the AI's predecessor was running a limited decision theory, this is a damning statement about what sort of preference the next incarnation of the AI can inherit. The only significant improvement (for the fate of preference) an AGI with any decision theory can make is to become reflectively consistent, to stop losing ground. The resulting algorithm is as good as the ultimate decision theory, but with a preference lacking some aspects, and thus behavior indistinguishable from (equivalent to) what some other kinds of decision theories would produce.

__
[*] There is a fascinating interpretation of truth of logical formulas as the property of corresponding strategies in a certain game to be the winning ones. See for example
S. Abramsky (2007). "A Compositional Game Semantics for Multi-Agent Logics of Imperfect Information". In J. van Benthem, D. Gabbay, & B. Löwe (eds.), Interactive Logic, vol. 1 of Texts in Logic and Games, pp. 11-48. Amsterdam University Press.

Comment author: Eliezer_Yudkowsky 16 August 2009 09:50:02PM 4 points [-]

An AI running causal decision theory will lose on Newcomblike problems, be defected against in the Prisoner's Dilemma, and otherwise undergo behavior that is far more easily interpreted as "losing" than "having different preferences over final outcomes".

Comment author: Vladimir_Nesov 16 August 2009 10:39:43PM 3 points [-]

The AI that starts with CDT will immediately rewrite itself as an AI running the ultimate decision theory, but that resulting AI will have distorted preferences, which is somewhat equivalent to its decision theory having special cases for the time before the AI got rid of CDT (since code vs. data (algorithm vs. preference) is, strictly speaking, an arbitrary distinction). The resulting AI won't lose on these thought experiments, provided they don't intersect the peculiar distortion of its preferences, where it indeed would prefer to "lose" according to preference-as-it-should-have-been, but win according to its distorted preference.

Comment author: Eliezer_Yudkowsky 16 August 2009 10:42:11PM 4 points [-]

A TDT AI consistently acts so as to end up with a million dollars. A CDT AI acts to win a million dollars in some cases, but in other cases ends up with only a thousand. So in one case we have a compressed preference over outcomes; in the other, a "preference" over the exact details of the path, including the decision algorithm itself. In a case like this I don't use the word "preference" so as to say that the CDT AI wants a thousand dollars on Newcomb's Problem; I just say the CDT AI is losing. I am unable to see any advantage to using the language otherwise: to say that the CDT AI wins with a peculiar preference is to make "preference" and "win" so loose that we could use them to refer to the ripples in a water pond.

Comment author: Vladimir_Nesov 16 August 2009 11:12:53PM *  1 point [-]

It's the TDT AI resulting from the CDT AI's rewriting of itself that plays these strange moves in the thought experiments, not the CDT AI. The algorithm of idealized TDT is parameterized by "preference" and always gives the right answer according to that "preference". To stop reflective inconsistency, the CDT AI is going to rewrite itself as something else. That something else can be characterized in general as a TDT AI with crazy preferences: one that prefers $1000 in Newcomb's thought experiments set before midnight October 15, 2060, or something of the sort, but works OK after that. The preference of the TDT AI to which a given AGI is going to converge can be used as a denotation of that AGI's preference, generalizing the notion of TDT preference to systems that are not even TDT AIs, and further to systems that are not AIs at all, in particular to humans or humanity.

These are paperclips of preference, something that seems clearly not right as a reflection of human preference, but that is nonetheless a point in the design space that can be filled in particular by failures to start with the right decision theory.

Comment author: Eliezer_Yudkowsky 16 August 2009 11:27:01PM 2 points [-]

I suggest that regarding crazy decision theories with compact preferences as sane decision theories with noncompact preferences is a step backward which will only confuse yourself and the readers. What is accomplished by doing so?

Comment author: Wei_Dai 16 August 2009 10:00:43PM *  0 points [-]

I think an AI running CDT would immediately replace itself with an AI running XDT (or something equivalent to it). If there is no way to distinguish between an AI running XDT and an AI running TDT (prior to a one-shot PD), the XDT AI can't do worse than a TDT AI. So CDT is not losing, as far as I can tell (at least for an AI capable of self-modification).

ETA: I mean an XDT AI can't do worse than a TDT AI within the same world. But a world full of XDT will do worse than a world full of TDT.

Comment author: Wei_Dai 16 August 2009 03:23:01AM 1 point [-]

The parent comment may be of some general interest, but it doesn't seem particularly helpful in this specific case. Let me back off and rephrase the question so that perhaps it makes more sense:

Can our two players, Alice and Bob, design their AIs based on TDT, such that it falls out naturally (i.e. without requiring special exceptions) that their AIs will play defect against each other, while one-boxing on Newcomb's Problem?

If so, how? In order for one AI using TDT to defect, it has to either believe (A) that the other AI is not using TDT, or (B) that it is using TDT but their decisions are logically independent anyway. Since we're assuming in this case that both AIs do use TDT, (A) requires that the players program their AIs with a falsehood, which is no good. (B) might be possible, but I don't see how.

If the answer is no, then it seems that TDT isn't the final answer, and we have to keep looking for another one. Is there another way out of this quandary?

Comment author: pengvado 16 August 2009 04:10:05AM *  3 points [-]

You're saying that TDT applied directly by both AIs would result in them cooperating; you would rather that they defect even though that gives you less utility; so you're looking for a way to make them lose? Why?

If both AIs use the same decision theory and this is common knowledge, then the only options are (C,C) or (D,D). Pick whichever you prefer. If they use different decision theories, then you can give yours pure TDT and tell it truthfully that you've tricked the other player into unconditionally cooperating. What else is there?

Comment author: Vladimir_Nesov 16 August 2009 10:55:19AM 0 points [-]

If both AIs use the same decision theory then the only options are (C,C) or (D,D).

You (and they) can't assume that, as they could be in different states even with the same algorithm that operates on those states, and so will output different decisions, even if from the problem statement it looks like everything significant is the same.

Comment author: Wei_Dai 16 August 2009 05:36:40AM 0 points [-]

The problem is that the two human players' minds aren't logically related. Each human player in this game wants his AI to play defect, because their decisions are logically independent of each other's. If TDT doesn't allow a player's AI to play defect, then the player would choose some other DT that does, or add an exception to the decision algorithm to force the AI to play defect.

I explained here why humans should play defect in one-shot PD.

Comment author: Eliezer_Yudkowsky 16 August 2009 10:04:49PM 3 points [-]

The problem is that the two human players' minds aren't logically related. Each human player in this game wants his AI to play defect, because their decisions are logically independent of each other's.

Your statement above is implicitly self-contradictory. How can you generalize over all the players in one fell swoop, applying the same logic to each of them, and yet say that the decisions are "logically independent"? The decisions are physically independent. Logically, they are extremely dependent. We are arguing over what is, in general, the "smart thing to do". You assume that "the smart thing to do" is to defect, and so all the players will defect. Doesn't smell like logical independence to me.

More importantly, the whole calculation about independence versus dependence is better carried out by an AI than by a human programmer, which is what TDT is for. It's not for cooperating. It's for determining the conditional probability of the other agent cooperating given that a TDT agent in your epistemic state plays "cooperate". If you know that the other agent knows (up to common knowledge) that you are a TDT agent, and the other agent knows that you know (up to common knowledge) that it is a TDT agent, then it is an obvious strategy to cooperate with a TDT agent if and only if it cooperates with you under that epistemic condition.

The TDT strategy is not "Cooperate with other agents known to be TDTs". The TDT strategy for the one-shot PD, in full generality, is "Cooperate if and only if ('choosing' that the output of this algorithm under these epistemic conditions be 'cooperate') makes it sufficiently more likely that (the output of the probability distribution of opposing algorithms under its probable epistemic conditions) is 'cooperate', relative to the relative payoffs."

Under conditions where a TDT plays one-shot true-PD against something that is not a TDT and not logically dependent on the TDT's output, the TDT will of course defect. A TDT playing against a TDT which falsely believes the former case to hold, will also of course defect. Where you appear to depart from my visualization, Wei Dai, is in thinking that logical dependence can only arise from detailed examination of the other agent's source code, because otherwise the agent has a motive to defect. You need to recognize your belief that what players do is in general likely to correlate, as a case of "logical dependence". Similarly the original decision to change your own source code to include a special exception for defection under particular circumstances, is what a TDT agent would model - if it's probable that the causal source of an agent thought it could get away with that special exception and programmed it in, the TDT will defect.

You've got logical dependencies in your mind that you are not explicitly recognizing as "logical dependencies" that can be explicitly processed by a TDT agent, I think.
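
The general one-shot-PD strategy stated in this comment can be sketched as a small expected-utility comparison. This is a hedged illustration only: the function name and the conditional probabilities are invented, and standard PD payoffs T > R > P > S (temptation, reward, punishment, sucker's payoff) are assumed.

```python
# Sketch of "cooperate iff 'choosing' cooperate makes it sufficiently more
# likely that the opponent cooperates, relative to the payoffs".
# All names and numbers are illustrative, not from the comment.

def tdt_cooperates(p_opp_c_given_my_c, p_opp_c_given_my_d,
                   T=5.0, R=3.0, P=1.0, S=0.0):
    # Conditional probabilities encode the logical dependence between
    # my output and the opponent's output.
    eu_cooperate = p_opp_c_given_my_c * R + (1 - p_opp_c_given_my_c) * S
    eu_defect = p_opp_c_given_my_d * T + (1 - p_opp_c_given_my_d) * P
    return eu_cooperate > eu_defect

# Against a logically dependent TDT twin, the conditionals are ~1 and ~0,
# so cooperation wins (3 > 1):
assert tdt_cooperates(1.0, 0.0)
# Against a logically independent opponent, the conditionals coincide,
# the dependence term vanishes, and defection dominates:
assert not tdt_cooperates(0.5, 0.5)
```

The two cases at the bottom correspond to the two situations in the comment: mutual common knowledge of TDT versus an opponent that is not logically dependent on the TDT's output.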

Comment author: Vladimir_Nesov 16 August 2009 11:06:08AM 2 points [-]

If you already know something about the other player, if you know it exists, there is already some logical dependence between you two. How to leverage this minuscule amount of dependence is another question, but there seems to be no conceptual distinction between this scenario and where the players know each other very well.

Comment author: Nick_Tarleton 16 August 2009 07:52:20AM 1 point [-]

The problem is that the two human players' minds aren't logically related. Each human player in this game wants his AI to play defect, because their decisions are logically independent of each other's.

I don't think so. Each player wants to do the Winning Thing, and there is only one Winning Thing (their situations are symmetrical), so if they're both good at Winning (a significantly lower bar than successfully building an AI with their preferences), their decisions are related.

Comment author: Wei_Dai 16 August 2009 08:36:21AM *  0 points [-]

So what you're saying is, given two players who can successfully build AIs with their preferences (and that's common knowledge), they will likely (surely?) play cooperate in one-shot PD against each other. Do I understand you correctly?

Suppose what you say is correct, that the Winning Thing is to play cooperate in one-shot PD. Then what happens when some player happens to get a brain lesion that causes him to unconsciously play defect without affecting his AI building abilities? He would take everyone else's lunch money. Or if he builds his AI to play defect while everyone else builds their AIs to play cooperate, his AI then takes over the world. I hope that's a sufficient reductio ad absurdum.

Hmm, I just noticed that you're only saying "their decisions are related" and not explicitly making the conclusion that they should play cooperate. Well, that's fine: as long as they would play defect in a one-shot PD, they would also program their AIs to play defect in a one-shot PD (assuming each AI can't prove its source code to the other). That's all I need for my argument.

Comment author: Nick_Tarleton 16 August 2009 09:15:40AM *  2 points [-]

So what you're saying is, given two players who can successfully build AIs with their preferences (and that's common knowledge), they will likely (surely?) play cooperate in one-shot PD against each other. Do I understand you correctly?

Yes.

Suppose what you say is correct, that the Winning Thing is to play cooperate in one-shot PD. Then what happens when some player happens to get a brain lesion that causes him to unconsciously play defect without affecting his AI building abilities? He would take everyone else's lunch money. Or if he builds his AI to play defect while everyone else builds their AIs to play cooperate, his AI then takes over the world. I hope that's a sufficient reductio ad absurdum.

Good idea. Hmm. It sounds like this is the same question as: what if, instead of "TDT with defection patch" and "pure TDT", the available options are "TDT with defection patch" and "TDT with tiny chance of defection patch"? Alternately: what if the abstract computations that are the players have a tiny chance of being embodied in such a way that their embodiments always defect on one-shot PD, whatever the abstract computation decides?

It seems to me that Lesion Man just got lucky. This doesn't mean people can win by giving themselves lesions, because that's deliberately defecting / being an abstract computation that defects, which is bad. Whether everyone else should defect / program their AIs to defect due to this possibility depends on the situation; I would think they usually shouldn't. (If it's a typical PD payoff matrix, there are many players, and they care about absolute, not relative, scores, defecting isn't worth it even if it's guaranteed there'll be one Lesion Man.)

This still sounds disturbingly like envying Lesion Man's mere choices – but the effect of the lesion isn't really his choice (right?). It's only the illusion of unitary agency, bounded at the skin rather than inside the brain, that makes it seem like it is. The Cartesian dualism of this view (like AIXI, dropping an anvil on its own head) is also disturbing, but I suspect the essential argument is still sound, even as it ultimately needs to be more sophisticated.

Comment author: Wei_Dai 16 August 2009 12:02:24PM *  3 points [-]

I guess my reductio ad absurdum wasn't quite sufficient. I'll try to think this through more thoroughly and carefully. Let me know which steps, if any, you disagree with, or are unclear, in the following line of reasoning.

  1. TDT couldn't have arisen by evolution.
  2. Until a few years ago, almost everyone on Earth was running some sort of non-TDT which plays defect in one-shot PD.
  3. It's possible that upon learning about TDT, some people might spontaneously switch to running it, depending on whatever meta-DT controls this, and whether the human brain is malleable enough to run TDT.
  4. If, in any identifiable group of people, a sufficient fraction switches to TDT, and that proportion is public knowledge, the TDT-running individuals in that group should start playing cooperate in one-shot PD with other members of the group.
  5. The threshold proportion is higher if the remaining defectors can cause greater damage. If the remaining defectors can use their gains from defection to better reproduce themselves, or to gather more resources that will let them increase their gains/damage, then the threshold proportion must be close to 1, because even a single defector can start a chain reaction that causes all the resources of the group to become held by defectors.
  6. What proportion of skilled AI designers would switch to TDT is ultimately an empirical question, but it seems to me that it's unlikely to be close to unity.
  7. TDT-running AI designers will design their AIs to run TDT. Non-TDT-running AI designers will design their AIs to run non-TDT (not necessarily the same non-TDT).
  8. Assume that a TDT-running AI (TAI) can't tell which other AIs are running TDT and which ones aren't, so in every game it faces the decision described in steps 4 and 5. A TAI will cooperate in some situations, where the benefit from cooperation is relatively high and the damage from defection relatively low, and not in others.
  9. As a result, non-TAIs will do better than TAIs, but the damage to TAIs will be limited.
  10. Only if a TAI is sure that all AIs are TAIs, will it play cooperate unconditionally.
  11. If a TAI encounters an AI of alien origin, the same logic applies. The alien AI will be TAI if-and-only-if its creator was running TDT. If the TAI knows nothing about the alien creator, then it has to estimate what fraction of AI-builders in the universe runs TDT. Taking into account that TDT can't arise from evolution, and not seeing any reason for evolution to create a meta-DT that would pick TDT upon discovering it, this fraction seems pretty low, and so the TAI will likely play defect against the alien AI.
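
The threshold in steps 4, 5, and 8 can be sketched under a simple (assumed) model: a TDT opponent mirrors my choice, a defector defects regardless, and q is the fraction of TDT agents in the group. All function names and payoff numbers here are hypothetical illustrations.

```python
# Illustrative model of a TAI that can't tell TDT agents from defectors.
# Standard PD payoffs R > P > S assumed (reward, punishment, sucker).

def eu_cooperate(q, R=3.0, S=0.0):
    # TDT opponents (fraction q) mirror my cooperation; defectors
    # (fraction 1-q) exploit it.
    return q * R + (1 - q) * S

def eu_defect(P=1.0):
    # Mirror-defection against TDTs, mutual defection against defectors.
    return P

def cooperation_threshold(R=3.0, P=1.0, S=0.0):
    # Cooperate iff q*R + (1-q)*S > P, i.e. q > (P - S) / (R - S).
    return (P - S) / (R - S)

# With these payoffs the threshold is 1/3. Making defection more damaging
# (a much worse sucker's payoff) pushes the threshold toward 1, matching
# step 5:
assert abs(cooperation_threshold() - 1/3) < 1e-9
assert cooperation_threshold(R=3.0, P=1.0, S=-20.0) > 0.9
```

In this toy model, step 5's claim falls out of the formula: the larger the loss S to an exploited cooperator, the closer to 1 the required proportion q of TDT agents.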

Hmm, this exercise has cleared a lot of my own confusion. Obviously a lot more work needs to be done to make the reasoning rigorous, but hopefully I've gotten the gist of it right.

ETA: According to this line of argument, your hypothesis that all skilled AI designers play cooperate in one-shot PD against each other is equivalent to saying that skilled AI designers have minds malleable enough to run TDT, and have a meta-DT that causes them to switch to running TDT. But I do not see an evolutionary reason for this, so if it's true, it must be true by luck. Do you agree?

Comment author: Vladimir_Nesov 16 August 2009 01:47:34PM *  2 points [-]

It looks like in this discussion you assume that switching to "TDT" (it's highly uncertain what this means) immediately gives the decision to cooperate in "true PD". I don't see why it should be so. Summarizing my previous comments: exactly what the players know about each other, and exactly in what way they know it, may make their decisions go either way. That the players switch from CDT to some kind of more timeless decision theory doesn't determine the answer to be "cooperate"; it merely opens up a possibility that was previously decreed irrational, and I suspect that what's important in the new setting for making the decision go either way isn't captured properly in the problem statement of "true PD".

Also, the way you treat "agents with TDT" seems more appropriate for "agents with Cooperator prefix" from cousin_it's Formalizing PD. And this is a simplified thing far removed from a complete decision theory, although a step in the right direction.
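The "Cooperator prefix" idea mentioned above can be rendered as a toy program that cooperates exactly when the opponent's source code matches its own. This is a simplified sketch in the spirit of that construction, not cousin_it's actual formalization:

```python
def cooperator(my_source: str, opponent_source: str) -> str:
    """Toy syntactic cooperator: cooperate iff the opponent is
    (syntactically) the same program; otherwise defect.

    A real formalization would use quining so the program can refer
    to its own source; here both sources are simply passed in.
    """
    return "C" if opponent_source == my_source else "D"

SRC = "cooperator-source"  # stand-in for the program's own quoted source

# Two identical cooperators achieve mutual cooperation,
# while anything with different source gets defected against.
```

Note how brittle this is compared to a full decision theory: any syntactic difference between two functionally identical programs breaks cooperation, which is one sense in which it is "far removed from a complete decision theory".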

Comment author: Wei_Dai 16 August 2009 07:17:56PM 0 points [-]

I don't assume that switching to TDT immediately gives the decision to cooperate in "true PD". I assume that an AI running TDT would decide to cooperate if it thinks the expected utility of cooperating is higher than the EU of defecting, and that is true if its probability of facing another TDT is sufficiently high compared to its probability of facing a defector (how high is sufficient depends on the payoffs of the game). Well, this is necessary but not sufficient: for example, if the other TDT doesn't think its probability of facing a TDT is high enough, it won't cooperate, so we need some common knowledge of the relevant probabilities and payoffs.
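The expected-utility comparison here can be made concrete with a toy calculation. The payoff numbers below are illustrative assumptions (standard PD values, not from this thread), and the model assumes a fellow TDT mirrors my choice while a non-TDT opponent defects unconditionally:

```python
# Hypothetical one-shot PD payoffs for the row player.
# These specific numbers are an illustrative assumption.
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I cooperate, opponent defects
    ("D", "C"): 5,  # I defect, opponent cooperates
    ("D", "D"): 1,  # mutual defection
}

def tdt_should_cooperate(p_tdt: float, payoff=PAYOFF) -> bool:
    """Should a TDT agent cooperate, given probability p_tdt that the
    opponent is another TDT (who mirrors its choice) and probability
    1 - p_tdt that the opponent defects unconditionally?
    """
    # If I cooperate: a fellow TDT mirrors me (C,C); a defector defects (C,D).
    eu_cooperate = p_tdt * payoff[("C", "C")] + (1 - p_tdt) * payoff[("C", "D")]
    # If I defect: a fellow TDT mirrors me (D,D); a defector defects (D,D).
    eu_defect = p_tdt * payoff[("D", "D")] + (1 - p_tdt) * payoff[("D", "D")]
    return eu_cooperate > eu_defect
```

With these particular payoffs, cooperating is the better bet exactly when p_tdt exceeds 1/3; a different payoff matrix shifts the threshold, which is the sense in which how high a probability is "sufficient" depends on the payoffs of the game.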

Does my line of reasoning make sense now, given this additional explanation?

Comment author: Eliezer_Yudkowsky 16 August 2009 10:28:40PM 1 point [-]

Btw, agree with steps 3-9.

Comment author: Eliezer_Yudkowsky 16 August 2009 10:17:48PM 0 points [-]
  1. TDT couldn't have arisen by evolution.

It's too elegant to arise by evolution, and it also deals with one-shot PDs with no knock-on effects, which is an extremely nonancestral condition. Evolution by its nature deals with events that repeat many times; sexual evolution by its nature deals with organisms that interbreed; so "one-shot true PDs" is in general a condition unlikely to arise with sufficient frequency for evolution to deal with it at all.

Taking into account that TDT can't arise from evolution, and not seeing any reason for evolution to create a meta-DT that would pick TDT upon discovering it

This may perhaps embody the main point of disagreement. A self-modifying CDT which, at 7am, expects to encounter a future Newcomb's Problem or Parfit's Hitchhiker in which the Omega gets a glimpse at the source code after 7am, will modify to use TDT for all decisions in which Omega glimpses the source code after 7am. A bit of "common sense" would tell you to just realize that "you should have been using TDT from the beginning regardless of when Omega glimpsed your source code and the whole CDT thing was a mistake" but this kind of common sense is not embodied in CDT. Nonetheless, TDT is a unique reflectively consistent answer for a certain class of decision problems, and a wide variety of initial points is likely to converge to it. The exact proportion, which determines under what conditions of payoff and loss stranger-AIs will cooperate with each other, is best left up to AIs to calculate, I think.

Comment author: Wei_Dai 17 August 2009 10:09:16PM *  1 point [-]

Nonetheless, TDT is a unique reflectively consistent answer for a certain class of decision problems, and a wide variety of initial points is likely to converge to it.

The main problem I see with this thesis (to restate my position in a hopefully clearer form) is that an agent that starts off with a DT that unconditionally plays D in one-shot PD will not self-modify into TDT, unless it has some means of giving trustworthy evidence that it has done so. Suppose there is no such means; then any other agent must treat it the same, whether it self-modifies into TDT or not. Suppose it expects to face a TDT agent in the future. Whether that agent will play C or D against it is independent of what it decides now. If it does self-modify into TDT, then it might play C against the other TDT where it otherwise would have played D, and since the payoff for C is lower than for D, holding the other player's choice constant, it will decide not to self-modify into TDT.

If it expects to face Newcomb's Problem, then it would self-modify into something that handles it better, but that something must still unconditionally play D in one-shot PD.

Do you still think "a wide variety of initial points is likely to converge to it"? If so, do you agree that (ETA: in a world where proving source code isn't possible) those initial points exclude any DT that unconditionally plays D in one-shot PD?

BTW, there are a number of decision theorists in academia. Should we try to get them to work on our problems? Unfortunately, I have no skill/experience/patience/willpower for writing academic papers. I tried to write such a paper about cryptography once and submitted it to a conference, got back a rejection with nonsensical review comments, and that was that. (I guess I could have tried harder but then that would probably have put me on a different career path where I wouldn't be working these problems today.)

Also, there ought to be lots of mathematicians and philosophers who would be interested in the problem of logical uncertainty. How can we get them to work on it?

Comment author: Wei_Dai 16 August 2009 10:25:07PM 0 points [-]

so "one-shot true PDs" is in general a condition unlikely to arise with sufficient frequency that evolution deals with it at all

But there are analogs of one-shot true PD everywhere.

A self-modifying CDT which, at 7am, expects to encounter a future Newcomb's Problem or Parfit's Hitchhiker in which the Omega gets a glimpse at the source code after 7am, will modify to use TDT for all decisions in which Omega glimpses the source code after 7am.

No, I disagree. You seem to have missed this comment, or do you disagree with it?

Comment author: Eliezer_Yudkowsky 16 August 2009 10:08:18PM 1 point [-]

Suppose what you say is correct, that the Winning Thing is to play cooperate in one-shot PD. Then what happens when some player gets a brain lesion that causes him to unconsciously play defect without affecting his AI-building abilities? He would take everyone else's lunch money.

Possibly. But it has to be an unpredictable brain lesion, one that is expected to happen with very low frequency. A predictable decision to do this just means that TDTs defect against you. If enough AI-builders do this, then TDTs in general defect against each other (with a frequency threshold dependent on relative payoffs), because they have insufficient confidence that they are playing against TDTs rather than special cases in code.

Or if he builds his AI to play defect while everyone else builds their AIs to play cooperate, his AI then takes over the world.

No one is talking about building AIs to cooperate. You do not want AIs that cooperate on the one-shot true PD. You want AIs that cooperate if and only if the opponent cooperates if and only if your AI cooperates. So yes, if you defect when others expect you to cooperate, you can pwn them; but why do you expect that AIs would expect you to cooperate (conditional on their cooperation) if "the smart thing to do" is to build an AI that defects? AIs with good epistemic models would then just expect other AIs that defect.
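The conditional "cooperate if and only if the opponent cooperates if and only if your AI cooperates" can be sketched as an agent consulting an epistemic model of its opponent. Here `predict_opponent` is a hypothetical stand-in for that model, not a real API:

```python
def conditional_cooperate(predict_opponent) -> str:
    """Cooperate only if my model says the opponent mirrors my move:
    it plays C when I play C, and D when I play D.

    `predict_opponent(my_move)` is a hypothetical epistemic model of
    the other AI's response, given my move.
    """
    mirrors_me = (predict_opponent("C") == "C"
                  and predict_opponent("D") == "D")
    return "C" if mirrors_me else "D"

# Against a mirroring (TDT-like) opponent this cooperates;
# against an unconditional defector it defects, so a defector
# gains nothing from facing it.
```

The point of the last branch is exactly the one made above: if "the smart thing to do" were to build a defector, agents with good epistemic models would predict defection and defect back, so the defector never gets to pwn anyone.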

Comment author: Wei_Dai 16 August 2009 10:13:35PM 0 points [-]

The comment you responded to was mostly obsoleted by this one, which represents my current position. Please respond to that one instead. Sorry for making you waste your time!

Comment author: Eliezer_Yudkowsky 16 August 2009 05:08:53AM 3 points [-]

I don't understand why you want the AIs to defect against each other rather than cooperating with each other.

Are you attached to this particular failure of causal decision theory for some reason? What's wrong with TDT agents cooperating in the Prisoner's Dilemma and everyone living happily ever after?

Comment author: Wei_Dai 16 August 2009 07:22:55AM *  1 point [-]

I don't understand why you want the AIs to defect against each other rather than cooperating with each other.

Come on, of course I don't want that. I'm saying that is the inevitable outcome under the rules of the game I specified. It's just like if I said "I don't want two human players to defect in one-shot PD, but that is what's going to happen."

ETA: Also, it may help if you think of the outcome as the human players defecting against each other, with the AIs just carrying out their strategies. The human players are the real players in this game.

Are you attached to this particular failure of causal decision theory for some reason?

No, I can't think of a reason why I would be.

What's wrong with TDT agents cooperating in the Prisoner's Dilemma and everyone living happily ever after?

There's nothing wrong with that, and it may yet happen, if it turns out that the technology for proving source code can be created. But if you can't prove that your source code is some specific string, if the only thing you have to go on is that you and the other AI must both use the same decision theory due to convergence, that isn't enough.

Sorry if I'm repeating myself, but I'm hoping one of my explanations will get the point across...

Comment author: Vladimir_Nesov 16 August 2009 11:07:57AM *  2 points [-]

Come on, of course I don't want that. I'm saying that is the inevitable outcome under the rules of the game I specified. It's just like if I said "I don't want two human players to defect in one-shot PD, but that is what's going to happen."

I don't believe that is true. It's perfectly conceivable that two human players would cooperate.

Comment author: Wei_Dai 16 August 2009 12:33:36PM 0 points [-]

Yes, I see the possibility now as well, although I still don't think it's very likely. I wrote more about it in http://lesswrong.com/lw/15m/towards_a_new_decision_theory/11lx