James_Miller comments on Causal decision theory is unsatisfactory - LessWrong

Post author: So8res 13 September 2014 05:05PM


Comment author: James_Miller 13 September 2014 07:52:47PM *  0 points [-]

According to the definition of the game, the clone will happen to defect if you defect and will happen to cooperate if you cooperate.

You have to consider off-the-equilibrium-path behavior. If I'm the type of person who will always cooperate, what would happen if I went off-the-equilibrium-path and did defect even if my defecting is a zero probability event?

Comment author: shminux 13 September 2014 09:55:43PM 3 points [-]

If I'm the type of person who will always cooperate, what would happen if I went off-the-equilibrium-path and did defect even if my defecting is a zero probability event?

I'm trying to understand the difference between your statement and "1 is not equal 2, but what if it were?" and failing.

Comment author: solipsist 13 September 2014 11:45:03PM *  1 point [-]

See trembling hand equilibrium.

A trembling hand perfect equilibrium is an equilibrium that takes the possibility of off-the-equilibrium play into account by assuming that the players, through a "slip of the hand" or tremble, may choose unintended strategies, albeit with negligible probability.

First we define a perturbed game. A perturbed game is a copy of a base game, with the restriction that only totally mixed strategies are allowed to be played. A totally mixed strategy is a mixed strategy where every pure strategy is played with non-zero probability. This is the "trembling hands" of the players; they sometimes play a different strategy than the one they intended to play. Then we define a strategy set S (in a base game) as being trembling hand perfect if there is a sequence of perturbed games that converge to the base game in which there is a series of Nash equilibria that converge to S.
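As a toy numerical sketch of that definition applied to a standard one-shot PD (payoff numbers are assumed illustration values, not from the thread): in each perturbed game, every pure strategy must get at least probability ε, and since Defect strictly dominates, the perturbed equilibria put only the minimum weight ε on Cooperate and converge to (D, D) as ε → 0.

```python
# Toy illustration of trembling-hand perfection in a standard one-shot PD.
# Payoffs (row player): T > R > P > S -- values assumed for illustration.
T, R, P, S = 5, 3, 1, 0

def defect_payoff(q):
    """Row's expected payoff for Defect when column cooperates with prob q."""
    return q * T + (1 - q) * P

def cooperate_payoff(q):
    """Row's expected payoff for Cooperate when column cooperates with prob q."""
    return q * R + (1 - q) * S

# In the perturbed game every pure strategy must get probability >= eps.
# Since Defect strictly dominates, the equilibrium puts the minimum weight
# eps on Cooperate. As eps -> 0 these equilibria converge to (D, D),
# so (D, D) is trembling-hand perfect.
for eps in [0.1, 0.01, 0.001]:
    q = eps  # the opponent cooperates only by trembling
    assert defect_payoff(q) > cooperate_payoff(q)
    print(eps, defect_payoff(q), cooperate_payoff(q))
```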

Comment author: shminux 14 September 2014 12:05:41AM 1 point [-]

Right, as I mentioned in my other reply, CDT is discontinuous at p=0. Presumably a better decision theory would not have such a discontinuity.

Comment author: Jiro 13 September 2014 11:13:00PM 1 point [-]

One possible interpretation of "if I always cooperate, what would happen if I don't" is "what is the limit, as X approaches 1, of 'if I cooperate with probability X, what would happen if I don't'?"

This doesn't reasonably map onto the 1=2 example.

Comment author: shminux 13 September 2014 11:49:26PM 1 point [-]

Right. There seems to be a discontinuity, as the limit of CDT (p->0) is not CDT (p=0). I wonder if this is the root of the issue.
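One way to make the discontinuity concrete (my own sketch, with assumed PD payoffs): if the clone runs the same mixed strategy as you but with independent randomness, the counterfactual payoff for defecting tends to T as the defection probability ε → 0, whereas at ε = 0 exactly, the stipulated perfect correlation means a deviation drags the clone along and yields only P.

```python
# Sketch of the limit-vs-point discontinuity. Payoffs assumed:
# T = 5 if you defect against a cooperator, P = 1 if both defect.
T, P = 5, 1

def defect_payoff_independent(eps):
    """Counterfactual payoff for defecting, if the clone runs the same
    mixed strategy (defect with prob eps) with *independent* randomness."""
    return (1 - eps) * T + eps * P

# As eps -> 0 this tends to T = 5 ...
for eps in [0.1, 0.01, 0.001]:
    print(eps, defect_payoff_independent(eps))

# ... but at eps = 0 the game stipulates perfect correlation: if you
# defect, the clone defects too, and the payoff is P = 1, not T = 5.
defect_payoff_correlated = P
```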

Comment author: James_Miller 13 September 2014 10:31:18PM *  1 point [-]

"1 is not equal 2, but what if it were?" = what if I could travel faster than the speed of light.

Off the equilibrium path = what if I were to burn a dollar.

Or things I can't do vs things I don't want to do.

Comment author: shminux 13 September 2014 11:34:27PM 1 point [-]

Or things I can't do vs things I don't want to do.

In my mind "I'm the type of person who will always cooperate" means that there is no difference between the two in this case. Maybe you use a different definition of "always"?

Comment author: James_Miller 14 September 2014 12:58:28AM *  1 point [-]

I always cooperate because doing so maximizes my utility since it is better than all the alternatives. I always go slower than the speed of light because I have no alternatives.

Comment author: Adele_L 13 September 2014 08:38:06PM 1 point [-]

You can consider it, but conditioned on the information that you are playing against your clone, you should assign this a very low probability of happening, and weight it in your decision accordingly.

Comment author: James_Miller 13 September 2014 08:41:24PM -1 points [-]

Assume I am the type of person who would always cooperate with my clone. If I asked myself the following question: "If I defected, would my payoff be higher or lower than if I cooperated, even though I know I will always cooperate?", what would be the answer?

Comment author: lackofcheese 14 September 2014 03:39:49AM *  2 points [-]

Yes, it makes a little bit of sense to counterfactually reason that you would get $1000 more if you defected, but that is predicated on the assumption that you always cooperate. You cannot actually get that free $1000 because the underlying assumption of the counterfactual would be violated if you actually defected.

Comment author: VAuroch 14 September 2014 11:49:25AM 1 point [-]

The answer would be 'MOO'. Or 'Mu', or 'moot'; they're equivalent. "In this impossible counterfactual where I am self-contradictory, what would happen?"

Comment author: VAuroch 13 September 2014 08:05:17PM *  0 points [-]

No, you don't. This is a game where there are only two possible outcomes: DD and CC. CD and DC are defined to be impossible because the agents playing the game are physically incapable of making those outcomes occur.

EDIT: Maybe physically incapable is a bit strong. If they wanted to maximize the chance of unmatched outcomes, they could each flip a coin and take C if heads and D if tails, giving a 50% chance of their outcomes not matching. But they would still both be playing precisely the same strategy.

Comment author: James_Miller 13 September 2014 08:14:12PM *  0 points [-]

I don't agree. Even if I'm certain I will not defect, I am capable of asking what would happen if I did, just as the real me both knows he won't do "horrible thing" yet can mentally model what would happen if he did "horrible thing". Or imagine an AI that's programmed to always maximize its utility. This AI still could calculate what would happen if it followed a non-utility maximizing strategy. Often in game theory a solution requires you to calculate your payoff if you left the equilibrium path.

Comment author: pragmatist 13 September 2014 08:38:48PM *  1 point [-]

What would you say about the following decision problem (formulated by Andy Egan, I believe)?

You have a strong desire that all psychopaths in the world die. However, your desire to stay alive is stronger, so if you yourself are a psychopath you don't want all psychopaths to die. You are pretty sure, but not certain, that you're not a psychopath. You're presented with a button, which, if pressed, would kill all psychopaths instantly. You are absolutely certain that only a psychopath would press this button. Should you press the button or not?

It seems to me the answer is "Obviously not", precisely because the "off-path" possibility that you're a non-psychopath who pushes the button should not enter into your consideration. But the causal decision algorithm would recommend pushing the button if your prior that you are a psychopath is small enough. Would you agree with that?
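The claim about the causal decision algorithm can be written out as a toy calculation (my own sketch; the utility numbers are assumed for illustration, not Egan's):

```python
# Toy CDT calculation for Egan's psychopath-button case. Utilities are
# assumed: dying = -100, all psychopaths dying while you live = +10,
# not pressing = 0.
U_DIE, U_WIN = -100.0, 10.0

def cdt_press_utility(p_psycho):
    """CDT evaluates pressing using the *prior* probability that you
    are a psychopath, ignoring what pressing would reveal about you."""
    return p_psycho * U_DIE + (1 - p_psycho) * U_WIN

# CDT recommends pressing whenever this exceeds 0, i.e. whenever the
# prior is below 10/110 ~ 0.09 -- even though only a psychopath would
# actually press.
print(cdt_press_utility(0.05))  # positive: CDT says press
print(cdt_press_utility(0.5))   # negative: CDT says don't press
```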

Comment author: Jiro 13 September 2014 11:14:57PM 3 points [-]

If only a psychopath would push the button, then your possible non-psychopathic nature limits what decision algorithms you are capable of following.

Comment author: helltank 13 September 2014 11:22:10PM 1 point [-]

Wouldn't the fact that you're even considering pushing the button (because if only a psychopath would push the button, then it follows that a non-psychopath would never push the button) indicate that you are a psychopath, and therefore you should not push the button?

Another way to put it is:

If you are a psychopath and you push the button, you die. If you are not a psychopath and you push the button, pushing the button would make you a psychopath (since only a psychopath would push), and therefore you die.

Comment author: pragmatist 14 September 2014 05:57:57AM 2 points [-]

Pushing the button can't make you a psychopath. You're either already a psychopath or you're not. If you're not, you will not push the button, although you might consider pushing it.

Comment author: helltank 14 September 2014 12:51:06PM 1 point [-]

Maybe I was unclear.

I'm arguing that the button will never, ever be pushed. If you are NOT a psychopath, you won't push, end of story.

If you ARE a psychopath, you can choose to push or not push.

If you push, that's evidence you are a psychopath. If you are a psychopath, you should not push. Therefore, you will always end up regretting the decision to push.

If you don't push, you don't push and nothing happens.

In all three cases the correct decision is not to push, therefore you should not push.

Comment author: lackofcheese 14 September 2014 01:47:00AM 1 point [-]

Shouldn't you also update your belief towards being a psychopath on the basis that you have a strong desire that all psychopaths in the world die?

Comment author: pragmatist 14 September 2014 05:56:17AM 1 point [-]

You can stipulate this out of the example. Let's say pretty much everyone has the desire that all psychopaths die, but only psychopaths would actually follow through with it.

Comment author: James_Miller 13 September 2014 08:43:30PM 1 point [-]

I don't press. CDT fails here because (I think) it doesn't allow you to update your beliefs based on your own actions.

Comment author: crazy88 14 September 2014 10:05:15PM 2 points [-]

Exactly what information CDT allows you to update your beliefs on is a matter for some debate. You might be interested in a paper by James Joyce (http://www-personal.umich.edu/~jjoyce/papers/rscdt.pdf) on the issue (which was written in response to Egan's paper).

Comment author: pragmatist 13 September 2014 08:46:49PM *  1 point [-]

But then shouldn't you also update your beliefs about what your clone will do based on your own actions in the clone PD case? Your action is very strong (perfect, by stipulation) evidence for his action.

Comment author: James_Miller 13 September 2014 08:58:03PM 1 point [-]

Yes, I should. In the psychopath case, whether I press the button depends on my beliefs; in contrast, in a PD I should defect regardless of my beliefs.

Comment author: pragmatist 13 September 2014 09:13:31PM *  1 point [-]

Maybe I misunderstand what you mean by "updating beliefs based on action". Here's how I interpret it in the psychopath button case: When calculating the expected utility of pushing the button, don't use the prior probability that you're a psychopath in the calculation, use the probability that you're a psychopath conditional on deciding to push the button (which is 1). If you use that conditional probability, then the expected utility of pushing the button is guaranteed to be negative, no matter what the prior probability that you're a psychopath is. Similarly, when calculating the expected utility of not pushing the button, use the probability that you're a psychopath conditional on deciding not to push the button.

But then, applying the same logic to the PD case, you should calculate expected utilities for your actions using probabilities for your clone's action that are conditional on the very action that you are considering. So when you're calculating the expected utility for cooperating, use probabilities for your clone's action conditional on you cooperating (i.e., 1 for the clone cooperating, 0 for the clone defecting). When calculating the expected utility for defecting, use probabilities for your clone's action conditional on you defecting (0 for cooperating, 1 for defecting). If you do things this way, then cooperating ends up having a higher expected utility.
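The two calculations just described can be made explicit (a sketch with assumed payoff numbers):

```python
# Expected utilities computed with action-*conditional* probabilities.
# All payoff numbers are assumed for illustration.

# Psychopath button: P(psychopath | press) = 1, so pressing is always bad.
U_DIE, U_WIN = -100.0, 10.0
eu_press = 1.0 * U_DIE + 0.0 * U_WIN  # conditional on pressing, you're a psychopath
eu_not_press = 0.0                    # nothing happens
assert eu_press < eu_not_press

# Clone PD with standard payoffs T > R > P > S (assumed values):
T, R, P, S = 5, 3, 1, 0
# Conditional on you cooperating, the clone cooperates (prob 1);
# conditional on you defecting, the clone defects (prob 1).
eu_cooperate = 1.0 * R + 0.0 * S  # = R
eu_defect    = 0.0 * T + 1.0 * P  # = P
assert eu_cooperate > eu_defect   # cooperating wins under this rule
```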

Perhaps another way of putting it is that once you know the clone's actions are perfectly correlated with your own, you have no good reason to treat the clone as an independent agent in your analysis. The standard tools of game theory, designed to deal with cases involving multiple independent agents, are no longer relevant. Instead, treat the clone as if he were part of the world-state in a standard single-agent decision problem, except this is a part of the world-state about which your actions give you information (kind of like whether or not you're a psychopath in the button case).

Comment author: James_Miller 13 September 2014 10:44:59PM 1 point [-]

I agree with your first paragraph.

Imagine you are absolutely certain that you will cooperate and that your clone will cooperate. You are still capable of asking "what would my payoff be if I didn't cooperate?", and this payoff will be the payoff if you defect and the clone cooperates, since you expect the clone to do whatever you will do and you expect to cooperate. There is no reason to update my belief on what the clone will do in this thought experiment since the thought experiment is about a zero probability event.

The psychopath case is different because I have uncertainty regarding whether I am a psychopath and the choice I want to make helps me learn about myself. I have no uncertainty concerning my clone.

Comment author: VAuroch 14 September 2014 11:39:00AM 0 points [-]

There is no reason to update my belief on what the clone will do in this thought experiment since the thought experiment is about a zero probability event.

You are reasoning about an impossible scenario; if the probability of you reaching the event is 0, the probability of your clone reaching it is also 0. In order to make it a sensical notion, you have to consider it as epsilon probabilities; since the probability will be the same for both you and your clone, this gets you an expected payoff of (1 − ε)·U(C,C) + ε·U(D,D), which is maximized when ε = 0.

To claim that you and your clone could take different actions is trying to make it a question about trembling-hand equilibria, which violates the basic assumptions of the game.
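This epsilon argument can be checked numerically (a sketch; the PD payoffs are assumed standard values): with perfectly correlated trembles, the expected payoff is (1 − ε)·R + ε·P, which is strictly decreasing in ε and so maximized at ε = 0.

```python
# Sketch of the correlated-tremble payoff for the clone PD. Payoffs
# assumed: R (both cooperate) = 3, P (both defect) = 1.
R, P = 3, 1

def expected_payoff(eps):
    """Both players tremble *together* with probability eps, since the
    clone's action is guaranteed to match yours."""
    return (1 - eps) * R + eps * P

# Decreasing in eps, hence maximized at eps = 0 (always cooperate).
assert expected_payoff(0.0) > expected_payoff(0.1) > expected_payoff(0.5)
print(expected_payoff(0.0))  # 3.0
```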

Comment author: VAuroch 14 September 2014 11:40:22AM 0 points [-]

Even if I'm certain I will not defect, I am capable of asking what would happen if I did,

Yes, and part of the answer is "If I did defect, my clone would also defect." You have a guarantee that both of you take the same actions because you think according to precisely identical reasoning.

Comment author: James_Miller 14 September 2014 02:27:29PM 1 point [-]

What do you think will happen if clones play the centipede game?

Comment author: VAuroch 15 September 2014 09:25:05AM 1 point [-]

Unclear, depends on the specific properties of the person being cloned. Unlike PD, the two players aren't in the same situation, so they can't necessarily rely on their logic being the same as their counterpart. How closely this would reflect the TDT ideal of 'Always Push' will depend on how luminous the person is; if they can model what they would do in the opposite situation, and are highly confident that their self-model is correct, they can reach the best result, but if they lack confidence that they know what they'd do, then the winning cooperation is harder to achieve.

Of course, if it's denominated in money and is 100 steps of doubling, as implied by the Wikipedia page, then the difference in utility between $1 nonillion and $316 octillion is so negligible that there's essentially no incentive to defect in the last round and any halfway-reasonable person will Always Push straight through the game. But that's a degenerate case and probably not the version originally discussed.