# shokwave comments on Problematic Problems for TDT - Less Wrong

34 points · 29 May 2012 03:41PM



Comment author: 24 May 2012 02:28:17AM -1 points [-]

Then the simulated TDT agent will one-box in Problem 1 so that the real TDT agent can two-box and get \$1,001,000. The simulated TDT agent will pick a box randomly with a uniform distribution in Problem 2, so that the real TDT agent can select box 1 like CDT would.

(If the agent is not receiving any reward, it will act in a way that maximises the reward agents sufficiently similar to it would receive. In this situation of 'you get no reward', CDT would be completely indifferent and could not be relied upon to set up a good situation for future actual CDT agents.)

Of course, this doesn't work if the simulated TDT agent is not aware that it won't receive a reward. This strays pretty close to "Omega is all-powerful and out to make sure you lose"-type problems.
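The payoff logic described above can be put concretely. This is a minimal sketch, assuming the standard Newcomb-style payoffs for Problem 1 (box A always holds \$1,000; Omega fills box B with \$1,000,000 iff the simulated agent one-boxes); the function name is illustrative, not from the original post:

```python
def problem1_payoff(sim_one_boxes: bool, real_two_boxes: bool) -> int:
    """Real agent's payoff, given the simulated agent's choice.

    Box B is filled with $1,000,000 iff the simulation one-boxed;
    box A always contains $1,000.
    """
    box_a = 1_000
    box_b = 1_000_000 if sim_one_boxes else 0
    return (box_a + box_b) if real_two_boxes else box_b

# If the simulation one-boxes (knowing it gets no reward), the real
# agent can safely two-box and collect both boxes:
assert problem1_payoff(sim_one_boxes=True, real_two_boxes=True) == 1_001_000
```

Under these assumed payoffs, a simulation that sacrifices its own (zero) reward lets the real agent take the full \$1,001,000, which is the comment's point.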

Comment author: 24 May 2012 03:18:00AM 0 points [-]

> Of course, this doesn't work if the simulated TDT agent is not aware that it won't receive a reward.

The simulated TDT agent is not aware that it won't receive a reward, and therefore it does not work.

> This strays pretty close to "Omega is all-powerful and out to make sure you lose"-type problems.

Yeah, it doesn't seem right to me that the decision theory being tested is used in the setup of the problem. But I don't think that the ability to simulate without rewarding the simulation is what pushes it over the threshold of "unfair".

Comment author: 06 June 2012 11:54:52AM *  0 points [-]

> I don't think that the ability to simulate without rewarding the simulation is what pushes it over the threshold of "unfair".

It only seems that way because you're thinking from the non-simulated agent's point of view. How do you think you'd feel if you were a simulated agent, and after you made your decision Omega said 'Ok, cheers for solving that complicated puzzle, I'm shutting this reality down now because you were just a simulation I needed to set a problem in another reality'. That sounds pretty unfair to me. Wouldn't you be saying 'give me my money, you cheating scum'?

And as has been already pointed out, they're very different problems. If Omega actually is trustworthy, integrating across all the simulations gives infinite utility for all the (simulated) TDT agents and a total \$1,001,000 utility for the (supposedly non-simulated) CDT agent.

Comment author: 06 June 2012 04:20:02PM 0 points [-]

> It only seems that way because you're thinking from the non-simulated agent's point of view. How do you think you'd feel if you were a simulated agent, and after you made your decision Omega said 'Ok, cheers for solving that complicated puzzle, I'm shutting this reality down now because you were just a simulation I needed to set a problem in another reality'. That sounds pretty unfair to me. Wouldn't you be saying 'give me my money, you cheating scum'?

We were discussing if it is a "fair" test of the decision theory, not if it provides a "fair" experience to any people/agents that are instantiated within the scenario.

> And as has been already pointed out, they're very different problems. If Omega actually is trustworthy, integrating across all the simulations gives infinite utility for all the (simulated) TDT agents and a total \$1,001,000 utility for the (supposedly non-simulated) CDT agent.

I am aware that they are different problems. That is why the version of the problem in which simulated agents get utility that the real agent cares about does nothing to address the criticism of TDT that it loses in the version where simulated agents get no utility. Postulating the former in response to the latter fails to apply the Least Convenient Possible World principle.

The complaints about Omega being untrustworthy are weak. Just reformulate the problem so Omega says to all agents, simulated or otherwise, "You are participating in a game that involves simulated agents and you may or may not be one of the simulated agents yourself. The agents involved in the game are the following: <describes agents' roles in third person>".

Comment author: 19 June 2012 07:46:29AM 0 points [-]

> The complaints about Omega being untrustworthy are weak. Just reformulate the problem so Omega says to all agents, simulated or otherwise, "You are participating in a game that involves simulated agents and you may or may not be one of the simulated agents yourself. The agents involved in the game are the following: <describes agents' roles in third person>".

Good point.

That clears up the possibility of summing utility across possible worlds, but it still doesn't address the fact that the TDT agent is being asked to (potentially) make two decisions while the non-TDT agent is being asked to make only one. That seems to me to make the scenario unfair (it's what I was trying to get at in the 'very different problems' statement).

Comment author: 26 May 2012 09:44:38PM 0 points [-]

> The simulated TDT agent is not aware that it won't receive a reward, and therefore it does not work.

This raises an interesting problem, actually. Omega could pose the following question:

> Here are two boxes, A and B; you may choose either box, or take both. You are in one of two states of nature, with equal probability: one possibility is that you're in a simulation, in which case you will receive no reward, no matter what you choose. The other possibility is that a simulation of this problem was presented to an agent running TDT. I won't tell you what the agent decided, but I will tell you that if the agent two-boxed then I put nothing in Box B, whereas if the agent one-boxed then I put \$1 million in Box B. Regardless of how the simulated agent decided, I put \$1000 in Box A. Now please make your choice.

The solution for a TDT agent seems to be choosing box B, but there may be similar games where it makes sense to run a mixed strategy. I don't think that it makes much sense to rule out the possibility of running mixed strategies across simulations, because in most models of credible precommitment the other players do not have this kind of foresight (although Omega possibly does).
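The expected-value reasoning behind "choosing box B" can be sketched briefly. This is a hedged illustration, assuming (as TDT would) that the simulated and real instances of the agent make the same choice, that simulated runs pay nothing, and that the payoffs are as Omega states above; the function name and the 50/50 probability are taken from that statement:

```python
P_SIM = 0.5  # stated probability of being the simulation

def tdt_expected_value(one_box: bool) -> float:
    """Expected payoff for a TDT agent whose simulated and real
    instances necessarily make the same choice."""
    # Box B is filled iff the simulation one-boxed, i.e. iff we one-box.
    box_b = 1_000_000 if one_box else 0
    # In the real run, two-boxing adds box A's $1,000 to whatever is in B.
    real_payoff = box_b if one_box else box_b + 1_000
    # Simulated runs pay nothing.
    return P_SIM * 0 + (1 - P_SIM) * real_payoff

assert tdt_expected_value(True) == 500_000.0   # one-boxing
assert tdt_expected_value(False) == 500.0      # two-boxing
```

Under these assumptions, one-boxing dominates by three orders of magnitude, which is why taking only box B looks like the TDT solution here.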

And yes, it is still the case that a CDT agent can outperform TDT, as long as the TDT agent knows that if she is in a simulation, her choice will influence a real game played by a TDT, with some probability. Nevertheless, as the probability of "leaking" to CDT increases, it does become more profitable (AIUI) for TDT to two-box with low probability.
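The mixed-strategy case mentioned above can also be sketched, under one simplifying assumption not in the original comments: the simulated and real runs draw independently from the same strategy (two-boxing with probability `p`), and there is no "leak" to a CDT agent. The function name and parameters are illustrative:

```python
def mixed_ev(p: float, p_sim: float = 0.5) -> float:
    """Expected payoff when the agent two-boxes with probability p,
    and the simulated and real runs sample the strategy independently."""
    # Expected content of box B: the simulation one-boxed with prob. 1 - p.
    box_b = (1 - p) * 1_000_000
    # The real run collects box A's $1,000 only when it two-boxes (prob. p).
    real_payoff = box_b + p * 1_000
    # Simulated runs pay nothing; only the real run (prob. 1 - p_sim) counts.
    return (1 - p_sim) * real_payoff

# Under these assumptions, pure one-boxing (p = 0) maximizes expected value:
assert mixed_ev(0.0) > mixed_ev(0.1) > mixed_ev(1.0)
```

This matches the intuition that mixing only helps once the "leaking to CDT" effect described above is added to the model; with no leak, the pure strategy wins.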

Comment author: 25 May 2012 08:16:27AM *  0 points [-]

> The simulated TDT agent is not aware that it won't receive a reward, and therefore it does not work. ... I don't think that the ability to simulate without rewarding the simulation is what pushes it over the threshold of "unfair".

I do agree. I think my previous post was still exploring the "can TDT break with a simulation of itself?" question, which is interesting but orthogonal.