army1987 comments on Problematic Problems for TDT - Less Wrong

36 points | Post author: drnickbone | 29 May 2012 03:41PM

Comment author: [deleted] 28 May 2012 09:08:53AM 6 points

Omega (who experience has shown is always truthful) presents the usual two boxes A and B and announces the following. "Before you entered the room, I ran a simulation of this problem as presented to an agent running TDT.

If he's always truthful, then he didn't lie to the simulation either, which means he must have run infinitely many simulations before that one, and so on ad infinitum. So assume he says "Either before you entered the room I ran a simulation of this problem as presented to an agent running TDT, or you are such a simulation yourself and I'm going to present this problem to the real you afterwards", or something similar. If he says different things to you and to your simulation, then it's not obvious you'll give the same answer.

Are these really "fair" problems? Is there some intelligible sense in which they are not fair, but Newcomb's problem is fair?

Well, a TDT agent has indexical uncertainty about whether or not they're in the simulation, whereas a CDT or EDT agent doesn't. But I haven't thought this through yet, so it might turn out to be irrelevant.

Comment author: drnickbone 28 May 2012 06:57:02PM 1 point

This question of "Does Omega lie to sims?" was already discussed earlier in the thread. There were several possible answers from cousin_it and myself, any of which will do.

Comment author: DanArmak 28 May 2012 03:01:53PM 0 points

He can't have done literally infinitely many simulations. If that really were required, it would offer a way out: we could say the thought experiment stipulates an impossible situation. I haven't yet considered whether the problem can be changed so that it gives the same result without requiring infinitely many simulations.

ETA: no wait, that can't be right, because it would apply to the original Newcomb's problem too. So there must be a way to formalize this correctly. I'll have to look it up but don't have the time right now.

Comment author: [deleted] 28 May 2012 04:03:14PM 1 point

In the original Newcomb's problem it's not specified that Omega performs simulations -- for all we know, he might use magic, closed timelike curves, or quantum magic whereby Box A is in a superposition of states entangled with your mind, such that if you open Box B, A ends up being empty, and if you hand B back to Omega, A ends up being full.

Comment author: DanArmak 28 May 2012 04:26:18PM 0 points

We should take this seriously: a problem that cannot be instantiated in the physical world should not affect our choice of decision theory.

Before I dig myself in deeper, what does existing wisdom say? What is a practical possible way of implementing Newcomb's problem? For instance, simulation is eminently practical as long as Omega knows enough about the agent being simulated. OTOH, macro quantum entanglement of an arbitrary agent's arbitrary physical instantiation with a box prepared by Omega doesn't sound practical to me, but maybe I'm just swayed by incredulity. What do the experts say? (Including you if you're an expert, obviously.)

Comment author: [deleted] 28 May 2012 04:37:15PM -1 points

cannot

0 is not a probability, and even tiny probabilities can give rise to Pascal's mugging.

Unless your utility function is bounded.

Comment author: wedrifid 28 May 2012 04:58:12PM 1 point

0 is not a probability, and even tiny probabilities can give rise to Pascal's mugging.

Even? I'd go as far as to say only. Non-tiny probabilities aren't Pascal's muggings. They are just expected utility calculations. </lighthearted nitpick!>

Comment author: DanArmak 28 May 2012 05:02:37PM 0 points

If a problem statement has an internal logical contradiction, there is still a tiny probability that I and everyone else are getting it wrong -- due to corrupted hardware, or a common misconception about logic, or pure chance -- and the problem can still be instantiated. But it's so small that I shouldn't give it preferential consideration over other things I might be wrong about, like the nonexistence of a punishing god or the possibility that the food I'm served at the restaurant today is poisoned.

Either of those if true could trump any other (actual) considerations in my actual utility function. The first would make me obey religious strictures to get to heaven. The second threatens death if I eat the food. But I ignore both due to symmetry in the first case (the way to defeat Pascal's wager in general) and to trusting my estimation of the probability of the danger in the second (ordinary expected utility reasoning).

AFAICS both considerations apply to treating an apparently self-contradictory problem statement as really not possible, with effective probability zero. I might be misunderstanding things so badly that it really is possible, but I might also be misunderstanding things so badly that the book I read yesterday about the history of Africa really contained a fascinating new decision theory I must adopt or be doomed by Omega.

All this seems to me to fail due to standard reasoning about Pascal's mugging. What am I missing?

Comment author: [deleted] 28 May 2012 06:16:50PM 0 points

If a problem statement has an internal logical contradiction

AFAIK Newcomb's dilemma does not logically contradict itself; it just contradicts the physical law that causality cannot go backwards in time.

Comment author: wedrifid 28 May 2012 06:23:57PM 1 point

AFAIK Newcomb's dilemma does not logically contradict itself; it just contradicts the physical law that causality cannot go backwards in time.

It certainly doesn't contradict itself, and I would also assert that it doesn't contradict the physical law that causality cannot go backwards in time. Instead I would say that giving the sane answer to Newcomb's problem requires abandoning the assumption that one's decision must be based only on what it affects through forward-in-time causal, physical influence.

Comment author: private_messaging 28 May 2012 07:46:14PM 0 points

Consider making both boxes transparent to illustrate a related issue.

Comment author: MugaSofer 25 December 2012 03:57:35PM -1 points

If that is really required it would be a way out by saying the thought experiment stipulates an impossible situation.

This might be better stated as "incoherent", as opposed to mere impossibility which can be resolved with magic.

Comment author: private_messaging 28 May 2012 09:10:14PM 0 points

So assume he says "Either before you entered the room I ran a simulation of this problem as presented to an agent running TDT, or you are such a simulation yourself and I'm going to present this problem to the real you afterwards", or something similar.

...

Well, a TDT agent has indexical uncertainty about whether or not they're in the simulation, whereas a CDT or EDT agent doesn't.

Say you have a CDT agent in the world, affecting it via a set of robotic hands, a robotic voice, and so on. If you wire up two robot bodies to one computer (in parallel, so that all movements are performed by both bodies), that is just a somewhat peculiar robotic manipulator. Handling this doesn't require any changes to CDT.

Likewise when you have two robot bodies controlled by the same mathematical equation: provided that the world model in your CDT utility calculation accounts for all the known manipulators controlled by the chosen action, you get the correct result.

Likewise, you can have CDT control a multitude of robots, either from one computer or from multiple computers that independently determine optimal, identical actions (with each computer acting only on the robot body assigned to it).

CDT is formally defined using mathematics, and the mathematics is already 'timeless'. The fact that the chosen action affects the contents of the boxes is part of the world model, not of the decision theory -- and so are physical time and physical causality. Even though the decision theory is called causal, that's a different sense of 'causal'.
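The point can be sketched numerically. In this toy model (the function names and payoffs are illustrative, not from the thread), the world model itself records that the simulated run and the real run are driven by the same chosen action -- one equation, two "manipulators" -- so a plain expected-utility maximizer over that model one-boxes:

```python
# Toy Newcomb setup: box B always holds $1,000; Omega puts
# $1,000,000 in box A iff the simulated copy one-boxed.
def payoff(action):
    # World model: the same equation drives every manipulator,
    # so the sim's choice necessarily equals the real choice.
    sim_action = action
    box_a = 1_000_000 if sim_action == "one-box" else 0
    box_b = 1_000
    return box_a if action == "one-box" else box_a + box_b

# Ordinary maximization over the world model picks one-boxing.
best = max(["one-box", "two-box"], key=payoff)
print(best, payoff(best))  # -> one-box 1000000
```

The "backwards causation" lives entirely inside `payoff` (the line tying `sim_action` to `action`); the decision procedure itself is just argmax over expected utility, which is the commenter's point.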

Comment author: MugaSofer 25 December 2012 03:54:57PM -1 points

I assumed the sims weren't conscious - they were abstract implementations of TDT.

Comment author: [deleted] 25 December 2012 05:59:29PM 0 points

Well, then there's stuff you know that the sims don't, which you could take into account when deciding, and hence decide something different from what they did.

Comment author: MugaSofer 25 December 2012 10:26:34PM 1 point

What stuff? The color of the walls? Memories of your childhood? Unless you have information that alters your decision or you're not a perfect implementer of TDT, in which case you get lumped into the category of "CDT, EDT etc."

Comment author: [deleted] 25 December 2012 11:47:36PM 1 point

The fact that you're not a sim, and unlike the sims you'll actually be given the money.

Comment author: MugaSofer 26 December 2012 01:38:30AM 0 points

Why the hell would Omega program the sim not to value the simulated reward? It's almost certainly just abstract utility anyway.