Manfred comments on Sneaky Strategies for TDT - Less Wrong

8 Post author: drnickbone 25 May 2012 04:13PM


Comment author: drnickbone 26 May 2012 08:33:56PM * 0 points

Here are the variants which make no explicit mention of TDT anywhere in the problem statement. It seems a real strain to describe either of them as unfair to TDT. Yet TDT will be outperformed on them by CDT, unless it resolves never to allow itself to be outperformed on any problem (in TDT über alles fashion).

Problem 1: Omega (who experience has shown is always truthful) presents the usual two boxes A and B and announces the following. "Before you entered the room, I selected an agent at random from the following distribution over all full source-codes for decision theory agents (insert distribution). I then simulated the result of presenting this exact problem to that agent. I won't tell you what the agent decided, but I will tell you that if the agent two-boxed then I put nothing in Box B, whereas if the agent one-boxed then I put big Value-B in Box B. Regardless of how the simulated agent decided, I put small Value-A in Box A. Now please choose your box or boxes."

Problem 2: Our ever-reliable Omega now presents ten boxes, numbered from 1 to 10, and announces the following. "Exactly one of these boxes contains $1 million; the others contain nothing. You must take exactly one box to win the money; if you try to take more than one, then you won't be allowed to keep any winnings. Before you entered the room, I ran multiple simulations of this problem as presented to different agents, sampled uniformly from different possible future universes according to their relative numbers, with the universes themselves sampled from my best projections of the future. I determined the box which the agents were least likely to take. If there were several such boxes tied for equal-lowest probability, then I just selected one of them, the one labelled with the smallest number. I then placed $1 million in the selected box. Please choose your box."
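Omega's placement rule in Problem 2 (fill the least-likely box, breaking ties in favour of the smallest label) can be sketched in a few lines. This is my own illustration, not from the post; the probability distribution is made up:

```python
def omega_box(choice_probs):
    """Return the 1-indexed box Omega fills: the box the simulated agents
    were least likely to take, ties broken by the smallest box number."""
    lowest = min(choice_probs)
    # enumerate yields indices in ascending order, so min() over the tied
    # indices picks the smallest label.
    return min(i + 1 for i, p in enumerate(choice_probs) if p == lowest)

# Hypothetical distribution over ten boxes (sums to 1); boxes 6-10 are
# tied for least likely, so Omega fills box 6.
probs = [0.2, 0.2, 0.15, 0.1, 0.1, 0.05, 0.05, 0.05, 0.05, 0.05]
print(omega_box(probs))  # 6
```

Note that the tie-break makes Omega's choice deterministic given the distribution, which is what lets the "twist" below bite even against randomising agents.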

Comment author: Manfred 26 May 2012 09:56:06PM * 5 points

unless it resolves never to allow itself to be outperformed on any problem (in TDT über alles fashion).

This is not actually possible. You can always play the "I simulated you and put the money in the place you don't choose" game.

It seems a real strain to describe either of them as unfair to TDT.

From this side of the screen, this looks like a property of you, not the problems. If we replace the statement about "relative numbers" in the future with our assumptions directly (we were having to make assumptions about that anyhow, so let's just save time and stick them in), then problem 2 reads "I simulated the best decision theory by definition X and put the money in the place it doesn't choose." This demonstrates that no matter how good a decision theory is by any definition, it can still get hosed by Omega. In this case we're assuming that definition X is maximized by TDT (thus, it's a unique specification), and yea, TDT did go forth and get hosed by Omega.

Comment author: drnickbone 26 May 2012 10:41:57PM * 0 points

This is not actually possible. You can always play the "I simulated you and put the money in the place you don't choose" game

But the obvious response to that game is randomisation among the choice options: there is no guarantee of winning, but no-one else can do better than you either. It takes a new "twist" on the problem to defeat the randomisation approach, and show that another agent type can do better.
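A toy Monte Carlo sketch of this point (my own construction, not from the thread): in the "I simulated you and put the money where you don't choose" game, a deterministic agent wins nothing, while a uniform randomiser still wins with probability 1/n, and no agent type can do better:

```python
import random

def play(agent, n_options, trials=100_000, seed=0):
    """Estimate the agent's win rate when Omega simulates it once and
    hides the prize in an option the simulation did not pick."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        simulated_pick = agent(n_options, rng)   # Omega's simulation run
        # Omega places the prize somewhere the simulation did not choose.
        prize = (simulated_pick + 1) % n_options
        actual_pick = agent(n_options, rng)      # the agent's real choice
        wins += (actual_pick == prize)
    return wins / trials

deterministic = lambda n, rng: 0               # always takes option 0
randomiser = lambda n, rng: rng.randrange(n)   # uniform over all options

print(play(deterministic, 10))  # 0.0: the prize is always elsewhere
print(play(randomiser, 10))     # ~0.1: the 1/n ceiling for any agent
```

This is why it takes the extra "twist" (Omega's deterministic tie-break over the agents' choice distribution) to defeat randomisation as well.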

I did ask on my original post (on Problematic Problems) whether that "twist" had been proposed or studied before. There were no references, but if you have one, please let me know.

Comment author: Manfred 26 May 2012 10:58:10PM 0 points

I don't have such a reference - so good job :D And yes, I was assuming that Omega was defeating randomization.

Comment author: MugaSofer 29 December 2012 07:37:59PM -1 points

It seems a real strain to describe either of them as unfair to TDT.

From this side of the screen, this looks like a property of you, not the problems. If we replace the statement about "relative numbers" in the future with our assumptions directly (we were having to make assumptions about that anyhow, so let's just save time and stick them in), then problem 2 reads "I simulated the best decision theory by definition X and put the money in the place it doesn't choose." This demonstrates that no matter how good a decision theory is by any definition, it can still get hosed by Omega. In this case we're assuming that definition X is maximized by TDT (thus, it's a unique specification), and yea, TDT did go forth and get hosed by Omega.

So there's a class of problems where failure is actually a good sign? Interesting. You might want to post further on that, actually.

Comment author: Manfred 29 December 2012 09:53:30PM 1 point

Hm, yeah. After some computational work at least. Every decision procedure can get hosed by Omega, and the way in which it gets hosed is diagnostic of its properties. Though not uniquely, I guess, so you can't say "it fails this special test therefore it is good."