ata comments on Another attempt to explain UDT - Less Wrong

Post author: cousin_it 14 November 2010 04:52PM (35 points)


Comment author: cousin_it 14 November 2010 07:56:36PM, 6 points

Oh, lots of open problems remain. Here's a handy list of what I have in mind right now:

1) 2TDT-1CDT.

2) "Agent simulates predictor", or ASP: if you have way more computing power than Omega, then you can obtain Omega's decision just by simulating it, so you will two-box; but obviously this isn't what you want to do.

3) "The stupid winner paradox": if two superintelligences play a demand game for $10, presumably they can agree to take $5 each to avoid losing it all. But a human playing against a superintelligence can just demand $9, knowing the superintelligence will predict this and settle for the remaining $1.

4) "A/B/~CON": action A gets you $5, action B gets you $10. Additionally, you will receive $1 if an inconsistency of PA is ever proved. This way you can't write a terminating utility() function, but you can still define the value of utility axiomatically. This is supposed to exemplify all the tractable cases where one action is clearly superior to the other, but total utility is uncomputable.

5) The general case of agents playing a non-zero-sum game against each other, knowing each other's source code. For example, the Prisoner's Dilemma with asymmetric payoffs.

I could make a separate post from this list, but I've been making way too many toplevel posts lately.
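Item 4 can be made concrete in a few lines. This is a hypothetical sketch: `search_for_pa_inconsistency_proof` is a stand-in for an unbounded proof search, and the point is only that the pairwise comparison between A and B is computable even though utility() itself never returns.

```python
def search_for_pa_inconsistency_proof() -> bool:
    # Stand-in for an unbounded search through PA proofs; if PA is
    # consistent, this search never halts, so utility() never returns.
    raise NotImplementedError("unbounded proof search")

def utility(action: str) -> int:
    # Well-defined axiomatically, but not computable by running it.
    base = {"A": 5, "B": 10}[action]
    return base + (1 if search_for_pa_inconsistency_proof() else 0)

def b_beats_a() -> bool:
    # The unknown $1 bonus is the same constant in both branches, so
    # the comparison is decidable even though the totals are not.
    return all(10 + bonus > 5 + bonus for bonus in (0, 1))

print(b_beats_a())  # True
```

The design point: an agent can prove "B is better than A" by case analysis on the unknown bit without ever evaluating either total.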

Comment author: ata 14 November 2010 08:25:08PM, 0 points

"Agent simulates predictor", or ASP: if you have way more computing power than Omega, then you can obtain Omega's decision just by simulating it, so you will two-box; but obviously this isn't what you want to do.

If you can predict Omega, but Omega can still predict you well enough for the problem to be otherwise the same, then, given that you anticipate that if you predict Omega's decision then you will two-box and lose, can't you choose not to predict Omega (instead deciding the usual way, resulting in one-boxing), knowing that Omega will correctly predict that you will not obtain its decision by simulation?

(Sorry, I know that's a cumbersome sentence; hope its meaning was clear.)

"The stupid winner paradox": if two superintelligences play a demand game for $10, presumably they can agree to take $5 each to avoid losing it all. But a human playing against a superintelligence can just demand $9, knowing the superintelligence will predict this and settle for the remaining $1.

By "demand game" are you referring to the ultimatum game?

"A/B/~CON": action A gets you $5, action B gets you $10. Additionally, you will receive $1 if an inconsistency of PA is ever proved. This way you can't write a terminating utility() function, but you can still define the value of utility axiomatically. This is supposed to exemplify all the tractable cases where one action is clearly superior to the other, but total utility is uncomputable.

Is the $1 independent of whether you pick action A or action B?

Comment author: cousin_it 14 November 2010 08:43:54PM, 1 point

1) The challenge is not solving this individual problem, but creating a general theory that happens to solve this special case automatically. Our current formalizations of UDT fail on ASP - they have no concept of "stop thinking".

2) No, I mean the game where two players each write a sum of money on a piece of paper; if the total is over $10 then both get nothing, otherwise each player gets the sum they wrote.

3) Yeah, the $1 is independent.
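The game described in answer 2 is easy to write down. This minimal sketch just encodes the payoffs, with the two outcomes from the "stupid winner" paradox as examples:

```python
def demand_game(demand_a: int, demand_b: int, pot: int = 10) -> tuple:
    # If the demands overshoot the pot, both players get nothing;
    # otherwise each player gets exactly what they wrote down.
    if demand_a + demand_b > pot:
        return (0, 0)
    return (demand_a, demand_b)

print(demand_game(5, 5))  # (5, 5): the symmetric split
print(demand_game(9, 1))  # (9, 1): the "stupid winner" outcome
print(demand_game(9, 5))  # (0, 0): what happens if neither side yields
```

The paradox lives entirely in the last two lines: a predictor facing a committed $9 demand can do no better than take the remaining $1.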

Comment author: ata 14 November 2010 11:14:48PM, 1 point

1) The challenge is not solving this individual problem, but creating a general theory that happens to solve this special case automatically. Our current formalizations of UDT fail on ASP - they have no concept of "stop thinking".

Okay.

So, the superintelligent UDT agent can essentially see through both boxes (whether it wants to or not... or, rather, has no concept of not wanting to). Sorry if this is a stupid question, but wouldn't UDT one-box anyway, whether the box is empty or contains $1,000,000, for the same reason that it pays in Counterfactual Mugging and Parfit's Hitchhiker? When the box is empty, it takes the empty box so that there will be possible worlds where the box is not empty (as it would pay the counterfactual mugger so that it will get $10,000 in the other half of worlds), and when the box is not empty, it takes only the one box (despite seeing the extra money in the other box) so that the world it's in will weigh 50% rather than 0% (as it would pay the driver in Parfit's Hitchhiker, despite it having "already happened", so that the worlds in which the hitchhiker gives it a ride in the first place will weigh 100% rather than 0%).
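The appeal to Counterfactual Mugging can be backed with arithmetic. A hedged sketch: the $10,000 figure is from the comment above, while the $100 asked on the losing coinflip is the amount from the usual statement of the problem (the comment doesn't specify it).

```python
def expected_value(policy_pays: bool) -> float:
    # Omega flips a fair coin. Tails: it asks you for $100 (you pay only
    # if your policy says to). Heads: it gives you $10,000 only if it
    # predicts your policy would have paid on tails.
    heads = 10_000.0 if policy_pays else 0.0
    tails = -100.0 if policy_pays else 0.0
    return 0.5 * heads + 0.5 * tails

print(expected_value(True), expected_value(False))  # 4950.0 0.0
```

Evaluated over both branches of the coinflip, the paying policy dominates, which is why the agent pays even in the branch where it only loses money.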

Comment author: cousin_it 15 November 2010 12:30:12AM, 0 points

In our current implementations of UDT, the agent won't find any proof that one-boxing leads to the predictor predicting one-boxing, because the agent doesn't "know" that it's only going to use a small fraction of its computing resources while searching for the proof. Maybe a different implementation could fix that.
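The failure mode under discussion can be caricatured in a toy model. Everything here is a deliberate oversimplification: the `predictor` function stands in for an Omega with a small compute budget whose only available inference is "a stronger agent will simulate me and then two-box", and the payoffs are the usual Newcomb amounts.

```python
def predictor(agent_can_simulate_me: bool) -> str:
    # A weak Omega: all it can prove about a stronger agent is that the
    # agent will learn Omega's move by simulation and then two-box.
    return "two-box" if agent_can_simulate_me else "one-box"

def agent() -> tuple:
    # The agent runs Omega outright and reads off the prediction...
    prediction = predictor(agent_can_simulate_me=True)
    # ...after which two-boxing dominates whatever was predicted:
    box_b = 1_000_000 if prediction == "one-box" else 0
    one_box_payoff = box_b
    two_box_payoff = box_b + 1_000
    choice = "two-box" if two_box_payoff > one_box_payoff else "one-box"
    return choice, max(one_box_payoff, two_box_payoff)

print(agent())  # ('two-box', 1000)
```

Once the prediction is a known constant, two-boxing is a dominant move, which is exactly why an agent with no concept of "stop thinking" talks itself out of the $1,000,000.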

Comment author: Vladimir_Nesov 15 November 2010 12:35:57AM, 0 points

In our current implementations of UDT

It's not an implementation of UDT in the sense that it doesn't talk about all possible programs and a universal prior over them. If you consider UDT as generalizing to ADT, where the probability assumptions are dropped, then sure.

Comment author: cousin_it 15 November 2010 12:39:32AM, 1 point

Um, I don't consider the universal prior to be part of UDT proper. UDT can run on top of any prior, e.g. when you use it to solve toy problems as Wei did, you use small specialized priors.

Comment author: Vladimir_Nesov 15 November 2010 01:02:49AM, 0 points

There are no priors used in those toy problems, just one utility definition of interest.