ata comments on Another attempt to explain UDT - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Oh, lots of open problems remain. Here's a handy list of what I have in mind right now:
1) 2TDT-1CDT.
2) "Agent simulates predictor", or ASP: if you have far more computing power than Omega, you can obtain Omega's decision just by simulating it, so you will two-box; but obviously this isn't what you want to do.
3) "The stupid winner paradox": if two superintelligences play a demand game for $10, presumably they can agree to take $5 each rather than risk losing it all. But a human playing against a superintelligence can just demand $9, knowing the superintelligence will predict his decision and settle for the remaining $1.
4) "A/B/~CON": action A gets you $5, action B gets you $10. Additionally, you receive $1 if the inconsistency of PA is ever proved. This means you can't write a terminating utility() function, but you can still define the value of utility axiomatically. This is supposed to exemplify all the tractable cases where one action is clearly superior to the other, even though total utility is uncomputable.
5) The general case of agents playing a non-zero-sum game against each other, knowing each other's source code. For example, the Prisoner's Dilemma with asymmetric payoffs.
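The arithmetic behind the stupid winner paradox (item 3) can be sketched as a Nash demand game. This is only an illustrative model, assuming the superintelligence best-responds to a perfectly predicted demand; the function and variable names are mine, not from any of the problems above:

```python
def demand_game(demand_a, demand_b, pot=10):
    """Nash demand game: if the two demands fit within the pot,
    each player gets what they asked for; otherwise both get nothing."""
    if demand_a + demand_b <= pot:
        return demand_a, demand_b
    return 0, 0

# Two superintelligences coordinate on the symmetric split:
print(demand_game(5, 5))  # -> (5, 5)

# A human commits to demanding $9. A predictor that foresees this
# best-responds with the largest compatible demand, since $1 > $0:
human_demand = 9
predictor_demand = 10 - human_demand
print(demand_game(human_demand, predictor_demand))  # -> (9, 1)

# If the predictor instead also demanded a large share, both would lose:
print(demand_game(9, 9))  # -> (0, 0)
```

The puzzle is that the "stupid" player wins precisely because it cannot be modeled as responsive to the other's reasoning, while the smarter player can.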
I could make a separate post from this list, but I've been making way too many top-level posts lately.
Cool, thanks.