(Attention conservation notice: this post contains no new results, and will be obvious and redundant to many.)
Not everyone on LW understands Wei Dai's updateless decision theory. I didn't understand it completely until two days ago. Now that I've had the final flash of realization, I'll try to explain it to the community and hope my attempt fares better than previous attempts.
It's probably best to avoid talking about "decision theory" at the start, because the term is hopelessly muddled. A better way to approach the idea is by examining what we mean by "truth" and "probability" in the first place. For example, is it meaningful for Sleeping Beauty to ask whether it's Monday or Tuesday? Phrased like this, the question sounds stupid. Of course there's a fact of the matter as to what day of the week it is! Likewise, in all problems involving simulations, there seems to be a fact of the matter whether you're the "real you" or the simulation, which leads us to talk about probabilities and "indexical uncertainty" as to which one is you.
At the core, Wei Dai's idea is to boldly proclaim that, counterintuitively, you can act as if there were no fact of the matter whether it's Monday or Tuesday when you wake up. Until you learn which it is, you think it's both. You're all your copies at once.
More formally, you have an initial distribution of "weights" on possible universes (in the currently most general case it's the Solomonoff prior) that you never update at all. In each individual universe you have a utility function over what happens. When you're faced with a decision, you find all copies of you in the entire "multiverse" that are faced with the same decision ("information set"), and choose the decision that logically implies the maximum sum of resulting utilities weighted by universe-weight. If you possess some useful information about the universe you're in, it's magically taken into account by the choice of "information set", because logically, your decision cannot affect the universes that contain copies of you with different states of knowledge, so they only add a constant term to the utility maximization.
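To make that rule concrete, here's a minimal sketch in Python. It's my own illustrative framing, not Wei Dai's formalism, and the names (`udt_decision`, `weighted_utility`) are mine: a universe is just a fixed prior weight plus a function from your action to the utility you get in that universe, and the weights are never updated.

```python
# A toy version of the decision rule described above. Each universe that
# contains a copy of you facing this decision is a pair (weight, utility_fn):
# a fixed prior weight (never updated) and a function giving the utility that
# results in that universe if every such copy outputs a given action.

def udt_decision(actions, universes):
    def weighted_utility(action):
        return sum(weight * utility_fn(action)
                   for weight, utility_fn in universes)
    return max(actions, key=weighted_utility)
```

Universes containing copies of you with different states of knowledge contribute only action-independent constants to the sum, so leaving them out of `universes` changes nothing about which action wins.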
Note that the theory, as described above, has no notion of "truth" and "probability" divorced from decision-making. That's how I arrived at understanding it: in The Strong Occam's Razor I asked whether it makes sense to "believe" one physical theory over another which makes the same predictions. For example, is hurting a human in a sealed box morally equivalent to not hurting him? After all, the laws of physics could make a localized exception to save the human from harm. UDT gives a very definite answer: there's no fact of the matter as to which physical theory is "correct", but you refrain from pushing the button anyway, because it hurts the human more in universes with simpler physical laws, which have more weight according to our "initial" distribution. This is an attractive solution to the problem of the "implied invisible" - possibly even more attractive than Eliezer's own answer.
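Here's a toy numeric version of that argument. The weights are made up, standing in for a Solomonoff-style prior where a universe described by a program of length L gets weight roughly 2^-L; the utilities and names are mine.

```python
# Two candidate universes make identical predictions about everything you can
# observe: one with simple laws (the human in the box really gets hurt if you
# push the button), and one whose laws include a localized exception saving him.
# The exception universe needs a longer program, so its prior weight is tiny.

weight_simple    = 2.0 ** -100   # simple physical laws
weight_exception = 2.0 ** -130   # simple laws plus a special-case patch

def weighted_utility(push_button):
    u_simple    = -1 if push_button else 0   # pushing the button hurts the human here
    u_exception = 0                          # the human is saved here either way
    return weight_simple * u_simple + weight_exception * u_exception

# weighted_utility(False) > weighted_utility(True): you don't push the button,
# even though there's no fact of the matter about which physics is "correct".
```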
As you probably realize by now, UDT is a very sharp tool that can give simple-minded answers to all our decision-theory puzzles so far - even if they involve copying, amnesia, simulations, predictions and other tricks that throw off our approximate intuitions of "truth" and "probability". Wei Dai gave a detailed example in The Absent-Minded Driver, and the method carries over almost mechanically to other problems. For example, Counterfactual Mugging: by assumption, your decision logically affects both heads-universe and tails-universe, which (also by assumption) have equal weight, so by agreeing to pay you win more cookies overall. Note that updating on the knowledge that you are in tails-universe (because Omega showed up) doesn't affect anything, because the theory is "updateless".
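For concreteness, here's the Counterfactual Mugging computation spelled out; the payoff numbers ($100 and $10000) are the usual illustrative ones rather than anything fixed above, and `weighted_utility` is just my name for the weighted sum.

```python
# Omega flips a fair coin. On tails it asks you for $100; on heads it pays you
# $10000 only if it predicts you would have paid on tails. Your one decision
# ("pay" or "refuse") logically fixes the outcome in both equal-weight branches.

def weighted_utility(pay):
    weight_heads = weight_tails = 0.5
    u_tails = -100 if pay else 0      # on tails you hand over $100 iff you're a payer
    u_heads = 10000 if pay else 0     # on heads Omega rewards predicted payers only
    return weight_heads * u_heads + weight_tails * u_tails

# weighted_utility(True) == 4950, weighted_utility(False) == 0, so you pay.
# Learning that Omega has shown up (i.e. that you're in the tails branch)
# changes nothing, because the weights are never updated.
```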
At this point some may be tempted to switch to True Believer mode. Please don't. Just like Bayesianism, utilitarianism, MWI or the Tegmark multiverse, UDT is an idea that's irresistibly delicious to a certain type of person who puts a high value on clarity. And they all play so well together that it can't be an accident! But what does it even mean to consider a theory "true" when it says that our primitive notion of "truth" isn't "true"? :-) Me, I just consider the idea very fruitful; I've been contributing new math to it and plan to do so in the future.
Oh, lots of open problems remain. Here's a handy list of what I have in mind right now:
1) 2TDT-1CDT.
2) "Agent simulates predictor", or ASP: if you have way more computing power than Omega, then Omega can predict you can obtain its decision just by simulation, so you will two-box; but obviously this isn't what you want to do.
3) "The stupid winner paradox": if two superintelligences play a demand game for $10, presumably they can agree to take $5 each to avoid losing it all. But a human playing against a superintelligence can just demand $9, knowing the superintelligence will predict his decision and be left with only $1.
4) "A/B/~CON": action A gets you $5, action B gets you $10. Additionally you will receive $1 if inconsistency of PA is ever proved. This way you can't write a terminating utility() function, but can still define the value of utility axiomatically. This is supposed to exemplify all the tractable cases where one action is clearly superior to the other, but total utility is uncomputable.
5) The general case of agents playing a non-zero-sum game against each other, knowing each other's source code. For example, the Prisoner's Dilemma with asymmetrized payoffs.
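To illustrate item 4: a minimal sketch, in my own framing, of why a naive utility() for that problem can't be a terminating program. `proves_pa_inconsistent` is a hypothetical placeholder for a real proof checker, and the loop running forever is precisely the point.

```python
from itertools import count

def proves_pa_inconsistent(n):
    # Placeholder for "the n-th candidate PA proof derives a contradiction".
    # A real implementation would decode n as a proof and verify it; we just
    # return False so the structure of the enumeration is visible.
    return False

def naive_utility(action):
    base = {"A": 5, "B": 10}[action]
    # To pin down the exact payoff we'd have to settle whether PA is ever
    # proved inconsistent, which means enumerating candidate proofs forever.
    for n in count():
        if proves_pa_inconsistent(n):
            return base + 1
    # Never reached if PA is consistent: the call simply doesn't terminate.

# Yet the axioms still give utility("B") == utility("A") + 5 whatever the
# bonus term turns out to be, so B is clearly the better action even though
# neither total can be computed.
```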
I could make a separate post from this list, but I've been making way too many toplevel posts lately.