(Attention conservation notice: this post contains no new results, and will be obvious and redundant to many.)
Not everyone on LW understands Wei Dai's updateless decision theory. I didn't understand it completely until two days ago. Now that I've had the final flash of realization, I'll try to explain it to the community and hope my attempt fares better than previous attempts.
It's probably best to avoid talking about "decision theory" at the start, because the term is hopelessly muddled. A better way to approach the idea is by examining what we mean by "truth" and "probability" in the first place. For example, is it meaningful for Sleeping Beauty to ask whether it's Monday or Tuesday? Phrased like this, the question sounds stupid. Of course there's a fact of the matter as to what day of the week it is! Likewise, in all problems involving simulations, there seems to be a fact of the matter whether you're the "real you" or the simulation, which leads us to talk about probabilities and "indexical uncertainty" as to which one is you.
At the core, Wei Dai's idea is to boldly proclaim that, counterintuitively, you can act as if there were no fact of the matter whether it's Monday or Tuesday when you wake up. Until you learn which it is, you think it's both. You're all your copies at once.
More formally, you have an initial distribution of "weights" on possible universes (in the currently most general case it's the Solomonoff prior) that you never update at all. In each individual universe you have a utility function over what happens. When you're faced with a decision, you find all copies of you in the entire "multiverse" that are faced with the same decision ("information set"), and choose the decision that logically implies the maximum sum of resulting utilities weighted by universe-weight. If you possess some useful information about the universe you're in, it's magically taken into account by the choice of "information set", because logically, your decision cannot affect the universes that contain copies of you with different states of knowledge, so they only add a constant term to the utility maximization.
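Here's a minimal sketch of that decision rule in code, assuming a toy setting where each relevant universe is handed to us explicitly as a prior weight plus a utility function of our action. The names (`udt_decision` and so on) are mine, not Wei Dai's, and the real theory quantifies over programs and logical consequences rather than Python lambdas:

```python
from typing import Callable, List, Tuple

def udt_decision(
    actions: List[str],
    # Each universe: a prior weight (never updated) and a utility function
    # giving the utility of what happens there if you output a given action.
    universes: List[Tuple[float, Callable[[str], float]]],
) -> str:
    """Pick the action that logically implies the maximum weighted sum of
    utilities over all universes containing a copy of you in this
    information set. Universes whose copies are in other information sets
    only contribute a constant, so they can simply be left out."""
    def total(action: str) -> float:
        return sum(weight * utility(action) for weight, utility in universes)
    return max(actions, key=total)
```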
Note that the theory, as described above, has no notion of "truth" and "probability" divorced from decision-making. That's how I arrived at understanding it: in The Strong Occam's Razor I asked whether it makes sense to "believe" one physical theory over another which makes the same predictions. For example, is hurting a human in a sealed box morally equivalent to not hurting him? After all, the laws of physics could make a localized exception to save the human from harm. UDT gives a very definite answer: there's no fact of the matter as to which physical theory is "correct", but you refrain from pushing the button that would hurt him anyway, because it hurts the human more in universes with simpler physical laws, which have more weight according to our "initial" distribution. This is an attractive solution to the problem of the "implied invisible" - possibly even more attractive than Eliezer's own answer.
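To see the weighting at work, here's a toy run of the sketch above on the sealed-box question, with made-up weights standing in for the prior (the actual prior would give a universe whose laws have description length K roughly 2^-K of the weight, so the simple-laws universe dominates):

```python
# Toy illustration with invented weights: the simple-laws universe carries
# far more prior weight than the universe whose laws make a localized
# exception to save the human inside the box.
universes = [
    (0.99, lambda a: -100.0 if a == "push" else 0.0),  # simple laws: pushing hurts him
    (0.01, lambda a: 0.0),                             # exception laws: he is saved either way
]
print(udt_decision(["push", "don't push"], universes))  # -> "don't push"
```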
As you probably realize by now, UDT is a very sharp tool that can give simple-minded answers to all our decision-theory puzzles so far - even if they involve copying, amnesia, simulations, predictions and other tricks that throw off our approximate intuitions of "truth" and "probability". Wei Dai gave a detailed example in The Absent-Minded Driver, and the method carries over almost mechanically to other problems. For example, Counterfactual Mugging: by assumption, your decision logically affects both heads-universe and tails-universe, which (also by assumption) have equal weight, so by agreeing to pay you win more cookies overall. Note that updating on the knowledge that you are in tails-universe (because Omega showed up) doesn't affect anything, because the theory is "updateless".
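The same sketch handles Counterfactual Mugging, with the usual $10000/$100 stakes standing in for cookies; your policy of paying or refusing is what both equally-weighted universes' outcomes logically depend on:

```python
# Counterfactual Mugging through the same sketch: heads-universe and
# tails-universe get equal weight, and updating on which one you observe
# plays no role, because the weights are never updated.
universes = [
    (0.5, lambda a: 10000.0 if a == "pay" else 0.0),  # heads: Omega rewards would-be payers
    (0.5, lambda a: -100.0 if a == "pay" else 0.0),   # tails: Omega asks you to pay up
]
print(udt_decision(["pay", "refuse"], universes))  # -> "pay" (0.5*10000 - 0.5*100 = 4950 > 0)
```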
At this point some may be tempted to switch to True Believer mode. Please don't. Just like Bayesianism, utilitarianism, MWI or the Tegmark multiverse, UDT is an idea that's irresistibly delicious to a certain type of person who puts a high value on clarity. And they all play so well together that it can't be an accident! But what does it even mean to consider a theory "true" when it says that our primitive notion of "truth" isn't "true"? :-) Me, I just consider the idea very fruitful; I've been contributing new math to it and plan to do so in the future.
UDT is supposed to be about fundamental math, not efficient algorithms. It's supposed to define what value we ought to optimize, in a way that hopefully accords with some of our intuitions. Before trying to build approximate computations, we ought to understand the ideal we're trying to approximate in the first place. Real numbers as infinite binary expansions are pretty impractical for computation too, but it pays to get the definition right.
Whether UDT is useful in reality is another question entirely. I've had a draft post for quite a while now titled "Taking UDT Seriously", featuring such shining examples as: it pays to retaliate against bullies even at the cost of great harm to yourself, because anticipation of such retaliation makes bullies refrain from attacking counterfactual versions of you. Of course the actual mechanism by which bullies pick victims is different and entirely causal - maybe some sort of pheromones indicating willingness to retaliate - but it's still instructive how an intuition from the platonic math of UDT unexpectedly transfers to the real world. There may be a lesson here.