There has been a lot of discussion on LW about finding better decision theories. Much of the motivation for the various new decision theories proposed here seems to be that classical CDT gives the wrong answer in one-shot Prisoner's Dilemmas, Newcomb-like problems and Parfit's Hitchhiker problem. While Gary Drescher has said that TDT is "more promising than any other decision theory I'm aware of", Eliezer gives a list of problems in which his theory currently gives the wrong answer (or, at least, it did a year ago). Adam Bell's recent sequence has talked about problems for CDT, and is no doubt about to move on to problems with EDT (in one of the comments, it was suggested that EDT is "wronger" than CDT).
In the Iterated Prisoner's Dilemma, it is relatively trivial to prove that no strategy is "optimal" in the sense that it gets the best possible payoff against all opponents. The reasoning goes roughly like this: any strategy which ever cooperates does worse than it could have against, say, Always Defect. Any strategy which doesn't start off by cooperating does worse than it could have against, say, Grim. So, whatever strategy you choose, there is another strategy that would do better than you against some possible opponent, and hence no strategy is "optimal". Question: is it possible to prove, similarly, that there is no "optimal" decision theory? In other words - given a decision theory A, can you come up with some scenario in which it performs worse than at least one other decision theory? Than any other decision theory?
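The argument above can be sketched as a short simulation. This is just an illustration, not anything from the post: the strategy functions are the standard ones named in the argument, and the payoff matrix is the usual PD one (3 for mutual cooperation, 1 for mutual defection, 5/0 for defecting against a cooperator).

```python
# Each strategy maps the opponent's move history to 'C' or 'D'.

def always_defect(opp_history):
    return 'D'

def grim(opp_history):
    # Cooperate until the opponent defects once, then defect forever.
    return 'D' if 'D' in opp_history else 'C'

def tit_for_tat(opp_history):
    # Open with cooperate, then copy the opponent's last move.
    return opp_history[-1] if opp_history else 'C'

# (my_move, their_move) -> (my_payoff, their_payoff)
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def play(strat_a, strat_b, rounds=10):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strat_a(hist_b), strat_b(hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# A strategy that ever cooperates (Tit-for-tat) does worse against
# Always Defect than Always Defect itself would have done...
tft_vs_ad, _ = play(tit_for_tat, always_defect)
ad_vs_ad, _ = play(always_defect, always_defect)
assert tft_vs_ad < ad_vs_ad

# ...while a strategy that doesn't open with cooperate (Always Defect)
# does worse against Grim than Tit-for-tat does.
ad_vs_grim, _ = play(always_defect, grim)
tft_vs_grim, _ = play(tit_for_tat, grim)
assert ad_vs_grim < tft_vs_grim
```

Neither strategy dominates: each concedes points to some opponent that a rival strategy would have collected, which is all the non-optimality claim requires.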
One initial try would be: Omega gives you two envelopes - the left envelope contains $1 billion iff you don't implement decision theory A in deciding which envelope to choose. The right envelope contains $1000 regardless.
Or, if you don't like Omega being able to make decisions about you based entirely on your source code (or "ritual of cognition"), then how about this: in order for two decision theories to sensibly be described as "different", there must be some scenario in which they perform different actions (let's call this Scenario 1). In Scenario 1, DT A makes decision A whereas DT B makes decision B. In Scenario 2, Omega offers you the following setup: here are two envelopes; you can pick exactly one of them. I've just simulated you in Scenario 1. If you chose decision B, there's $1,000,000 in the left envelope. Otherwise it's empty. There's $1000 in the right envelope regardless.
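The diagonalization can be made concrete with a toy model (all names here are illustrative assumptions: a "decision theory" is just a function from a scenario label to a choice, and Omega's envelope-filling rule is the one described above):

```python
def scenario_2_payoff(agent, pick):
    # Omega simulates the agent in Scenario 1 before filling the envelopes.
    simulated_choice = agent('scenario_1')
    left = 1_000_000 if simulated_choice == 'B' else 0
    right = 1_000
    return left if pick == 'left' else right

# Two theories that differ only in Scenario 1 (hypothetical stand-ins).
def theory_a(scenario):
    return 'A' if scenario == 'scenario_1' else 'left'

def theory_b(scenario):
    return 'B' if scenario == 'scenario_1' else 'left'

# Whatever envelope theory A picks in Scenario 2, it tops out at $1000,
# because Omega has already emptied the left envelope for it.
best_for_a = max(scenario_2_payoff(theory_a, 'left'),
                 scenario_2_payoff(theory_a, 'right'))
best_for_b = max(scenario_2_payoff(theory_b, 'left'),
                 scenario_2_payoff(theory_b, 'right'))
assert best_for_a == 1_000
assert best_for_b == 1_000_000
```

The point of the sketch is that nothing here inspects A's internals: Omega only needs the observable output of the simulation in Scenario 1, so the construction works for any pair of theories that differ somewhere.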
I'm not sure whether there's some flaw in this reasoning (are there decision theories for which Omega offering such a deal is a logical impossibility? It seems unlikely: I don't see how your choice of algorithm could affect Omega's ability to talk about it). But I imagine that some version of this should work - in which case it doesn't make sense to talk about one decision theory being "better" than another full stop; we can only talk about decision theories being better than others for certain classes of problems.
I have no doubt that TDT is an improvement on CDT, but in order for that claim to even make sense, we'd have to have some way of thinking about what sort of problems we want our decision theory to solve. Presumably the answer is "the sort of problems which you're actually likely to face in the real world". Do we have a good formalism for what this means? I'm not suggesting that the people who discuss these questions haven't considered this issue, but I don't think I've ever seen it explicitly addressed. What exactly do we mean by a "better" decision theory?
I don't think this counterexample is actually a counterexample. When you-simulation decides in Scenario 1, he has no knowledge of Scenario 2. Yes, if people respond in arbitrary and unexpected ways to your decisions, this sort of thing can easily be set up; but ultimately the best you can do is to maximize expected utility. If you lose due to Omega pulling such a move on you, that's due to your lack of knowledge and bad calibration as to his probable responses, not to a flaw in your decision theory. If you-simulation somehow knew what the result would be used for, he would choose with that taken into account.