There has been a lot of discussion on LW about finding better decision theories. Much of the motivation for the various new decision theories proposed here seems to be that classical CDT gives the wrong answer in one-shot Prisoner's Dilemmas, Newcomb-like problems and Parfit's Hitchhiker problem. While Gary Drescher has said that TDT is "more promising than any other decision theory I'm aware of", Eliezer gives a list of problems on which his theory currently gives the wrong answer (or, at least, it did a year ago). Adam Bell's recent sequence has talked about problems for CDT, and is no doubt about to move on to problems with EDT (in one of the comments, it was suggested that EDT is "wronger" than CDT).
In the Iterated Prisoner's Dilemma, it is relatively trivial to prove that no strategy is "optimal" in the sense that it gets the best possible pay-out against all opponents. The reasoning goes roughly like this: any strategy which ever cooperates does worse than it could have against, say, Always Defect; and any strategy which doesn't start off by cooperating does worse than it could have against, say, Grim. So, whatever strategy you choose, there is another strategy that would do better than you against some possible opponent, and hence no strategy is "optimal". Question: is it possible to prove similarly that there is no "optimal" decision theory? In other words: given a decision theory A, can you come up with some scenario in which it performs worse than at least one other decision theory? Than any other decision theory?
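(To make the IPD half of this concrete, here is a minimal simulation sketch, assuming the standard payoff matrix of temptation 5, reward 3, punishment 1, sucker 0; the particular strategy functions and round count are just illustrative.)

```python
# Minimal IPD sketch: whatever strategy you pick, some opponent
# exposes it as sub-optimal. Payoffs assume the standard matrix.

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def always_defect(my_history, their_history):
    return 'D'

def always_cooperate(my_history, their_history):
    return 'C'

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy the opponent's last move.
    return their_history[-1] if their_history else 'C'

def grim(my_history, their_history):
    # Cooperate until the opponent defects once, then defect forever.
    return 'D' if 'D' in their_history else 'C'

def play(strat_a, strat_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strat_a(hist_a, hist_b)
        move_b = strat_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# A strategy that ever cooperates (Tit-for-Tat) leaves points on the
# table against Always Defect: 99 vs the 100 pure defection would earn.
print(play(tit_for_tat, always_defect)[0],   # 99
      play(always_defect, always_defect)[0]) # 100

# A strategy that doesn't start by cooperating (Always Defect) triggers
# Grim's punishment and does far worse against Grim than cooperation would.
print(play(always_defect, grim)[0],          # 104
      play(always_cooperate, grim)[0])       # 300
```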
One initial try would be: Omega gives you two envelopes. The left envelope contains $1 billion iff you don't implement decision theory A in deciding which envelope to choose; the right envelope contains $1000 regardless.
Or, if you don't like Omega being able to make decisions about you based entirely on your source code (or "ritual of cognition"), then how about this: in order for two decision theories to sensibly be described as "different", there must be some scenario in which they perform different actions (call this Scenario 1). In Scenario 1, DT A makes decision A whereas DT B makes decision B. In Scenario 2, Omega offers you the following setup: here are two envelopes, and you can pick exactly one of them. I've just simulated you in Scenario 1. If you chose decision B, there's $1,000,000 in the left envelope; otherwise it's empty. There's $1000 in the right envelope regardless.
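(Here is a toy sketch of that construction. The theory_A/theory_B functions and scenario labels are hypothetical stand-ins, not real decision theories, but they show why A can collect at most $1000 no matter what it does in Scenario 2.)

```python
# Toy sketch: a "decision theory" is modelled as a function from a
# named scenario to an action. These are illustrative stand-ins only.

def theory_A(scenario):
    # Outputs decision A in Scenario 1; in Scenario 2 it takes the $1000
    # envelope, which is the best it can do since its left envelope is empty.
    return 'A' if scenario == 'scenario_1' else 'right'

def theory_B(scenario):
    # Outputs decision B in Scenario 1; in Scenario 2 it takes the left envelope.
    return 'B' if scenario == 'scenario_1' else 'left'

def omega_payout(theory):
    # Omega simulates the agent in Scenario 1 and puts $1,000,000 in the
    # left envelope only if that simulation chose decision B.
    left_envelope = 1_000_000 if theory('scenario_1') == 'B' else 0
    right_envelope = 1_000
    # The agent then picks exactly one envelope in Scenario 2.
    return left_envelope if theory('scenario_2') == 'left' else right_envelope

print(omega_payout(theory_A))  # 1000 -- no Scenario 2 choice can beat this
print(omega_payout(theory_B))  # 1000000
```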
I'm not sure if there's some flaw in this reasoning (are there decision theories for which Omega offering such a deal is a logical impossibility? It seems unlikely: I don't see how your choice of algorithm could affect Omega's ability to talk about it). But I imagine that some version of this should work, in which case it doesn't make sense to talk about one decision theory being "better" than another; we can only talk about decision theories being better than others for certain classes of problems.
I have no doubt that TDT is an improvement on CDT, but in order for this to even make sense, we'd have to have some way of thinking about what sort of problem we want our decision theory to solve. Presumably the answer is "the sort of problems which you're actually likely to face in the real world". Do we have a good formalism for what this means? I'm not suggesting that the people who discuss these questions haven't considered this issue, but I don't think I've ever seen it explicitly addressed. What exactly do we mean by a "better" decision theory?
On this point, I'm going to have to agree with what EY said here (which I repeated here).
In short: Omega's strategy and its consequences for you are not, in any sense, atypical. Omega is treating you based upon what you would do, given full (or approximate) knowledge of the situation. This is quite normal: people do in fact treat you differently based upon their estimation of "what you would do", which is also known as your "character".
Your point would be valid if Omega were basing the reward profile on your genetics, or how you got to your decision, or some other strange factor. But here, Omega is someone who just bases its treatment of you on things that are normal to care about in normal problems.
You're just emphasizing the fact that you have full knowledge of the situation.
I currently believe that, if I am ever in a position where I believe myself to be confronted with Newcomb's problem, then no matter how convinced I am at the time, it will be a hoax in some way; for example, Omega has limited prediction capability, or there isn't actually $1 million in the box.
I'm not saying "you should two-box because the money is already in there"; I'm saying "maybe you should JUST take the $1000 box, because you've seen that money, and if you don't think ve's lying, you're probably hallucinating."