But doesn't a diagonal argument show that no decision theory can be reflectively consistent over all test data presented by a malicious Omega?
With the strong disclaimer that I have no background in decision theory beyond casually reading LW...
I don't think so. The point of simulation (Omega) problems, to me, doesn't seem to be to judo your intelligence against yourself; rather, it is to "throw your DT off the scent" by building weird connections between events (weird, but still at least vaguely possible, especially for AIs) that a particular DT isn't capable of spotting and taking into account.
My human, real-life decision theory can be summarised as "look at as many possible end-result worlds as I can, and at what actions will bring them into being; evaluate how much I like each of them; then figure out which actions are most efficient at leading to the best worlds". But that doesn't exactly fly when you're programming a computer: you need something that can be fully formalised, and that is where those strange Omega scenarios are useful. Your code must get it right "on autopilot"; it cannot improvise a smarter approach on the spot. The formula is on paper, and if it can't solve a given problem while another one can, that means there is room for improvement.
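The informal procedure above, enumerate the worlds each action leads to, score them, pick the best action, can be sketched as naive expected-utility maximisation. Everything here (the function names, the toy Newcomb-flavoured numbers) is illustrative, not from the original post:

```python
# Sketch of the informal decision procedure described above:
# enumerate candidate actions, look at the worlds each one might
# bring about, score those worlds, pick the highest-scoring action.
# All names and numbers are hypothetical, purely for illustration.

def best_action(actions, outcomes, utility):
    """actions:  list of action labels.
    outcomes: dict mapping action -> list of (probability, world) pairs.
    utility:  dict mapping world -> how much we like that world."""
    def expected_utility(action):
        return sum(p * utility[world] for p, world in outcomes[action])
    return max(actions, key=expected_utility)

# Toy Newcomb-flavoured setup: one-boxing almost surely leads to the
# million-dollar world; two-boxing almost surely to the thousand-dollar one.
actions = ["one-box", "two-box"]
outcomes = {
    "one-box": [(0.99, "million"), (0.01, "nothing")],
    "two-box": [(0.99, "thousand"), (0.01, "million+thousand")],
}
utility = {"million": 1_000_000, "nothing": 0,
           "thousand": 1_000, "million+thousand": 1_001_000}

print(best_action(actions, outcomes, utility))  # -> one-box
```

The point of the Omega scenarios is exactly that a fixed formula like this can't improvise: if the dependency between action and outcome is one the formula's structure can't represent, it fails silently, and that failure is the "bug report".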
In short, DT problems are just clever software debugging.
I agreed with everything you said after "I don't think so". So I am left confused as to why you don't think so.
You analogize DT problems to test data used to determine whether we should accept or reject a decision theory. I am claiming that our requirements (i.e. "reflective consistency") are so unrealistic that we will always be able to find test data that forces us to reject. Why do you not think so?