This is unsurprising: CDT relies on the explicit dependencies given by causal definitions, while what you want is to look for logical (ambient) dependencies, for which the particular way the problem was specified (e.g. its physical content, defined by causality) is irrelevant. Once such an analysis has found the dependencies, all that's left is applying expected utility, at which point any CDT-specificity is gone (see Controlling Constant Programs).
This is equivalent to Newcomb's Problem in the sense that any strategy does equally well on both, where by "strategy" I mean a mapping from info to (probability distributions over) actions.
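To make this sense of equivalence concrete, here is a toy calculation. The specific amnesia construction is my own assumption for illustration (the agent answers twice with memory wiped in between, and the first, forgotten answer plays Omega's role in filling box B), and I restrict to deterministic strategies, since mixed strategies would require a story about how Omega predicts randomization:

```python
# Toy sketch (assumptions labeled): comparing deterministic strategies on
# Newcomb's Problem and one hypothetical amnesia-based transformation of it.

BOX_B = 1_000_000   # Omega puts this in box B iff it predicts one-boxing
BOX_A = 1_000       # the transparent box, always present

def newcomb_payoff(strategy):
    """Omega is a perfect predictor, so its prediction of a
    deterministic strategy just equals that strategy."""
    prediction = strategy
    b = BOX_B if prediction == "one-box" else 0
    return b if strategy == "one-box" else b + BOX_A

def amnesia_payoff(strategy):
    """Assumed transformation: the agent is asked twice with amnesia in
    between. The first (forgotten) answer stands in for Omega's prediction
    and fills box B; the second answer is the actual take. With amnesia,
    both answers come from the same info state, so a deterministic
    strategy answers identically both times."""
    first_answer = strategy
    b = BOX_B if first_answer == "one-box" else 0
    return b if strategy == "one-box" else b + BOX_A

# Every (deterministic) strategy does equally well on both problems.
for s in ("one-box", "two-box"):
    assert newcomb_payoff(s) == amnesia_payoff(s)
```

In this toy version the equivalence is automatic, because the agent's own forgotten answer is a perfect "predictor" of its second answer in exactly the way Omega is.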
I suspect that any problem with Omega can be transformed into an equivalent problem with amnesia instead of Omega.
Does CDT return the winning answer in such transformed problems?
Discuss.