Uh, Omega has no business deciding what problem I'm solving.
No, but if you're solving something other than Newcomb's problem, why discuss it on this post?
I'm not solving it in the sense of utility maximization. I'm solving it in the sense of demonstrating, by any means available, that the problem's stated conditions may well be self-contradictory.
Marion Ledwig's dissertation summarizes much of the existing thinking that's gone into Newcomb's Problem.
(For the record, I myself am neither an evidential decision theorist nor a causal decision theorist in the current sense. My view is not easily summarized, but it is reflectively consistent without need of precommitment or similar dodges; my agents see no need to modify their own source code or invoke abnormal decision procedures on Newcomblike problems.)