The post isn't completely meaningless; it's just the sort of mental scratchwork you generate when you're kicking around a new idea. Let me try to explain.
If you look at decision-making from the point of view of the AI doing the deciding, you'll notice that some mathematical facts that are supposed to be "fixed" (like the return value of your algorithm) become kind of vague and "controllable". The deterministic operation of the AI can make mathematical facts come out in a way that maximizes the AI's utility. For example, an AI could have a tiny bit of control over the probabilities of certain bitstrings under the universal prior (which is, after all, a mixture of all possible programs, including the AI), or over the bits of Chaitin's omega, or over the truth values of certain statements in the arithmetical hierarchy. That last part freaked Will out; then Paul came along and said there's nothing to worry about, exerting control over math is business as usual.
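To make the universal-prior point concrete, here's a toy sketch. This is emphatically not the real universal prior: the "programs", their code lengths, and the weights are all made up for illustration. The only thing it shows is the mechanism: the agent is itself one of the programs in the mixture, so its deterministic output choice moves probability mass around.

```python
# Toy illustration (not the real universal prior): a prior that mixes the
# outputs of a few fixed "programs", one of which is the agent itself.
# Programs and code lengths here are fictional, chosen for the example.

def prog_a():
    return "00"

def prog_b():
    return "10"

def make_agent(choice):
    # The agent is just another program in the mixture; its deterministic
    # output is whatever its decision procedure settles on.
    return lambda: choice

def prior_prob(bitstring, agent_choice):
    # Weight each program by 2^-(code length), Solomonoff-style.
    programs = [(prog_a, 1), (prog_b, 2), (make_agent(agent_choice), 2)]
    total = sum(2.0 ** -length for _, length in programs)
    mass = sum(2.0 ** -length for p, length in programs if p() == bitstring)
    return mass / total

# By choosing its output, the agent nudges the prior probability of "10".
print(prior_prob("10", agent_choice="10"))  # 0.5
print(prior_prob("10", agent_choice="00"))  # 0.25
```

The agent doesn't get to rewrite the prior, of course; it only controls the one term it happens to be, which is why the control is "a tiny bit".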
Responding to this: http://lesswrong.com/r/discussion/lw/8ys/a_way_of_specifying_utility_functions_for_udt/
I had a similar idea a few months ago that highlights different aspects of the problem, ones I find confusing. In my version the UDT agent controls bits of Chaitin's constant instead of the universal prior directly: the halting oracle you can derive from Chaitin's omega has to answer the halting question for every program, including the UDT agent itself, so the agent's behavior shows up in omega's bits. But the second-level oracle, the one for the omega you get from machines equipped with the first oracle, depends on that first oracle's bits, so you seem to be able to ambiently control THE ENTIRE ARITHMETICAL HIERARCHY. SAY WHAT!? That's the confusing part: isn't your one true oracle supposed to screen you off from higher oracles? Or does that only hold insofar as you can computably verify it?
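The first step of that chain can be sketched with toy arithmetic. Again, everything here is fictional: a finite ensemble of "programs" standing in for the prefix-free machines that the real omega sums over. The point is just that the agent's own halting decision is one of the terms, so choosing to halt or loop flips a bit of this toy omega.

```python
# Toy "omega" over a finite, made-up program ensemble. The real Chaitin
# constant sums 2^-|p| over all halting prefix-free programs; this just
# mimics that arithmetic with three fictional programs.

def toy_omega(agent_halts):
    # Fixed background programs: (code length, halts). Both fictional.
    background = [(1, True), (2, False)]
    # The agent is a length-3 program whose halting is its own decision.
    programs = background + [(3, agent_halts)]
    return sum(2.0 ** -length for length, halts in programs if halts)

def bits(x, n):
    # First n binary digits of x after the point.
    out = []
    for _ in range(n):
        x *= 2
        digit = int(x)
        out.append(digit)
        x -= digit
    return out

# The agent's decision to halt flips the third bit of this toy omega.
print(bits(toy_omega(agent_halts=True), 3))   # [1, 0, 1]
print(bits(toy_omega(agent_halts=False), 3))  # [1, 0, 0]
```

The confusing second step, where control propagates to the relativized omega of oracle machines and on up the hierarchy, is exactly the part this toy doesn't capture.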
Anyway, I like this theme of controlling computational contexts, as it forms a tight loop between agent and environment, something our current formalisms lack. Keep it up comrades!