A rather condensed "clarification": by "objective morality" I mean the objectively optimal decision policy/theory. My intuition says that policy might warrant the label "objectively optimal" for reasons hinted at in this thread, though it's possible that "optimal" is the wrong word here and "justified" is a more natural choice. An oracle can be constructed from Chaitin's omega, which allows for hypercomputation. A decision policy that does not make use of knowledge of ALL the bits of Chaitin's omega is less optimal/justified than a decision policy that does. Such a decision policy, omniscient at least relative to the standard models of computation, can serve as an objective standard against which we can compare approximations in the form of imperfect human-like computational processes with highly ambiguous mixtures of "beliefs" and "preferences". By hypothesis, the implications of the "existence" of such an objective standard would be subtle and far-reaching.
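To make the omega reference concrete: Chaitin's omega is defined as the sum of 2^-|p| over all halting programs p of a prefix-free universal machine, and its bits are uncomputable (knowing them would solve the halting problem, which is why an omega-oracle enables hypercomputation). A minimal sketch of the defining sum, using a hypothetical hand-labeled prefix-free program set rather than a real universal machine (for a real machine the halting labels are exactly what you cannot compute):

```python
from fractions import Fraction

# Hypothetical illustration, NOT a real universal machine: a small
# prefix-free set of program bitstrings with halting behaviour assumed
# known. Omega = sum over halting programs p of 2^-|p|.
halting = {"0": True, "10": False, "110": True, "111": False}

def omega_lower_bound(programs):
    # Sum 2^-len(p) over the programs known (so far) to halt.
    # In the limit of enumerating all halting programs, this converges
    # to omega from below; no computable process reaches its exact bits.
    return sum(Fraction(1, 2 ** len(p)) for p, halts in programs.items() if halts)

print(omega_lower_bound(halting))  # 1/2 + 1/8 = 5/8
```

The lower-bound structure is the point: any computable agent only ever has finitely many bits' worth of this information, which is the sense in which it falls short of the omega-equipped standard described above.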
The decisions produced by any decision theory are not objectively optimal; at best they are objectively optimal relative to a specific utility function. A different utility function will produce different "optimal" behavior, such as tiling the universe with paperclips. (Why do you think Eliezer et al. are spending so much effort trying to figure out how to design a utility function for an AI?)
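The relativity claim can be shown in a few lines: hold the world model fixed and swap only the utility function, and the "optimal" action flips. All names below are illustrative toy values, not from any real decision-theory library:

```python
# Two agents share the same model of action -> outcome;
# only their utility functions differ.
outcomes = {
    "make_paperclips": {"paperclips": 100, "human_welfare": -5},
    "help_humans":     {"paperclips": 0,   "human_welfare": 10},
}

def optimal_action(utility):
    # Expected-utility maximization over the shared outcome model.
    return max(outcomes, key=lambda a: utility(outcomes[a]))

paperclip_utility = lambda o: o["paperclips"]
human_utility = lambda o: o["human_welfare"]

print(optimal_action(paperclip_utility))  # make_paperclips
print(optimal_action(human_utility))      # help_humans
```

Nothing in the maximization step privileges one utility function over the other, which is the sense in which the choice of utility function, not the decision theory, carries the moral content.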
I see the connection between omega and decision theories via Solomonoff induction, but since the choice of utility function is more or less arbitrary, that connection doesn't give you an objective morality.