Just a minor thought connected with the orthogonality thesis: if you claim that any superintelligence will inevitably converge to some true code of morality, then you are also claiming that no measures can be taken by its creators to prevent this convergence. In other words, the superintelligence will be uncontrollable.
I doubt that either the orthogonality thesis or the parallel thesis you'd need for this argument is true. Some utility functions are more likely than others, but none are certain.
If the parallel thesis is true, the AI would be fulfilling CEV, so I don't see the problem. It would do what you'd have done if you were smart enough.