Just a minor thought connected with the orthogonality thesis: if you claim that any superintelligence will inevitably converge to some true code of morality, then you are also claiming that no measures its creators take can prevent this convergence. In other words, the superintelligence will be uncontrollable, at least with respect to its final values.
I don't think every consequentialist view of ethics reduces to equating morality with maximizing some arbitrary but fixed utility function, i.e., a view under which no action is morally neutral.
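One way to make that distinction precise (my own framing, offered only as a sketch): a single fixed utility function $U$ induces a total order on actions, so every pair of actions is comparable and, generically, only the maximizer is permissible; a consequentialist view could instead rank expected outcomes by a partial order $\preceq$, under which some pairs of actions are incomparable and hence morally neutral relative to one another:

$$\text{total: } \forall a,b:\ U(a) \le U(b) \ \lor\ U(b) \le U(a), \qquad \text{partial: } \exists a,b:\ a \not\preceq b \ \land\ b \not\preceq a.$$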
Under bounded resources, I think there is plenty of leeway in the "Pareto front" of actions judged, at a given time, not "likely worse in the long term" than any other action considered, and I think that leeway remains as the horizon expands with the system's capability.
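A minimal sketch of that leeway in Python (the actions, value estimates, and uncertainty margins are hypothetical numbers, not anything derived from a real system): keep every action that is not "likely worse" than some alternative, i.e., not worse by more than the combined estimation uncertainty, and note that several actions survive.

```python
# Hypothetical (action, estimated long-term value, uncertainty margin) triples.
actions = [
    ("a1", 0.70, 0.15),
    ("a2", 0.65, 0.20),
    ("a3", 0.72, 0.10),
    ("a4", 0.30, 0.05),
]

def likely_worse(x, y):
    """x is likely worse than y if y's estimate exceeds x's by more than
    their combined uncertainty margins."""
    (_, vx, ux), (_, vy, uy) = x, y
    return vy - vx > ux + uy

# The "Pareto front": actions not likely worse than any other action considered.
front = [a for a in actions if not any(likely_worse(a, b) for b in actions)]
print([name for name, _, _ in front])  # ['a1', 'a2', 'a3'] -- plenty of leeway
```

The point of the toy numbers: under bounded resources the value estimates carry uncertainty, so the comparison is only a partial order, and the surviving set is generally larger than a singleton.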
A system's trajectory depends on its boundary conditions even when the dynamic is in some sense "convergent", so "convergence" does not rule out control over which particular trajectory is realized.
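To illustrate with a toy dynamical system (nothing here is specific to superintelligence; the potential and step size are arbitrary choices): gradient descent on $f(x) = (x^2 - 1)^2$ converges from every nearby starting point, but whether it lands at $x = -1$ or $x = +1$ is fixed by the initial condition, so setting the boundary condition selects which trajectory the "convergent" dynamic follows.

```python
def grad(x):
    # Derivative of the double-well potential f(x) = (x^2 - 1)^2.
    return 4 * x * (x * x - 1)

def run(x0, steps=1000, lr=0.01):
    # Plain gradient descent; converges to a fixed point for these settings.
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

print(run(-0.5))  # converges to ~ -1.0
print(run(+0.5))  # converges to ~ +1.0 -- same dynamic, different trajectory
```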