Just a minor thought connected with the orthogonality thesis: if you claim that any superintelligence will inevitably converge to some true code of morality, then you are also claiming that no measures can be taken by its creators to prevent this convergence. In other words, the superintelligence will be uncontrollable.
Not exactly true... the conclusion should be "no measures can be taken by its creators to make it return a different answer while it remains an Oracle". With that caveat inserted, I'm not sure what your point is: depending on how you define the terms, either your implication is true by definition, or the premise is one that pretty much everyone agrees is false.
That was my point. If you accept the premise that superintelligence implies the adoption of some objective code of moral conduct, then it is no different from an Oracle returning correct answers. You can't change that behavior and retain superintelligence; you'd end up with a crippled intelligence.
I was just offering an analogous example that highlights the tautological nature of your post. But I suppose that was your intention anyway.