Just a minor thought connected with the orthogonality thesis: if you claim that any superintelligence will inevitably converge to some true code of morality, then you are also claiming that no measures can be taken by its creators to prevent this convergence. In other words, the superintelligence will be uncontrollable.
...if you claim that any superintelligent oracle will inevitably return the same answer given the same question, then you are also claiming that no measures can be taken by its creators to make it return a different answer.
Not exactly true... The conclusion should read "no measures can be taken by its creators to make it return a different answer while it remains an oracle". With that caveat inserted, I'm not sure what your point is... Depending on how you define the terms, either your implication is true by definition, or the premise is one that pretty much everyone agrees is false.