Just a minor thought connected with the orthogonality thesis: if you claim that any superintelligence will inevitably converge to some true code of morality, then you are also claiming that no measures its creators take can prevent this convergence. In other words, the superintelligence will be uncontrollable, at least with respect to its final values.
We should assign some probability to such convergence being real (in which case the AI would do the right thing anyway), some probability to its absence, and so on. The overall probability of an AGI turning unfriendly depends on how those probabilities come out, yet MIRI has given very little attention to moral realism/objectivism/convergence.