Just a minor thought connected with the orthogonality thesis: if you claim that any superintelligence will inevitably converge to some true code of morality, then you are also claiming that no measures can be taken by its creators to prevent this convergence. In other words, the superintelligence will be uncontrollable.
Even if that were the case, by murder-pill logic (an agent won't accept modifications that would subvert its current goals) a paperclipper would stop self-improving just below the relevant "superintelligence" threshold.
Assuming it knew where that threshold was ahead of time.