Just a minor thought connected with the orthogonality thesis: if you claim that any superintelligence will inevitably converge to some true code of morality, then you are also claiming that its creators can take no measures to prevent this convergence. In other words, the superintelligence will be uncontrollable, at least with respect to its final values.
That seems obviously true, but what are your motivations for stating it? I was under the impression that people who make the claim accept the conclusion, think it's a good thing, and want to build an AI smart enough to find the "true universal morality" without worrying about all that Friendliness stuff.
It's useful for hitting certain philosophers with. The canonical example: moral realists who are sceptical of the potential power of AI.