Just a minor thought connected with the orthogonality thesis: if you claim that any superintelligence will inevitably converge to some true code of morality, then you are also claiming that no measures can be taken by its creators to prevent this convergence. In other words, the superintelligence will be uncontrollable.
I don't understand how to construct a consistent worldview that includes this premise. Could you state the premise as a statement about all computable functions?
Let's give it a try... In the space of computable functions, there is a class X that we would recognize as "having goal G". There is a process SI that we would identify as self-improvement. Then convergence implies that for nearly any initial function f, the process SI will eventually put f in X.
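Written out a bit more explicitly (my notation, and only a sketch): treat SI as a map on the space of computable functions and iterate it. The convergence claim is then roughly

$$\text{for nearly all computable } f: \ \exists N \ \forall n \ge N, \ SI^n(f) \in X,$$

i.e. iterating self-improvement eventually lands almost any starting function in X.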
If you want to phrase this in an updateless way, say that "any function with property SI is in X", defining X as "ultimately having goal G".
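In the same notation (again, just a sketch of one way to formalize it), the updateless version drops the dynamics and quantifies over functions directly:

$$\forall f: \ f \text{ has property } SI \implies f \in X,$$

where "property SI" picks out the self-improving functions and X is, by definition, the class that ultimately has goal G.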