To get the “orthogonality” part, I think the definition of the thesis also needs to state that increasing the agents' intelligence does not cause their interpretations of (some) goals to converge.
In particular, dismissing the concern that the policy must include an absolutely perfect specification of the perfect goal does not require denying that an agent could have the goal of maximizing paperclip production. Rather, it asserts that the paperclip-maximization goal may seed an ASI adequately, because a perfect intelligence pursuing it would behave the same as a perfect intelligence pursuing the perfect goal. We imperfect intelligences do not realize this because we do not appreciate all the overlapping instrumental goals that both entail; for example, truly intelligent paperclip maximization may start with generating a maximally intelligent planner, and that may take so long that no actual paperclips ever get made.
I noticed that tag posts imported from Arbital that haven't been edited on LW yet can't be found when searching for those tags via the "Add Tags" button above posts. Making a no-op edit, such as adding a space at the end of a paragraph, seems to fix that problem.
Good catch. It looks like that's from this revision, which was copied over from Arbital; some LaTeX didn't make it through. I'll see if it's trivial to fix.