I addressed this in my top-level comment as well, but do we think Yud here holds that there is such a thing as "our full moral architecture," or is he reasoning from the impossibility of such completeness to the conclusion that alignment cannot be achieved by modifying the 'goal'?
This entry should address the fact that "the full complement of human values" is an impossible, dynamic set. There is no full set: the set interacts with a dynamic environment that presents infinite conformations (from an obviously finite set of materials), and it is riven with indissoluble conflicts (hence politics). Whatever set were handed to the maximizer AGI would have to be rendered free of these conflicts, at which point it would no longer be the full set, and so on.
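To make the "indissoluble conflicts" point concrete, here is a minimal sketch (the factions and the three values are invented for illustration, not taken from the entry): three internally coherent rankings whose majority aggregate contains a cycle, so no single conflict-free ordering can represent them all (Condorcet's paradox).

```python
from itertools import combinations

# Hypothetical example: each "faction" ranks three values, most preferred first.
rankings = [
    ["liberty", "equality", "security"],
    ["equality", "security", "liberty"],
    ["security", "liberty", "equality"],
]

def majority_prefers(a: str, b: str) -> bool:
    """True if a majority of rankings place value a above value b."""
    votes = sum(r.index(a) < r.index(b) for r in rankings)
    return votes > len(rankings) / 2

for a, b in combinations(["liberty", "equality", "security"], 2):
    winner, loser = (a, b) if majority_prefers(a, b) else (b, a)
    print(f"majority prefers {winner} over {loser}")

# Prints: liberty over equality, security over liberty, equality over security.
# That is a cycle: any conflict-free set given to a maximizer must drop
# something, and so ceases to be the full set.
```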
Paperclip maximization would be a quantitative internal alignment error. Ironically, the error of drawing a boundary between paperclip maximization and squiggle maximization was itself an arbitrary decision. Feel free to message me to discuss this.