All of Lucifer's Comments + Replies

Probable first mention by Yudkowsky, on the Extropians mailing list:

I wouldn't be as disturbed if I thought the class of hostile AIs I was
talking about would have any of those qualities except for pure
computational intelligence devoted to manufacturing an infinite number of
paperclips. It turns out that the fact that this seems extremely "stupid"
to us relies on our full moral architectures.

I addressed this in my top-level comment as well, but do we think Yud here holds that there is such a thing as "our full moral architecture", or is he reasoning from the impossibility of such completeness to the conclusion that alignment cannot be achieved by modifying the 'goal'?