If you can "specifically preprogram" goals into an AI with greater-than-human intelligence, then you have presumably cracked the complexity-of-value problem: you can explicitly state all of human morality. Aiming for anything less than that would be insanely dangerous. In which case, you have now written an AI that is smarter than a human, and therefore presumably able to write another AI smarter than itself. And as soon as you create a smarter-than-human machine, you have the potential for an intelligence explosion.
Link: Ben Goertzel dismisses Yudkowsky's FAI and proposes his own solution: Nanny-AI
Some relevant quotes:
Apparently Goertzel doesn't think that building a Nanny-AI with the above-mentioned qualities would be almost as difficult as creating an FAI à la Yudkowsky.
But SIAI believes that once you can create a Nanny-AI, you can (probably) create a full-blown FAI as well.
Or am I mistaken?