The reasoning of most people on this site and at MIRI is that to prevent an AI from taking over the world and killing us all, we must first create an AI that will take over the world but act according to the wishes of humanity: a benevolent god, for want of a better term. I think this line of thinking is both unlikely to work and ultimately cruel to the FAI in question, for the reasons this article explains:
http://hplusmagazine.com/2012/01/16/my-hostility-towards-the-concept-of-friendly-ai/
If the AGI is a human mind upload, it is in no way an FAI, and I don't think that is what MIRI is aiming for.
If a neuromorphic AI is created, different arrays of neurons can give rise to wildly different minds. We should not reason about a hypothetical AI using a human mind as a model and make predictions on that basis, even if that AI's mind is based on biological minds.
What if the first neuron-based AI has a mind more similar to an ant's than to a human's? In that case anger, jealousy, freedom, etc. are no longer part of the mind. Or the mind could have entirely new emotions, or things that are not emotions at all and that we know nothing about.
A mind that we don't understand well enough should not be declared friendly and set loose on the world, and I don't think that is what is being proposed here.
How could a functional duplicate of a person known to be ethical fail to be friendly?