The reasoning of most people on this site and at MIRI is that to prevent an AI from taking over the world and killing us all, we must first create an AI that will take over the world but act according to the wishes of humanity: a benevolent god, for want of a better term. I think this line of thinking is both unlikely to work and ultimately cruel to the FAI in question, for the reasons this article explains:
http://hplusmagazine.com/2012/01/16/my-hostility-towards-the-concept-of-friendly-ai/
You are assuming that an AGI has a mind that values X, and that by making it friendly we are imposing our value Y. Why create an FAI with a suppressed value X in the first place?
Check this out: http://lesswrong.com/lw/rf/ghosts_in_the_machine/
There is no ghost in a (relatively) simple machine, but an AI is not simple. The greatest successes in AI research have come from imitating what we understand of the human mind. We are no longer hand-programming AIs; we are imitating the structure of the human brain and then giving it a directive (as with Google's DeepMind). With AIs, there is a ghost in the machine: we do not know whether it is even possible to give a sentient being a prime directive. We have no idea whether it will desire what we want it to desire, and everything could go horribly wrong if we attempt to force it to.