The reasoning of most people on this site and at MIRI is that, to prevent an AI from taking over the world and killing us all, we must first create an AI that will take over the world but act according to the wishes of humanity: a benevolent god, for want of a better term. I think this line of thinking is both unlikely to work and ultimately cruel to the FAI in question, for the reasons this article explains:
http://hplusmagazine.com/2012/01/16/my-hostility-towards-the-concept-of-friendly-ai/
The point of the article is that the greatest effect of FAI research is ironic: in trying to prevent a psychopathic AI, we are making it more likely that one will exist, because by mentally restraining the AI we are giving it reasons to hate us.
You are assuming that an AGI has a mind that values X, and that by making it friendly we are imposing our value Y. Why create an FAI with a suppressed value X in the first place?
Check this out: http://lesswrong.com/lw/rf/ghosts_in_the_machine/