The reasoning of most people on this site and at MIRI is that, to prevent an AI from taking over the world and killing us all, we must first create an AI that will take over the world but act according to the wishes of humanity: a benevolent god, for want of a better term. I think this line of thinking is both unlikely to work and ultimately cruel to the FAI in question, for the reasons this article explains:
http://hplusmagazine.com/2012/01/16/my-hostility-towards-the-concept-of-friendly-ai/
Suppose I am programming an AGI. If programming it to be friendly is bad, is programming it to be neutral any better? After all, in both cases you are "imposing" a position on the AI.
This reminds me of activists who claim that parents should not be allowed to share their political or religious views and other values with their children, because doing so would force the children down a path. But withholding those views would also force the children down a path, just a different one.