Are you talking about Friendliness in the technical sense, which in humans would mostly mean not being a sociopath, or are you saying that to build FAI, a human has to be friendly in the garden-variety way (cheerily sociable and open)?
The latter. The connection is tenuous, but let me explain.
If your model of the right way to make friendly AI is completely top-down, meaning the computer is your slave and you code it so that it is impossible for it to violate the Three Laws of Robotics (or some more sophisticated scheme that allows it to produce only certain kinds of plans), then the question is irrelevant.
But if your model of the right way to make friendly AI also involves developing a theory of rationality that leads to cooperation, then I would think that the person developing such ...
David Brin suggests that some kind of political system populated with humans and diverse but imperfectly rational and friendly AIs would evolve in a satisfactory direction for humans.
I don't know whether creating an imperfectly rational general AI is any easier, except that limited perceptual and computational resources obviously imply less-than-optimal outcomes; still, why shouldn't we hope for the best outcome achievable given those constraints? I imagine the question will become more settled before anyone comes near unleashing a self-improving superhuman AI.
An imperfectly friendly AI, perfectly rational or not, is a very likely scenario. Is it sufficient to create diverse singleton value-systems (demographically representative of humans' values), rather than a monolithic Friendly AI built on a consensus over all humans' values?
What kind of competitive or political system would make fragmented, squabbling AIs safer than an attempt to get the monolithic approach right? Brin seems to have some hope of improving politics regardless of AI participation, but I'm not sure exactly what his dream is or how to get there; perhaps his "disputation arenas" would work if the participants were rational and altruistically honest.