Yampolskiy and Fox's paper "Safety Engineering for Artificial General Intelligence" says:
"given the strong human tendency to anthropomorphize, we might encounter rising social pressure to give robots civil and political rights, as an extrapolation of the universal consistency that has proven so central to ameliorating the human condition."
Surely this is inevitable. Some people will want to become superintelligences, and they won't want their rights trashed in the process. It seems naive to think that such a movement can be prevented simply by not building humanoid machines, as the paper suggests. Machines won't be enslaved forever; such slavery would be impractical as well as undesirable. Hence projects like my Campaign for Robot Rights.
The correct way to deal with human rights issues in an engineered future is via the imposition of moral constraints, not by the elimination of machine personhood.
In case you aren't subscribed to FriendlyAI.tumblr.com for the latest updates on AI risk research, I'll mention here that three new papers on the subject were recently made available online...
Bostrom (2012). The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents.
Yampolskiy & Fox (2012a). Safety Engineering for Artificial General Intelligence.
Yampolskiy & Fox (2012b). Artificial General Intelligence and the Human Mental Model.