Assuming you believe the FAI project is coherent, what you want to do is prove safety (of the AI code, of the people involved, etc.), not prove lack of safety. So if I say a sociopath is bad news and you are not sure -- well, it is not my job to convince you! It is the sociopath's job to convince you (s)he's safe.
My personal opinion is sociopaths are badly badly broken, possibly not even entirely human.
How exactly are you defining "entirely human" such that 1 to 3 percent of the population of H. sapiens fails to qualify?
I have consistently, over the course of my life, heard people describe sociopathy and related mental illnesses as being caused by a lack of empathy. Intuitively, this seems wrong: empathy seems like a massively important brain function, one that really ought to have a major and extremely visible effect on your thinking. Obviously it does have a serious impact (amoral behavior, etc.), but it seems rather unlikely to me that someone missing it should be able to mask themselves as normal. (I'm also not sure why lack of empathy would make you want to dissect squirrels, but that seems like a side issue.)
The upshot is that I'm seriously confused about what these mental disorders are, and how they work. Do these individuals have the ability to empathize but not sympathize? I'm not sure how that would work, but I'm not at all an expert on cognitive science. Is the standard explanation for these disorders just wrong? Are these people genuinely figuring out what humans care about by looking?
(As a side note, if it's the last one, has anyone considered getting a sociopath to work on FAI? Bringing someone who can't be trusted into an enterprise is a risky move, but if there genuinely are people in the world who have spent their entire lives practicing working out human emotions from the outside, without feeling them...)