You haven't read the sequences, have you? The idea of using evolution to produce safe-enough superintelligences was dismantled quite neatly there, for example here: http://lesswrong.com/lw/td/magical_categories/
Also, when we're talking about artificial intelligences, the period between the point "They're intelligent enough to have some sort of ethical value" and the point "They're intelligent enough to totally dominate us" is most likely very short: I'd say less than 10 years; some would say less than 10 days.
No, I haven't read the sequences; I will do that. The link might be better labeled with something that indicates what it actually points to. But I didn't say the AIs would be safe (or super-intelligent, for that matter), and I don't assume they would be. Those who create them, however, may assume that.
In the novel Life Artificial I use the following assumptions regarding the creation and employment of AI personalities.
(Note: PDA := AI, Sticky := human)
The second fitness gradient is based on economics and social considerations: can an AI actually earn a living? If not, it gets turned off.
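A minimal sketch of that selection pressure, purely as an illustration (none of these names or numbers come from the novel): each cycle, agents whose earnings fail to cover their running costs are switched off, so economic viability acts as the fitness function.

```python
import random

def economic_selection(agents, cycles=10):
    """Toy model: an AI survives a cycle only if its earnings cover its running costs."""
    for _ in range(cycles):
        survivors = []
        for agent in agents:
            earnings = agent["skill"] * random.uniform(0.5, 1.5)  # income varies with luck
            if earnings >= agent["cost"]:   # can it pay for its own upkeep?
                survivors.append(agent)     # kept running
            # otherwise the agent is turned off (dropped from the population)
        agents = survivors
    return agents

# Example: 100 agents of varying skill, identical running cost.
population = [{"skill": random.uniform(0.5, 2.0), "cost": 1.0} for _ in range(100)]
print(len(economic_selection(population)), "agents still running after 10 cycles")
```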
Following this line of thinking, it seems obvious that once the initial novelty wears off, AIs will be terribly mistreated (anthropomorphizing, yeah).
It would be very forward-thinking to begin engineering barriers against such mistreatment, like a PETA for AIs. Interestingly, such an organization already exists, at least on the Internet: the ASPCR.