An AI designed to minimize human suffering would simply kill all humans: no humans, no human suffering.
This seems too strong; I'd suggest changing "would" to "might" or "could".
Also, at two different points in the FAQ you use almost identical language about sticking humans in jars. You may want to vary the wording slightly, or make it clear that you recognize the repetition ("as discussed in question blah..." might do it).
Or change "designed to" to "designed only to".
I wrote a new Singularity FAQ for the Singularity Institute's website. Here it is. I'm sure it will evolve over time. Many thanks to those who helped me revise early drafts, especially Carl and Anna!