Kaj, this is an excellent article focusing on why an AGI will have a hard time adopting a model of the world similar to the ones that humans have.
However, I think that Ben's main hang-up about the Scary Idea is that he doesn't believe in the complexity and fragility of moral values. In this article he gives "Growth, Choice, and Joy" as a sufficient value system for friendliness. He knows that these terms "conceal a vast mass of ambiguity, subtlety and human history," but still, I think this is where Goertzel and SI differ.
You may be right.
Here's my draft document Concepts are Difficult, and Unfriendliness is the Default. (Google Docs, commenting enabled.) Despite the name, it's still informal and would need a lot more references, but it could be written up into a proper paper if people felt that the reasoning was solid.
Here's my introduction:
And here's my conclusion:
For the actual argumentation defending the various premises, see the linked document. I have a feeling that there are still several conceptual distinctions that I should be making but am not, but I figured that the easiest way to find those problems would be to have people tell me which points they find unclear or disagreeable.