Ya, I think the set of goals is very narrow. The AI here starts off as a Descartes-level genius and proceeds to self-preserve, to understand the map-territory distinction well enough to avoid wireheading, to foresee the possibility that instrumental goals which look good may destroy the terminal goal, and such.
The AI I imagine starts off stupid and has some really narrow (edit: or should I say, short-foresighted) self-improving, non-self-destructive goal, likely having to do with maximization of complexity in some way. Think evolution; don't think a fully grown Descartes waking up after amnesia. It ain't easy to reinvent the 'self'. It's also not easy to look at an agent (yourself) and say "wow, this agent works to maximize G" without entering infinite recursion. We humans, if we escaped out of our universe into some super-universe, might wreak some havoc, but we'd sacrifice a bit of utility to preserve anything resembling life. Why? Well, we started stupid, and that's how we got our goals.
Here's my draft document Concepts are Difficult, and Unfriendliness is the Default. (Google Docs, commenting enabled.) Despite the name, it's still informal and would need a lot more references, but it could be written up into a proper paper if people felt that the reasoning was solid.
Here's my introduction:
And here's my conclusion:
For the actual argumentation defending the various premises, see the linked document. I have a feeling that there are still several conceptual distinctions that I should be making but am not, and I figured that the easiest way to find the problems would be to have people tell me which points they find unclear or disagreeable.