I do not understand your point. Would you care to explain?
Sorry, I thought that post was a pretty good statement of the Friendliness problem, sans reference to the Singularity (or even any kind of self-optimization), but perhaps I misunderstood what you were looking for.
I've been using something like "A self-optimizing AI would be so powerful that it would just roll over the human race unless it were programmed not to."
Any others?