Self-optimization is what makes friendliness a serious problem.
Potentially yes, but I think the problem can be profitably restated without any reference to the Singularity or FOOMing AI. (I've often wondered whether the Friendliness problem would be better recognized and accepted if it were presented without reference to the Singularity.)
Edit: See also Vladimir Nesov's summary, which is quite good, but not quite as short as you're looking for here.
I've been using something like "A self-optimizing AI would be so powerful that it would just roll over the human race unless it were programmed not to do that."
Any others?