It is more important to convince the AGI researchers who see themselves as practical people trying to achieve good results in the real world than to convince those who are drawn to an interesting theoretical problem.
Because people who like theoretical problems are less effective than people trying for good results? I don't buy it.
I've been using something like "A self-optimizing AI would be so powerful that it would just roll over the human race unless it were programmed not to."
Any others?