No, because it is better for the people who would otherwise be working on dangerous AGI to realize they should not do it than to have people who would never have worked on AI at all merely comment that the dangerous AGI researchers shouldn't do it.
I've been using something like "A self-optimizing AI would be so powerful that it would just roll over the human race unless it's programmed not to."
Any others?