My intuition says that a narrow AI like DALL-E would not blow up the world, no matter how much smarter it became. It would just get really good at making pictures.
This is clearly a form of superintelligence we would all prefer, and the difference, as I see it, is that DALL-E doesn't really have 'goals' or anything like that; it's just a massive tool.
Why do we want AGI with utility functions at all?
A more intelligent DALL-E wouldn't make pictures that people like better; it would more accurately approximate the distribution of images in its training data. And you're right that this is not dangerous, but it is also not very useful.
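To make that concrete, here is a minimal sketch, assuming a generative model trained by maximum likelihood (a simplification of DALL-E's actual training setup): maximizing likelihood on the training data is equivalent to minimizing the KL divergence to the data distribution, so added capability only buys a closer fit to that distribution, not pictures anyone likes more.

$$\arg\max_{\theta}\; \mathbb{E}_{x \sim p_{\text{data}}}\big[\log p_{\theta}(x)\big] \;=\; \arg\min_{\theta}\; D_{\mathrm{KL}}\big(p_{\text{data}} \,\|\, p_{\theta}\big)$$

Under this objective, the "smartest possible" model is the one where $p_{\theta} = p_{\text{data}}$ exactly; there is no term in the objective that rewards being more useful to people.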