My intuition says that a narrow AI like DALL-E would not blow up the world, no matter how much smarter it became. It would just get really good at making pictures.
This is clearly a form of superintelligence we would all prefer, and the key difference seems to be that DALL-E doesn't really have 'goals' or anything like that; it's just a massive tool.
Why do we want AGI with utility functions at all?
That seems to be a bit of a motte-and-bailey. Goal-directed behavior does not require optimizing; satisficing works fine. Having a utility function, as I understand it, means not stopping until it's maximized.
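The distinction is easy to make concrete. Here's a minimal toy sketch in Python (all names and numbers invented for illustration): the maximizer keeps climbing as long as any step improves utility, while the satisficer halts as soon as utility clears a "good enough" threshold, even though more utility is available.

```python
def utility(x: float) -> float:
    """Toy utility function with a single peak at x = 7."""
    return -(x - 7.0) ** 2

def maximizer(x: float = 0.0, step: float = 0.1) -> float:
    """Keeps moving as long as utility strictly improves;
    only stops at the peak."""
    while utility(x + step) > utility(x):
        x += step
    return x

def satisficer(x: float = 0.0, step: float = 0.1,
               good_enough: float = -1.0) -> float:
    """Stops as soon as utility clears the threshold,
    well short of the maximum."""
    while utility(x) < good_enough and utility(x + step) > utility(x):
        x += step
    return x

print(maximizer())   # ~7.0: climbs all the way to the peak
print(satisficer())  # ~6.0: halts once utility >= -1 and goes no further
```

The satisficer has a goal in the ordinary sense, but nothing in its behavior pushes it to extremes once the threshold is met.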