My intuition says that a narrow AI like DALL-E would not blow up the world, no matter how much smarter it became. It would just get really good at making pictures.
This is clearly a form of superintelligence we would all prefer, and the difference, it seems to me, is that DALL-E doesn't really have 'goals' or anything like that; it's just a massive tool.
Why do we care to have AGI with utility functions?
There is still the murky area of following some proxy utility function within a sensible Goodhart scope (something like the base distribution in quantilization), even if you are not doing expected utility maximization and won't let the siren song of the proxy lead you outside that scope. It just won't be the utility function that selection theorems assign to you based on your coherent decisions, because you won't be making coherent decisions according to the classical definitions if you are not doing expected utility maximization (which is unbounded optimization).
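A minimal sketch of that contrast, with entirely made-up action names, probabilities, and proxy scores: a quantilizer samples from the top q-fraction (by base-distribution mass) of actions ranked by the proxy, rather than taking the argmax the way an unbounded expected-utility maximizer would, so a single action the proxy overrates can't dominate the outcome.

```python
import random

# Toy illustration (hypothetical actions and numbers): quantilization vs.
# unbounded maximization of a proxy utility. The base distribution plays the
# role of the "sensible Goodhart scope" -- actions one might plausibly take.

actions = ["write_report", "ask_for_help", "exploit_loophole", "do_nothing"]
base_probs = {"write_report": 0.5, "ask_for_help": 0.3,
              "exploit_loophole": 0.01, "do_nothing": 0.19}

# Proxy utility that overrates the loophole -- the Goodhart failure mode.
proxy_utility = {"write_report": 5.0, "ask_for_help": 4.0,
                 "exploit_loophole": 100.0, "do_nothing": 0.0}

def maximizer():
    """Unbounded optimization: always pick the proxy's argmax,
    however improbable that action is under the base distribution."""
    return max(actions, key=lambda a: proxy_utility[a])

def quantilizer(q=0.5):
    """Sample from the base distribution conditioned on being in the
    top q-fraction (by base mass) of actions ranked by proxy utility."""
    ranked = sorted(actions, key=lambda a: proxy_utility[a], reverse=True)
    kept, mass = [], 0.0
    for a in ranked:            # keep best actions until q of base mass is covered
        kept.append(a)
        mass += base_probs[a]
        if mass >= q:
            break
    weights = [base_probs[a] for a in kept]
    return random.choices(kept, weights=weights)[0]

print(maximizer())              # always 'exploit_loophole'
print(quantilizer(q=0.5))       # almost always 'write_report'; the loophole
                                # keeps only its tiny base-distribution weight
```

The point of the sketch is just that the proxy is used to rank and filter, not to optimize without bound, so how good the result is depends on the base distribution staying sensible, not on the proxy being the true utility function.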
But then, if you are not doing expected utility maximization, it's not clear that things shaped like utility functions are all that useful for specifying decision problems. So a good proxy for an unknown utility function is not obviously itself a utility function.