My intuition says that a narrow AI like DALL-E would not blow up the world, no matter how much smarter it became. It would just get really good at making pictures.
This is clearly a form of superintelligence we would all prefer, and the difference, as far as I can tell, is that DALL-E doesn't really have 'goals' or anything like that; it's just a massive tool.
Why do we want AGI with utility functions at all?
Eh. I genuinely don't expect anyone to build an AI that acts like a utility maximizer in all contexts. All real-world agents are limited: they can get hit by radiation, dumped into supernovae, fed adversarial noise, and so on. All we're ever going to see in the real world are things that can be intentional-stanced in broad but limited domains.
Satisficers show goal-directed behavior sometimes, but not in all contexts: the more satisfied they are, the less goal-directed they are. But if I built a satisficer that would be satisfied with merely controlling the Milky Way (rather than the entire universe), that's plenty dangerous. And, not coincidentally, it's going to act goal-directed in every context you encounter in everyday life, because none of them come close to satisfying it; see the toy sketch below.
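To make that point concrete, here's a purely illustrative sketch in Python (the action names, utility values, and threshold are all made up, not anyone's actual proposal): a satisficer whose threshold sits far above anything achievable in ordinary contexts ends up choosing exactly the way a maximizer would.

```python
# Toy framing of the maximizer/satisficer distinction discussed above.
# Everything here is hypothetical and for illustration only.

def maximize(actions, utility):
    """A maximizer picks the highest-utility action in every context."""
    return max(actions, key=utility)

def satisfice(actions, utility, threshold):
    """A satisficer accepts any action that clears its threshold; only when
    nothing clears it does it fall back to maximizing."""
    good_enough = [a for a in actions if utility(a) >= threshold]
    if good_enough:
        return good_enough[0]  # satisfied: indifferent among acceptable options
    return max(actions, key=utility)  # far from satisfied: fully goal-directed

# Everyday-scale options, with utilities nowhere near a
# "control the Milky Way"-sized threshold.
actions = {"make pictures": 1.0, "acquire resources": 5.0, "do nothing": 0.0}
utility = actions.get

print(maximize(actions, utility))                  # "acquire resources"
print(satisfice(actions, utility, threshold=1e6))  # also "acquire resources"
# No everyday context comes close to the threshold, so the satisficer's
# choices are indistinguishable from the maximizer's.
```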