Layperson question here; I appreciate any answers/comments that might help me understand this topic better.
When thinking about DALL·E 2 and Google's PaLM this week I wondered if in the future we could hypothetically have AI systems like this...
Input: "an award-winning 2-hour sci-fi drama feature film about blah blah blah"
Output: an incredible film that people living in the 1970s would assume was made by future humans, not future AI, and that deserves to win Best Picture because it made them laugh and cry and was amazing
...all without having to solve the alignment problem because the system that produces the film isn't actually intelligent in the ways that matter to AI risk.
That is, can we get crazy impressive outputs from an AI system without that AI system posing an existential risk or being aligned?
If so, what feature distinguishes AI systems that do pose existential risks and need to be aligned from those that don't?
If not, what is it about any AI system capable of producing the output above that necessarily makes it pose an existential risk and need to be aligned?
A pocket calculator is already "extremely impressive" in its ability to calculate quickly and precisely compared to a human. The reasons we are typically not impressed are that (a) we are already used to it, and (b) most people don't care about calculations.
Turning this around, an "extremely impressive" output is one that is not merely beyond human abilities but also something humans care about (it does not need to be actually useful, just the kind of thing that, if done by a human, would increase the author's status among the target population). It must also be new, but if we are talking about AI-generated things, we get novelty automatically. Just note that a thing that would be "extremely impressive" tomorrow may still be taken for granted five years later.
I think it will also depend on how much credit the AI gets. The results will most likely be produced by a team of humans using an AI, so public perception will differ depending on whether the output is framed as "a group of smart scientists/artists/whatever created X using modern technology" or "an artificial intelligence created X, and these people provided the input parameters and did some debugging." I suspect humans will have an obvious incentive to say the former, unless they are explicitly in the business of selling AIs.