It seems like GPT-4 is coming out soon and, from what I've heard, it will be awesome. Now, we don't know anything about its architecture, its size, or how it was trained. But if it were trained only on text (about 3.2T tokens) in a compute-optimal manner, then per the Chinchilla scaling laws (roughly 20 training tokens per parameter) it would be about 2.5x the size of Chinchilla (70B parameters), i.e. roughly the size of GPT-3 (175B). So to be usefully larger than GPT-3, it would need more training data than text alone provides, which means it would need to be multi-modal, and that could present some interesting capabilities.
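For the curious, here's a minimal back-of-the-envelope sketch of that calculation, assuming the Chinchilla rule of thumb of roughly 20 training tokens per parameter (the exact compute-optimal ratio varies with compute budget; the 3.2T-token figure is the one assumed above):

```python
# Rough Chinchilla-style estimate (assumption: ~20 tokens per parameter,
# the rule of thumb from Hoffmann et al. 2022; the exact ratio varies).
TOKENS_PER_PARAM = 20

chinchilla_params = 70e9   # Chinchilla: 70B parameters
gpt3_params = 175e9        # GPT-3: 175B parameters
text_tokens = 3.2e12       # ~3.2T tokens of text assumed available

# Compute-optimal parameter count for the available text
optimal_params = text_tokens / TOKENS_PER_PARAM

print(f"Optimal size: {optimal_params / 1e9:.0f}B parameters")
print(f"Relative to Chinchilla: {optimal_params / chinchilla_params:.1f}x")
print(f"Relative to GPT-3: {optimal_params / gpt3_params:.2f}x")
# -> ~160B parameters, ~2.3x Chinchilla, ~0.9x GPT-3
```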
So it's time to ask that question again: what's the least impressive thing that GPT-4 won't be able to do? State your assumptions clearly, e.g. "a text-and-image-generating GPT-4 in the style of X, with size Y, can't do Z."
I tried giving this to GPT-3. At first it would only give the tautological "pawns become more powerful" example, so I expanded the prompt to explain why that isn't a valid answer, and it then gave a much better response.
I believe this response is the same as the fourth bullet-point example of a good answer in your post.
Here's the prompt in copy-pasteable format for anyone who wants to try playing with it: