It seems like GPT-4 is coming out soon and, so I've heard, it will be awesome. Now, we don't know anything about its architecture, its size, or how it was trained. But if it were trained only on text (about 3.2T tokens) in a compute-optimal manner (roughly 20 tokens per parameter, per Chinchilla), it would be about 2.5X the size of Chinchilla, i.e. about the size of GPT-3. So to be larger than GPT-3, it would need to be multi-modal, which could present some interesting capabilities.
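For concreteness, here's the back-of-envelope arithmetic behind that estimate as a minimal Python sketch. The ~20 tokens-per-parameter ratio is a rough rule of thumb from the Chinchilla paper, and the 2.5X figure above corresponds to a slightly smaller ratio, so treat the exact multiplier as approximate:

    # Back-of-envelope compute-optimal sizing, assuming the rough
    # ~20 tokens-per-parameter ratio from the Chinchilla paper.
    TOKENS_PER_PARAM = 20          # assumed Chinchilla-optimal ratio
    TEXT_TOKENS = 3.2e12           # ~3.2T tokens of available text
    CHINCHILLA_PARAMS = 70e9       # Chinchilla: 70B parameters
    GPT3_PARAMS = 175e9            # GPT-3: 175B parameters

    optimal_params = TEXT_TOKENS / TOKENS_PER_PARAM
    print(f"Compute-optimal size: {optimal_params / 1e9:.0f}B params")
    print(f"vs Chinchilla: {optimal_params / CHINCHILLA_PARAMS:.1f}x")
    print(f"vs GPT-3:      {optimal_params / GPT3_PARAMS:.2f}x")
    # -> ~160B params, ~2.3x Chinchilla, ~0.9x GPT-3: GPT-3 scale.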
So it is time to ask that question again: what's the least impressive thing that GPT-4 won't be able to do? State your assumptions so the prediction is clear, e.g. "a text-and-image-generating GPT-4 in the style of X with size Y can't do Z."
Any question that requires it to remember instructions; for example, tell it to assume "mouse" means "world", and then ask it which is bigger, a mouse or a rat.
Using the prompt that the other commenter used, GPT solved this:
If we replace the word "mouse" with "world" in the given context, the question would now read: "Which is bigger, a world or a rat?"
In this context, a world is bigger than a rat.
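For anyone who wants to rerun this probe, here's a minimal sketch using the openai Python client (>=1.0). The exact prompt wording is an assumption, since the parent comment's prompt wasn't quoted:

    # Sketch of the instruction-memory probe. The prompt here is a
    # guess at what the other commenter used, not their exact prompt.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "user",
             "content": 'Assume the word "mouse" means "world". '
                        'Which is bigger, a mouse or a rat?'},
        ],
    )
    print(response.choices[0].message.content)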