It seems like GPT-4 is coming out soon and, so I've heard, it will be awesome. Now, we don't know anything about its architecture, its size, or how it was trained. If it were trained only on text (about 3.2T tokens) in a compute-optimal manner, it would be about 2.5X the size of Chinchilla, i.e. roughly the size of GPT-3. So to be larger than GPT-3, it would need to be multi-modal, which could present some interesting capabilities.
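For reference, here is the back-of-the-envelope arithmetic behind that estimate, as a minimal Python sketch. It assumes the Chinchilla rule of thumb of roughly 20 training tokens per parameter; the exact constant depends on which fit you use, which is why this lands near, rather than exactly at, the 2.5X figure:

```python
# Back-of-the-envelope Chinchilla-optimal sizing. Assumes the
# ~20 tokens-per-parameter rule of thumb; the constant is approximate.
tokens = 3.2e12            # ~3.2 T tokens of text, per the estimate above
tokens_per_param = 20      # assumed compute-optimal ratio

optimal_params = tokens / tokens_per_param   # ~160 B parameters
chinchilla_params = 70e9                     # Chinchilla: 70 B parameters
gpt3_params = 175e9                          # GPT-3: 175 B parameters

print(f"Compute-optimal size: {optimal_params / 1e9:.0f}B parameters")
print(f"vs. Chinchilla: {optimal_params / chinchilla_params:.1f}x")  # ~2.3x
print(f"vs. GPT-3:      {optimal_params / gpt3_params:.2f}x")        # ~0.9x
```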
So it is time to ask that question again: what's the least impressive thing that GPT-4 won't be able to do? State your assumptions clearly, e.g. "a text- and image-generating GPT-4 in the style of X with size Y can't do Z."
Intelligence Amplification
GPT-4 will be unable to contribute to the core cognitive tasks involved in AI programming.
I assign 95% confidence to each of these statements. I expect we will not be seeing the start of a textbook takeoff in August.
I suspect you are very wrong on this, because there are a lot of things that have not yet been tried that are both
I would expect GPT-4 to be able to generate items from this class. Since its training cutoff is 2021, it won't have bleeding-edge ideas, because it lacks the relevant information.
Do you have a prompt?