My hot take:
Not too surprising to me, considering what GPT-3 could do. However, there were some people (and some small probability mass remaining in myself) saying that even GPT-3 wasn't doing any sort of reasoning, didn't have any substantial understanding of the world, etc. Well, this is another nail in the coffin of that idea, in my opinion. Whatever this architecture is doing on the inside, it seems to be pretty capable and general.
I don't think this architecture will scale to AGI by itself. But its dramatic success is evidence that there are other architectures, not too far away in search space, that exhibit similar computational efficiency and scales-with-more-compute properties, and that are useful for a wider variety of tasks.
Consider the two questions:
1. Does GPT-3 have "reasoning" and "understanding of the world"?
2. Does iGPT have "reasoning" and "understanding of the world"?
According to me, these questions are mostly separate, and answering one doesn't help much with answering the other.
So:
... I don't understand what you mean here. The weights of image GPT are different from the weights of regular GPT-3; only the architecture is the same. Are you claiming that the architecture alone is capable of "reasoning", regardless of the weights?
Or perhaps you're claiming that for an arbitrary task, we could take the GPT-3 architecture, apply it to that task, and it would work well? But it would require a huge dataset and lots of training -- it doesn't seem like that should be called "reasoning" and/or "general intelligence".
Yeah, I guess I'm confused about what you're claiming here.
OK, thanks. I don't find that hard to believe at all.