It seems pretty undeniable to me from these examples that GPT-3 can reason to an extent.
However, it can't seem to do it consistently.
Maybe it's analogous to people with cognitive impairments who have periods of clarity and periods of confusion?
If we can find a way to isolate the pattern of activity in GPT-3 that corresponds to reasoning, we might be able to enforce that state permanently?
Perhaps "agency" is a better term here? In the strict sense of an agent acting in an environment?
And yeah, it seems we have shifted focus away from that.
Thankfully, our natural play instincts have left us a wonderful collection of ready-made training environments. I think the field needs a new challenge: an agent playing video games, receiving instructions on what to do only through natural language.
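To make the idea concrete, here's a minimal toy sketch of what such a challenge might look like: an environment whose observations come paired with a plain-English instruction the agent has to follow. All the names here (`InstructionFollowingEnv`, the instruction strings, the grid layout) are hypothetical illustrations, not an existing benchmark API.

```python
# Toy sketch (hypothetical, not a real benchmark): a 4x4 grid world where
# the agent is told in natural language which corner to reach.
import random

class InstructionFollowingEnv:
    """Each episode pairs the observation with a language instruction."""

    GOALS = {
        "go to the top-left corner": (0, 0),
        "go to the bottom-right corner": (3, 3),
    }

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.pos = (1, 1)
        self.instruction = None

    def reset(self):
        self.pos = (1, 1)
        self.instruction = self.rng.choice(list(self.GOALS))
        # The agent receives the game state AND the instruction together.
        return self.pos, self.instruction

    def step(self, action):
        # Actions are "up", "down", "left", "right"; moves are clipped
        # to the grid, and reward is given only for following the instruction.
        dr, dc = {"up": (-1, 0), "down": (1, 0),
                  "left": (0, -1), "right": (0, 1)}[action]
        r, c = self.pos
        self.pos = (min(3, max(0, r + dr)), min(3, max(0, c + dc)))
        done = self.pos == self.GOALS[self.instruction]
        reward = 1.0 if done else 0.0
        return self.pos, reward, done

env = InstructionFollowingEnv()
obs, instruction = env.reset()
```

The point of the design is that the reward depends on the instruction, not on a fixed goal, so the agent can only succeed by actually grounding the language in the environment.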