If the general capabilities necessary for effective self-improvement, or for AGI directly, can be reached without the apparent complexity of the brain structures that enable general intelligence in humans (just with memory, more data, more compute, and a few algorithmic breakthroughs, or even none), I wonder why those structures are not needed.
Sure, a sufficiently advanced AI need not work like the brain, but if you are going to defend short timelines, there has to be some intuition for why those neural structures are not needed even to create an autonomous utility maximizer.
Not totally sure, but I think it's pretty likely that scaling gets us to AGI, yeah. Or more specifically, gets us to the point of AIs being able to act as autonomous researchers, or as high (>10x) multipliers on the productivity of human researchers, which seems like the key moment of leverage for deciding how the development of AI will go.
Don't have a super clean idea of what self-reflective thought means. I do see that, e.g., GPT-4 can often say something, think further about it, and then revise its opinion. I would expect a little extra reasoning quality and general competence to push this ability a lot further.
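To make that "say, reflect, revise" pattern concrete, here's a minimal sketch of such a loop; `query_model` is a hypothetical stand-in for whatever LLM API you'd wrap, not a real library call, and the prompts are just illustrative:

```python
# Sketch of a say-then-revise loop. query_model is a hypothetical
# wrapper around an LLM API; fill it in with your client of choice.

def query_model(prompt: str) -> str:
    raise NotImplementedError("wrap your LLM API here")

def answer_with_reflection(question: str, rounds: int = 2) -> str:
    # Initial answer ("say something").
    answer = query_model(f"Question: {question}\nAnswer:")
    for _ in range(rounds):
        # Reflect: ask the model to critique its own answer.
        critique = query_model(
            f"Question: {question}\nProposed answer: {answer}\n"
            "Point out any mistakes or weaknesses in this answer."
        )
        # Revise: ask for an improved answer given the critique.
        answer = query_model(
            f"Question: {question}\nPrevious answer: {answer}\n"
            f"Critique: {critique}\nWrite an improved answer:"
        )
    return answer
```

On this picture, "a little extra reasoning quality" compounds: each round of critique and revision benefits from the improvement, so small per-step gains could plausibly push the overall ability a lot further.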