Given the rapid advances in AI over the past year, it seems pretty clear that none of the current AI systems we're working with would scale up into a paper-clip maximizer.
We still face the more fundamental risks that come with having an oracular AI, but doesn't it look pretty likely right now that the first AGI will be oracular?
Am I missing something fundamental?
I realized this myself just a week ago! And you also highlight something that wasn't clear to me: for now, their important property (with respect to the singularity) is this:
LLMs are a kind of human-level AI, though certainly not yet genius-level human. However, they are already inhumanly fast.