This is all true, but I'm not sure the claimed implications follow. The problem is that different minds can extract different levels of insight from the same data and tools.
First, we should assume humanity already has enough data to enable the best human minds to reach the highest levels of every human capability with very little real-world feedback. That's not ASI in the full sense, but no human mind has ever contained all of those abilities at once, let alone combined with an AI's other default advantages.
Second, it seems extremely likely to me that the available data contains patterns no human has ever found and understood. All collected data has yet to be fully correlated and combined in all possible relationships. I don't have a strong sense of the limits of what should be possible with current data, but at minimum I expect an ASI to bring better pure and applied math tools to any task, and to require less data than we do for any given purpose.
Third, with proper tool support, I'm not sure how much physical experimentation and feedback can be replaced by high-quality simulation built on known physics, chemistry, and biology. At minimum, simulation should answer many questions that humanity already knows how to answer by formulaic investigation but has never specifically asked, or never bothered to write down an answer to.
To me this suggests that, given enough compute and better training methods, AI should be able to push at least somewhat beyond what humans have ever concluded from the available data, in every field, before needing to obtain any genuinely new data.
Are we on the verge of an intelligence explosion? Maybe, but scaling alone won't get us there.
Why? The human data bottleneck: today's models depend on human data and human feedback.
Human-level intelligence (AGI) might be possible by teaching AI everything we know, but superintelligence (ASI) requires learning things we 𝗱𝗼𝗻’𝘁 know.
For AI to learn something fundamentally new - something humans cannot teach it - it needs exploration and ground-truth feedback.
This is how we've 𝘢𝘭𝘳𝘦𝘢𝘥𝘺 achieved superintelligence in limited realms, like games (AlphaGo, AlphaZero) and protein folding (AlphaFold).
Without these ingredients, AI remains a reflection of human knowledge, never transcending our limited models of reality.
Full post (no paywall): https://bturtel.substack.com/p/human-all-too-human