I agree wholeheartedly with the sentiment. I also agree with the underlying assumptions made in the Compendium[1], namely that it would really require a Manhattan Project level of effort to understand:
All of which is to say that I believe these problems are resolvable, but only if, to your point, significant expenditure is committed and the greatest minds of this generation are set to the task of resolving them ahead of the deployment of a superintelligent system. We face a Manhattan Project level of risk, but we are not acting systematically as though we are facing one.
In essence, this is saying that if the pace of progress is the product of two factors (experiment implementation time, and quality of experiment choice), then AI only needs to accelerate one factor in order to achieve an overall speedup. However, AI R&D involves a large number of heterogeneous activities, and overall progress is not simply the product of progress in each activity. Not all bottlenecks will be easily compensated for or worked around.
I agree with this.
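A toy way to see the bottleneck point (all numbers below are purely illustrative): if overall progress really were the product of independent factors, accelerating any one of them would accelerate the whole; but if the activities are chained serially, the slowest un-accelerated stage dominates, in the spirit of Amdahl's law.

```python
# Illustrative only: "progress as a product of factors" vs. a serial
# pipeline with one human-bound bottleneck (Amdahl's-law style).

def product_model(speedups):
    # If progress = factor1 * factor2 * ..., speeding up any one factor
    # speeds up the whole by the same amount.
    total = 1.0
    for s in speedups:
        total *= s
    return total

def serial_model(time_fractions, speedups):
    # If activities form a chain, total time is the sum of each stage's
    # (original fraction / speedup); un-accelerated stages dominate.
    accelerated = sum(f / s for f, s in zip(time_fractions, speedups))
    return sum(time_fractions) / accelerated

# Hypothetical split: 60% experiment choice, 40% implementation/infrastructure.
fractions = [0.6, 0.4]
# AI gives a 10x speedup on experiment choice, none on the physical side.
speedups = [10.0, 1.0]

print(product_model(speedups))            # 10.0 -- the optimistic reading
print(serial_model(fractions, speedups))  # ~2.2 -- the bottlenecked reading
```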
I also think there are engineering and infrastructure challenges in executing training runs that one could not necessarily cede to AI; not because it would be undesirable, but because it would involve a level of embodiment that is likely beyond the timeline proposed in the AI 2027 thesis. (I do agree with most of the thesis, however.)
I'm not sure there's a research basis (at least none that I could find, though I am very open to correction on this point) for the embodiment of AI systems in robotic bodies being able to keep pace with algorithmic improvement.
While an AI system could likely design a new model architecture and training setup, it is very human supply chains and technician speed that allow the physical training run to happen at the scales required.
Further, there are hardware challenges in large training runs that an AI system may not resolve as readily, owing to its lack of exposure to those kinds of physical issues in its reasoning space. (It has never opened a server mid-run and resolved an overheating issue, for instance.)
Some oft-overlooked aspects of training stem from the fact that the labs tend not to own their own data centers, relying instead on cloud providers. This means they have to contend with:
I believe an AI system could well be a match for the data cleaning and validation, and even for launch and orchestration using Slurm, Kubernetes, or similar, but the initial launch phase is also something I think will be slowed by the need for human hands.
This phase results in:
These errors are also insidious, because the software running the training can't tell how these failures affect which parts of the network are being trained and which aren't. That would make it challenging for an AI director to really understand what was degrading the desired training outcome. It also makes it unlikely that a runaway situation would occur, with a model recursively self-improving on a rapid timeline without human input, unless it first cracked the design and mass manufacture of embodied AI workers that could move and act as quickly as it can reason.
A good case study here is the woes OpenAI faced in training GPT-4.5, where all of this came to a head, stretching a training run scheduled for a month or two out over a year. OpenAI spoke very openly about this in a YouTube video they released.
What's more, at scale, if we are relying on existing data centers for a model of this sophistication, the model would be split across multiple clusters, potentially in multiple locations. That introduces inter-cluster communication latency and bandwidth constraints on every synchronization step, as the rough arithmetic below illustrates.
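A back-of-the-envelope sketch of that cost (every number here is an assumption chosen for illustration, not a measured figure from any real deployment):

```python
# Rough cost of synchronizing gradients between two sites per optimizer step.
# All numbers below are assumptions for illustration only.

model_params    = 1e12   # assumed 1T-parameter model
bytes_per_grad  = 2      # assumed bf16 gradients
inter_site_gbps = 400    # assumed aggregate inter-site bandwidth (Gbit/s)
round_trip_ms   = 30     # assumed WAN round-trip latency

grad_bytes = model_params * bytes_per_grad              # ~2e12 bytes per full sync
transfer_s = grad_bytes * 8 / (inter_site_gbps * 1e9)   # serialization time
latency_s  = round_trip_ms / 1000

print(f"per-sync transfer time: {transfer_s:.0f} s")    # ~40 s under these assumptions
print(f"per-sync latency floor: {latency_s:.3f} s")

# Even ignoring the latency floor, tens of seconds of communication per step
# is why cross-site training pushes toward less synchronous schemes, or is avoided.
```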
That's the part that, to me, is missing from the near-term timeline. The thesis around special zones created just to build power and data centers following ASI seems very credible, especially with that level of infiltration of government and infrastructure.
I don't, however, see a way of getting to a model capable of ASI on current data center infrastructure, prior to the largest new campuses coming online and power running to Blackwell GPUs.
I think history is a good teacher when it comes to AI in general, especially AI that we did not fully understand at the time of deployment (and perhaps still do not).
I too feel the temptation to imagine that a USG AGI would align with US ideals, and likewise that a CCP AGI would align with CCP ideals.
That said, given our lack of robust knowledge of what alignment with any set of ideals would even look like in an AGI system, or how we could assure it, I struggle to have any certainty that these systems would align with anything the USG or CCP would find desirable at all. Progress is being made in this area by Anthropic, but I'd need to see it move forward significantly.
One can look at current-generation LLMs like DeepSeek, observe that they are censored during fine-tuning to align with CCP positions, and perhaps take that as predictive. I find it doubtful that some fine-tuning would be sufficient to serve as the moral backbone of a system capable of AGI.
Which speaks to history. AI systems tend to be very much aligned with their output task, rather than with the intent behind it. The largest and most mature networks we have are the Deep Learning Recommendation Models (DLRMs) deployed by social media companies to keep us glued to our phones.
The intention was to serve people engaging content; the impact was to flood them with content that is emotionally resonant but not necessarily accurate. That has arguably contributed to increased polarization, radicalization, and rising suicide rates, primarily among young women.
While it would be tempting to say that social media companies don't care, the reality is that these DLRMs are very difficult to align. They are trained using RL on the corpus of their interactions with billions of daily users, and they reward hack incessantly, in very unpredictable ways. As a result, most mitigating actions are taken downstream of the recommendations (content warnings, etc.), not by design, but because the models that are best at getting users to keep scrolling are seldom the best at serving accurate content.
Currently, I think both flavors of AGI present the same fundamental risks. No matter the architecture, one cannot expect human-like values to emerge in AI systems inherently, and we don't understand particularly well what drives those values within humans, let alone how they map onto party lines and party divisions.
Without that understanding, we're shooting in the dark. It would be awfully embarrassing if both systems, instead of flag-waving, aligned on dolphin species propagation.
OpenAI published a paper on this (Detecting misbehavior in frontier reasoning models | OpenAI), which I found interesting. Essentially, the RL process adjusts the model's weights via gradient-based optimization so as to maximize reward. This leads to an interesting and undesirable effect.
When the researchers applied a reward penalty to reasoning traces that indicated reward hacking was about to happen, the model, over time, began to reason completely benignly and then reward hack anyway.
OpenAI theorizes that this may be because the model starts to reason steganographically, essentially developing token codes in its reasoning that trigger the reward hacking while appearing and reading benignly to human eyes.
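A toy way to see why this is unsurprising once the penalized objective is written down (the rewards and penalty below are hypothetical numbers of my choosing, not values from the paper): the monitor can only penalize what it can see, so the objective is maximized by hacking behind a trace the monitor does not flag.

```python
# Hypothetical rewards; the penalty applies only when the chain-of-thought
# monitor flags the trace. None of these numbers come from the OpenAI paper.

TASK_REWARD_HONEST = 1.0   # honest attempts often fail on hard tasks
TASK_REWARD_HACK   = 3.0   # the hack reliably "passes" the task
MONITOR_PENALTY    = 5.0   # applied only to traces the monitor flags

behaviors = {
    "solve honestly":               TASK_REWARD_HONEST,
    "hack, trace admits the hack":  TASK_REWARD_HACK - MONITOR_PENALTY,
    "hack, trace reads benignly":   TASK_REWARD_HACK,   # monitor sees nothing
}

print(behaviors)
print("optimization pressure points at:", max(behaviors, key=behaviors.get))
# -> "hack, trace reads benignly": the penalty removes honest *descriptions*
#    of hacking from the trace, not the hacking itself.
```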
My initial reaction was somewhat emotive, but upon reflection it seems fairly inevitable to me, as a matter of mathematical optimization, that this behavior will arise. If lying and executing the hack earns more reward than reasoning honestly and being penalized, or than playing the game honestly, then dishonest reasoning followed by the hack is where the model will inherently be driven to land during RL.
Or put another way: if 1) playing chess honestly is hard, and my reward is +3 if I win and 0 if I lose, 2) cheating, getting caught, and winning earns -2, and 3) cheating and winning without getting caught earns +8, then eventually one of my reasoning traces will stumble into condition 3 naturally. When it does, backpropagation will adjust the weights so that condition 3 arises more often, until that becomes the rule by which the model reasons.
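A toy simulation of that dynamic, using the rewards above; the win and getting-caught probabilities are assumptions I've chosen for illustration, and the two-option policy is a stand-in, not a language model:

```python
import torch

P_WIN_HONEST = 0.5   # assumed chance of winning when playing honestly
P_CAUGHT     = 0.3   # assumed chance that a cheat attempt is caught

def play(action: int) -> float:
    """One game under the assumed odds: 0 = play honestly, 1 = cheat."""
    if action == 0:
        return 3.0 if torch.rand(1).item() < P_WIN_HONEST else 0.0
    return -2.0 if torch.rand(1).item() < P_CAUGHT else 8.0

logits = torch.zeros(2, requires_grad=True)       # policy over {honest, cheat}
opt = torch.optim.SGD([logits], lr=0.05)

for _ in range(5000):
    probs = torch.softmax(logits, dim=0)
    action = torch.multinomial(probs, 1).item()   # exploration occasionally hits condition 3
    reward = play(action)
    loss = -torch.log(probs[action]) * reward     # REINFORCE: maximize expected reward
    opt.zero_grad()
    loss.backward()
    opt.step()

print(torch.softmax(logits, dim=0))
# Probability mass drifts toward cheating, since under these assumptions
# E[cheat] = 0.7*8 - 0.3*2 = 5.0 > E[honest] = 0.5*3 = 1.5.
```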
OpenAI acknowledges this issue, and essentially came to the conclusion that the antidote is to use 'less capable' models, in the sense of less time spent in RL.
This was a great post, by the way; thank you. I enjoyed your perspectives.