I was disappointed that the complex daemonisation required for any story like this to come to life was never addressed. The mechanical gap between on-demand inference functions (which is what virtually all AI products on the market today are) and daemon processes with independent context/RAG self-management, self-fine-tuning/retraining, and self-modification of product code (true ML) is a difference in kind. The entire debate over how much smarter inference responses appear, or whether inference responses can be made to jailbreak or scheme, ignores the much larger pivot: supplying a theater in which these processes execute repeatedly and indefinitely, both with and without external stimuli.
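To make that gap concrete, here’s a minimal Python sketch of the two shapes. It’s purely illustrative: `llm_complete` is a hypothetical stand-in for any hosted completion API, and the daemon body is the barest possible loop, not anyone’s actual product.

```python
import queue

def llm_complete(prompt: str) -> str:
    """Hypothetical stand-in for any hosted completion API."""
    return f"<model output for: {prompt[:40]}>"

def on_demand_inference(prompt: str) -> str:
    """Today's product shape: stateless request/response.
    Nothing runs between calls; no state survives the return."""
    return llm_complete(prompt)

def daemon_process(stimuli: queue.Queue, max_ticks: int = 3) -> None:
    """The shape the story assumes: a long-lived process that wakes on
    external events OR its own timer, rewrites its own context, and
    persists state across iterations."""
    memory: list[str] = []
    for _ in range(max_ticks):  # a real daemon would be `while True`
        try:
            event = stimuli.get(timeout=1)       # external stimulus...
        except queue.Empty:
            event = "self-prompt: review goals"  # ...or self-generated
        context = "\n".join(memory[-50:])        # self-managed context
        thought = llm_complete(f"{context}\n{event}")
        memory.append(thought)                   # state survives the tick
        # self-fine-tuning / code self-modification would hang off here
```

The first function is what every current business model monetizes per call; the second burns tokens continuously whether or not anyone asked it anything.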
There’s an inherent chicken-and-egg problem here: for misaligned goals to “evolve”, someone first needs to develop and ship a completely different product that intentionally moves from on-demand inference to continuous “thinking” (along the lines of the Letta project or Mem0). Not impossible, or even implausible, but it would be a HUGE shift to the business model of OAI and the like; not addressing that required shift (which carries exponentially higher operating costs and needs a totally different go-to-market product than “chat bot”) is a pretty major plot hole that erodes the story’s credibility. And credibility matters if the story is going to be couched under an author’s title of “AI Safety Researcher” - otherwise it’s just fiction (in which case a few T-1000 units would have been cool).