This story was originally posted as a response to this thread.
It might help to imagine a hard takeoff scenario using only known sorts of NN & scaling effects...
In A.D. 20XX. Work was beginning. "How are you gentlemen !!"... (Work. Work never changes; work is always hell.)
Specifically, a MoogleBook researcher has gotten a pull request from Reviewer #2 on his new paper in evolutionary search in auto-ML, for error bars on the auto-ML hyperparameter sensitivity like larger batch sizes, because more can be different and there's high variance in the old runs with a few anomalously high performance values. ("Really? Really? That's what you're worried about?") He can't see why worry, and wonders what sins he committed to deserve this asshole Chinese (given the Engrish) reviewer, as he wearily kicks off yet another HQU experiment...
I was just overwhelmed by the number of hyperlinks, producing what can only be described as mild existential terror haha. And the fact that every single one leads to a clear example of the feasibility of such a proposal was impressive.
I try to follow along with ML, mostly by following behind Gwern's adventures, and this definitely seems to be a scenario worth considering, where business as usual continues for a decade, we make what we deem prudent and sufficient efforts to Align AI and purge unsafe AI, but the sudden emergence of agentic behavior throws it all for a loop.
Certainly a great read, and concrete examples that show Tomorrow AD futures plausibly leading to devastating results are worth a lot for helping build intuition!
See, this is exactly the situation where I would say 'plausible.' To me 'plausible' implies a soon-to-be-followed 'but': "It is plausible that I would draw a black ball, but it is unlikely." It is nearly synonymous with 'possible' in my mind.