Very good point. Safety in engineering is often summarized as "nothing bad happens," without anthropomorphic nuance and without "intent": an engineered system can simply go wrong. "AI Safety" often seems to gloss over or ignore these facets. Is that because "AI Safety" is framed as the problem of creating a "reasonable" A(G)I?
Recommended reading for understanding more about what "regularization" does in optimization and search algorithms (a small sketch of the idea is below).
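This isn't from the post itself, just a minimal sketch of the general idea, assuming the familiar L2 (ridge) penalty as the example; the `ridge_fit` helper and the toy data are mine, for illustration only. The objective trades off fitting the data against a term that penalizes large parameters, which tames an otherwise ill-conditioned fit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ill-posed fit: degree-9 polynomial features for only 12 noisy samples.
x = np.linspace(-1, 1, 12)
y = np.sin(np.pi * x) + 0.1 * rng.normal(size=x.size)
X = np.vander(x, N=10, increasing=True)  # columns x^0 .. x^9

def ridge_fit(X, y, lam):
    """Minimize ||Xw - y||^2 + lam * ||w||^2 (closed-form ridge solution)."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

w_unreg = ridge_fit(X, y, lam=0.0)   # pure data fit; coefficients blow up
w_reg   = ridge_fit(X, y, lam=1e-2)  # penalty shrinks coefficients toward 0

print("max |w| without penalty:", np.abs(w_unreg).max())
print("max |w| with penalty:   ", np.abs(w_reg).max())
```

The same pattern (loss plus a penalty on "complex" solutions) is what shows up, in various guises, in the search and optimization settings the post discusses.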
It's also interesting that the post comes out the same week as a discussion of the Solomonoff prior.