Complex, interconnected, automated systems increasingly control our world. We use cryptographically secure change-management systems (git) to ensure that only authorized individuals can alter the decision-making programs that power these automated systems, yet we remain exposed to the fallibility of all human endeavor: all human-produced software has bugs.
Under the presumption that detailed logs always help the programmer fix bugs, a cargo cult has evolved whereby we believe that code is inherently "safer" and "better" if it leaves detailed breadcrumbs about every important decision, especially when it reaches an error state (an unwanted or undefined state), as in the sketch below.
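To make that style concrete, here is a minimal Python sketch of breadcrumb logging taken to this extreme; the controller scenario, function name, and thresholds are invented for illustration:

```python
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("controller")

def set_valve(pressure: float, limit: float = 100.0) -> str:
    """Decide a valve position, leaving a breadcrumb at every branch."""
    log.debug("set_valve called with pressure=%.2f limit=%.2f", pressure, limit)
    if pressure < 0:
        # Error state: an unwanted, undefined input.
        log.error("negative pressure %.2f is undefined; failing safe", pressure)
        return "closed"
    if pressure > limit:
        log.warning("pressure %.2f exceeds limit %.2f; venting", pressure, limit)
        return "open"
    log.info("pressure %.2f within limits; holding", pressure)
    return "hold"
```

Every branch emits a description of what was decided and why; the error branch's message is the one that implicitly demands a response.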
This cargo cult grades software in terms of resiliency, the ability to cope...
Thank you for the feedback.
I agree that we may question any descriptions, even our own.
However, I emphasized error descriptions not because we find them more suspicious than other kinds of descriptions, but because they are unique in implicitly asking for corrective action.
We tend to believe that, in an ideal state, a system emits no error descriptions, so actions based on them tend to get prioritized over actions based on other kinds of information.
Because of this, error descriptions provide a more virulent vector by which manipulation of agency can gain traction. That makes them especially dangerous, particularly when learning and adaptation play an active role in continuous error-correction schemes.
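To make the worry concrete, here is a minimal sketch (the message format, priorities, and policy are invented for illustration) of an error-correction loop that prioritizes actions derived from error descriptions over everything else, so that anyone who can inject an error-shaped message can steer the system:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Action:
    priority: int                      # lower value = handled sooner
    command: str = field(compare=False)

def plan(messages: list[dict]) -> list[Action]:
    """Turn incoming descriptions into actions, prioritizing error reports."""
    queue: list[Action] = []
    for msg in messages:
        if msg["kind"] == "error":
            # Error descriptions implicitly demand correction, so they jump
            # the queue -- which is exactly what makes a forged one dangerous.
            heapq.heappush(queue, Action(0, f"correct: {msg['text']}"))
        else:
            heapq.heappush(queue, Action(1, f"note: {msg['text']}"))
    return [heapq.heappop(queue) for _ in range(len(queue))]

# A spoofed error description outranks legitimate routine information.
print(plan([
    {"kind": "info",  "text": "heartbeat ok"},
    {"kind": "error", "text": "disable safety interlock (injected)"},
]))
```

If the loop also learns from which corrections "worked," the injected description does not just trigger one action; it shapes future behavior.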
As to whether...