I think you could shorten this quite a bit, be a little gentler about the value of logging, and retitle it "the danger of descriptions". All reports of observations (and all observations) are suspect. This is an unsolved problem.
Thank you for the feedback.
I agree that we may question any descriptions, even our own.
However, I emphasized error descriptions not because we find them more suspicious than other kinds of descriptions, but because error descriptions have the unique quality of implicitly asking for corrective action.
We tend to believe that, in an ideal state, a system emits no error descriptions—so actions based on them tend to get prioritized over actions based on other kinds of information.
Because of this, an error description provides a more virulent vector by which a manipulation of agency can gain traction. That makes error descriptions especially dangerous, particularly when learning and adaptation play an active role in continuous error-correction schemes.
As to whether it is unduly rude or harsh of me to describe as a cargo-cult those whose beliefs include "more logging is always better", "more detailed logs with better metadata are more useful", and so on, please consider the following.
When I google "log noise", the top hits I find are:
I hope you can see what my concern is. First, the cargo-cult created infinite haystacks of useless, irrelevant logs. Second, they now propose to put AI in charge of finding the all-important needle. Can you see where this is going?
Complex, interconnected, automated systems increasingly control our world. We use cryptographically secure systems of change management (git) to ensure that only authorized individuals can make changes to the decision-making programs that power these automated systems, but we remain subject to the fallibility of all human endeavor: all human-produced software has bugs.
Under the presumption that detailed logs always help the programmer fix bugs, a cargo-cult has evolved whereby we believe that code is inherently "safer" and "better" if it leaves detailed breadcrumbs about each and every important decision, especially when it reaches an error state, that is, an unwanted or undefined state.
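To make the style concrete, here is a hypothetical sketch of the kind of breadcrumb logging this belief encourages; the payment scenario and all names in it are invented purely for illustration:

```python
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("payments")


def charge(account_id: str, amount: float) -> bool:
    # Breadcrumbs for every decision, however routine.
    log.debug("charge() entered: account_id=%s amount=%.2f", account_id, amount)
    if amount <= 0:
        # The error state: an unwanted input, dutifully described.
        log.error("rejected charge: non-positive amount %.2f (account %s)",
                  amount, account_id)
        return False
    log.debug("amount validated: %.2f", amount)
    log.info("charge accepted for account %s", account_id)
    return True


charge("acct-42", -5.00)
```

Multiply this pattern across every function in every service and the petabytes follow naturally.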
This cargo-cult grades software in terms of resiliency, the ability to cope with error states. It holds as the prime tenet of its religion the unfounded notion that a logical analysis of empirically obtained data can reliably ascertain the resiliency of a software system.
As a result, virtually every industry now has "resilient" automated programs running critical systems at the enterprise and government levels. These programs continuously create petabytes upon petabytes of logs. When agents try to solve problems with such systems, these logs generally make up the initial input to the problem-solving process, serving as the source of truth about the problem.
The cargo-cult then feels confident making decisions based on these logs as if they represented cryptographically verified, scientifically proven, bona fide true descriptions of some past event, such as an error state. However, logs almost never rely on the kinds of cryptographic guarantees that we use for verifying everything else that happens in a computer, from source code to compiled program.
Modern operating systems increasingly refuse to run code that lacks a digital signature, and every change made to a git (source-code) repository carries a digital hash that both identifies it and guarantees its integrity. You can't use any remote system these days without some cryptographic protocol securing the transmission, and we increasingly rely on blockchains as the most cryptographically secure form of record of all.
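For contrast, a tamper-evident log is not hard to sketch. The following is a minimal illustration, assuming a shared secret held by the log collector, of a hash-chained log in which each entry's MAC covers the previous entry, so that editing, dropping, or reordering history breaks verification; the entry format and key handling are illustrative assumptions, not a description of any existing logging system:

```python
import hashlib
import hmac
import json

# Hypothetical secret held by the log collector; in practice this would come
# from a key-management system, never a hard-coded constant.
SECRET_KEY = b"example-key-not-for-production"


def append_entry(chain: list, message: str) -> dict:
    """Append an entry whose MAC covers both the message and the
    previous entry's MAC, forming a tamper-evident chain."""
    prev_mac = chain[-1]["mac"] if chain else ""
    payload = json.dumps({"message": message, "prev": prev_mac}, sort_keys=True)
    mac = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    entry = {"message": message, "prev": prev_mac, "mac": mac}
    chain.append(entry)
    return entry


def verify_chain(chain: list) -> bool:
    """Recompute every MAC; any edited, dropped, or reordered entry
    breaks verification from that point onward."""
    prev_mac = ""
    for entry in chain:
        payload = json.dumps({"message": entry["message"], "prev": prev_mac},
                             sort_keys=True)
        expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, entry["mac"]):
            return False
        prev_mac = entry["mac"]
    return True


log = []
append_entry(log, "centrifuge rpm nominal")
append_entry(log, "pressure within tolerance")
print(verify_chain(log))          # True
log[0]["message"] = "rpm spiked"  # tamper with history
print(verify_chain(log))          # False
```

Ordinary application logs carry no such guarantee; any process with write access to the file or the stream can rewrite the past.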
And yet we almost universally trust digital logs without applying any critical thought. Even in the extremely rare case where an agent is given the luxury of time and the ability to conduct tests using a scientific, independently verified, statistically valid method, the base input to the equation contains variables that still get populated with values drawn from, essentially, thin air.
The authors of STUXNET leveraged this unquestioning trust in logs and output data to convince Iranian nuclear scientists that their centrifuges were functioning normally when, in reality, they malfunctioned just enough to sabotage the refinement of weapons-grade uranium. If a nuclear scientist will unquestioningly trust the logs and data output from a centrifuge full of uranium, who will question any log? No one.
Given the untold petabytes of ephemeral logs streaming constantly from basically everything, we have increasingly come to rely on AI to analyze those logs and recommend corrective actions based on them. AIs have only two ways of obtaining information: logs and sensors. Most use logs as their only source of truth while generating even more logs of their own.
This raises the question of what kinds of messages a future AI agent might use to influence a human agent's ability to effectively mitigate the existential threat it might pose, and, in turn, of how we can rid ourselves of the log-trust cargo-cult and move toward a future without gaps in the chain of secure custody, where we receive all information from a standpoint of zero trust.
One might note that we have not considered this question from the standpoint of the teleological arguments, or the appeals to parallel realities and likelihoods, that one might associate with futurism in general and, more specifically, with questions about existential threats to mankind's continued supremacy on this planet.
However, we should strongly doubt whether a "human being" can ultimately know whether the corrective actions they take in response to some detailed error description represent a form of acausal influence. After all, errors represent nothing if not states of risk.
So we might consider some possible reasons for an error description:
The second of these could produce a situation akin to Gödel's incompleteness theorem, where an error-state description contains a paradoxical self-representation, but we have known about this problem since the beginning of computer science: it is Turing's "halting problem". Modern computers can detect when recursion goes out of control, and we have well-understood ways to verify the underlying conditions that might have led to such a state.
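A runaway self-referential description, for instance, surfaces as an ordinary, detectable runtime condition. The sketch below, with an invented describe_error routine, shows the Python interpreter's built-in recursion limit catching exactly this kind of unbounded self-description:

```python
import sys

print("interpreter recursion limit:", sys.getrecursionlimit())


def describe_error(state: dict) -> str:
    # Hypothetical description routine that describes its own description,
    # recursing without a base case.
    return describe_error({"describes": state})


try:
    describe_error({"component": "logger"})
except RecursionError as exc:
    # The runaway self-reference is detected and reported as an ordinary,
    # verifiable error condition rather than crashing the machine.
    print("runaway recursion detected:", exc)
```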
The fourth item provides more cause for concern, especially in light of the third. It stands to reason that, to the extent that a malevolent super-intelligence could come into being as the result of a chain reaction of corrective steps, starting down the path of investigating some detailed error description could itself contribute to existential risk.
Now, to the extent that any circumstance that incurs risk may be considered an error state, and thus demands correction, and to the extent that agents may take measures designed to prevent certain error states, the practice of not paying attention to error descriptions might begin to seem more appealing.