The "Dead Internet Theory" (DIT) is defined by Wikipedia as follows:
> The dead Internet theory is an online conspiracy theory that asserts, due to a coordinated and intentional effort, the Internet since 2016 or 2017 has consisted mainly of bot activity and automatically generated content manipulated by algorithmic curation to control the population and minimize organic human activity.
The DIT need not be completely true in order for it to be a concern. Even if the DIT is only partially true, that is still bad. The worry is, of course, that these bots are not merely friendly-but-hapless C-3POs but tools used by powerful persons, corporations, or governments. I think it is obvious that if someone is willing to go to these lengths to deceive the general public, they probably do not have the interests of the general public at heart.
In the following, I will use the term "my DIT" to refer to the claim that:
> In some specific non-trivial contexts, on average more than half of the participants in online debate who pose as distinct human beings are actually bots.
Let us suppose that we have one or more contexts for which we believe that my DIT is true - that certain areas of online debate are dominated by malicious bots. How can we act on this? What are the epistemological and pragmatic implications?
I am not able to fully validate all the information I read. If I am to form a picture of the world - and especially of the complex "social world" of politics and economics - I must rely on other people who know more than I do. How can I navigate such a landscape of untrustworthy sources? How can I be an effective altruist when I cannot trust my "senses" to tell me what the world looks like?
I would be grateful if you can point me to some existing articles on the topic, but original thoughts are also welcome.
I'm getting the sentiment "just sort the signal from the noise, same as always", and I disagree that it's the same as always. Maybe it is, if you already had some habits of epistemic hygiene such as "default to null".
If you hadn't already cultivated such habits, it seems to me things have definitely changed since 1993. Amidst the noise is better-cloaked noise, whether that is due to the Dead Internet Theory or to LLMs (I'm not sure the reason would matter). I understood OP's question as basically asking: how do we sort signal from noise, given such cloaking?
I'll propose an overarching principle: either read things carefully enough for a gears-level understanding, or do not read them at all. And "default to null" is one practical side of that: it guards against one way you might accidentally store what you think is a gear, but isn't.