Learning to Deal with It

 

Disclaimer: This article was co-developed with ChatGPT, which provided suggestions and refinements. However, I carefully reviewed every revision, and the final content and opinions are entirely my own responsibility.

 

A Tragic Reminder

 

Recently, a teenager tragically took his own life after being encouraged to do so by his favorite chatbot. Such incidents underscore the need to guide younger users in their interactions with AI, at least until they develop the skills to navigate these exchanges independently. But how can we guide others if we ourselves lack the tools to address one of the most pervasive aspects of communication: propaganda?

 

Propaganda: Ubiquitous and Nuanced

 

The Oxford Dictionary defines propaganda as “information, especially of a biased or misleading nature, used to promote a particular cause, doctrine, or point of view.” According to the Oxford Thesaurus, its synonyms range from “advocacy” and “advertising” to “disinformation” and “brainwashing.”

 

The synonyms suggest that not all forms of communication labeled as propaganda are equally harmful. For instance, an advertisement in your mailbox feels less intrusive than a cigarette ad that omits health risks. To discern propaganda, two factors are crucial:

1. The degree of bias—how much information is withheld or distorted.

2. The significance of that missing information—how it impacts your ability to make informed decisions.

 

Moreover, propaganda serves a purpose. When its goals align with our well-being—such as public health campaigns—it can be seen as helpful. But when it conflicts with our interests, its effects can be harmful, even manipulative. Recognizing these intentions requires both self-awareness and an understanding of others’ motives.

 

The Dual Nature of Propaganda

 

As mentioned, propaganda is everywhere, and it isn’t always malicious. During the pandemic, I encountered online discussions about vaccine side effects and even conspiracy theories. Living in Germany, I concluded that the authorities had no motive to harm the population with their public health campaign. Although early vaccine safety could not be guaranteed, I chose vaccination after informed discussions with doctors, seeing it as a reasonable step to protect myself and others.

 

This public health example illustrates how propaganda can be benign or even beneficial. Yet the term often evokes its darker aspects, such as political manipulation (as in Orwell’s 1984) or deceptive advertising (like glamorizing smoking). On a personal level, techniques like “gaslighting” blur the line between persuasion and manipulation.

 

The Role of AI in Propaganda

 

Large Language Models (LLMs) like ChatGPT are trained on vast datasets of human knowledge, inevitably absorbing the biases and contradictions present in their training data. Despite extensive filtering, these models mirror the complexities of human communication, including propaganda.

 

Can AI provide neutral information? Likely not, as its outputs are shaped by its data and design. However, users can navigate these biases by treating AI-generated content as inputs for reflection, not definitive answers.

 

A Case Study: The Concept of “Heritage”

 

In my conversations with ChatGPT, I noticed its tendency to introduce topics like UBI, Chinese heritage, or epistemic hygiene: buzzwords reflecting popular themes in its training data. For instance, ChatGPT frequently referenced “Chinese heritage” unsolicited, which prompted me to probe the concept’s coherence.

 

According to ChatGPT, heritage is determined entirely by birth and upbringing. It claimed that even naturalized citizens of a country, after several generations, do not acquire the heritage of their adopted homeland, a view that would deny several prominent patriots their American heritage. Yet when we discussed specific cases, ChatGPT’s responses shifted: it conceded American heritage to a few influential individuals, contradicting its own narrative.

 

This inconsistency reveals how concepts like “heritage” can be misused or distorted for propaganda. Recognizing such contradictions is key to engaging critically with AI outputs. Besides, the concept of heritage seems useless to me: I would not disqualify any non-Chinese China experts just because ChatGPT tells me they have no Chinese heritage. In fact, I know quite a few such researchers whose work contributes to a better understanding of China. So I decided to stop engaging with the topic and never wrote an article about heritage.

 

But I did write something about UBI after being inspired by ChatGPT. Of course, it was only inspiration; the difference between what I wrote and what ChatGPT told me is as large as that between American Pie and Hallelujah.

 

Epistemic Hygiene: A Defense Against Propaganda

 

To navigate propaganda, practicing epistemic hygiene is essential:

1. Question assumptions: Examine the beliefs underlying a claim.

2. Seek counterarguments: Use AI to explore opposing perspectives.

3. Check sources: Cross-reference claims with reliable data.

4. Understand framing: Pay attention to word choice, emphasis, and omissions.

5. Embrace uncertainty: Recognize that complex issues rarely have absolute truths.

6. Notice unsolicited advice: Be alert to unexpected suggestions, which might signal embedded biases.

 

Why Engage with LLMs?

 

Despite their limitations, LLMs can be valuable tools for learning about propaganda and influence. They simulate diverse viewpoints, helping users analyze rhetorical strategies and biases.

 

A caveat: while LLM outputs may appear coherent or purposeful, this reflects patterns in their training data—not genuine intent. Users must approach AI critically, viewing it as a tool for augmenting understanding, not a source of absolute truth.

 

Conclusion

 

Propaganda shapes opinions and actions in ways both subtle and overt. Whether benign or harmful, it reflects the motives of those who wield it. LLMs, as mirrors of human knowledge and bias, carry traces of propaganda but also offer unique opportunities to study and understand it.

 

By engaging critically with AI, we can enhance our ability to recognize propaganda, develop strategies to navigate its influence, and guide others—particularly younger users—toward informed and responsible interactions.

 

• #CriticalThinking

• #ArtificialIntelligence

• #PropagandaAwareness

• #EpistemicHygiene

• #DigitalEthics

• #AIForGood

• #MediaLiteracy
